Category Archives: Miscellaneous

GPU Zen == Two Cups of Joe

The book GPU Zen is out. Go get it. This is a book edited by Wolfgang Engel and is essentially the successor to the GPU Pro series. Github code is here.

(Update: there’s a call for participation for GPU Zen 2.)

Full disclosure: I edited one small section, on VR. I forget if I am getting paid in some way. I probably get a free copy if I ask nicely. But, the e-book version is $9.99! So I simply bought one. Not $89.99, as books of this type usually are (even for the electronic edition), but rather the price of two coffees.

It’s a Kindle book. Unlike the GPU Pro and ShaderX books, it’s self-published. It’s mostly meant as an e-book, though you can buy a paperback edition if you insist.

So what’s in the book? Seventeen articles on interactive rendering techniques, in good part by practitioners, and nine have associated code. The book’s 360 pages. As you probably know from similar books, for any given person, most of the articles will be of mild interest at best. There will be a few that are worth knowing about. Then there will be one or two that are gold, that help with something you didn’t know about, or didn’t know how to do, or heard of but didn’t want to reinvent.

For example, Graham Wihlidal’s article “Optimizing the Graphics Pipeline with Compute” is a much-expanded and in-depth explanation of work that was presented at GDC 2016. Trying to figure out his work from the slideset is, I guess, theoretically possible. Now you don’t have to, as Graham lays it all out, along with other insights since his presentation, in a 44 page article. At $89.99, I might want to read it but would think twice before getting it (and I have done so in the past – some books of this type I still haven’t bought, if only one article is of interest).

The detailed explanation of XPerf, ETW, and GPUView in the article by James Hughes et alia on VR performance tuning might instead be the one you find invaluable. Or the two articles on SSAO, or one on bokeh depth-of-field, or – well, you get the idea. Or possibly none of them, in which case you’re out a small bit of cash.

For the whole table of contents, just go to the book’s listing and click on the cover to “Look Inside.”

Me, I usually prefer books made of atoms. But for technical works, if I have to choose, I’m happier overall with electronic versions. For these “collections of articles” books in particular, the big advantage of the e-book is searchability. No more “I vaguely recall it’s in this book somewhere, or maybe one of these three books…” and spending a half-hour flipping through them all. Just search for a term or some particular word and go. Oh, one other cute thing: you can loan the e-book to someone else for 14 days (just once, total, I think).

At $9.99, it’s a minimal-brainer. Order by midnight tomorrow and you’ll get the Ginzu knife set with it. I’ll try to avoid being too much of a huckster here, but honestly, so cheap – you’d have money left for Pete Shirley’s ray tracing e-books, along with Morgan McGuire’s Graphics Codex. I like low-cost and worthwhile. Addendum: if you do buy the paperback, the Kindle “Matchbook” price for the e-book is $2.99. Which is how I think reality should be: buy the expensive atoms one, get the e-book version for a little more, vs. paying full price for each.

7 Colossal Things for May 15, 2017

Haven’t done any “seven things” for a year, so it’s time. This one will just be stuff from the wonderful site This Is Colossal, dedicated to odd ways of making art.

No deep “man’s inhumanity to man” art-with-a-capital-A here, but rather some lovely samples from this wonderful site.

 

Everything is Triangles

I was entertained to see that the new NVIDIA HQ is triangle inspired. Great quote from an interesting article about new technology company offices:

 

“At this point I’m kind of over the triangle shape, because we took that theme and beat it to death,” admits John O’Brien, the company’s head of real estate, who pointedly vetoed a colleague’s recent suggestion to offer triangle-shaped water bottles in the cafeteria.

High Performance Graphics 2017 Call for Participation

The High-Performance Graphics 2017 conference call for participation is here.

Summary: deadline for papers is Friday April 21st. Conference itself is Friday-Sunday, July 28-30, colocated with SIGGRAPH in Los Angeles.

For me, this is one of the two great conferences each year for interactive rendering related papers (SIGGRAPH’s papers selection, for whatever reasons, seems to have mostly moved on to other things).

Real-World Sampling “Artifact”

[A repost, due to WordPress weirdness – sorry about that. Note to self: don’t paste images into WordPress, always upload and insert them.]

I’m seeing this more and more in my neighborhood in the evening:

shadows
It’s the shadow of a tree on pavement, superimposed 3 times. It’s because they’ve been installing new LED streetlights with 3 bulbs.

Hard for me to photograph the light source well, but a reflection of some sort in the camera shows the three bulbs in the upper right:

light

It’s like the artifacts you see when anyone tries to approximate an area light with point lights. So with the advent of LEDs, I guess we won’t need light area sampling algorithms as much?

Maybe one for the Real Artifacts gallery.

WebGL 2: New Features

by Shuai Shao
Github repo for article here

Last time we showed how to deal with issues porting a WebGL 1 engine to WebGL 2. In this article, we will talk about what new features come with WebGL 2 and what cool things we can do with them.

New features

Multisampled Renderbuffers

Previously, if we wanted antialiasing we either had to render to the default backbuffer or perform our own post-process AA (such as FXAA or SMAA) on content rendered to a texture.

Now, with Multisampled Renderbuffers, we can use the general rendering pipeline in WebGL to provide multisampled antialiasing (MSAA):

pre-z pass –> rendering pass to FBO –> postprocessing pass –> render to window

renderbufferStorageMultisample is the relevant function here.

var colorRenderbuffer = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, colorRenderbuffer);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, 4, gl.RGBA8, FRAMEBUFFER_SIZE.x, FRAMEBUFFER_SIZE.y);

gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffers[FRAMEBUFFER.RENDERBUFFER]);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, colorRenderbuffer);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);

Note that multisampled renderbuffers cannot be bound directly as textures; instead, they can be resolved to single-sample textures using the blitFramebuffer call. blitFramebuffer is new in WebGL 2 as well, and is used like this:

var texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, FRAMEBUFFER_SIZE.x, FRAMEBUFFER_SIZE.y, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.bindTexture(gl.TEXTURE_2D, null);

gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffers[FRAMEBUFFER.COLORBUFFER]);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);

// ...

// After drawing to the multisampled renderbuffers
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, framebuffers[FRAMEBUFFER.RENDERBUFFER]);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, framebuffers[FRAMEBUFFER.COLORBUFFER]);
gl.clearBufferfv(gl.COLOR, 0, [0.0, 0.0, 0.0, 1.0]);
gl.blitFramebuffer(
    0, 0, FRAMEBUFFER_SIZE.x, FRAMEBUFFER_SIZE.y,
    0, 0, FRAMEBUFFER_SIZE.x, FRAMEBUFFER_SIZE.y,
    gl.COLOR_BUFFER_BIT, gl.NEAREST
);

3D Texture

The first thing that comes to mind with 3D textures is volumetric effects, such as fire, smoke, light rays, realistic fog, etc. Now we can bring these features into our WebGL engine. In addition, 3D textures can be used to store medical data such as MRI and CT scans, and are useful when implementing cross-sectioning. 3D textures can also improve performance by using them to cache light for real-time global illumination.

WebGL 2 support for 3D textures is as good as that for 2D textures. We have fast access speed and native tri-linear interpolation.

The code for setting up a 3D texture usually has a 2D texture counterpart.

Texture 2D → Texture 3D
texImage2D → texImage3D
texSubImage2D → texSubImage3D
copyTexSubImage2D → copyTexSubImage3D
compressedTexImage2D → compressedTexImage3D
compressedTexSubImage2D → compressedTexSubImage3D
texStorage2D → texStorage3D

There are certain elements that do not match exactly. For example, since we have one more dimension, there are extra depth and zoffset parameters, and a TEXTURE_WRAP_R wrap mode for 3D textures. Also, the internal format and type combinations are not 100% matched.

The sampler used in shaders is sampler3D instead of sampler2D.

Here’s an example setup:

var texture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_3D, texture);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_BASE_LEVEL, 0);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MAX_LEVEL, Math.log2(SIZE));
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

gl.texImage3D(
    gl.TEXTURE_3D,  // target
    0,              // level
    gl.R8,        // internalformat
    SIZE,           // width
    SIZE,           // height
    SIZE,           // depth
    0,              // border
    gl.RED,         // format
    gl.UNSIGNED_BYTE,       // type
    data            // pixel
    );

gl.generateMipmap(gl.TEXTURE_3D);

Last but not least, the 2D texture array concept comes along with the 3D texture feature. That is, multiple 2D textures can be stored in an array and accessed by layer index. It has its own sampler type, sampler2DArray, but it shares the texImage3D GL functions. Here’s an example call:

gl.texImage3D(
    gl.TEXTURE_2D_ARRAY,
    0,
    gl.RGBA,
    IMAGE_SIZE.width,
    IMAGE_SIZE.height,
    NUM_IMAGES,
    0,
    gl.RGBA,
    gl.UNSIGNED_BYTE,
    pixels
);

Uniform Buffer

Setting uniforms for shaders often accounts for a considerable share of an engine’s time. Take the Cesium Globe as an example: for regular draw calls, uniform4fv is among the top five GL functions by execution time, and the sum of all uniform[i]fv and uniformMatrix[i]fv calls is nearly 2.5% of total execution time. That’s quite a large percentage, since we have to call them to update uniform values every frame. What’s more, it can be annoying to make duplicate uniform calls for the same set of uniforms shared by several shaders.

Now, uniform buffer objects may bring us a performance boost by letting us store blocks of uniforms in buffers on the GPU, just like vertex/index buffers. This can make switching between sets of uniforms faster. Additionally, uniform buffers can be shared by multiple programs at the same time.

Those are quite a few benefits, but with them the setup routine changes quite a bit. We’ll walk through a basic setup example first, and then look at a couple of things that need your attention.

var uniformPerSceneLocation = gl.getUniformBlockIndex(program, 'PerScene');
gl.uniformBlockBinding(program, uniformPerSceneLocation, 2);
//...
var material = new Float32Array([
    0.1, 0.0, 0.0,  0.0,
    0.0, 0.5, 0.0,  0.0,
    0.0, 0.0, 0.5,  0.0,
    128.0, 0.0, 0.0, 0.0
]);
var uniformPerSceneBuffer = gl.createBuffer();
gl.bindBuffer(gl.UNIFORM_BUFFER, uniformPerSceneBuffer);
gl.bufferData(gl.UNIFORM_BUFFER, material, gl.STATIC_DRAW);
gl.bindBuffer(gl.UNIFORM_BUFFER, null);
//...
// Render
gl.bindBufferBase(gl.UNIFORM_BUFFER, 2, uniformPerSceneBuffer);

The first thing that may confuse you is the layout standard (we will focus on std140 here). You can always find the details in the OpenGL ES 3.00 spec, page 68.

One thing that I really want you to notice is:

when the data member is a three-component vector with components consuming N basic machine units, the base alignment is 4N

And:

If the member is a structure, the base alignment of the structure is N, where N is the largest base alignment value of any of its members, and rounded up to the base alignment of a vec4.

Here’s an example:

struct Material
{
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
    float shininess;
};

uniform PerScene
{
    Material material;
} u_perScene;
var material = new Float32Array([
    0.1, 0.0, 0.0,  0.0,
    0.0, 0.5, 0.0,  0.0,
    0.0, 0.0, 0.5,  0.0,
    128.0, 0.0, 0.0, 0.0
]);

Here we have vec3s, so the data adds a padding 0 after each vec3 to align it to the size of a vec4 (16 bytes). Also, since we use a struct to wrap our data, the struct’s size is rounded up to a multiple of a vec4. That’s why we have three extra zeroes after the float shininess.

Another concern is about updating the uniform block. There are several different approaches that can get us there. However, their performance can vary. It’s pretty tricky to make the most of our uniform block.

Here are some detailed discussions on Stack Overflow and gamedev.net.

But, basically, we can use gl.bufferSubData to copy the updated typedArray into the uniform buffers.
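For example, here’s a minimal sketch, reusing the uniformPerSceneBuffer from the setup above and assuming the std140 layout of the Material struct shown earlier (so shininess sits at byte offset 48):

// Update just the shininess float without re-uploading the whole block.
// Offset 48 = three padded vec3s of 16 bytes each (std140 layout).
var newShininess = new Float32Array([64.0]);
gl.bindBuffer(gl.UNIFORM_BUFFER, uniformPerSceneBuffer);
gl.bufferSubData(gl.UNIFORM_BUFFER, 48, newShininess);
gl.bindBuffer(gl.UNIFORM_BUFFER, null);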

Sync Objects

Sync objects can be used to synchronize execution between the GL server and the client, which gives you more control over the GPU by letting you insert a fence and wait until a submitted set of GL operations has finished. Sync objects are more efficient than gl.finish.

We can get more accurate benchmarks with sync objects. In addition, applications such as image manipulation, where data of each frame comes from the CPU, will benefit from this degree of control.
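Here’s a minimal sketch of the idea, polling a fence each frame instead of stalling on gl.finish (the structure mirrors the query polling shown below):

// Insert a fence after the GL work we care about.
var sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);
gl.flush(); // make sure the commands and the fence are actually submitted

(function poll() {
    var status = gl.clientWaitSync(sync, 0, 0); // timeout of 0: just check, don't block
    if (status === gl.TIMEOUT_EXPIRED) {
        requestAnimationFrame(poll); // the GPU isn't done yet; check again next frame
        return;
    }
    gl.deleteSync(sync);
    // Everything submitted before the fence has now completed on the GPU.
})();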

Query Objects

This operation is very useful when we want to do occlusion testing. By wrapping a set of draw calls in a gl.ANY_SAMPLES_PASSED query, we can find out whether any of their samples passed the depth test, i.e., whether the geometry is visible at all. We can use these queries to get rid of specialized picking-method code.

Keep in mind that these queries are asynchronous. A query’s result is never available in the same frame that the query is issued. This is different from OpenGL ES 3 where query result may be available in the same frame. It’s an application portability concern.

var query = gl.createQuery();
gl.beginQuery(gl.ANY_SAMPLES_PASSED, query);
gl.drawArraysInstanced(gl.TRIANGLES, 0, 3, 2);
gl.endQuery(gl.ANY_SAMPLES_PASSED);
//...
(function tick() {
    if (!gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE)) {
        // A query's result is never available in the same frame
        // the query was issued.  Try in the next frame.
        requestAnimationFrame(tick);
        return;
    }

    var samplesPassed = gl.getQueryParameter(query, gl.QUERY_RESULT);
    gl.deleteQuery(query);
})();

Sampler Objects

In WebGL 1, texture image data and sampling information (which tells the GPU how to read the image data) are both stored in texture objects. It can be annoying when we want to read from the same texture twice but with different sampling (say, linear filtering vs. nearest filtering), because we need two texture objects. With sampler objects, we can separate these two concepts: we can have one texture object and two different sampler objects. This will result in a change in how our engine organizes textures.

Here’s an example:

var samplerA = gl.createSampler();
gl.samplerParameteri(samplerA, gl.TEXTURE_MIN_FILTER, gl.NEAREST_MIPMAP_NEAREST);
gl.samplerParameteri(samplerA, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.samplerParameteri(samplerA, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.samplerParameteri(samplerA, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

var samplerB = gl.createSampler();
gl.samplerParameteri(samplerB, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
gl.samplerParameteri(samplerB, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.samplerParameteri(samplerB, gl.TEXTURE_WRAP_S, gl.MIRRORED_REPEAT);
gl.samplerParameteri(samplerB, gl.TEXTURE_WRAP_T, gl.MIRRORED_REPEAT);

// ...

gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.bindSampler(0, samplerA);

gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.bindSampler(1, samplerB);

Transform Feedback

Transform feedback allows the output of the vertex shader to be captured in a buffer object. This is useful for particle systems and simulations that run on the GPU without any CPU intervention.

In WebGL 1, when we want to implement such a feature, a texture storing the particle states is usually unavoidable. Two textures, to be precise: one holding the previous frame’s state and one the current frame’s, ping-ponged between passes.

Here’s an example of the WebGL 1 approach (from toji’s WebGL Particles take 2).

In the first-pass fragment shader, do the simulation, and store the position results in a texture.

// First pass - Fragment Shader
uniform sampler2D tPositions;
varying vec2 vUv;
// ...
vec4 runSimulation(vec4 pos) {
    // simulation
    // ...
    return pos;
}

void main() {
    vec4 pos = texture2D( tPositions, vUv );
    //...
    pos = runSimulation(pos);

    // Write new position out
    gl_FragColor = pos;
}

And then we use this position texture as an input for our second pass vertex shader.

// Second pass - Vertex Shader
attribute vec3 position;
uniform float pointSize;
uniform sampler2D map;
varying vec2 vUv;

//...

void main() {
    vUv = position.xy + vec2( 0.5 / width, 0.5 / height );
    vec3 color = texture2D( map, vUv ).rgb;
    gl_PointSize = pointSize;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( color, 1.0);
}

What follows is how we do it in WebGL 2. With transform feedback, we can discard the fragment shader in step 1, as well as the texture: the output (position) of the step 1 vertex shader is written directly to the vertex attribute array input of step 2. (In practice, you still need a trivial placeholder fragment shader in the first step for the program to compile.)

// First pass - Vertex Shader
// ...
out vec3 v_position;
void main() {
    // ...
    // write the updated position; it is captured into a buffer via transform feedback
    v_position = (u_projMatrix * u_modelViewMatrix * vec4(a_position, 1.0)).xyz;
}
// Second pass - Vertex Shader
in vec3 a_position;
void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4( a_position, 1.0 );
    // ...
}

And here’s how we bind the buffers (from WebGL2SamplesPack):

var transformFeedback = gl.createTransformFeedback();
var varyings = ['v_position', /*...*/];
gl.transformFeedbackVaryings(programTransform, varyings, gl.SEPARATE_ATTRIBS);
// ...
gl.bindBuffer(gl.ARRAY_BUFFER, particleVBOs[i][Particle.POSITION]);
gl.bufferData(gl.ARRAY_BUFFER, particlePositions, gl.STREAM_COPY);
// ...
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, transformFeedback);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, particleVBOs[(currentSourceIdx + 1) % 2][Particle.POSITION]);
gl.enable(gl.RASTERIZER_DISCARD);   // we are not drawing
gl.beginTransformFeedback(gl.POINTS);

gl.drawArrays(gl.POINTS, 0, NUM_PARTICLES);

gl.endTransformFeedback();
gl.disable(gl.RASTERIZER_DISCARD);

gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, null);

A set of new texture features:

Here is a list of the new texture features in WebGL 2.

    • sRGB textures
      Allow the application to perform gamma-correct rendering.
gl.texImage2D(
    gl.TEXTURE_2D,
    0,                 // level
    gl.SRGB8,          // internalformat
    gl.RGB,            // format
    gl.UNSIGNED_BYTE,  // type
    image
);

The sRGB texture will be automatically converted to linear space when being fetched in the shader. For physically-based rendering and other operations we normally want to deal with colors in a linear space, not in display space.

  • Vertex texture for:
    • terrain
    • water
    • skeleton animation
  • Texture LOD

The texture LOD parameter is used to determine which mipmap to fetch from; it can now be clamped. The base and maximum mipmap level can both be set as clamps. This allows mipmap streaming, i.e., loading only the mipmap levels currently needed. This is very useful for a WebGL environment, where textures are downloaded via a network.

gl.texParameterf(gl.TEXTURE_2D, gl.TEXTURE_MIN_LOD, 0.0);
gl.texParameterf(gl.TEXTURE_2D, gl.TEXTURE_MAX_LOD, 10.0);
  • ETC2/EAC texture compression

Support for ETC2/EAC is mandatory in WebGL 2, and compressed textures have obvious transmission-time savings.

gl.compressedTexImage2D(
    gl.TEXTURE_2D, 
    0, 
    gl.COMPRESSED_RGBA8_ETC2_EAC, 
    IMAGE_SIZE.width, 
    IMAGE_SIZE.height, 
    0, 
    pixels
);
  • Integer textures
  • Non-Power-of-Two Texture
    • texturing video
    • 2D Sprite
  • Floating point textures
    • half-float: High dynamic range imaging
    • full-float: Variance shadow maps soft shadow
    • a feature coming together with floating point textures is floating point renderbuffer (also with multisample support).
  • Seamless cube map

Cube maps are already available in WebGL 1. What’s new in WebGL 2 is that cube map sampling is seamless (and always seamless, unlike desktop OpenGL, where you can toggle it). With this feature we are free from using hacks to get rid of the artifacts near face borders.

  • A set of additional texture formats
textureFormats[TextureTypes.RGB] = {
    internalFormat: gl.RGB,
    format: gl.RGB,
    type: gl.UNSIGNED_BYTE
};

textureFormats[TextureTypes.RGB8] = {
    internalFormat: gl.RGB8,
    format: gl.RGB,
    type: gl.UNSIGNED_BYTE
};

textureFormats[TextureTypes.RGB16F] = {
    internalFormat: gl.RGB16F,
    format: gl.RGB,
    type: gl.HALF_FLOAT
};

textureFormats[TextureTypes.RGBA32F] = {
    internalFormat: gl.RGBA32F,
    format: gl.RGBA,
    type: gl.FLOAT
};

textureFormats[TextureTypes.R16F] = {
    internalFormat: gl.R16F,
    format: gl.RED,
    type: gl.HALF_FLOAT
};

textureFormats[TextureTypes.RG16F] = {
    internalFormat: gl.RG16F,
    format: gl.RG,
    type: gl.HALF_FLOAT
};

textureFormats[TextureTypes.RGBA] = {
    internalFormat: gl.RGBA,
    format: gl.RGBA,
    type: gl.UNSIGNED_BYTE
};

textureFormats[TextureTypes.RGB8UI] = {
    internalFormat: gl.RGB8UI,
    format: gl.RGB_INTEGER,
    type: gl.UNSIGNED_BYTE
};

textureFormats[TextureTypes.RGBA8UI] = {
    internalFormat: gl.RGBA8UI,
    format: gl.RGBA_INTEGER,
    type: gl.UNSIGNED_BYTE
};

New GLSL 3.00 ES Shader

And here comes our new shader language: GLSL 3.00 ES! This new version brings in a bunch of features that are not in GLSL 1.00, but the grammar has changed in places, so converting shaders over can be quite painful at first.

Note that a shader written in GLSL 1.00 is still fully supported in a WebGL 2 context; it’s only that the GLSL 3.00 ES grammar is not backward compatible with GLSL 1.00. The GLSL 3.00 ES version is turned on only when a #version 300 es tag is added at the very top of the shader.

We will quickly list here a bunch of new features and new built-in functions in GLSL 3.00 ES.

  • Layout qualifiers

Vertex shader inputs can now be declared with layout qualifiers that explicitly bind the location in the shader source, without requiring gl.getAttribLocation calls. They are declared like this:

#version 300 es
#define POSITION_LOCATION 0
#define TEXCOORD_LOCATION 4
// ...
layout(location = POSITION_LOCATION) in vec2 position;
layout(location = TEXCOORD_LOCATION) in vec2 texcoord;

var vertexPosLocation = 0; // set with GLSL layout qualifier
gl.enableVertexAttribArray(vertexPosLocation);
gl.bindBuffer(gl.ARRAY_BUFFER, vertexPosBuffer);
gl.vertexAttribPointer(vertexPosLocation, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);

var vertexTexLocation = 4; // set with GLSL layout qualifier
gl.enableVertexAttribArray(vertexTexLocation);
gl.bindBuffer(gl.ARRAY_BUFFER, vertexTexBuffer);
gl.vertexAttribPointer(vertexTexLocation, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);

The same applies to fragment shader outputs. Layout qualifiers can also be used to control the memory layout for uniform blocks.

  • Non-square matrix

Quite straightforward. One use case is replacing a 4×4 affine matrix where the last row is (0, 0, 0, 1) with a 4×3 matrix.
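On the JavaScript side, WebGL 2 adds the matching non-square uniformMatrix calls. A minimal sketch (the uniform name and location are hypothetical):

// In GLSL ES 3.00 this uniform would be declared as `uniform mat4x3 u_model;`
// (4 columns, 3 rows), i.e. a 4x4 affine matrix with its constant (0, 0, 0, 1) row dropped.
var modelMatrix4x3 = new Float32Array([
    1, 0, 0,   // column 0
    0, 1, 0,   // column 1
    0, 0, 1,   // column 2
    2, 3, 4    // column 3: the translation
]);
gl.uniformMatrix4x3fv(u_modelLocation, false, modelMatrix4x3);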

  • Full integer support

Built-in functions can now take an integer as an input variable.
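On the JavaScript side, integer vertex attributes get their own pointer call. A minimal sketch (the buffer and location names are hypothetical):

// In GLSL ES 3.00 the attribute would be declared as `in ivec2 a_cell;`
gl.bindBuffer(gl.ARRAY_BUFFER, cellBuffer);
gl.enableVertexAttribArray(cellLocation);
// vertexAttribIPointer keeps the values as integers instead of converting them to floats
gl.vertexAttribIPointer(cellLocation, 2, gl.INT, 0, 0);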

  • Flat/smooth interpolators

We can now explicitly declare flat interpolators to have flat shading.

  • Centroid sampling

This is used to avoid rendering artifacts when multisampling. Read this article for more details. Here is a WebGL 2 Sample of centroid sampling.

  • New built-in functions

Some very handy functions, such as textureOffset, texelFetch, dFdx, textureGrad, textureLod, etc. You can always find the complete lists in the GLSL 3.00 ES spec.

  • gl_InstanceID and gl_VertexID

These allow identification of instances and vertices within the shader.

  • Other function name changes

For example, texture2D(sampler2D sampler, vec2 coord) and textureCube(samplerCube sampler, vec3 coord) (and texture3D, if our old version had any) are now replaced with a set of overloaded texture functions:

gvec4 texture (gsampler2D sampler, vec2 P [, float bias] )
gvec4 texture (gsampler3D sampler, vec3 P [, float bias] )
gvec4 texture (gsamplerCube sampler, vec3 P [, float bias] )


WebGL 2 Basics

This guest blog post is by Shuai Shao, a Masters student at UPenn under Patrick Cozzi. After hearing the announcement at SIGGRAPH, I was asking around for someone to write a “basics of WebGL 2” article and Patrick got Shuai involved. If you’re reading this any time after October 2016, see his Github repo for the latest version of this article, with any corrections folded in since then (we encourage you to contribute to it).

WebGL 2 is coming! Google Chrome just announced at SIGGRAPH 2016 that 100% of the WebGL 2 conformance suite is passing (on the first configurations).

If I have an engine that works well in WebGL 1, how do I move to WebGL 2? Things to consider:

  • What has to be changed?
  • What can be done in a better way?
  • What new features and functionalities can I add to my engine?

In this article we focus on the first question. We discuss the main promoted features: features that were exposed as extensions in WebGL 1, are now part of the WebGL 2 core, and thus can no longer be accessed in the old, extension-based manner. We also cover some other compatibility issues.

You can find answers to the other two questions in our next article, which focuses on introducing new features.

In the future you may want some complete working sample code for reference, instead of just code snippets. The WebGL 2 Samples pack is a resource you’ll find useful.

That’s enough for an intro. First of all, let’s get WebGL 2 working on your machine.

How do I start using WebGL 2?

Get a WebGL 2 Implementation (Browser)

You may have seen this before; let’s just hit the main points:

Get a WebGL 2 Context

Programmers always try to support as many browsers as possible, and so do I. On top of the WebGL 1 version of getContext, we first try to get a WebGL 2 context; if that fails, we drop back to WebGL 1. Here’s an example derived from the Cesium WebGL engine:

var defaultToWebgl2 = false;

var webgl2Supported = (typeof WebGL2RenderingContext !== 'undefined');
var webgl2 = false;
var gl;

if (defaultToWebgl2 && webgl2Supported) {
    gl = canvas.getContext('webgl2', webglOptions);
    if (gl) {
        webgl2 = true;
    }
}
if (!gl) {
    gl = canvas.getContext('webgl', webglOptions);
}
if (!gl) {
    throw new Error('The browser supports WebGL, but initialization failed.');
}

Promoted Features

Some of the new WebGL 2 features are already available in WebGL 1 as extensions. However, these features will be part of the core spec in WebGL 2, which means support is guaranteed. In this first blog entry we are going to focus on these promoted features, together with potential compatibility issues they may cause.

First let’s find if there’s a way to change fewest existing WebGL 1 code using the extension to make it work correctly with a WebGL 2 context.

We may find that in some cases (instancing and VAO), it’s only the function we are calling that changes from the extension version to core version, while the parameters and pipeline don’t change. We used to call fooEXT, now we simply switch to foo.

Thanks to JavaScript’s first-class functions, one solution is to create a function reference at startup, assigned either the extension version from WebGL 1 or the core version from WebGL 2. The rest of the code then calls through this reference.

if (!webgl2) {
    vaoExt = gl.getExtension("OES_vertex_array_object");
    //...
    // bind() keeps the extension object as `this` when called via gl.createVertexArray()
    gl.createVertexArray = vaoExt.createVertexArrayOES.bind(vaoExt);
    //...
}

Yet this method can fail when changes are made in the shader (e.g., MRT). We still need to take a close look at each of these promoted features. So now let’s take a look at how the code changes for each of them.

Multiple Render Targets

MRT is a commonly used extension for deferred rendering, OIT, single-pass picking, etc.

WebGL 1

For MRT we used the WEBGL_draw_buffers extension as a work-around to write g-buffers in a single pass. Though it is widely supported (currently 57%+ browsers, according to WebGL stats), the extension-style code isn’t as clean as WebGL 2:

var ext = gl.getExtension('WEBGL_draw_buffers');
if (!ext) {
  // ...
}

We then bind multiple textures, tx[] in the example below, to different framebuffer color attachments.

var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT0_WEBGL, gl.TEXTURE_2D, tx[0], 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT1_WEBGL, gl.TEXTURE_2D, tx[1], 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT2_WEBGL, gl.TEXTURE_2D, tx[2], 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT3_WEBGL, gl.TEXTURE_2D, tx[3], 0);

Next we map the color attachments to draw buffer slots that the fragment shader will write to using gl_FragData.

ext.drawBuffersWEBGL([
  ext.COLOR_ATTACHMENT0_WEBGL, // gl_FragData[0]
  ext.COLOR_ATTACHMENT1_WEBGL, // gl_FragData[1]
  ext.COLOR_ATTACHMENT2_WEBGL, // gl_FragData[2]
  ext.COLOR_ATTACHMENT3_WEBGL  // gl_FragData[3]
]);

Also, an extra flag is needed in the shader:

#extension GL_EXT_draw_buffers : require
precision highp float;
// ...
void main() {
    gl_FragData[0] = vec4( v_position.xyz, 1.0 );
    gl_FragData[1] = vec4( v_normal.xyz, 1.0 );
    gl_FragData[2] = texture2D( u_colmap, v_uv );
    gl_FragData[3] = texture2D( u_normap, v_uv );
}

WebGL 2

For MRT our code becomes neat and clean in WebGL 2.

gl.framebufferTexture2D(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex[0], 0);
gl.framebufferTexture2D(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT1, gl.TEXTURE_2D, tex[1], 0);
gl.framebufferTexture2D(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT2, gl.TEXTURE_2D, tex[2], 0);

This attaches the textures; gl.drawBuffers then defines the list of attachments into which the fragment shader outputs will be written:

gl.drawBuffers( [gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1, gl.COLOR_ATTACHMENT2] );

Instead of mapping color attachments to the draw buffer, we directly use multiple out variables in the fragment shader. This code actually benefits from the new GLSL 3.0 ES, which we will discuss later in another blog post. However, using out itself is straightforward.

#version 300 es
precision highp float;
layout(location = 0) out vec4 gbuf_position;
layout(location = 1) out vec4 gbuf_normal;
layout(location = 2) out vec4 gbuf_colmap;
layout(location = 3) out vec4 gbuf_normap;
//...
void main()
{
    gbuf_position = vec4( v_position.xyz, 1.0 );
    gbuf_normal = vec4( v_normal.xyz, 1.0 );
    gbuf_colmap = texture( u_colmap, v_uv );
    gbuf_normap = texture( u_normap, v_uv );
}

Additionally, since 2D texture arrays are now available, we can choose to render to different layers of one texture array instead of to separate 2D textures.

gl.framebufferTextureLayer(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, texture, 0, 0);
gl.framebufferTextureLayer(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT1, texture, 0, 1);
gl.framebufferTextureLayer(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT2, texture, 0, 2);

Instancing

Instancing is a great performance booster for certain types of geometry, especially objects with many instances but without many vertices. Good examples are grass and fur. Instancing avoids the overhead of an individual API call per object, while minimizing memory costs by avoiding storing geometric data for each separate instance.

Instancing is exposed through the ANGLE_instanced_arrays extension in WebGL 1 (92%+ support). Now with WebGL 2 we can simply use drawArraysInstanced or drawElementsInstanced for the draw calls.

gl.drawArraysInstanced(gl.TRIANGLES, 0, 3, 2);
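If each instance needs its own per-instance data (say, an offset), gl.vertexAttribDivisor is the other half of the API. A minimal sketch, with hypothetical buffer and location names:

gl.bindBuffer(gl.ARRAY_BUFFER, instanceOffsetBuffer);
gl.enableVertexAttribArray(offsetLocation);
gl.vertexAttribPointer(offsetLocation, 2, gl.FLOAT, false, 0, 0);
gl.vertexAttribDivisor(offsetLocation, 1); // advance this attribute once per instance, not once per vertex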

There is a new built-in variable in the vertex shader (GLSL 3.00 ES) called gl_InstanceID that is available during instanced draw calls. For example, we can use it to assign each instance a separate color.

// Vertex Shader
flat out int instance;
// ...
void main() {
    instance = gl_InstanceID;
}
// Fragment Shader
uniform Material {
    vec4 diffuse[NUM_MATERIALS];
} material;
flat in int instance;   // `flat` is a must for an int varying, plus we don't want the instance id to be interpolated
// ...
void main() {
    color = material.diffuse[instance % NUM_MATERIALS];
}

Vertex Array Object

VAO is very useful in terms of engine design. It allows us to store vertex array states for a set of buffers in a single, easy to manage object. It is exposed through the OES_vertex_array_object extension in WebGL 1 (89%+).

WebGL 1 with extension → WebGL 2
createVertexArrayOES → createVertexArray
deleteVertexArrayOES → deleteVertexArray
isVertexArrayOES → isVertexArray
bindVertexArrayOES → bindVertexArray

An example:

var vertexArray = gl.createVertexArray();
gl.bindVertexArray(vertexArray);

// set vertex array states
var vertexPosLocation = 0; // set with GLSL layout qualifier
gl.enableVertexAttribArray(vertexPosLocation);
gl.bindBuffer(gl.ARRAY_BUFFER, vertexPosBuffer);
gl.vertexAttribPointer(vertexPosLocation, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
// ...

gl.bindVertexArray(null);

// ...

// render
gl.bindVertexArray(vertexArray);
gl.drawArrays(gl.TRIANGLES, 0, 6);

Shader Texture LOD

Shader texture LOD control, which lets the shader explicitly select the mipmap level, makes mipmap control simpler for glossy environment effects in physically based rendering. This functionality is exposed through the EXT_shader_texture_lod extension in WebGL 1 (71%+).

vec4 texture2DLodEXT(sampler2D sampler, vec2 coord, float lod)

Now, as part of core, the lod bias can be passed as an optional parameter to texture (and an explicit lod to textureLod):

gvec4 texture (gsampler2D sampler, vec2 P [, float bias] )

Fragment Depth

The fragment shader can explicitly set the depth value for the current fragment. This operation can be expensive because it can cause the early-z optimization to be disabled. However, it is needed in cases where the z-depth is modified on the fly.

This functionality is exposed through the EXT_frag_depth extension in WebGL 1 (66%+).

out float gl_FragDepth;

More details can be found in the GLSL 3.0 ES Spec.

Other compatibility issues

Look here for more information: WebGL 2 Spec Ch4.1


Humblebrag on the third anniversary of the MOOC

I just realized today that it’s been three years since the 3D Interactive Graphics MOOC came out. It’s still chugging along, surprisingly enough, getting about 35 signups a day. Of course, completion rates are a small fraction of that – I’d like to know myself what they are. Me, I’m still answering questions on the discussion board.

It’s been a good week for positive comments from people taking the course. Who wouldn’t like reading posts such as this, “It is incredibly easy to learn and the examples are vivid and awesome. I wish the professors at my university were like you!” Which I take as more a reflection of the level of teaching at that (unnamed) university – I know there are more engaging and dynamic teachers than me. The takeaway is that videos and demonstrations that are just a link away can offer a fair bit, just as films have some advantages compared to live performances. Integrating these newer technologies into the classroom is the exciting challenge.

Another person gave praise to my short dot product explanation videos, even adding links to them on Wikipedia’s dot product page (which I just edited, removing my name). Looking at those videos now, hey, they’re pretty good! Here’s one showing how the dot product and cosine are related. Find the others here.

Remember how three years ago MOOCs were going to destroy the university system, and that everyone would get a cheap college education? The reality is that MOOCs are inexpensive (usually free) distance-learning systems for relatively well-off, educated people out of school who want to study a specific topic. You also have to be quite self-motivated to plow through a course, since the usual external motivators of a college education – getting a degree, keeping the parents happy, getting your money’s worth, and staying in school for the parties – are all missing.

I’d like to see more graphics MOOCs beyond Ed Angel’s and mine. But the reality is that MOOCs are expensive to produce, and whether there’s a viable business plan blah blah blah.

Whatever the case, knowing that I’ve been able to help a number of people get some understanding of this great field of ours has been an unalloyed joy. Honestly, working on this course has been a lovely and lucky opportunity for me, and one of the best things I’ve done with my life.

Don’t be mean

[Some on Twitter noted that I should be using milliseconds instead of FPS. This kind of misses the point, but let’s avoid distractions, here’s the article with that change. The sad part is that you then miss my hilarious joke about how I use FPS in the article, because if I used SPF you’d think I was talking about tanning. Which makes me think of another joke about rendering cows and the time it then takes to tan their hides. I’m full of great dad jokes.]

I think I’m reading “The Economist” too much, as I keep trying to come up with punny article titles. Sorry.

So, how do you measure a representative value for milliseconds per frame?

I don’t care about the mechanics, which timer call you use, etc. Just assume you successfully start timer/end timer and get some length of time in milliseconds for the frame. What do you do with these timings?

I usually see things such as an average, or a running average (average of last 20 or 50 or 100 or whatever frame times). I think this is mostly bad. As someone pointed out, almost everyone has more than the average number of legs. I find the same: in a given run there can sometimes be some frames where things noticeably slow down for whatever reason, some load on the computer. What you’re often trying to measure (as a graphics developer) is the performance of the rendering system itself, not the computer’s overall performance.

So, I currently use one of these two, or both: shortest time, or median time, over whatever set of frame times I have. Both have their uses. Shortest time is justifiable (to me, at least) because, assuming you have a very fine-grained timer, your best time is in some sense the “purest” measurement of the time a frame takes. Whatever other processes in your system are slowing down the other frames isn’t your concern. The timer doesn’t lie, you really did go that fast for one frame.

The other measure I’m OK with is the median. If your benchmarking system is going through a series of different frames (an animation or simulation is running, or the camera is orbiting, etc.), then grabbing the median frame is good. Choosing it instead of the average then doesn’t give so much weight to outliers. Better yet, graph the results and see whether the outliers are consistent.

Update: A number of game and VR developers pointed out that their major interest is maximum frame time. Makes sense: for a good experience (especially with VR) you don’t want to drop below your target of 30 FPS, 60 FPS, or 90 FPS.

My point is that the average, the mean, is not so good: external slowdowns often throw off the average enough, and at random enough intervals, that it’s very noisy and so pretty useless. Taking the median, the central time of the sorted set, cuts out much of this variance, since outliers no longer drag the result around. A minimal sketch of summarizing a run this way follows.
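For concreteness, here’s a small sketch (the function name is just for illustration) that takes a list of per-frame times in milliseconds and reports the minimum, median, and maximum:

// Summarize a run of per-frame times (in milliseconds) by min, median,
// and max, rather than by the noisy mean.
function summarizeFrameTimes(timesMs) {
    var sorted = timesMs.slice().sort(function (a, b) { return a - b; });
    return {
        min: sorted[0],                                // the "purest" frame time
        median: sorted[Math.floor(sorted.length / 2)], // robust to outliers
        max: sorted[sorted.length - 1]                 // the worst case, which VR developers care about
    };
}

// e.g., summarizeFrameTimes([16.4, 16.7, 16.5, 33.1, 16.6]) gives
// { min: 16.4, median: 16.6, max: 33.1 }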

Anyway, that’s where I’m at with benchmarking. What do you do? Comment here, tweet-reply, or email me at erich@acm.org and I’ll summarize.

p.s. pro tip: walk through your rendering pipeline every once in a while, watching each step. It’s hard to really know where the time goes without doing so. I did this last week while looking at another bug and found a little logic error was causing a certain path to always do an additional post-process when it usually wasn’t needed. Free performance boost with a two-line fix! But, not something discoverable by benchmarking, because the variance is too much to notice “just” a few frames of difference.

This happens every few years. My favorite lucky find was around 15 years ago, walking through code in an established project and seeing that it was rendering twice for each time it displayed. A one-line change gave us 2x performance.

Seven Things for April 6, 2016

Let’s get visual. Last in the series, for now.