Author Archives: Eric

IKEA Reality

I ran across this article from 2014, which is a worthwhile read about IKEA’s transition from real-world photography to virtual. It had an interesting quote:

“…the real turning point for us was when, in 2009, they called us and said, “You have to stop using CG. I’ve got 200 product images and they’re just terrible. You guys need to practise more.” So we looked at all the images they said weren’t good enough and the two or three they said were great, and the ones they didn’t like were photography and the good ones were all CG! Now, we only talk about a good or a bad image – not what technique created it.”

Debugging WebGL with SpectorJS

by Sebastien Vandenberghe

With the growing number of experiences built using WebGL, and all the improvements made in the WebVR/AR space, it is critical to have efficient debugging tools. Whether you are just starting out or are already an experienced developer of 3D applications with WebGL, you likely know how important tools can be for productivity. Looking for such tools, you probably came across Patrick Cozzi’s blog post highlighting the most common ones. Unfortunately, many of these tools are no longer compatible with your project, due to missing WebGL2 features or extensions, such as draw buffers, 3D textures, and so on.

As a core contributor to BabylonJS, working at the engine level, I need to see on a daily basis the entire creation of a frame, including all the available information from the WebGL state (depth, stencil, blend, etc.) as well as the list of commands along with their arguments. In order to optimize the engine, I also need information and statistics about memory, draw calls, and primitives. These needs were a big motivation for me to develop SpectorJS, and as we love the WebGL community, we decided to make it an open-source project, compatible with all existing WebGL 3D engines.

At the end of this walkthrough, you will be able to easily capture and inspect any WebGL frames rendered in your favorite applications. If you have any issues, do not hesitate to report them on Github. To stay informed of all the new features, follow us at @SpectorJS.

 


Installation

Always looking to save time, the tool is directly available as a browser extension: Chrome and Firefox (more browsers are coming soon).

Embedding the library in your application or side-loading the extension are also possible. More information can be found on Github.

Basic Usage

Once installed, you can navigate to any website using WebGL, such as the Babylon JS playground, and you will notice the extension icon turning red in the toolbar.

This highlights the presence of a canvas with a 3D context in the page or its embedded IFrames. Pressing the toolbar button reloads the page and the icon turns green, as Spector is now ready to capture. During the refresh Spector injects additional debug code that collects state and command information, along with other statistics.

Note: We do not enable it by default, so as to not interfere with any WebGL program unless explicitly requested.

Clicking this green button will display a popup helping you to capture frames.

Following the on-screen instructions and clicking the red circle will trigger a capture. If a canvas is selected, you can also, in this menu, pause or play the rendered canvas frame by frame. Once the capture has been completed, a result panel will be displayed containing all the information you may need.

The bottom of the menu helps capture what happens during page load on the first canvases present in the document. You can easily choose the number of commands to capture, as well as specify whether or not you would like to capture the transient context (a context created in the first canvas, even if it is not part of the DOM).

 

Note: A few reasons might prevent you from capturing the context, the main one being that nothing is rendered if the scene is fully static. If this happens, moving the camera after pressing the capture button should be enough to start the capture.

Note: As collecting the information is pretty expensive, the capture may take a long time and you might have to press wait... a few times when the browser notifies you that the page is unresponsive. Unfortunately, we cannot work around this, as the capture needs to happen synchronously during the execution of your code. Without a synchronous capture, the rest of your code continues to react to external events with potential side effects on the capture.

Capture View

The left side of the screen displays all the different visual state changes happening during the creation of the frame, alongside their target frame buffer information. This helps to quickly understand how the frame has been built during troubleshooting sessions. Selecting one of the pictures automatically selects the command associated with it. The visual capture handles all the possible renderable outputs, such as cube textures, 3D textures, draw buffers, render target textures, render buffers, and so on.

The central panel is the commands panel. It displays the list of commands that were executed on the captured context during the frame. These are displayed chronologically. A color code is used to highlight issues and identify draw calls:

  • Orange Background: The selected command.
  • Blue Background: Draw Calls or Clear commands.
  • Green Command Name: Valid Commands (changing state to a new value).
  • Orange Command Name: Redundant Commands (the value applied is the same as the current one; spotting these is useful when optimizing a WebGL application).
  • Red Command Name: Deprecated WebGL Commands.

Selecting a command displays, on the right side, all of its detailed information, including the command name, arguments, and JavaScript call stack. If a draw call has been selected, the various states involved in this call are all available. This is usually a pretty long list of information, as the capture contains the exhaustive list of states, attachments, programs, shaders, attributes, VAOs, uniforms, UBOs, transform feedbacks, and their attached properties. From this panel, the shader source code is also available from the program information, by following the Click to open link:

This opens a beautified view of the shader code, helping to ensure the defines and the code itself are as expected:

Note: Some information might be empty if there is an issue in the engine. For instance, unbound textures might lead to empty uniform information for the sampler. This is usually an interesting warning and more analytics are in progress to help highlight such use cases better.

A few other views are available for each capture.

Init and End State

Once a capture is open, the top command bar includes links to the initial and final state of the capture. This is useful to see how the context was set up before the capture and where it ended up, to help deal with issues happening between frames, for instance.

Context and Frame Information

Commonly there are issues in WebGL applications related to either the canvas or the context setup. To be sure the current setup is correct, the information panel displays all the queryable information. This also contains statistics about the captured frame such as memory information, number of calls of each command, and drawn primitive information.

Sharing Captures

Since we often collaborate with others on projects or use multiple platforms, it is critical to be able to save and share captures. To do this, you can simply navigate to the Captures link in the menu, where all the captures of the session have been stored. Clicking on the floppy icon (nostalgia FTW) downloads the captured JSON file.

To open and view this file, Drag and Drop it on the Extension popup or the Capture list dedicated area. This feature can save a lot of time troubleshooting customer or cross-platform issues.

How to Compare Captures

As it is needed more often than anybody would like, comparing captures after an engine change is a must-have. A full capture comparison is currently under development, but in the meantime, captures can at least be put in different tabs of the browser, making it easier to check differences.

Checking the box in the popup menu forces the next capture to open in a new tab:

Custom Data

Displaying custom information is a nice trick to quickly identify the relationship between a material and its shader or between a mesh and its buffers. Adding custom data to the capture is achievable by adding a special field named __SPECTOR_Metadata to any WebGLObject. Once the field has been set, any command relying on this object displays the related metadata in the property panel.

var cubeVerticesColorBuffer = gl.createBuffer();
cubeVerticesColorBuffer.__SPECTOR_Metadata = { name: "cubeVerticesColorBuffer" };

This enables the visibility of the custom name “cubeVerticesColorBuffer” in the capture Metadata wherever the buffer is in use.

Extension Control

Another interesting feature is the ability to drive the extension by code. Once the extension is enabled, from your browser’s dev tools, or even from your own code, you can call the following APIs on the spector object:

  • captureNextFrame(obj: HTMLCanvasElement | RenderingContext) : Call to begin a capture of the next frame of a specific canvas or context.
  • startCapture(obj: HTMLCanvasElement | RenderingContext, commandCount: number) : Start a capture on a specific canvas or context. The capture will stop once it reaches the number of commands specified as a parameter, or after 10 seconds.
  • stopCapture(): ICapture : Stops the current capture and returns the result in JSON. It displays the result if the UI has been displayed. This returns undefined if the capture has not been completed or did not find any commands.
  • setMarker(marker: string) : Adds a marker that is displayed in the capture, helping you analyze the results.
  • clearMarker() : Clears the current marker from the capture for any subsequent calls.

The “spector” object is available on the window for this purpose.

This can be a tremendous help to capture the creation of your shadow maps, for instance. This can also be used to trigger a capture based on a user interaction or to set markers in your code to better analyze the capture.

The following example could be introduced safely in your code:

// Guard with window.spector so this is safe even when the extension is not enabled.
if (window.spector) {
    window.spector.setMarker("Shadow map creation");
}
[your shadow creation code]
if (window.spector) {
    window.spector.clearMarker();
}
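
Similarly, a capture can be triggered from a user interaction. Here is a minimal sketch, assuming the extension is enabled and that elements with the (made-up) ids "captureButton" and "renderCanvas" exist in the page:

// Hypothetical example: capture the next frame of a specific canvas on a button click.
// "captureButton" and "renderCanvas" are made-up element ids for this sketch.
document.getElementById("captureButton").addEventListener("click", function () {
    if (window.spector) {
        window.spector.captureNextFrame(document.getElementById("renderCanvas"));
    }
});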

Using the Standalone Version

If you prefer to use the library directly in your own application, it is available on npm: spectorjs
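
As a minimal sketch, assuming the package was installed with npm install spectorjs and is loaded through your bundler (check the Github readme for the exact API), embedding it could look like this:

// Minimal embedding sketch; the exact loading mechanism depends on your build setup.
var SPECTOR = require("spectorjs");

var spector = new SPECTOR.Spector();
spector.displayUI(); // shows the capture UI; the programmatic APIs described above work here too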

Going Further

As this extension is pretty new and under active development, a few features have already been discussed for the next releases:

  • Capture Comparison
  • Image Comparison
  • Remote Debugging
  • Shader Editor

Useful links:

  • Website
  • Github
  • Roadmap
  • Report Issues
  • Twitter @SpectorJS

I would like to particularly thank Eric Haines for the time spent to review the article, knowing the challenge it represents considering my English 🙂

GPU Zen == Two Cups of Joe

The book GPU Zen is out. Go get it. This is a book edited by Wolfgang Engel and is essentially the successor to the GPU Pro series. Github code is here.

(Update: there’s a call for participation for GPU Zen 2.)

Full disclosure: I edited one small section, on VR. I forget if I am getting paid in some way. I probably get a free copy if I ask nicely. But, the e-book version is $9.99! So I simply bought one. Not $89.99, as books of this type usually are (even for the electronic edition), but rather the price of two coffees.

It’s a Kindle book. Unlike the GPU Pro and ShaderX books, it’s self-published. It’s mostly meant as an e-book, though you can buy a paperback edition if you insist.

So what’s in the book? Seventeen articles on interactive rendering techniques, in good part by practitioners, and nine have associated code. The book’s 360 pages. As you probably know from similar books, for any given person, most of the articles will be of mild interest at best. There will be a few that are worth knowing about. Then there will be one or two that are gold, that help with something you didn’t know about, or didn’t know how to do, or heard of but didn’t want to reinvent.

For example, Graham Wihlidal’s article “Optimizing the Graphics Pipeline with Compute” is a much-expanded and in-depth explanation of work that was presented at GDC 2016. Trying to figure out his work from the slideset is, I guess, theoretically possible. Now you don’t have to, as Graham lays it all out, along with other insights since his presentation, in a 44 page article. At $89.99, I might want to read it but would think twice before getting it (and I have done so in the past – some books of this type I still haven’t bought, if only one article is of interest).

The detailed explanation of XPerf, ETW, and GPUView in the article by James Hughes et alia on VR performance tuning might instead be the one you find invaluable. Or the two articles on SSAO, or one on bokeh depth-of-field, or – well, you get the idea. Or possibly none of them, in which case you’re out a small bit of cash.

For the whole table of contents, just go to the book’s listing and click on the cover to “Look Inside.”

Me, I usually prefer books made of atoms. But for technical works, if I have to choose, I’m happier overall with electronic versions. For these “collections of articles” books in particular, the big advantage of the e-book is searchability. No more “I vaguely recall it’s in this book somewhere, or maybe one of these three books…” and spending a half-hour flipping through them all. Just search for a term or some particular word and go. Oh, one other cute thing: you can loan the e-book to someone else for 14 days (just once, total, I think).

At $9.99, it’s a minimal-brainer. Order by midnight tomorrow and you’ll get the Ginzu knife set with it. I’ll try to avoid being too much of a huckster here, but honestly, so cheap – you’d have money left for Pete Shirley’s ray tracing e-books, along with Morgan McGuire’s Graphics Codex. I like low-cost and worthwhile. Addendum: if you do buy the paperback, the Kindle “Matchbook” price for the e-book is $2.99. Which is how I think reality should be: buy the expensive atoms one, get the e-book version for a little more, vs. paying full price for each.

7 Colossal Things for May 15, 2017

Haven’t done any “seven things” for a year, so it’s time. This one will just be stuff from the wonderful site This Is Colossal, dedicated to odd ways of making art.

No deep “man’s inhumanity to man” art-with-a-capital-A here, but rather some lovely samples from this wonderful site.

 

Everything is Triangles

I was entertained to see that the new NVIDIA HQ is triangle inspired. Great quote from an interesting article about new technology company offices:

 

“At this point I’m kind of over the triangle shape, because we took that theme and beat it to death,” admits John O’Brien, the company’s head of real estate, who pointedly vetoed a colleague’s recent suggestion to offer triangle-shaped water bottles in the cafeteria.

High Performance Graphics 2017 Call for Participation

The High-Performance Graphics 2017 conference call for participation is here.

Summary: deadline for papers is Friday April 21st. Conference itself is Friday-Sunday, July 28-30, colocated with SIGGRAPH in Los Angeles.

For me, this is one of the two great conferences each year for interactive rendering related papers (SIGGRAPH’s papers selection, for whatever reasons, seems to have mostly moved on to other things).

Moving Graphics Research into Development

guest post by Patrick Cozzi, @pjcozzi

[This is a eat your veggies/floss your teeth type of article. Nothing revolutionary; rather, good advice if you don’t already know it, and better advice if you do and need a reminder or may have missed a trick. My only “eat with your mouth closed” addition would be, “add comments as you go,” mainly because I’m wading through someone’s poorly-commented code this week. Your bonus payoff for reading this article through is seeing some nice visualizations at the end. – Eric]

The Penn graphics students I work with on MS thesis, senior design, and GPU course projects, and my colleagues working on Cesium, are all implementing fairly recent graphics research.

This article presents tips for implementing research that I have learned through hands-on development and through mentoring students and practitioners. There seems to be a huge difference in productivity depending on how we navigate papers and how we approach implementing them.

Implementation is a great way to generate new ideas, but this article is not specifically about generating new research; it is about utilizing existing research to solve a particular problem.

Finding Papers

A quick Google search usually provides prominent papers. I also check Ke-Sen Huang’s website, which has papers from SIGGRAPH, I3D, Eurographics, and several other conferences.

Once you have a good paper, finding more is easy:

  • Follow the references backwards to the seminal work.
  • Go to each author’s website and institute’s website and check their publications. For example, for point clouds, I like the work by Enrico Gobbetti at CRS4.
  • Search for the paper on Google Scholar and trace the most prominent papers that cite it. Google Scholar is also useful for searching papers published in the past n years, which is great for culling old papers, e.g., CLOD terrain algorithms that are no longer appropriate for today’s GPUs.
  • Ask for recommended papers on twitter, seriously.

Quickly identifying and avoiding irrelevant papers is key to staying focused in the right direction.

How to Read a Paper

Skim it first

Assuming I have some understanding of the topic, it takes me about three hours to review an eight-page paper submission for a conference or journal.

When I’m not reviewing for a committee, and instead looking for papers on a particular topic, I don’t read a paper that carefully on the first pass. When I first started reading papers, I spent too much time reading papers that were tangential. This led to a lot of wasted time going down dead-end paths.

Instead, I suggest reviewing the figures and reading the Abstract, Introduction, and Results sections before dedicating time to a complete read. Also check out the video, demo, and source code if available. You may quickly find that the approach won’t work for you because, for example, it is not fast enough for real-time, relies on features not supported by your target graphics API, relies on an expensive preprocess step, etc. With that said, reading related though tangential papers, if you have the time, still generates potentially useful ideas.

Understand the previous work

If the paper appears relevant, but I don’t have the background to fully understand it, I try to find the seminal work reference in the Previous Work section and read it. Google Scholar can help here since it will report how many times a paper was cited, a useful measure but not ground truth. If you follow the previous work far enough, you may end up with papers written in the 1970s or 80s, which are fun to read for their simplicity (by today’s standards) and influence. For example, enjoy Particle Systems – A Technique for Modeling a Class of Fuzzy Objects, 1983, by William Reeves.

Survey papers and the Previous Work chapters in PhD/MS theses are also great places to look for background. They distill each relevant paper down to its essence and give a framework for the subject. For example, A Developer’s Survey of Polygonal Simplification Algorithms (2001, David Luebke) and Technical Strategies for Massive Model Visualization (2008, Enrico Gobbetti, Dave Kasik, and Sung eui Yoon) led to the bulk of the work I read for my MS thesis.

Iterate

Once I’ve found a paper that I think I want to implement, I often need to read it – or at least parts of it – multiple times to gain a solid understanding.

I interleave reading with implementation. Reading deepens my understanding to help me code, and coding deepens my understanding to help me read.

If you have the luxury of no other outside work, you might start the morning coding without even checking email, then check email after lunch, and then spend the afternoon reading so you have fresh ideas for coding the next morning. You’ll quickly have more ideas than time. Choose carefully and keep a record of those not yet examined. Often, when I go back and look at my notes, I am happy that I didn’t spend time on many of the ideas that, in retrospect, would not have been as impactful.

Reach out

Paper authors are often easily accessible via email or twitter. Ask them a specific question that shows you’ve done your research, and they are likely to reply. After all, they are interested in the same topic as you. They know their work very well; one time, an author found a bug in our translucency implementation just by looking at a screenshot!

How to implement research

The following advice applies to coding in general, but I think it is particularly relevant to implementing graphics research with non-trivial data structures and algorithms.

Start small and iterate

Don’t implement the whole paper at once. Implement the smallest useful – or even not so useful – feature, verify that it works, and build on it, verifying the results each step of the way. Get something working first, then make it fast and robust.

Verify, verify, verify. Double check the code flow in the debugger, measure the performance early, and test with simple scenarios before complex ones. When the students in our GPU course implement a rasterizer, they start with a triangle model, then a box, and then the COLLADA duck.

Implementing an out-of-core spatial data structure? Start with an in-core one. Implementing a complex GPU algorithm? Perhaps starting with a CPU implementation first is useful and gives us something to benchmark against.

As the code starts to stabilize, add unit tests. For this type of work, I don’t add unit tests too soon since they would break often.

Report statistics

At the start, take the time to add code to report key statistics about the algorithm.

For example, in the out-of-core spatial data structures we use for streaming massive 3D models, we track the number of nodes in memory, nodes visited, nodes rendered, number of pending network requests, number of received requests that are processing, etc.

Watching these stats gives us a very quick indicator if things are working properly. When I wrote the cache replacement algorithm to unload nodes from memory, I first added the relevant stats reporting so I could monitor them during development. I also started with a super-simple test case with a cache size of 1 or 2 tiles.
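
As a trivial sketch (the counter names here are invented for illustration), such reporting can be as simple as an object of counters that the algorithm increments, plus a function that prints them each frame:

// Sketch: counters incremented by the algorithm, printed once per frame.
// The names are hypothetical and would match whatever your data structure tracks.
var stats = {
    nodesInMemory: 0,
    nodesVisited: 0,
    nodesRendered: 0,
    pendingRequests: 0
};

function reportStats() {
    console.log('in memory: ' + stats.nodesInMemory +
        ', visited: ' + stats.nodesVisited +
        ', rendered: ' + stats.nodesRendered +
        ', pending requests: ' + stats.pendingRequests);
}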

Test parameters

Also at the start, make it simple to tune key parameters. If you’re using JavaScript, dat.GUI makes it really easy to map a UI to variables.

Tuning parameters is great for understanding an algorithm, testing our implementation’s robustness, and performance testing, e.g., quickly seeing how changing the number of dynamic lights impacts a deferred shading engine.
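
For example, a minimal dat.GUI sketch (the parameter names are invented for illustration) could look like this:

// Minimal dat.GUI sketch; the parameter names are hypothetical.
var params = {
    numLights: 32,
    enableShadows: true
};

var gui = new dat.GUI();
gui.add(params, 'numLights', 1, 256).step(1); // numeric slider
gui.add(params, 'enableShadows');             // checkbox

// The render loop then reads params.numLights and params.enableShadows each frame.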

[Note: the full-sized images can be downloaded from the article’s repo. – Eric]


Renderer with debug options to turn on/off different parts of the pipeline and debug views.

Visualize everything

As graphics developers, we love to see the results of our code. Debugging aids that visualize results are just as enjoyable, and can yield deeper insights and intuition. Some examples:

  • bounding volumes
  • wireframe
  • g-buffers in a deferred shader
  • tiles in a tile-based deferred shader
  • freeze frame to review culling results
  • shadow maps, including cascades

A couple of examples:

Left: Grand Canyon. Right: Wireframe showing skirts used to avoid cracks between tiles, how high frequency areas are more finely triangulated, and some sense of overdraw.

Left: View of Crater Lake (186 draw calls). Right: Freeze frame viewing tiles with their tile coordinates from a different perspective. Images from Graphics Tech in Cesium – The Graphics Stack.

Sometimes a graphics API debugging tool such as Renderdoc or WebGL Inspector is enough to review buffers, textures, shaders, etc. I also find engine-specific tools useful since they are higher-level, e.g., they may color objects based on a shadow-map cascade, whereas a graphics API tool may just show the shadow-map textures. Time spent on and using tools always pays for itself in fewer bugs, deeper performance insights, and creating screenshots for documentation and even twitter.

Debug visualizations are useful because when we can visualize something, an insight often becomes obvious. For example, look at how bad bounding spheres for Cesium’s terrain tiles are compared to oriented bounding boxes.

A visualization gives us an immediate sense, and then our stats reporting gives precise numbers. For example, note how the visualization below complements the statistics for using fog to optimize terrain rendering by culling tiles in the far distance and increasing the geometric error for tiles in the mid-distance.

Write

I thought I knew a lot about virtual globe rendering until I tried to coauthor a book about it; 520 pages later, I knew the topic much better and had lots of new ideas. Whether it is a blog post, paper, or entire book, writing deepens our understanding and helps us generate new ideas. It also helps the field move forward as we build on each other’s work.

Real-World Sampling “Artifact”

[A repost, due to WordPress weirdness – sorry about that. Note to self: don’t paste images into WordPress, always upload and insert them.]

I’m seeing this more and more in my neighborhood in the evening:

It’s the shadow of a tree on pavement, superimposed 3 times. It’s because they’ve been installing new LED streetlights with 3 bulbs.

Hard for me to photograph the light source well, but a reflection of some sort in the camera shows the three bulbs in the upper right:


It’s like the artifacts you see when anyone tries to approximate an area light with point lights. So with the advent of LEDs, I guess we won’t need area light sampling algorithms as much?

Maybe one for the Real Artifacts gallery.

WebGL 2: New Features

by Shuai Shao
github repo for article here

Last time we showed how to deal with issues porting a WebGL 1 engine to WebGL 2. In this article, we will talk about what new features come with WebGL 2 and what cool things can we do with them.

New features

Multisampled Renderbuffers

Previously, if we wanted antialiasing we would either have to render it to the default backbuffer or perform our own post-process AA (such as FXAA or SMAA) on content rendered to a texture.

Now, with Multisampled Renderbuffers, we can use the general rendering pipeline in WebGL to provide multisampled antialiasing (MSAA):

pre-z pass –> rendering pass to FBO –> postprocessing pass –> render to window

renderbufferStorageMultisample is the relevant function here.

var colorRenderbuffer = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, colorRenderbuffer);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, 4, gl.RGBA8, FRAMEBUFFER_SIZE.x, FRAMEBUFFER_SIZE.y);

gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffers[FRAMEBUFFER.RENDERBUFFER]);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, colorRenderbuffer);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);

Note that multisampled renderbuffers cannot be directly bound as textures, but they can be resolved to single-sample textures using the blitFramebuffer call. This is a new feature in WebGL 2 as well, and is used like this:

var texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, FRAMEBUFFER_SIZE.x, FRAMEBUFFER_SIZE.y, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.bindTexture(gl.TEXTURE_2D, null);

gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffers[FRAMEBUFFER.COLORBUFFER]);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);

// ...

// After drawing to the multisampled renderbuffers
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, framebuffers[FRAMEBUFFER.RENDERBUFFER]);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, framebuffers[FRAMEBUFFER.COLORBUFFER]);
gl.clearBufferfv(gl.COLOR, 0, [0.0, 0.0, 0.0, 1.0]);
gl.blitFramebuffer(
    0, 0, FRAMEBUFFER_SIZE.x, FRAMEBUFFER_SIZE.y,
    0, 0, FRAMEBUFFER_SIZE.x, FRAMEBUFFER_SIZE.y,
    gl.COLOR_BUFFER_BIT, gl.NEAREST
);

3D Texture

The first thing that comes to mind with 3D textures is volumetric effects, such as fire, smoke, light rays, realistic fog, etc. Now we can bring these features into our WebGL engine. In addition, 3D textures can be used to store medical data such as MRI and CT scans, and are useful when implementing cross-sectioning. 3D textures can also improve performance by using them to cache light for real-time global illumination.

WebGL 2 support for 3D textures is as good as that for 2D textures: we have fast access and native trilinear interpolation.

Most of the functions for setting up a 3D texture have a direct 2D texture counterpart:

Texture 2D                  Texture 3D
texImage2D                  texImage3D
texSubImage2D               texSubImage3D
copyTexSubImage2D           copyTexSubImage3D
compressedTexImage2D        compressedTexImage3D
compressedTexSubImage2D     compressedTexSubImage3D
texStorage2D                texStorage3D

There are certain elements that do not match exactly. For example, since we have one more dimension, we have depth and zoffset parameters, and TEXTURE_WRAP_R for 3D textures. Also, the internal format and type combinations are not 100% matched.

The sampler used in shaders is sampler3D instead of sampler2D.

Here’s an example setup:

var texture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_3D, texture);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_BASE_LEVEL, 0);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MAX_LEVEL, Math.log2(SIZE));
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

gl.texImage3D(
    gl.TEXTURE_3D,  // target
    0,              // level
    gl.R8,        // internalformat
    SIZE,           // width
    SIZE,           // height
    SIZE,           // depth
    0,              // border
    gl.RED,         // format
    gl.UNSIGNED_BYTE,       // type
    data            // pixel
    );

gl.generateMipmap(gl.TEXTURE_3D);

Last but not least, the 2D texture array concept comes along with the 3D texture feature. That is, multiple 2D textures can be stored in an array and indexed from the shader. It has its own sampler, sampler2DArray, but it shares the texImage3D GL functions. Here’s an example call:

gl.texImage3D(
    gl.TEXTURE_2D_ARRAY,
    0,
    gl.RGBA,
    IMAGE_SIZE.width,
    IMAGE_SIZE.height,
    NUM_IMAGES,
    0,
    gl.RGBA,
    gl.UNSIGNED_BYTE,
    pixels
);

Uniform Buffer

Setting uniforms for shaders often takes a considerable amount of an engine’s time. Take the Cesium Globe as an example. For regular draw calls, uniform4fv is within the top 5 GL functions taking the most execution time. Also, the sum of all uniform[i]fv and uniformMatrix[i]fv calls is nearly 2.5% of all execution time. That’s quite a large percentage, and we have to call them to update uniform values each frame. What’s more, it can be annoying to have to make duplicate uniform calls for the same uniform values shared by several shaders.

Now the Uniform buffer object may bring us a boost in performance by allowing us to store blocks of uniforms in buffers stored on the GPU, just like vertex/index buffers. This can make switching between sets of uniforms faster. Additionally, uniform buffers can be shared by multiple programs at the same time.

Those are quite a few benefits. But, with so many improvements, the setup routine is about to change a lot. We will have a basic setup example first, and then look at something that might need your attention.

var uniformPerSceneLocation = gl.getUniformBlockIndex(program, 'PerScene');
gl.uniformBlockBinding(program, uniformPerSceneLocation, 2);
//...
var material = new Float32Array([
    0.1, 0.0, 0.0,  0.0,
    0.0, 0.5, 0.0,  0.0,
    0.0, 0.0, 0.5,  0.0,
    128.0, 0.0, 0.0, 0.0
]);
var uniformPerSceneBuffer = gl.createBuffer();
gl.bindBuffer(gl.UNIFORM_BUFFER, uniformPerSceneBuffer);
gl.bufferData(gl.UNIFORM_BUFFER, material, gl.STATIC_DRAW);
gl.bindBuffer(gl.UNIFORM_BUFFER, null);
//...
// Render
gl.bindBufferBase(gl.UNIFORM_BUFFER, 2, uniformPerSceneBuffer);

The first thing that may confuse you is the layout standard (we will focus on std140 here). You can always find the details in the OpenGL ES 3.00 spec, page 68.

One thing that I really want you to notice is:

when the data member is a three-component vector with components consuming N basic machine units, the base alignment is 4N

And:

If the member is a structure, the base alignment of the structure is N, where N is the largest base alignment value of any of its members, and rounded up to the base alignment of a vec4.

Here’s an example:

struct Material
{
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
    float shininess;
};

uniform PerScene
{
    Material material;
} u_perScene;

var material = new Float32Array([
    0.1, 0.0, 0.0,  0.0,
    0.0, 0.5, 0.0,  0.0,
    0.0, 0.0, 0.5,  0.0,
    128.0, 0.0, 0.0, 0.0
]);

Here we have vec3s, so our data should add a 0 after each vec3 for alignment. Also, since we use a struct to wrap our data, its size should be rounded up to a multiple of a vec4. That’s why we have 3 extra zeroes after the float shininess.

Another concern is about updating the uniform block. There are several different approaches that can get us there. However, their performance can vary. It’s pretty tricky to make the most of our uniform block.

Here are some detailed discussions on Stack Overflow and gamedev.net.

But, basically, we can use gl.bufferSubData to copy an updated typed array into the uniform buffer.
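
As a minimal sketch that reuses the uniformPerSceneBuffer and material array from the setup example above:

// Sketch: update the uniform buffer contents with bufferSubData.
material[0] = 0.3; // e.g., tweak the ambient red component
gl.bindBuffer(gl.UNIFORM_BUFFER, uniformPerSceneBuffer);
gl.bufferSubData(gl.UNIFORM_BUFFER, 0, material);
gl.bindBuffer(gl.UNIFORM_BUFFER, null);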

Sync Objects

Sync objects can be used to synchronize execution between the GL server and the client, which gives you more control over the GPU by letting you place a fence and then wait until a set of GL operations has finished. Sync objects are more efficient than gl.finish.

We can get more accurate benchmarks with sync objects. In addition, applications such as image manipulation, where data of each frame comes from the CPU, will benefit from this degree of control.
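
Here is a minimal sketch of one way a fence might be used, polling it without blocking:

// Sketch: insert a fence after a batch of GL commands, then poll it without blocking.
var sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);
gl.flush(); // make sure the commands (and the fence) are submitted

function checkSync() {
    var status = gl.clientWaitSync(sync, 0, 0); // timeout of 0: just poll
    if (status === gl.ALREADY_SIGNALED || status === gl.CONDITION_SATISFIED) {
        gl.deleteSync(sync);
        // everything issued before the fence has completed on the GPU
    } else {
        requestAnimationFrame(checkSync);
    }
}
requestAnimationFrame(checkSync);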

Query Objects

This operation is very useful when we want to do occlusion testing. We can tell whether any geometry was actually drawn by performing a gl.ANY_SAMPLES_PASSED query around a set of draw calls, and we can use these queries to get rid of specialized picking code.

Keep in mind that these queries are asynchronous: a query’s result is never available in the same frame that the query is issued. This is different from OpenGL ES 3, where the query result may be available in the same frame, so it is a portability concern for applications.

var query = gl.createQuery();
gl.beginQuery(gl.ANY_SAMPLES_PASSED, query);
gl.drawArraysInstanced(gl.TRIANGLES, 0, 3, 2);
gl.endQuery(gl.ANY_SAMPLES_PASSED);
//...
(function tick() {
    if (!gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE)) {
        // A query's result is never available in the same frame
        // the query was issued.  Try in the next frame.
        requestAnimationFrame(tick);
        return;
    }

    var samplesPassed = gl.getQueryParameter(query, gl.QUERY_RESULT);
    gl.deleteQuery(query);
})();

Sampler Objects

In WebGL 1, texture image data and sampling information (which tells the GPU how to read the image data) are both stored in texture objects. It can be annoying when we want to read from the same texture twice but with a different method (say, linear filtering vs. nearest filtering), because we need to have two texture objects. With sampler objects, we can separate these two concepts: we can have one texture object and two different sampler objects. This will result in a change in how our engine organizes textures.

Here’s an example:

var samplerA = gl.createSampler();
gl.samplerParameteri(samplerA, gl.TEXTURE_MIN_FILTER, gl.NEAREST_MIPMAP_NEAREST);
gl.samplerParameteri(samplerA, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.samplerParameteri(samplerA, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.samplerParameteri(samplerA, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

var samplerB = gl.createSampler();
gl.samplerParameteri(samplerB, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
gl.samplerParameteri(samplerB, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.samplerParameteri(samplerB, gl.TEXTURE_WRAP_S, gl.MIRRORED_REPEAT);
gl.samplerParameteri(samplerB, gl.TEXTURE_WRAP_T, gl.MIRRORED_REPEAT);

// ...

gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.bindSampler(0, samplerA);

gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.bindSampler(1, samplerB);

Transform Feedback

Transform feedback allows the output of the vertex shader to be captured in a buffer object. This is useful for particle systems and simulations that run on the GPU without any CPU intervention.

In WebGL 1, when we want to implement such a feature, a texture storing the states of the particles is usually inevitable. Two textures, to be precise, storing the states from the previous frame and the current frame, ping-ponging between them.

Here’s an example of the WebGL 1 approach (from toji’s WebGL Particles take 2).

In the first-pass fragment shader, do the simulation, and store the position results in a texture.

// First pass - Fragment Shader
uniform sampler2D tPositions;
varying vec2 vUv;
// ...
vec4 runSimulation(vec4 pos) {
    // simulation
    // ...
    return pos;
}

void main() {
    vec4 pos = texture2D( tPositions, vUv );
    //...
    pos = runSimulation(pos);

    // Write new position out
    gl_FragColor = pos;
}

And then we use this position texture as an input for our second pass vertex shader.

// Second pass - Vertex Shader
attribute vec3 position;
uniform float pointSize;
uniform sampler2D map;
varying vec2 vUv;

//...

void main() {
    vUv = position.xy + vec2( 0.5 / width, 0.5 / height );
    vec3 color = texture2D( map, vUv ).rgb;
    gl_PointSize = pointSize;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( color, 1.0);
}

What follows is how we do it in WebGL 2. With Transform feedback, we can discard the fragment shader in step 1, as well as the texture. We write the output (position) of the vertex shader in step 1 to the vertex attribute array input of step 2. (In practice, you still need a placeholder trivial fragment shader for the first step to correctly compile the program.)

// First pass - Vertex Shader
// ...
out vec3 v_position;
void main() {
    // ...
    v_position = (u_projMatrix * u_modelViewMatrix * vec4(a_position, 1.0)).xyz;
}
// Second pass - Vertex Shader
in vec3 a_position;
void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4( a_position, 1.0 );
    // ...
}

And here’s how we bind the buffers (from WebGL2SamplesPack):

var transformFeedback = gl.createTransformFeedback();
var varyings = ['v_position', /*...*/];
gl.transformFeedbackVaryings(programTransform, varyings, gl.SEPARATE_ATTRIBS);
// ...
gl.bindBuffer(gl.ARRAY_BUFFER, particleVBOs[i][Particle.POSITION]);
gl.bufferData(gl.ARRAY_BUFFER, particlePositions, gl.STREAM_COPY);
// ...
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, transformFeedback);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, particleVBOs[(currentSourceIdx + 1) % 2][Particle.POSITION]);
gl.enable(gl.RASTERIZER_DISCARD);   // we are not drawing
gl.beginTransformFeedback(gl.POINTS);

gl.drawArrays(gl.POINTS, 0, NUM_PARTICLES);

gl.endTransformFeedback();
gl.disable(gl.RASTERIZER_DISCARD);

gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, null);

A set of new texture features

Here is a list of the new texture features in WebGL 2.

  • sRGB textures
    Allow the application to perform gamma-correct rendering.

gl.texImage2D(
    gl.TEXTURE_2D,
    0,                // level
    gl.SRGB8,         // internal format
    gl.RGB,           // format
    gl.UNSIGNED_BYTE, // type
    image             // source
);

The sRGB texture will be automatically converted to linear space when being fetched in the shader. For physically-based rendering and other operations we normally want to deal with colors in a linear space, not in display space.

  • Vertex texture for:
    • terrain
    • water
    • skeleton animation
  • Texture LOD

The texture LOD parameter is used to determine which mipmap to fetch from; it can now be clamped. The base and maximum mipmap levels can both be clamped. This allows mipmap streaming, i.e., loading only the mipmap levels currently needed. This is very useful in a WebGL environment, where textures are downloaded over the network.

gl.texParameterf(gl.TEXTURE_2D, gl.TEXTURE_MIN_LOD, 0.0);
gl.texParameterf(gl.TEXTURE_2D, gl.TEXTURE_MAX_LOD, 10.0);
  • ETC2/EAC texture compression

Support for this format is mandatory in WebGL 2; compressed textures give obvious savings in transmission time.

gl.compressedTexImage2D(
    gl.TEXTURE_2D, 
    0, 
    gl.COMPRESSED_RGBA8_ETC2_EAC, 
    IMAGE_SIZE.width, 
    IMAGE_SIZE.height, 
    0, 
    pixels
);
  • Integer textures
  • Non-Power-of-Two Texture
    • texturing video
    • 2D Sprite
  • Floating point textures
    • half-float: High dynamic range imaging
    • full-float: variance shadow maps for soft shadows
    • a feature coming along with floating point textures is the floating point renderbuffer (also with multisample support).
  • Seamless cube map

Cube maps are already available in WebGL 1. What’s new in WebGL 2 is that the cube map is seamless (and is always seamless, unlike in desktop OpenGL, where this is optional). With this feature we are free from using hacks to get rid of the artifacts near the face borders.

  • A set of additional texture formats
textureFormats[TextureTypes.RGB] = {
    internalFormat: gl.RGB,
    format: gl.RGB,
    type: gl.UNSIGNED_BYTE
};

textureFormats[TextureTypes.RGB8] = {
    internalFormat: gl.RGB8,
    format: gl.RGB,
    type: gl.UNSIGNED_BYTE
};

textureFormats[TextureTypes.RGB16F] = {
    internalFormat: gl.RGB16F,
    format: gl.RGB,
    type: gl.HALF_FLOAT
};

textureFormats[TextureTypes.RGBA32F] = {
    internalFormat: gl.RGBA32F,
    format: gl.RGBA,
    type: gl.FLOAT
};

textureFormats[TextureTypes.R16F] = {
    internalFormat: gl.R16F,
    format: gl.RED,
    type: gl.HALF_FLOAT
};

textureFormats[TextureTypes.RG16F] = {
    internalFormat: gl.RG16F,
    format: gl.RG,
    type: gl.HALF_FLOAT
};

textureFormats[TextureTypes.RGBA] = {
    internalFormat: gl.RGBA,
    format: gl.RGBA,
    type: gl.UNSIGNED_BYTE
};

textureFormats[TextureTypes.RGB8UI] = {
    internalFormat: gl.RGB8UI,
    format: gl.RGB_INTEGER,
    type: gl.UNSIGNED_BYTE
};

textureFormats[TextureTypes.RGBA8UI] = {
    internalFormat: gl.RGBA8UI,
    format: gl.RGBA_INTEGER,
    type: gl.UNSIGNED_BYTE
};

New GLSL 3.00 ES Shader

And here comes our new shading language: GLSL 3.00 ES! This new version brings in a bunch of new features that are not in GLSL 1.00. But the grammar has changed in places, so converting over can be quite painful at the start.

Note that a shader in GLSL 1.00 is still fully supported in a WebGL 2 context. It’s only the GLSL 3.00 ES grammar that doesn’t have backwards compatibility with GLSL 1.00. Only when a #version 300 es tag is added at the top of the shaders will the GLSL 3.00 ES version be turned on.

We will quickly list here a bunch of new features and new built-in functions in GLSL 3.00 ES.

  • Layout qualifiers

Vertex shader inputs can now be declared with layout qualifiers to explicitly bind the location in the shader source, without requiring gl.getAttribLocation calls. These are declared like this:

#version 300 es
#define POSITION_LOCATION 0
#define TEXCOORD_LOCATION 4
// ...
layout(location = POSITION_LOCATION) in vec2 position;
layout(location = TEXCOORD_LOCATION) in vec2 texcoord;

var vertexPosLocation = 0; // set with GLSL layout qualifier
gl.enableVertexAttribArray(vertexPosLocation);
gl.bindBuffer(gl.ARRAY_BUFFER, vertexPosBuffer);
gl.vertexAttribPointer(vertexPosLocation, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);

var vertexTexLocation = 4; // set with GLSL layout qualifier
gl.enableVertexAttribArray(vertexTexLocation);
gl.bindBuffer(gl.ARRAY_BUFFER, vertexTexBuffer);
gl.vertexAttribPointer(vertexTexLocation, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);

The same applies to fragment shader outputs. Layout qualifiers can also be used to control the memory layout for uniform blocks.

  • Non-square matrix

Quite straightforward. One use case is replacing a 4×4 affine matrix where the last row is (0, 0, 0, 1) with a 4×3 matrix.
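
As a sketch of the JavaScript side (u_modelView43 is a made-up name for a mat4x3 uniform), the new non-square uniformMatrix calls look like this:

// Sketch: upload a mat4x3 (4 columns x 3 rows, column-major); the affine matrix's
// last row (0, 0, 0, 1) is implicit and need not be stored.
// "u_modelView43" is a hypothetical uniform name.
var location = gl.getUniformLocation(program, 'u_modelView43');
gl.uniformMatrix4x3fv(location, false, new Float32Array([
    1, 0, 0, // column 0
    0, 1, 0, // column 1
    0, 0, 1, // column 2
    0, 0, 5  // column 3: translation (0, 0, 5)
]));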

  • Full integer support

Built-in functions can now take an integer as an input variable.

  • Flat/smooth interpolators

We can now explicitly declare flat interpolators to have flat shading.

  • Centroid sampling

This is used to avoid rendering artifacts when multisampling. Read this article for more details. Here is a WebGL 2 Sample of centroid sampling.

  • New built-in functions

Some very handy functions such as textureOffset, texelFetch, dFdx, textureGrad, textureLod, etc. You can always find the complete list in the GLSL 3.00 ES spec.

  • gl_InstanceID and gl_VertexID

These allow identification of instances and vertices within the shader.

  • Other function name changes

For example, texture2D(sampler2D sampler, vec2 coord) and textureCube(samplerCube sampler, vec3 coord) (and texture3D, if our old version had one) are now replaced with a set of overloaded texture functions:

gvec4 texture (gsampler2D sampler, vec2 P [, float bias] )
gvec4 texture (gsampler3D sampler, vec3 P [, float bias] )
gvec4 texture (gsamplerCube sampler, vec3 P [, float bias] )


Minecon 2016 Report

Say what? Minecon is a convention for Minecraft, so why in this blog? Well, I was invited to be on a panel about 3D printing Minecraft models, since I wrote Mineways (which gets a crazy 600 downloads a day; beats me who all these people are. I think it’s a case of 600 download it, 60 try it, 6 try it more than once, 0.6 become real users).

David Ng in the Mineways hat

It was a bit odd going to this convention, especially since it was at the Anaheim Convention Center, where I was just two months ago for SIGGRAPH. This convention is the same size as SIGGRAPH, 12,000 attendees plus panelists, staff, exhibitors, etc. One organizer said a total of 14K attended. Of course, the tickets for Minecon sold out in 6 minutes – there are over 100 million copies of Minecraft sold and a lot of fanatical users out there. It’s only two days long, it uses somewhat less (and sometimes more) of the convention center’s space, and the median age of an attendee is probably around 12. My photo album is here, note in particular this one, where just about everyone is in one very long room – that’s something I don’t see at SIGGRAPH.

Not SIGGRAPH

It’s not all just kids and pixelated blocks. There were a few good general technical talks about VR, video techniques, voxel modeling methods, etc. For example, John Carmack and others from Oculus spoke about the challenges of porting Minecraft to the Rift. Scattered throughout this presentation are some interesting bits about the user experience. Some clever ideas, such as the person jumping a short distance actually getting a view that travels in a straight line, though his friends see him jumping in a parabola (parabolas upset the stomach more). Playing the ambient music actually helps stave off motion sickness for a while, or so they think. They have a “chill out” feature that lets you leave the action and hang out in a quiet virtual room, looking at the game through a 2D window. Various other things. They spent 6 months polishing the VR version before releasing it (despite a fair bit of pressure to get it out the door in a minimal-port form). Best/ickiest Carmack quote, “Don’t push it. We don’t need to be cleaning up sick in the demo room.” Honestly, an interesting session, with hints of the political pressure on the team. I’m impressed that they were able to take the time to polish the experience, given how an early release of the game’s port would undoubtedly help drive sales.

BlockWorks had a session on how they did their voxel modeling, with some slides of their incredible constructions. Seriously, click that link. It was interesting in that each artist tends to have a specialty – architecture, organics, mechanicals, etc. They use the free Chunky path tracer (great tool! I’ve played with it.) along with traditional renderers such as Cinema 4D. I would have liked to hear more about their custom voxel-based modeling tools, but alackaday, not much mention beyond WorldEdit and VoxelSniper. Also, they avoid using mesh voxelizers such as binvox and Qubicle. (Qubicle, by the way, looks like a nice package for All Things Voxel, including a mesh voxelizer that retains the color of the mesh.) Related, though not at Minecon, this article about RenderMan and Minecraft – a good detailed “how to” read.

You remember the Visible Human Project? I laughed when I saw this, and talked at length with its creator, Wizard Keen, aka Adam Clarke.

1:40 into The Torso

I also met some people working on Spigot, the unofficial mod platform for Minecraft. One explained to me in depth how Minecraft’s impressive terrain generation worked, as he had carefully decompiled it all in order to make a mod. Basically, there’s pass after pass of applying Perlin Noise in various ways. First the overall terrain, in a single block type. Then put biomes on. Then for each biome add the “topsoil layer” (e.g. grass and three dirt blocks atop everything for the grasslands). Carve a bit more. Add tunnels separately. Add minerals. On and on. What was interesting to me was that there was this layered approach at all, that it wasn’t some giant single-pass function but rather each pass had its own function, e.g., add topsoil.

Other than the lead developer of Minecraft, YouTubers were the stars of the con. By some estimates, 15% of YouTube’s content is videogame related, and Minecraft dominates (GTA’s second). (Hey, before I did 12 steps for Minecraft, I made well over a hundred videos, mostly not worth watching. Here’s a reason why I love Minecraft.) The two main video editing packages used for Minecraft videos are Premiere Pro & Vegas Pro. People use FRAPS, OBS, and Action for video capture.

Tidbit: this Minecraft-related Hour of Code lesson from code.org is evidently extremely popular. Point kids at it, it uses a visual code editor (Blockly) to introduce “if” statements, loops, etc.

Entirely non-Minecraft, but I learned about it at Minecon: I’m probably the last person on Twitter to know about tweetdeck, which makes Twitter a bit more friendly.

My little talk at the panel went well, slides here. I particularly like that David Ng is using Minecraft with his students to build physical worlds.

Oh, and videos. I’d say the most amazing videos I saw were in the Rube Goldberg session; short session preview here, worth your while. The coaster rides by Nuropsych1 and others were astounding. If you want to veg out (and wait through ads), check the Dr. Who, GhostBusters, and Beetle Juice coasters, among others. Impressive builds, great lighting effects and optical illusions, lovely redstone (electronics) work, camera tricks, and it’s astounding it can all be done within Minecraft.

4:18 into the Dr. Who coaster – click that link right now! https://youtu.be/AKyt12Ezh3s?t=258