I got to attend HPG this year, which was a fun experience. At smaller, more focused conferences like EGSR and HPG you can actually meet all the other attendees. The papers are also more likely to be relevant than at SIGGRAPH, where the subject matter of the papers has become so broad that they rarely seem to relate to graphics at all.
I’ve written about the HPG 2009 papers twice before, but six of the papers lacked preprints, so it was hard to judge their relevance. With the proceedings in hand, I can take a closer look. The “Configurable Filtering Unit” paper is now available on Ke-Sen Huang’s webpage, and the rest are available at the ACM digital library. The presentation slides for most of the papers (including three of these six) are available at the conference program webpage.
A Directionally Adaptive Edge Anti-Aliasing Filter – This paper describes an improved MSAA mode AMD has implemented in their drivers. It does not require changing how the samples are generated, only how they are resolved into final pixel colors; this technique can be implemented on any system (such as DX10.1-class PCs, or certain consoles) where shaders can access individual samples. In a nutshell, the technique inspects samples in adjacent pixels to more accurately compute edge location and orientation.
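To make the per-sample resolve idea concrete, here is a minimal CPU-side sketch in Python. It assumes a luminance-only 4x MSAA buffer stored as a numpy array; the gradient test and the gather pattern are my own simplifications for illustration, not the filter from the paper.

```python
import numpy as np

def adaptive_resolve(samples, x, y):
    """Toy resolve of one pixel of a 4x MSAA buffer.

    samples: (H, W, 4) array of per-sample luminances (a real resolve
    would use full RGB colors). This only illustrates the idea of
    inspecting neighboring pixels' samples near edges.
    """
    # 3x3 block of ordinary box-resolved values around the pixel.
    neigh = samples[y-1:y+2, x-1:x+2].mean(axis=2)

    # Estimate edge orientation from the luminance gradient of that block.
    gx = neigh[1, 2] - neigh[1, 0]
    gy = neigh[2, 1] - neigh[0, 1]
    grad = np.hypot(gx, gy)
    if grad < 0.1:                        # no strong edge: plain box resolve
        return samples[y, x].mean()

    # The edge runs perpendicular to the gradient; also gather the samples
    # of the two neighbors lying along the edge and average everything.
    ex, ey = -gy / grad, gx / grad
    sx, sy = int(round(ex)), int(round(ey))
    gathered = np.concatenate((samples[y, x],
                               samples[y + sy, x + sx],
                               samples[y - sy, x - sx]))
    return gathered.mean()
```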
Image Space Gathering – This paper from NVIDIA describes a technique where sharp shadows and reflections are rendered into offscreen buffers, upon which an edge-aware blur operation (similar to a cross bilateral filter) is used to simulate soft shadows and glossy reflections. The paper was targeted for ray-tracing applications, but the soft shadow technique would work well with game rasterization engines (the glossy reflection technique doesn’t make sense for the texture-based reflections used in game engines, since MIP-mapping the reflection textures is faster and more accurate).
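For readers unfamiliar with this kind of filter, here is a rough Python sketch of an edge-aware gather on a hard-shadow buffer, guided by a depth buffer. The parameter names, the Gaussian weighting, and the use of depth as the edge guide are my assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def edge_aware_blur(shadow, depth, radius=4, sigma_s=2.0, sigma_d=0.05):
    """Cross-bilateral-style blur of a hard-shadow mask, guided by depth.

    shadow, depth: (H, W) float arrays. Spatial Gaussian weights are
    combined with a range term on depth so the blur does not bleed
    across geometric discontinuities.
    """
    h, w = shadow.shape
    out = np.zeros_like(shadow)
    for y in range(h):
        for x in range(w):
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny = min(max(y + dy, 0), h - 1)
                    nx = min(max(x + dx, 0), w - 1)
                    ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                    wd = np.exp(-((depth[ny, nx] - depth[y, x]) ** 2) / (2 * sigma_d ** 2))
                    acc += ws * wd * shadow[ny, nx]
                    wsum += ws * wd
            out[y, x] = acc / wsum
    return out
```

In a real engine this would of course be a shader pass over offscreen buffers; the point is simply that the range term on depth keeps the blur from smearing shadows across geometric edges.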
Scaling of 3D Game Engine Workloads on Modern Multi-GPU Systems – Systems with multiple GPUs used to be extremely rare, but they are becoming more common (mostly in the form of multi-GPU cards rather than multi-card systems). This paper appears to do a thorough analysis of how game workloads scale on these systems, but the workloads used are unfortunately pretty old (the newest game analyzed was released in 2006).
Bucket Depth Peeling – I’m not a big fan of depth peeling systems, since they invest massive resources (rendering the scene multiple times) to solve a problem which is pretty marginal (order-independent transparency). This paper solves the multi-pass issue, but is profligate with a different resource – bandwidth. It uses extremely fat frame buffers (128 bytes per pixel).
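A quick back-of-the-envelope calculation shows what that per-pixel cost means in practice (the 1920x1080 resolution is my own example, not a figure from the paper):

```python
# Rough storage cost of a 128-byte-per-pixel frame buffer at an assumed
# 1920x1080 render target.
width, height, bytes_per_pixel = 1920, 1080, 128
print(width * height * bytes_per_pixel / 2**20)  # ~253 MiB for this one buffer
```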
CFU: Multi-purpose Configurable Filtering Unit for Mobile Multimedia Applications on Graphics Hardware – This paper proposes that hardware manufacturers (and API owners) add a set of extensions to fixed-function texture hardware. The extensions are quite useful, and enable accelerating a variety of applications significantly (around 2X). Seems like a good idea to me, but Microsoft/NVIDIA/AMD/etc. may be harder to convince…
Embedded Function Composition – The first two authors on this paper are Turner Whitted (inventor of recursive ray tracing) and Jim Kajiya (who defined the rendering equation). So what are they up to nowadays? They describe a hardware system where configurable hardware for 2D image operations is embedded in the display device, after the frame buffer output. The system is targeted at applications such as font rendering and 2D overlays. The way in which operations are defined is quite interesting, resembling FPGA configuration more than shader programming.
Besides the papers, HPG also had two excellent keynotes. I missed Tim Sweeney’s keynote (the slides are available here), but I was able to see Larry Gritz’s keynote. The slides for Larry’s keynote (on high-performance rendering for film) are also available, but are a bit sparse, so I will summarize the important points.
Larry started by discussing the differences between film and game rendering. Perhaps the most obvious one is that games have fixed performance requirements, and quality is negotiable; film has fixed quality requirements, and performance is negotiable. However, there are also less obvious differences. Film shots are quite short – about 100-200 frames at most; this means that any precomputation, loading or overhead must be very fast, since it is amortized over so few frames (it is rare that any precomputation or overhead from one shot can be shared with another). Game levels last for many tens of thousands of frames, so loading time is amortized more efficiently. More importantly, those frames are multiplied by hundreds of thousands of users, so precomputation can be quite extensive and still pay off. Larry makes the point that comparing the 5-10 hours/frame which is typical of film rendering with the game frame rate (60 or 30 fps) is misleading; a fair comparison would include game scene loading times, tool precomputations, etc. The important bottleneck for film rendering (equivalent to frame rate for games) is artist time.
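A rough back-of-the-envelope comparison makes the amortization argument concrete; the film numbers are the ones quoted above, while the game-side level length and player count are my own illustrative assumptions:

```python
# Film: render time is amortized over a single short shot.
shot_frames, hours_per_frame = 150, 8          # "100-200 frames", "5-10 hours/frame"
print(shot_frames * hours_per_frame)           # ~1200 render-hours for one shot

# Games: precomputation is amortized over every frame seen by every player.
level_frames = 30 * 60 * 30                    # assume a 30-minute level at 30 fps
players = 200_000                              # "hundreds of thousands of users"
print(level_frames * players)                  # ~1e10 frames sharing the same precomputation
```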
Larry also discussed why film rendering doesn’t use GPUs: the data for a single frame doesn’t fit in video memory, rooms full of CPU blades are very efficient (in terms of both Watts and dollars), and the programming models for GPUs have yet to stabilize. Larry then discussed the reasons that, in his opinion, ray tracing is better suited for film rendering than the REYES algorithm used in Pixar’s Renderman. As background, it should be noted that Larry presides over Sony Pictures Imageworks’ implementation of the Arnold ray tracing renderer, which they are using to replace Renderman. An argument for replacing Renderman with a full ray-tracing renderer is especially notable coming from Larry Gritz; Larry was the lead architect of Renderman for some time, and has written one of the more important books popularizing it. Larry’s main points are that REYES has inherent inefficiencies, it is harder to parallelize, effects such as shadows and reflections require a hodgepodge of special-purpose tricks, and once global illumination is included (now common in Renderman projects) most of REYES’ inherent advantages go away. After switching to ray tracing, SPI found that they needed to render fewer passes, lighting is simpler, the code is more straightforward, and the artists are more productive. The main downside is that displacing geometric detail is no longer “free” as it was with REYES.
Finally, Larry discussed why current approaches to shader programming do not work that well with ray tracing; they have developed a new shading language which works better. Interestingly, SPI is making this available under an open-source license; details on this and other SPI open-source projects can be found here.
I had a chance to chat with Larry after the keynote, so I asked him about hybrid approaches that use rasterization for primary visibility and ray tracing for shadows, reflections, etc. He said such approaches have several drawbacks for film production. Having two different representations of the scene introduces the risk of precision issues and mismatches, rays originating under the geometry, etc. Renderers such as REYES shade on vertices, and corners and crevices are particularly bad as ray origins. Having to maintain what are essentially two separate codebases is another issue. Finally, once you use GI the primary intersections are a relatively minor part of the overall frame rendering time, so it’s not worth the hassle.
In summary, HPG was a great conference, well worth attending. Next year it will be co-located with EGSR. The combination of both conferences will make attendance very attractive, especially for people who are relatively close (both conferences will take place in Saarbrücken, Germany). In 2011, HPG will again be co-located with SIGGRAPH.
The Image Space Gathering paper is now available on NVIDIA’s website (you can find a link on Ke-Sen Huang’s HPG papers page).
See, I told you that RT was more promising than REYES. 😉
I talked to Larry after his keynote, trying to figure out to what extent their experience is applicable to games. My conclusion is: not much. Iteration time for lighting was perhaps their greatest benefit, which does not apply to games; any lighting technique fast enough to be used for final renders in games is fast enough for rapid iteration by lighting artists. We do have an issue with iteration when prelighting, but we are talking about runtime algorithms here. There are other important differences between film and game rendering that apply here. All in all, I think we are at least 10 years away from it making sense to use ray tracing rather than rasterization for games; hybrid approaches might start to show up a little sooner than that.
Yes, you’re obviously correct in that Movies != Games. I do think your 10-year projection is a bit pessimistic; I believe it’ll happen sooner than that, especially given the way GPUs are improving year over year, and also given the rate of excellent work in the RTRT segment.
Pingback: Real-Time Rendering · Sony Pictures Imageworks open source projects