The Symposium on Interactive 3D Graphics and Games (I3D) has been a great little conference since its genesis in the mid-80s, and has featured many influential papers over the years. You can think of it as a much smaller SIGGRAPH, focused on topics of interest to readers of this blog. This year, the I3D papers program is especially strong.
Most of the papers have online preprints (accessible from Ke-Sen Huang’s I3D 2010 paper page), so I can now do a proper survey. Unfortunately, I was able to read two of the papers only under condition of non-disclosure (Stochastic Transparency and LEAN Mapping). Both papers are very good; I look forward to being able to discuss them publicly (at the latest, when I3D starts on February 19th).
Other papers of interest:
- Fourier Opacity Mapping riffs off the basic concept of Variance Shadow Maps, Exponential Shadow Maps (see also here) and Convolution Shadow Maps. These techniques store a compact statistical depth distribution at each texel of a shadow map; here, the quantity stored is opacity as a function of depth, similarly to the Deep Shadow Maps technique commonly used in film rendering. This is applied to shadows from volumetric effects (such as smoke), including self-shadowing. This paper is particularly notable in that the technique it describes has been used in a highly regarded game (Batman: Arkham Asylum).
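The core idea can be sketched in a few lines. Below is a toy scalar version (my own reconstruction of the general approach, not the paper’s exact formulation): particle contributions along a light ray are projected onto a truncated Fourier basis at map-generation time, and transmittance at any depth is then recovered by analytically integrating the series.

```python
import math

def add_particle(coeffs, depth, extinction):
    """Project a thin particle at normalized depth in [0, 1] onto a
    truncated Fourier basis. coeffs = (a, b) coefficient lists."""
    a, b = coeffs
    a[0] += 2.0 * extinction
    for k in range(1, len(a)):
        a[k] += 2.0 * extinction * math.cos(2.0 * math.pi * k * depth)
        b[k] += 2.0 * extinction * math.sin(2.0 * math.pi * k * depth)

def transmittance(coeffs, depth):
    """Integrate the reconstructed extinction from 0 to depth, then
    convert the resulting optical depth to transmittance."""
    a, b = coeffs
    tau = 0.5 * a[0] * depth
    for k in range(1, len(a)):
        w = 2.0 * math.pi * k
        tau += (a[k] / w) * math.sin(w * depth)
        tau -= (b[k] / w) * (math.cos(w * depth) - 1.0)
    return math.exp(-tau)
```

With a handful of coefficients per texel this gives a smooth, order-independent approximation of self-shadowing; the truncation does cause some ringing near sharp opacity steps.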
- Volumetric Obscurance improves upon the SSAO technique by making better use of each depth buffer sample; instead of treating them as point samples (with a simple binary comparison between the depth buffer and the sampled depth), each sample is treated as a line sample (taking full account of the difference between the two values). It is similar to a concurrently developed paper (Volumetric Ambient Occlusion); the techniques from either of these papers can be applied to most SSAO implementations to improve quality or increase performance. The Volumetric Obscurance paper also includes the option to extend the idea further and perform area samples; this can produce a simple crease shading effect with a single sample, but does not scale well to multiple samples.
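The difference between the two sampling schemes is easy to illustrate. Here is a minimal sketch (illustrative names and conventions of my own, with view-space z increasing away from the camera): a point sample yields a binary occlusion value, while a line sample returns the fraction of a vertical segment through the sampling sphere that lies behind the sampled surface.

```python
def point_sample_occlusion(center_z, sample_z):
    # Binary test used by classic SSAO: is the surface at the sample
    # in front of the shaded point?
    return 1.0 if sample_z < center_z else 0.0

def line_sample_occlusion(center_z, sample_z, half_extent):
    # Fraction of the segment [center_z - h, center_z + h] that lies
    # at or behind the sampled surface depth (i.e. inside geometry).
    lo, hi = center_z - half_extent, center_z + half_extent
    occluded = min(max(hi - max(sample_z, lo), 0.0), 2.0 * half_extent)
    return occluded / (2.0 * half_extent)
```

The line sample extracts much more information from the same single depth-buffer fetch, which is where the quality/performance win comes from.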
- Spatio-Temporal Upsampling on the GPU – games commonly use cross-bilateral filtering to upsample quantities computed at low spatial resolutions. There have also been several recent papers about temporal reprojection (reprojecting values from previous frames for reuse in the current frame); Gears of War 2 used this technique to improve the quality of its ambient occlusion effects. The paper Spatio-Temporal Upsampling on the GPU combines both of these techniques, filtering samples across both space and time.
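To make the filtering concrete, here is a schematic cross-bilateral upsampling step (parameter names and falloff functions are my own illustrative choices, not the paper’s): each coarse sample’s bilinear weight is attenuated by depth and normal similarity to the high-resolution pixel, so samples from across geometric discontinuities are effectively discarded. The temporal half applies the same weighting idea to samples reprojected from previous frames.

```python
import math

def cross_bilateral_weight(bilinear_w, hi_depth, lo_depth,
                           hi_normal, lo_normal,
                           depth_sigma=0.5, normal_power=8.0):
    # Down-weight coarse samples whose geometry differs from the
    # high-resolution pixel being reconstructed.
    depth_w = math.exp(-abs(hi_depth - lo_depth) / depth_sigma)
    n_dot = max(0.0, sum(a * b for a, b in zip(hi_normal, lo_normal)))
    return bilinear_w * depth_w * (n_dot ** normal_power)

def upsample(samples, hi_depth, hi_normal):
    # samples: list of (bilinear_w, value, lo_depth, lo_normal).
    num = den = 0.0
    for w_b, value, z, n in samples:
        w = cross_bilateral_weight(w_b, hi_depth, z, hi_normal, n)
        num += w * value
        den += w
    # Fall back to the first sample if all weights vanish (edge case).
    return num / den if den > 1e-6 else samples[0][1]
```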
- Efficient Irradiance Normal Mapping – at GDC 2004, Valve introduced their “Irradiance Normal Mapping” technique for combining a low-resolution precomputed lightmap with a higher-resolution normal map. Similar techniques are now common in games, e.g. spherical harmonics (used in Halo 3), and directional lightmaps (used in Far Cry). Efficient Irradiance Normal Mapping proposes a new basis, similar to spherical harmonics (SH) but covering the hemisphere rather than the entire sphere. The authors show that the new basis produces superior results to previous “hemispherical harmonics” work. Is it better than plain spherical harmonics? The answer depends on the desired quality level; with four coefficients, both produce similar results. However, with six coefficients the new basis performs almost as well as quadratic SH (nine coefficients), making it a good choice for high-frequency lighting data.
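For reference, here is the standard 4-coefficient (constant plus linear) spherical-harmonic irradiance evaluation that such a hemispherical basis is competing with; the constants are the usual SH basis values convolved with the clamped-cosine lobe.

```python
import math

def sh_irradiance_linear(sh, normal):
    # sh = [L00, L1m1, L10, L11]; normal = (x, y, z), unit length.
    x, y, z = normal
    A0 = math.pi                # cosine-lobe convolution, band 0
    A1 = 2.0 * math.pi / 3.0    # cosine-lobe convolution, band 1
    Y00 = 0.2820948             # 1 / (2 sqrt(pi))
    Y1 = 0.4886025              # sqrt(3 / (4 pi))
    return (A0 * Y00 * sh[0]
            + A1 * Y1 * (sh[1] * y + sh[2] * z + sh[3] * x))
```

A uniform environment of unit radiance stored in L00 evaluates to an irradiance of pi for any normal, which is a handy sanity check.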
- Interactive Volume Caustics in Single-Scattering Media – I see real-time caustics as more of an item to check off a laundry list of optical phenomena than something that games really need, but they may be important for other real-time applications. This paper handles the even more exotic combination of caustics with participating media (I do think participating media in themselves are important for games). From a brief scan of the technique, it seems to involve drawing lines in screen space to render the volumetric caustics. They do show one practical application for caustics in participating media – underwater rendering. If this case is important to your application, by all means give this paper a read.
- Parallel Banding Algorithm to Compute Exact Distance Transform with the GPU – I’m a big fan of Valve’s work on using signed distance fields to improve font rendering and alpha testing. These distance fields are typically computed offline (a process referred to as “computing a distance transform”, sometimes “a Euclidean distance transform”). For this reason, brute-force methods are commonly employed, though there has been a lot of work on more efficient algorithms. This paper gives a GPU-accelerated method which could be useful if you are looking to speed up your offline tools (or if you need to compute alpha silhouettes on the fly for some reason). Distance fields have other uses (e.g. collision detection), so there may very well be other applications for this paper. Notably, the paper project page includes links to source code.
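For context, the brute-force baseline that such papers accelerate is trivial to write down (and quadratic in the number of pixels, which is why it is usually confined to offline tools):

```python
import math

def distance_transform(grid):
    # grid: 2D list of 0/1 values; assumes at least one seed (1) cell.
    # Returns, per cell, the exact Euclidean distance to the nearest seed.
    seeds = [(x, y) for y, row in enumerate(grid)
                    for x, v in enumerate(row) if v]
    h, w = len(grid), len(grid[0])
    return [[min(math.hypot(x - sx, y - sy) for sx, sy in seeds)
             for x in range(w)] for y in range(h)]
```

A signed distance field is then obtained by running this on the shape and on its complement and subtracting one result from the other.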
- A Programmable, Parallel Rendering Architecture for Efficient Multi-Fragment Effects – one of the touted advantages of Larrabee was the promise of flexible graphics pipelines supporting stuff like multi-fragment effects (A-buffer-like things like order independent transparency and rendering to deep shadow maps). Despite a massive software engineering effort (and an instruction set tailored to help), Larrabee has not yet been able to demonstrate software rasterization and blending running at speeds comparable to dedicated hardware. The authors of this paper attempt to do the same on off-the-shelf NVIDIA hardware using CUDA – a very aggressive target! Do they succeed? It’s hard to say. They do show performance which is pretty close to the same scene rendering through OpenGL on the same hardware, but until I have time to read the paper more carefully (with an eye on caveats and limitations) I reserve judgment. I’d be curious to hear what other people have to say on this one.
- On-the-Fly Decompression and Rendering of Multiresolution Terrain (link is to an earlier version of the paper) – the title pretty much says it all. They get compression ratios between 3:1 and 12:1, which isn’t bad for on-the-fly GPU decompression. A lot of water has gone under the terrain rendering bridge since I last worked on one, so it’s hard for me to judge how it compares to previous work; if you’re into terrain rendering give it a read.
- Radiance Scaling for Versatile Surface Enhancement – this could be thought of as an NPR technique, but it’s a lot more subtle than painterly techniques. It’s more like a “hyper-real” or “enhanced reality” technique, like ambient occlusion (which darkens creases a lot more than a correct global illumination solution, but often looks better; 3D Unsharp Masking achieves a more extreme version of this look). Radiance Scaling for Versatile Surface Enhancement is a follow-on to a similar paper by the same authors, Light Warping for Enhanced Surface Depiction. Light warping changes illumination directions based on curvature, while radiance scaling scales the illumination instead, which enables cheaper implementations and increased flexibility. With some simplifications and optimizations, the technique should be fast enough for most games, making this paper useful to game developers trying to give their game a slightly stylized or “hyper-real” look.
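To give a flavor of the idea (this is a deliberately crude caricature of my own, not the paper’s formulation): reflected radiance is multiplied by a curvature-dependent factor, brightening convex regions and darkening concave ones, which exaggerates surface detail without changing any light directions.

```python
import math

def radiance_scaling(radiance, curvature, alpha=0.6):
    # alpha is a hypothetical enhancement strength; tanh keeps extreme
    # curvature values from blowing up the scale factor.
    scale = math.exp(alpha * math.tanh(curvature))
    return min(radiance * scale, 1.0)
```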
- Cascaded Light Propagation Volumes for Real-time Indirect Illumination – this appears to be an updated (and hopefully extended) version of the CryEngine 3 technique presented by Crytek at a SIGGRAPH 2009 course (see slides and course notes). This technique, which computes dynamic approximate global illumination by propagating spherical harmonics coefficients through a 3D grid, was very well-received, and I look forward to reading the paper when it is available.
- Efficient Sparse Voxel Octrees – there has been a lot of excited speculation around raycasting sparse voxel octrees since John Carmack first hinted that the next version of id Software’s rendering engine might be based on this technology. A SIGGRAPH 2008 presentation by Jon Olick (then at id) raised the excitement further (demo video with unfortunate soundtrack here). The Gigavoxels paper is another example of recent work in this area. Efficient Sparse Voxel Octrees promises to extend this work in interesting directions (according to the abstract – no preprint yet unfortunately).
- Assisted Texture Assignment – the highly labor-intensive (and thus expensive) nature of art asset creation is one of the primary problems facing game development. According to its abstract (no preprint yet), this paper proposes a solution to part of this problem – assigning textures to surfaces. There is also a teaser posted by one of the authors, which looks promising.
- Epipolar Sampling for Shadows and Crepuscular Rays in Participating Media with Single Scattering – volumetric effects such as smoke, shafts of light (also called “god rays” or crepuscular rays) and volumetric shadows are important in film rendering, but usually missing (or coarsely approximated) in games. Unfortunately, nothing is known about this paper except its title and the identities of its authors. I’ll read it (and pass judgment on whether the technique seems practical) when a preprint becomes available (hopefully soon).
The remaining papers are outside my area of expertise, so it’s hard for me to judge their usefulness:
- Fast Continuous Collision Detection using Deforming Non-Penetration Filters
- Interactive Fluid-Particle Simulation using Translating Eulerian Grids
- Real-Time Multi-Agent Path Planning on Arbitrary Surfaces
- Learning Skeletons for Shape and Pose (there is previous work – does anyone know how it compares?)
- Frankenrigs: Building Character Rigs From Multiple Sources
- Synthesis and Editing of Personalized Stylistic Human Motion
- Interactive Painterly Stylization of Images, Videos and 3D Animations
- Simple Data-Driven Modeling of Brushes (the “brushes” are apparently paintbrushes, judging from the author’s earlier work)