The full list of papers accepted to SIGGRAPH Asia 2009 (with abstracts) is finally up on the conference website. As usual, Ke-Sen Huang is ahead of the curve; his SIGGRAPH Asia 2009 papers page already has preprint links for 54 of the 70 accepted papers.
Three of the papers I mentioned in my first SIGGRAPH Asia 2009 post have since made preprints available: RenderAnts: Interactive Reyes Rendering on GPUs, Debugging GPU Stream Programs Through Automatic Dataflow Recording and Visualization, and Real-Time Parallel Hashing on the GPU.
The Real-Time Rendering paper session is, of course, the one most likely to contain papers of interest to readers of this blog. The most interesting paper, Micro-Rendering for Scalable, Parallel Final Gathering, was already discussed in a previous blog post. Since then, I’ve noticed many similarities between the technique described in this paper and the point-based color bleeding technique Pixar implemented in RenderMan. This approach to GPU-accelerated global illumination looks very promising.

The other three papers in the session are also of interest. Depth-of-Field Rendering with Multiview Synthesis describes a depth-of-field method which occupies an interesting middle ground between the very high-quality (and expensive) multiview methods used in film production and the much cheaper (but lower-quality) post-processing methods commonly used in games; after some scaling down and optimization, it may be appropriate for some real-time applications. Like the reprojection papers discussed previously, the Amortized Supersampling paper reuses samples from previous frames to increase quality (a rough sketch of the general reprojection idea appears below). Here the goal is anti-aliasing procedural shaders, but the technique could be applied to other types of expensive shaders. The remaining paper from the Real-Time Rendering session, All-Frequency Rendering With Dynamic, Spatially Varying Reflectance, does not yet have a preprint, but the short abstract from the conference page sounds intriguing: “A technique for real-time rendering of dynamic, spatially varying BRDFs with all-frequency shadows from environmental and point lights”. Hopefully a preprint will become available soon.
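For readers unfamiliar with reprojection, here is a minimal sketch of the general idea of reusing previous-frame samples; it is not the paper’s actual algorithm, and the matrix convention, blend weight, and history-buffer layout are all illustrative assumptions:

```cpp
// Minimal sketch of temporal sample reuse via reprojection (the general idea
// behind amortized-supersampling-style techniques; names and weights are
// illustrative, not taken from the paper).
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

// Transform a world-space point by a 4x4 matrix (column-vector convention).
Vec4 transform(const Mat4& M, const Vec3& p) {
    Vec4 r;
    r.x = M.m[0][0]*p.x + M.m[0][1]*p.y + M.m[0][2]*p.z + M.m[0][3];
    r.y = M.m[1][0]*p.x + M.m[1][1]*p.y + M.m[1][2]*p.z + M.m[1][3];
    r.z = M.m[2][0]*p.x + M.m[2][1]*p.y + M.m[2][2]*p.z + M.m[2][3];
    r.w = M.m[3][0]*p.x + M.m[3][1]*p.y + M.m[3][2]*p.z + M.m[3][3];
    return r;
}

// Per-pixel: reproject the world position into last frame's history buffer
// and blend the expensive shading result with the accumulated value.
Vec3 shadePixel(const Vec3& worldPos,
                const Mat4& prevViewProj,
                const Vec3* historyBuffer, int width, int height,
                const Vec3& currentShading /* expensive procedural shader */) {
    Vec4 clip = transform(prevViewProj, worldPos);
    if (clip.w <= 0.0f) return currentShading;          // behind previous camera

    // NDC -> texel coordinates in the previous frame (no y-flip shown).
    float u = (clip.x / clip.w * 0.5f + 0.5f) * width;
    float v = (clip.y / clip.w * 0.5f + 0.5f) * height;
    int xi = (int)u, yi = (int)v;
    if (xi < 0 || yi < 0 || xi >= width || yi >= height)
        return currentShading;                          // off-screen / disoccluded

    Vec3 history = historyBuffer[yi * width + xi];

    // Exponential accumulation: each frame contributes a fraction alpha,
    // so samples from many frames are amortized into a running average.
    const float alpha = 0.1f;                           // illustrative weight
    return { history.x + alpha * (currentShading.x - history.x),
             history.y + alpha * (currentShading.y - history.y),
             history.z + alpha * (currentShading.z - history.z) };
}
```

A real implementation also has to detect stale history (disocclusion, fast motion, shading changes) and fall back to fresh samples there; that is where much of the actual difficulty lies.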
I typically don’t pay very close attention to offline rendering papers, but one in particular looks interesting: Adaptive Wavelet Rendering takes a novel approach to Monte Carlo ray tracing by rendering into an image-space wavelet basis, instead of rendering into image pixels or samples. This enables them to significantly reduce the number of samples required in certain cases.
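To give a flavor of what “rendering into an image-space wavelet basis” means, here is a minimal sketch of a single-level 2D Haar transform over a tiny framebuffer; it only illustrates the basis itself, not the paper’s adaptive sampling or reconstruction:

```cpp
// One Haar analysis level: each 2x2 pixel block becomes one coarse (average)
// coefficient and three detail coefficients. Smooth image regions end up with
// near-zero detail coefficients, so few samples are needed to resolve them.
#include <vector>
#include <cstdio>

void haarLevel(const std::vector<float>& img, int w, int h,
               std::vector<float>& coarse, std::vector<float>& detail) {
    coarse.resize((w / 2) * (h / 2));
    detail.resize(3 * (w / 2) * (h / 2));
    for (int y = 0; y < h; y += 2) {
        for (int x = 0; x < w; x += 2) {
            float a = img[y * w + x],       b = img[y * w + x + 1];
            float c = img[(y + 1) * w + x], d = img[(y + 1) * w + x + 1];
            int i = (y / 2) * (w / 2) + (x / 2);
            coarse[i]       = 0.25f * (a + b + c + d);   // local average
            detail[3*i + 0] = 0.25f * (a - b + c - d);   // horizontal detail
            detail[3*i + 1] = 0.25f * (a + b - c - d);   // vertical detail
            detail[3*i + 2] = 0.25f * (a - b - c + d);   // diagonal detail
        }
    }
}

int main() {
    // A smooth 4x4 "image": all detail coefficients come out zero, so a few
    // coarse coefficients already describe it.
    int w = 4, h = 4;
    std::vector<float> img(w * h, 0.5f);
    std::vector<float> coarse, detail;
    haarLevel(img, w, h, coarse, detail);
    std::printf("coarse[0]=%f detail[0]=%f\n", coarse[0], detail[0]);
}
```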
The paper Continuity Mapping for Multi-Chart Textures attempts to solve a problem of interest (fixing filtering discontinuities at UV chart seams), but the solution is overly complex for most applications. While the authors claim to address MIP-mapping, their solution does not work well with trilinear filtering, since their data structures need to be accessed separately for each MIP level and the results blended manually in the shader (sketched below). They also do not address issues relating to derivative computation. Since their technique requires a lot of divergent branching, it is likely to run at low efficiency. This technique might make sense for some specialized applications, but I don’t expect to see it being used for game texture filtering.
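To make the trilinear concern concrete, here is a rough sketch of the per-level lookup and manual blend a shader would have to do in place of a single hardware trilinear fetch; seamAwareSample is a hypothetical stand-in, not the paper’s actual data structure:

```cpp
// Illustrative sketch of manual trilinear filtering: if seam-corrected lookups
// must be done per MIP level, the shader fetches each level separately and
// blends the results itself, instead of issuing one hardware trilinear fetch.
#include <cmath>

struct Color { float r, g, b; };

// Hypothetical placeholder for a seam-corrected bilinear lookup at one level.
Color seamAwareSample(float u, float v, int level) {
    (void)u; (void)v; (void)level;
    return {0.5f, 0.5f, 0.5f};
}

Color manualTrilinear(float u, float v, float lod) {
    int   level0 = (int)std::floor(lod);
    int   level1 = level0 + 1;
    float t      = lod - (float)level0;

    // Two separate per-level lookups (each with its own branching and
    // data-structure traversal), then a lerp between the MIP levels.
    Color c0 = seamAwareSample(u, v, level0);
    Color c1 = seamAwareSample(u, v, level1);
    return { c0.r + t * (c1.r - c0.r),
             c0.g + t * (c1.g - c0.g),
             c0.b + t * (c1.b - c0.b) };
}
```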
There are also some interesting papers on non-rendering topics such as animation and model acquisition. All in all, a very strong papers program this year.