Category Archives: Resources

GPU REYES Implementation

Pixar’s RenderMan rendering package is based on the REYES rendering pipeline (an acronym for the humble phrase “Renders Everything You Ever Saw”). Most film studios use Pixar’s RenderMan, and many others use renderers operating on similar principles. A close reading of the original REYES paper shows a pipeline that was designed to be extremely efficient (it had to be, to run on 1980s hardware!) and to produce very high quality images. I have long thought that this pipeline is a good fit for graphics hardware (given some minor changes or an increase in generality), and is perhaps a better fit for today’s dense scenes than the traditional triangle pipeline. A paper to be published at SIGGRAPH Asia this year describes a GPU implementation of the subdivision stages of the REYES pipeline, which is a key step towards a full GPU REYES implementation. The authors use CUDA for the subdivision stages, and then pass the resulting micropolygons to a traditional rendering pass. Although combining CUDA and traditional rendering in this manner introduces performance problems, newer APIs such as DX11 compute shaders have been designed to perform well under such conditions. Of course, this algorithm would also be a great fit for Larrabee.
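For those unfamiliar with the subdivision stages: the heart of REYES is a bound-and-split loop, in which patches are recursively split until they are small enough on screen and then diced into micropolygons. Here is a minimal CPU-side sketch of that loop (my own illustration, not code from the paper; the Patch type and helper functions are simplified stand-ins):

```cpp
// Minimal CPU-side sketch of Reyes-style bound-and-split; NOT the paper's CUDA
// implementation. Patch and the helpers below are simplified placeholders.
#include <vector>

struct Patch {
    float screenSize;            // estimated screen-space extent, in pixels
};

struct MicropolygonGrid { };     // grid of sub-pixel quads, ready for shading

// Tessellate a small patch into a grid of micropolygons (placeholder).
MicropolygonGrid dice(const Patch&) { return {}; }

// Split a patch along its longer parametric direction (placeholder: here we
// just halve the screen-size estimate).
void splitInHalf(const Patch& p, Patch& a, Patch& b)
{
    a.screenSize = b.screenSize = 0.5f * p.screenSize;
}

// Recursively split patches until each is small enough to dice. A GPU version
// replaces this recursion with a parallel work queue of patches.
void boundAndSplit(const Patch& patch, float maxPixels,
                   std::vector<MicropolygonGrid>& out)
{
    if (patch.screenSize <= maxPixels) {
        out.push_back(dice(patch));
    } else {
        Patch a, b;
        splitInHalf(patch, a, b);
        boundAndSplit(a, maxPixels, out);
        boundAndSplit(b, maxPixels, out);
    }
}
```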

Anyone interested in the implementation details of the REYES algorithm should also read “How PhotoRealistic RenderMan Works”, which is available as a chapter in the book Advanced RenderMan and in the SIGGRAPH 2000 RenderMan course notes.

I found this paper on Ke-Sen Huang’s SIGGRAPH Asia preprint page. Ke-Sen performs an invaluable service to the community by providing links to preprints of papers from all the major graphics-related conferences. This preprint page is all the more impressive when you realize that SIGGRAPH Asia has not even published a list of accepted papers yet!

Gamefest presentations and other links

Christer Ericson points out in a recent blog post that Microsoft has uploaded the Gamefest 2008 slides. These include a lot of relevant information, especially in relation to Direct3D 11. Christer’s post has many other links to interesting stuff – I particularly liked Iñigo Quilez’s slides on raycasting distance fields. Distance fields (sometimes referred to as Euclidean distance transforms, though that properly refers to the process of creating such a distance field) are very useful data structures. As Valve showed at SIGGRAPH last year, they can also be used for cheap vector shapes (the basic form of their technique is a better way to generate data for alpha testing, with zero runtime cost!).
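If you have not run into distance fields before, the core rendering idea in the raycasting slides is sphere tracing: step along the ray by the distance to the nearest surface, which by definition can never overshoot. A minimal sketch of the idea (my own, not taken from the slides; the scene here is just a unit sphere):

```cpp
// Minimal sketch of sphere tracing (raymarching) a signed distance field.
// This is a generic illustration, not code from the slides.
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

// Example distance field: a unit sphere at the origin. Any function returning
// the distance to the nearest surface (or a lower bound on it) works here.
float sceneDistance(Vec3 p)
{
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// March along the ray; each step advances by the distance to the nearest
// surface, so the march cannot pass through geometry. Returns the hit
// distance, or a negative value on a miss.
float sphereTrace(Vec3 origin, Vec3 dir, float maxT = 100.0f)
{
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxT; ++i) {
        float d = sceneDistance(add(origin, scale(dir, t)));
        if (d < 1e-4f)
            return t;        // close enough to the surface: report a hit
        t += d;              // safe step: no surface is closer than d
    }
    return -1.0f;            // no hit within the step/distance budget
}
```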

Portal adds

No, not that Portal (which, if you haven’t played it, you should, even if you have no time; it’s short! For NVIDIA card owners, the first slice is free). I’ve updated our portal page with a few additions.

New blogs added: Pandemonium, C0DE517E, Gates 381, GameDevKicks, Chris Hecker’s, and Beyond3D. Being a trailing-edge adopter kind of guy (I’ve kept my Tivo 1 alive by replacing the disk drives three times so far, and my cell phone was $90 from Indonesia via eBay), I ignored blogs for the most part until last year, when I finally learned how simple it was to use an RSS reader (I like Google’s). My philosophy since then is that if a blog has any articles relevant to interactive rendering techniques, I’ll subscribe. Since most graphics blogs don’t post daily, traffic is low, so checking new postings takes a minute or two a day. That said, if I had to pick just one, it would probably be GameDevKicks, since it’s an aggregator, similar to Digg (though the low counts on the digs, excuse me, kicks, mean that some things may fall through the cracks). This service also means I’m off the hook for noting new Gamasutra articles on this blog, since they usually get listed there.

The Ogre Forums have been added to the list of developer sites. Ogre is a popular free game development platform. I can’t say I frequent the forums, but on the strength of this article on using the pixel shader to generate the illusion of geometry, there are obviously good things happening there.

The Unity Web Player Hardware Statistics page is similar to the well-known Steam survey, but for machines used by casual gamers.

A site that’s been around a long while and should have been on the portal from the start is the Virtual Terrain Project, a constantly-expanding repository of algorithms about and models of terrain, vegetation, natural phenomena, etc.

… and that’s it for now.

Drawing Silhouette Edges

With SIGGRAPH, the free release of ShaderX², and the publication of our own 3rd edition, there was much to report, but now things have settled down a bit. The bread-and-butter content of this blog is any new or noteworthy article or demo related to the field, on the assumption that not everyone is tracking all sources of information all the time.

So, if you don’t subscribe to Gamasutra’s free email newsletters, you might not have seen this article: Inking the Cube: Edge Detection with Direct3D 10. It walks through the details of creating geometry for silhouette and crease edges using the geometry shader. To its credit, it also shows the problem with the basic approach: separate silhouette edges can have noticeable join and endcap gaps. One article that addresses this problem:

McGuire, Morgan, and John F. Hughes, “Hardware-Determined Feature Edges,” The 3rd International Symposium on Non-Photorealistic Animation and Rendering (NPAR 2004), pp. 35–47, June 2004.

One minor flaw in the Gamasutra article: the URL to Sarah Tariq’s presentation is broken (I’m writing Gamasutra to ask them to correct it); that presentation is available here.
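For the curious, the per-edge test at the core of this technique is simple: an edge is a silhouette when one of its adjacent triangles faces the viewer and the other faces away. A minimal CPU-side sketch of that test (my own simplified illustration, not the article’s code; with triangle-adjacency input, the geometry shader has all four vertices for each edge and can emit outline geometry for edges that pass):

```cpp
// Minimal sketch of the silhouette-edge test, done here on the CPU for clarity.
struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// Facing test: positive when triangle (a, b, c) faces the eye point.
float facing(Vec3 a, Vec3 b, Vec3 c, Vec3 eye)
{
    return dot(cross(sub(b, a), sub(c, a)), sub(eye, a));
}

// Edge (e0, e1) is shared by triangles (e0, e1, left) and (e1, e0, right).
// The edge is a silhouette when the two facing results have opposite signs.
bool isSilhouetteEdge(Vec3 e0, Vec3 e1, Vec3 left, Vec3 right, Vec3 eye)
{
    return facing(e0, e1, left, eye) * facing(e1, e0, right, eye) < 0.0f;
}
```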

Disk-Based Global Illumination in RenderMan

In Section 9.1 of our book we discuss Bunnell’s disk-based approximation for computing dynamic ambient occlusion and indirect lighting, and mention that this technique was used by ILM when performing renders for the Pirates of the Caribbean films.

Recently, more details on this technique have appeared in a RenderMan Technical Memo called “Point-Based Approximate Color Bleeding”, available on Pixar’s publication page. Pixar has implemented an interesting global illumination algorithm based in part on Bunnell’s disk approximation, which is used for transfer over intermediate distances. Spherical harmonics are used to approximate distant transfer, and ray tracing is used for transfer between nearby points. This technique is now built into Pixar’s RenderMan and has been used in over 12 films to date, including Pixar’s own WALL-E. It is interesting to see a technique originating in real-time rendering used in film production; the opposite is much more usual. The paper is worth a close read – perhaps someone will close the loop by adapting some of Pixar’s enhancements into new real-time techniques.
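As a rough illustration of what “disk-based approximation” means here: each surface sample is treated as a small oriented disk, and its contribution to a receiving point is computed with an approximate disk-to-point form factor instead of tracing rays. A minimal sketch, using one common analytic form of that approximation (this is my own illustration, not Pixar’s or Bunnell’s exact formulation):

```cpp
// Sketch of disk-based transfer in the spirit of Bunnell's approximation.
// The form factor below, A*cosE*cosR / (pi*d^2 + A), is a standard analytic
// approximation; the actual techniques use refined versions and a hierarchy.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3  add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3  scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Disk {
    Vec3 position, normal, radiosity;  // radiosity = outgoing color of the disk
    float area;
};

// Approximate form factor from emitting disk e to a receiver at p with normal n.
float diskFormFactor(const Disk& e, Vec3 p, Vec3 n)
{
    Vec3 d = sub(p, e.position);
    float dist2 = std::max(dot(d, d), 1e-6f);
    Vec3 dir = scale(d, 1.0f / std::sqrt(dist2));       // emitter -> receiver
    float cosE = std::max(dot(e.normal, dir), 0.0f);    // emitter faces receiver
    float cosR = std::max(-dot(n, dir), 0.0f);          // receiver faces emitter
    const float pi = 3.14159265f;
    return e.area * cosE * cosR / (pi * dist2 + e.area);
}

// Brute-force gather of one bounce of color bleeding at point p. A production
// system gathers from a hierarchy, using clusters for distant groups of disks.
Vec3 gatherColorBleeding(const std::vector<Disk>& disks, Vec3 p, Vec3 n)
{
    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    for (const Disk& d : disks)
        sum = add(sum, scale(d.radiosity, diskFormFactor(d, p, n)));
    return sum;
}
```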

Pixar’s publication page is a valuable resource. The papers span a quarter century, and most of them have been very influential in the field. The first seven papers gave us the Cook-Torrance BRDF, programmable shaders, distributed ray tracing, image compositing, stochastic sampling, percentage-closer filtering, and the REYES rendering architecture (upon which almost all film production renderers are based). The page also includes many other important papers, as well as SIGGRAPH course notes and RenderMan Technical Memos.

SIGGRAPH 2008: Bilateral Filters

The class “A Gentle Introduction to Bilateral Filtering and its Applications” was very well-presented. Bilateral filters are edge-preserving smoothing filters that have many applications in rendering and computational photography. The basic concept was clearly explained, and then various variants, related techniques, optimized implementations and applications were discussed. The full slides as well as detailed course notes are available here. Currently they are from the SIGGRAPH 2007 course; I assume the 2008 slides will replace them soon.
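For anyone who has not used one: a bilateral filter is essentially a Gaussian blur whose weights are also attenuated by the difference in intensity between samples, so it averages within smooth regions but not across strong edges. A minimal 1D sketch (my own illustration, not taken from the course materials; a 2D image version loops over a window in x and y in the same way):

```cpp
// Minimal 1D bilateral filter on a grayscale signal: each output sample is a
// weighted average of its neighbors, with weights that combine a spatial
// Gaussian and a Gaussian on the intensity ("range") difference.
#include <cmath>
#include <vector>

std::vector<float> bilateralFilter1D(const std::vector<float>& in, int radius,
                                     float sigmaSpace, float sigmaRange)
{
    std::vector<float> out(in.size());
    for (int i = 0; i < (int)in.size(); ++i) {
        float sum = 0.0f, weightSum = 0.0f;
        for (int k = -radius; k <= radius; ++k) {
            int j = i + k;
            if (j < 0 || j >= (int)in.size()) continue;
            float ds = (float)k;                  // spatial distance
            float dr = in[j] - in[i];             // intensity difference
            float w = std::exp(-(ds * ds) / (2.0f * sigmaSpace * sigmaSpace)
                               - (dr * dr) / (2.0f * sigmaRange * sigmaRange));
            sum += w * in[j];
            weightSum += w;
        }
        out[i] = sum / weightSum;   // center sample always contributes, so > 0
    }
    return out;
}
```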

A related technique, which appears to have some interesting advantages over the bilateral filter, was presented in a paper this year, titled “Edge-Preserving Decompositions for Multi-Scale Tone and Detail Manipulation”. It presents a novel edge-preserving smoothing operator based on weighted least squares optimization. The paper and various supplementary materials are available here.
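In general terms (this is my paraphrase of the idea, not the paper’s exact notation or weighting scheme), a weighted-least-squares smoother finds an output u that stays close to the input g while penalizing its gradients, with per-pixel weights that shrink at strong edges of g so the smoothing does not cross them:

$$\min_{u}\ \sum_{p}\Big[(u_p - g_p)^2 + \lambda\big(w_{x,p}\,(\partial_x u)_p^2 + w_{y,p}\,(\partial_y u)_p^2\big)\Big]$$

Here the smoothness weights $w_{x,p}$ and $w_{y,p}$ are chosen to be small where the input has large gradients, and the minimizer is obtained by solving a sparse linear system rather than by local filtering.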

SIGGRAPH 2008: Beyond Programmable Shading Class

This class was about non-traditional processing performed on GPUs, similar to GPGPU but for graphics. As we discuss in the “Futures” chapter at the end of our book, this is a particularly interesting direction of research and may well represent the future of rendering. The recent disclosures on Direct3D 11 Compute Shaders and Larrabee make this a particularly hot topic.

The full course notes are available at the course web site.

The talk by Jon Olick from id Software was perhaps the most interesting. He discussed a sparse voxel octree data structure that is rendered directly using CUDA. This extends id’s MegaTexture idea to geometry and may very well find its way into id’s next engine in some form.
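To give a sense of the data structure (a generic sketch of a sparse voxel octree, not Olick’s actual layout), each node stores a child mask and only allocates children for occupied octants, so empty space costs almost nothing:

```cpp
// Generic sparse voxel octree node plus a point query, for illustration only.
#include <array>
#include <cstdint>
#include <memory>

struct SvoNode {
    uint8_t childMask = 0;                            // bit i set => child i exists
    std::array<std::unique_ptr<SvoNode>, 8> children; // only occupied octants allocated
    // A real system would store leaf payload (color, normal, etc.) here.
};

// Returns true if the voxel containing integer coords (x, y, z), each with
// `depth` bits, is occupied. A renderer instead casts rays through the tree,
// descending only into children the ray actually intersects.
bool occupied(const SvoNode* node, uint32_t x, uint32_t y, uint32_t z, int depth)
{
    for (int level = depth - 1; ; --level) {
        if (level < 0 || node->childMask == 0)
            return true;                              // reached a solid leaf
        int i = ((x >> level) & 1)
              | (((y >> level) & 1) << 1)
              | (((z >> level) & 1) << 2);
        if (!(node->childMask & (1u << i)))
            return false;                             // empty octant: nothing here
        node = node->children[i].get();
        if (node == nullptr)
            return false;                             // guard against missing child
    }
}
```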

SIGGRAPH 2008: Advances in Real-Time Rendering in 3D Graphics and Games

I attended the “Advances in Real-Time Rendering in 3D Graphics and Games” class today at SIGGRAPH. This is the third year in a row Natasha Tatarchuk from AMD has organized this class. Each year different game developers as well as people from the AMD demo team are brought in to talk about graphics, and some of the best real-time stuff at SIGGRAPH in the last two years has been in this course.

Unfortunately, due to LittleBigPlanet crunch, Alex Evans from Media Molecule was unable to give his planned talk, and a different speaker was brought in instead. This was a bit of a bummer, since Alex’s SIGGRAPH 2006 talk was very good and I was hoping to hear more about his unorthodox take on real-time rendering.

The remaining talks were of high quality, including talks by the developers of games such as Halo 3, StarCraft II, and Crysis. Unlike previous years, when it took many weeks for the course notes to become available online, the full course notes are already available at AMD’s Technical Publications page – check them out!