GPU REYES Implementation

Pixar’s RenderMan rendering package is based on the REYES rendering pipeline (an acronym for the humble phrase “Renders Everything You Ever Saw”). Most film studios use Pixar’s RenderMan, and many others use renderers operating on similar principles. A close reading of the original REYES paper shows a pipeline which was designed to be extremely efficient (it had to be, to run on 1980s hardware!) and to produce very high quality images. I have long thought that this pipeline is a good fit for graphics hardware (given some minor changes or an increase in generality), and is perhaps a better fit for today’s dense scenes than the traditional triangle pipeline. A paper to be published at SIGGRAPH Asia this year describes a GPU implementation of the subdivision stages of the REYES pipeline, which is a key step towards a full GPU REYES implementation. The authors use CUDA for the subdivision stages, and then pass the resulting micropolygons to a traditional rendering pass. Although combining CUDA and traditional rendering in this manner introduces performance problems, newer APIs such as DX11 compute shaders have been designed to perform well under such conditions. Of course, this algorithm would also be a great fit for Larrabee.
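
If you haven’t read the REYES paper, the heart of the subdivision stages is a simple bound/split/dice recursion. Here is a minimal C++ sketch of that loop (my own pseudocode with placeholder types, not code from the paper), just to show the structure being mapped onto the GPU:

    #include <vector>

    // Placeholder types; a real implementation would carry control points,
    // shading data, and a proper screen-space bounding routine.
    struct Bounds { float xmin, ymin, xmax, ymax; };
    struct MicropolygonGrid { /* roughly pixel-sized quads, ready for shading */ };

    struct Patch {
        Bounds screenBounds() const { return Bounds{0, 0, 0, 0}; }           // bound
        bool   diceWouldBeTooBig(int maxGridDim) const { (void)maxGridDim; return false; }
        void   split(Patch& a, Patch& b) const { a = *this; b = *this; }      // split in two
        MicropolygonGrid dice() const { return MicropolygonGrid{}; }          // tessellate
    };

    static bool overlapsScreen(const Bounds& b) { (void)b; return true; }

    // Bound each patch, cull it if it is off screen, split it while dicing would
    // produce an oversized grid, and otherwise dice it into a micropolygon grid
    // that gets handed on to the (traditional) shading and sampling stages.
    void boundSplitDice(const Patch& patch, int maxGridDim,
                        std::vector<MicropolygonGrid>& grids)
    {
        if (!overlapsScreen(patch.screenBounds()))
            return;                                  // cull
        if (patch.diceWouldBeTooBig(maxGridDim)) {
            Patch a, b;
            patch.split(a, b);                       // split and recurse
            boundSplitDice(a, maxGridDim, grids);
            boundSplitDice(b, maxGridDim, grids);
        } else {
            grids.push_back(patch.dice());           // dice
        }
    }

The recursion is irregular and data-dependent, which is presumably where most of the effort in a parallel GPU implementation goes.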

Anyone interested in the implementation details of the REYES algorithm should also read “How PhotoRealistic RenderMan Works,” which is available as a chapter in the book Advanced RenderMan and in the SIGGRAPH 2000 RenderMan course notes.

I found this paper on Ke-Sen Huang’s SIGGRAPH Asia preprint page. Ke-Sen performs an invaluable service to the community by providing links to preprints of papers from all the major graphics-related conferences. This preprint page is all the more impressive when you realize that SIGGRAPH Asia has not even published a list of accepted papers yet!

At long last, in stock

Lately I’ve been looking at Amazon’s listing of our book daily, to see if it’s in stock. Finally, today, it is, for the first time ever, a mere 40 days after its release. This is not our publisher’s fault at all (A.K. Peters rules, OK?), and the book’s not that popular (AFAIK); it evidently just takes a while for delivered books to percolate out into Amazon’s system. Amazon under-ordered, so I believe that by the time the books they first ordered made it to the distribution centers, they were already sold out, putting the book out of stock again. Lather, rinse, repeat. So maybe I should be sad that it’s now in stock.

Anyway, the amusing part of visiting each day has been looking at the discount given on the book. It’s nice to see a discount at all, as Amazon didn’t discount our previous book for the first few years. With the current 28% discount, it means our new edition is effectively $5 less than the previous edition’s original price. Which cheers me up, as I like to imagine that students are saving money; my older son will be in college next year, and any royalties I make from our book will effectively get recycled over the next four years in buying his texts. His one book for a summer course this year was a black & white softbound book, 567 pages, and cost an astounding (to me) $115, and that was “discounted” from $128.95. I’m now encouraging my younger son to skip college and go into the lucrative field of transistor repair.

Amazon’s discount has varied like a random walk among four values: 0%, 22%, 28%, and 33%. Originally, in July, it was list price, then the discount was set at 33% (so Amazon was paying more for the book than they were selling it for), then back to normal, then 33%. Around August 14th I started checking once a week or so and also looking at Associates sales (a program I recommend if you’re a book author, as it’s found money – it pays for this website). Again the book went back to no discount, then on August 20th started at 0%, went to 22% off, then 33% off, all in the same day. The next day there was no discount, then the day after it went back to 33%. August 28, when I checked again, it was at 22%, and this discount held through the end of the month. On September 1st it went up to 28% off, and there it’s been for a whole 9 days.

The oddest bit was that, in searching around for prices (Amazon’s is indeed the best, at least as of today), I noticed that the first edition of our book, from 1999, sells used for twice as much or more than our new book. Funny world.

By the way, if you are looking to write a book and want to understand royalties and going rates a little bit better, see my old article on this topic. Really, it’s not my article, it’s a collection of responses from authors I know. Some of it’s a bit confrontational and might make you a little paranoid, but I think it’s worth a read. If you’re writing technical books to get rich, you’re fooling yourself, but on the other hand there’s no reason to let someone take advantage of you. My favorite author joke, from Michael Cohen via John Wallace, is that there are dozens of dollars to be made writing a book, dozens I tell you. It can be a bit better than that if you’re lucky, but still comes out to about minimum wage when divided by the time spent. But for me it’s a lot more fun and educational work than flipping burgers, and the money is not why we wrote our book. We did it for the wild parties and glamorous lifestyle.

Update: heh, that didn’t last long. I wrote this entry Sept. 9th. As of the 10th, the book is (a) out of stock again and (b) down to a 2% discount. 2%?! Truly obscure.

Gamefest presentations and other links

Christer Ericson points out in a recent blog post that Microsoft has uploaded the Gamefest 2008 slides. These include a lot of relevant information, especially in relation to Direct3D 11. Christer’s post has many other links to interesting stuff – I particularly liked Iñigo Quilez’s slides on raycasting distance fields. Distance fields (sometimes referred to as Euclidean distance transforms, though that properly refers to the process of creating such a distance field) are very useful data structures. As Valve showed at SIGGRAPH last year, they can also be used for cheap vector shapes (the basic form of their technique is a better way to generate data for alpha testing, with zero runtime cost!).
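
To make the “zero runtime cost” comment concrete, here is a small C++ sketch of the idea (my own illustration, not Valve’s code): the texture stores a signed distance to the shape’s edge, remapped so that 0.5 falls exactly on the edge. The basic form simply alpha-tests the filtered value at 0.5 – exactly the work an ordinary alpha-tested texture already does – while a smoothstep over the same value buys antialiased edges for a few extra shader instructions:

    #include <algorithm>

    // 'dist' stands in for a bilinearly filtered texture fetch of the stored
    // distance value, remapped into [0,1] with 0.5 on the shape's edge.

    // Basic form: a plain alpha test at the midpoint. No extra shader work at
    // all; the ordinary alpha test (or clip/discard) does everything.
    bool alphaTestCoverage(float dist) {
        return dist >= 0.5f;
    }

    // Antialiased form: fade over a small band of distance around the edge
    // instead of a hard cutoff. 'edgeWidth' is a tuning parameter.
    float smoothCoverage(float dist, float edgeWidth = 0.03f) {
        float t = (dist - (0.5f - edgeWidth)) / (2.0f * edgeWidth);
        t = std::clamp(t, 0.0f, 1.0f);
        return t * t * (3.0f - 2.0f * t);   // smoothstep
    }

The payoff is that a very low resolution distance texture magnifies into crisp edges, since bilinearly filtering distances degrades far more gracefully than bilinearly filtering coverage.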

Bézier, Gouraud, Fresnel

Vincent Scheib’s terminology rant included how to pronounce “SIGGRAPH” (i.e., like “sigma”), a pet peeve of mine. This reminded me of the following.

While writing the book, we wanted to give phonetic spellings of various common graphics terms – after hearing someone pronounce “Gouraud” as “Goo-raude” I thought it worth the time. In searching around, I realized that people seemed to somewhat disagree about Bézier, so finally I asked a few people from France. Frédo Durand gave the best response, sending an audio clip of him pronouncing the words. So, without further ado, here’s the audio clip. Now you know.

Update: here’s a nice article about pronunciation of many other computer graphics related terms.

I3D 2009 CFP

I3D is a symposium (fancy word for “small conference”) that is a great way to spend three days with academics and industry people in the fields of interactive 3D graphics and games. The program is a single track, consisting primarily of original research papers, but with other events such as keynotes, posters, roundtables, etc. See 2008’s program for an example.

I’m papers co-chair this year with Morgan McGuire, so I’m biased, but I love the venue. Instead of the 28,000+ people of SIGGRAPH, you have around 100 people at a hotel or campus (2008’s was at EA’s headquarters). The best part is that you are likely to have a good conversation with anyone you meet, since they’re all there for the same reason. There is also plenty of time in the evening to socialize. I thoroughly enjoyed organizing the pub quiz for 2008.

If a paper is too daunting a task, consider submitting a poster or demo; it’s an easy way to get your idea out there and get feedback from others in the field.

The call for participation (CFP) is at http://i3dsymposium.org.

The quick summary:

Conference: Feb. 27 – March 1, 2009
Location: Radisson Hotel Boston, Boston, MA

Paper submissions: October 24, 2008
Poster and demo submissions: December 19, 2008

A cool stat I learned of recently is that I3D ranks #25 among all computer science publication venues by impact; the only graphics venue with more impact is SIGGRAPH itself, at #9: http://citeseer.ist.psu.edu/impact.html (the study is from 2003; I’d love to find a newer one if anyone knows of it).

Hope to see you there.

Portal adds

No, not that Portal (which, if you haven’t played it, you should, even if you have no time; it’s short! For NVIDIA card owners the first slice is free). I’ve updated our portal page with a few additions.

New blogs added: Pandemonium, C0DE517E, Gates 381, GameDevKicks, Chris Hecker’s, and Beyond3D. Being a trailing-edge adoption kind of a guy (I’ve kept my Tivo 1 alive by replacing the disk drives three times so far, my cell phone’s $90 from Indonesia via eBay), I ignored blogs for the most part until last year, when I finally learned how simple it was to use an RSS reader (I like Google’s). My philosophy since then is that if a blog has any articles relevant to interactive rendering techniques, I’ll subscribe. Since most graphics blogs don’t post daily, traffic is low, so checking new postings takes a minute or two a day. That said, if I had to pick just one, it would probably be GameDevKicks, since it’s an aggregator, similar to Digg (though the low counts on the digs, excuse me, kicks, means that some things may fall through the cracks). This service means I’m off the hook in noting new articles on Gamasutra on this blog, since these usually get listed there.

The Ogre Forums have been added to the list of developer sites. Ogre is a popular free game development platform. I can’t say I frequent the forums, but on the strength of this article on using the pixel shader to generate the illusion of geometry, there are obviously good things happening there.

The Unity Web Player Hardware Statistics page is similar to the well-known Steam survey, but for machines used by casual gamers.

A site that’s been around a long while and should have been on the portal from the start is the Virtual Terrain Project, a constantly expanding repository of algorithms for, and models of, terrain, vegetation, natural phenomena, etc.

… and that’s it for now.

Drawing Silhouette Edges

With SIGGRAPH, the free release of ShaderX², and the publication of our own 3rd edition, there was much to report, but now things have settled down a bit. The bread-and-butter content of this blog is any new or noteworthy article or demo related to the field. The assumption is that not everyone is tracking all sources of information all the time.

So, if you don’t subscribe to Gamasutra’s free email newsletters, you wouldn’t know of this article: Inking the Cube: Edge Detection with Direct3D 10. It walks through the details of creating geometry for silhouette and crease edges using the geometry shader. To its credit, it also shows the problem with the basic approach: separate silhouette edges can have noticeable join and endcap gaps. One article that addresses this problem:

McGuire, Morgan, and John F. Hughes, “Hardware-Determined Feature Edges,” The 3rd International Symposium on Non-Photorealistic Animation and Rendering (NPAR 2004), pp. 35–47, June 2004.

One minor flaw in the Gamasutra article: the URL to Sarah Tariq’s presentation is broken (I’m writing Gamasutra to ask them to correct it); that presentation is here.
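
For anyone skimming: the core test in this family of techniques is simply that an edge is a silhouette edge when exactly one of its two adjacent triangles faces the viewer (crease edges are instead flagged by dihedral angle). The Gamasutra article does the test in a geometry shader using triangle-adjacency input; here is the same logic as a plain C++ sketch, with minimal vector helpers of my own, purely for illustration:

    struct Vec3 { float x, y, z; };

    static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }
    static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Does the (counterclockwise) triangle v0,v1,v2 face the eye point?
    static bool facesEye(const Vec3& v0, const Vec3& v1, const Vec3& v2, const Vec3& eye) {
        Vec3 n = cross(sub(v1, v0), sub(v2, v0));   // geometric normal
        return dot(n, sub(eye, v0)) > 0.0f;
    }

    // The edge e0-e1, shared by triangles (e0,e1,a) and (e1,e0,b) with consistent
    // winding, is a silhouette edge when exactly one of the two triangles faces
    // the eye.
    bool isSilhouetteEdge(const Vec3& e0, const Vec3& e1,
                          const Vec3& a, const Vec3& b, const Vec3& eye) {
        return facesEye(e0, e1, a, eye) != facesEye(e1, e0, b, eye);
    }

The fins the geometry shader then extrudes from such edges are where the join and endcap gaps come from, which is exactly the problem McGuire and Hughes address.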

Disk-Based Global Illumination in RenderMan

In Section 9.1 of our book we discuss Bunnell’s disk-based approximation for computing dynamic ambient occlusion and indirect lighting, and mention that this technique was used by ILM when performing renders for the Pirates of the Caribbean films.

Recently, more details on this technique have appeared in a RenderMan Technical Memo called Point-Based Approximate Color Bleeding, available on Pixar’s publication page. Pixar has implemented an interesting global illumination algorithm based in part on Bunnell’s disk approximation, which is used for transfer over intermediate distances. Spherical harmonics are used to approximate distant transfer, and ray tracing is used for transfer between nearby points. This technique is now built into Pixar’s RenderMan and has been used in over 12 films to date, including Pixar’s own WALL-E. It is interesting to see a technique originating in real-time rendering used in film production; the opposite direction is far more common. The paper is worth a close read – perhaps someone will close the loop by adapting some of Pixar’s enhancements into new real-time techniques.
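
If you haven’t read Bunnell’s original GPU Gems 2 chapter, the flavor of the disk approximation is easy to convey: each surface sample is treated as an oriented disk with a position, normal, and area, and transfer between elements is evaluated with an analytic disk-to-point form factor rather than by casting rays. Here is a minimal C++ sketch of one commonly used approximation of that form factor (my own paraphrase for illustration; not Pixar’s or Bunnell’s exact formula):

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // An oriented disk standing in for a cluster of nearby surface points.
    struct Disk {
        Vec3  position;
        Vec3  normal;   // unit length
        float area;
    };

    // Approximate form factor from a receiver point (with unit normal) to an
    // emitter disk: proportional to the cosine terms at both ends, falling off
    // with squared distance, with the disk area in the denominator so the value
    // stays bounded as the disk gets very close.
    float diskToPointFormFactor(const Vec3& receiverPos, const Vec3& receiverNormal,
                                const Disk& emitter)
    {
        const float kPi = 3.14159265f;
        Vec3  v  = sub(emitter.position, receiverPos);
        float d2 = dot(v, v);
        if (d2 <= 0.0f) return 0.0f;
        float d  = std::sqrt(d2);
        Vec3  dir  = { v.x / d, v.y / d, v.z / d };               // receiver -> emitter
        float cosR = std::max(0.0f, dot(receiverNormal, dir));    // receiver-side cosine
        float cosE = std::max(0.0f, -dot(emitter.normal, dir));   // emitter-side cosine
        return (emitter.area * cosE * cosR) / (kPi * d2 + emitter.area);
    }

Occlusion or bounced color then comes from summing this factor over the other disks (typically with an extra pass or two to correct double shadowing), which is what makes the method so friendly to GPUs and to point-based pipelines alike.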

Pixar’s publication page is a valuable resource. The papers span a quarter century, and most of them have been very influential in the field. The first seven papers alone gave us the Cook-Torrance BRDF, programmable shaders, distributed ray tracing, image compositing, stochastic sampling, percentage-closer filtering, and the REYES rendering architecture (upon which almost all film production renderers are based). The page also includes many other important papers, as well as SIGGRAPH course notes and RenderMan Technical Memos.

RT’08 Presentation

The RT’08 symposium is a small conference (160 people, vs. 28,000+ at SIGGRAPH) focused on interactive ray tracing research. It was twice as large as last year’s, due to its co-location with SIGGRAPH. There were no big breakthroughs; rather, people were exploring how best to take advantage of new hardware. Of personal interest, there were also talks on optimizing various acceleration structures. I should be putting out an issue of the Ray Tracing News with a summary pretty soon.

One happy event for me was that my keynote went well, after losing sleep over it the night before. The subject was ray tracing, rasterization, and hardware, past and future. It was one of the best talks I’ve ever given, which is somewhat equivalent to “one of the best films Rob Schneider’s ever made”. Well, maybe better than that. It was fun to dig up provocative quotes and engaging images for the talk; that said, the slideset doesn’t include most of my jokes (though it does include a photo of an object that luckily did not burn down my office’s building). I would have liked to go even further in the direction of more images and less text – it’s a time-consuming task to find good images! I did sometimes break my own personal rule of a maximum of 6 lines per slide, but usually for effect, to overwhelm the viewer with text; my favorite is my “buffers slide”, which lists all the named buffers I could find up to the year 2000.

After the talk Larry Gritz and Dan Wexler pointed me at the new art of Pecha Kucha, where a presentation consists mostly of images and runs at a constant rate. This sounds pretty ideal for most talks. I sometimes find myself distracted between looking at the text on a slide while also trying to listen to the speaker. At least I didn’t show any equations, which are absolute death most of the time. An equation is highly dense information, suitable for a talk only if (a) you are noting what the equation looks like, so people will recognize it later when they read your paper, (b) you actually spend the time to slowly explain each term in the equation and let it sink in, or (c) everyone in the audience already knows the equation (in which case it would be better to just say the equation’s name). Even then, you risk losing much of your audience when you put up an equation. Same rule applies to long shaders or pseudocode samples.

I think this phenomenon of too much information occurs more frequently now than in the past because slidesets often take the place of white papers. This happens quite frequently with GDC, XNA Gamefest, SIGGRAPH class talks, and many other venues. It’s nice that people who can’t attend the talk can at least see the slides, but it leads to slidesets having two purposes: one is to enhance the verbal part of the talk, the other is to reiterate the verbal part of the talk. Enhancement favors succinct bullet items, which can be hard to understand when downloaded. Reiteration helps the downloader, but is either overwhelming (read, or listen?) or boring (OMG he’s reading his slides, line by line) during the talk itself. OK, end of rant.

I’m as guilty as the rest (and now wish I had trimmed back the text in my latest talk, now that this phenomenon has dawned on me), but part of the solution is to add at least some further notes (not seen on the screen) with the slides, which I did do with my slideset (the non-PDF version). Better still is to write a blog entry, webpage, or paper about the subject, as PowerPoint is not really a word processor. And of course I probably won’t do so myself for my own talk, as that’s too much time and I don’t think I’d add much to the presentation. But my excuse is that my presentation is more of a high-level “soft” talk than anything with a lot of technical chew.