I’ve been busy cochairing the papers program for I3D 2009 (he said casually, knowing he’ll probably never get an opportunity to do so again, being a working stiff and not an academic), but I hope to get back to blogging soon. In the meantime, here’s the best conference ever: “Foundations of Digital Games”. I3D is at the end of February in Boston; this one is in April on a cruise ship between Florida and the Bahamas. Why don’t I get invited to help out at conferences like this?
Corrigenda
“Corrigenda” is a classy publisher’s word for “bugs,” but it also refers to the list of fixes for those errors. Morgan McGuire and his students have been reading our book closely, and have found the first two significant errors in the 3rd edition. These errors and their corrections can be found on our corrigenda page.
Donald Knuth sends checks for $2.56 for each error found in his classic (but still being written) series “The Art of Computer Programming”; Sir James Murray, the editor of the first Oxford English Dictionary, was perhaps the first to reward readers in this way. Knuth has an even more lucrative/costly reward doubling scheme for errors found in his software, with the prize now locked at $327.68.
Tomas offered his students a piece of candy for each error they found in our second edition. I like the idea of rewarding readers in some way, beyond naming them on the page. We’ll think of something; suggestions? More important, have you found any bugs, large or small? Please do pass them on, as it helps everyone.
Interesting bits
I’ve been collecting links for the blog via del.icio.us. Let’s go:
Antialiasing thick lines by using textures is an old technique. Areakkusu’s site is nice in that it has good examples and code.
The Level of Detail blog has a great pointer to Iñigo Quilez’s amazing “Slisesix” demo. “Demo” as in “demoscene,” where the program is a mere 4 KB in size. It’s not animated and not real-time, but it shows how distance fields can be used to approximate ambient occlusion. Definitely check out all the links: Alex Evans (of LittleBigPlanet) has a worthwhile talk, and Iñigo’s presentation is even better, with good technical content and real-time programs running inside the slides.
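For the curious, here is roughly how the distance-field AO trick works, as I understand it from Iñigo’s slides: sample the distance field at a few points along the surface normal, and wherever the field value is smaller than the distance you have stepped, something nearby is occluding. A minimal C++ sketch, where the distanceField function, step size, and strength constant are all placeholders:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(const Vec3& a, float s)     { return {a.x * s, a.y * s, a.z * s}; }

// distanceField(p) returns the distance from p to the nearest surface;
// assumed to be defined elsewhere (it is the scene's distance function).
float distanceField(const Vec3& p);

// Approximate ambient occlusion at surface point p with normal n: step
// along the normal, and wherever the distance field value is smaller than
// the distance stepped, something nearby is blocking that direction.
float ambientOcclusion(const Vec3& p, const Vec3& n)
{
    const int   numSamples = 5;
    const float stepSize   = 0.1f;  // scene-dependent
    const float strength   = 1.0f;  // scene-dependent

    float occlusion = 0.0f;
    float weight    = 1.0f;
    for (int i = 1; i <= numSamples; ++i) {
        float stepped = i * stepSize;
        float actual  = distanceField(add(p, scale(n, stepped)));
        occlusion    += weight * (stepped - actual);
        weight       *= 0.5f;  // nearby samples matter more than distant ones
    }
    return std::clamp(1.0f - strength * occlusion, 0.0f, 1.0f);
}
```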
I’d rather avoid logrolling in this blog, but did want to mention enjoying Christer Ericson’s post on graphical shader systems. I have to agree that such systems are bad for creating efficient shaders, but these tools do at least allow a wider range of people to experiment and explore. There are a lot of worthwhile followup comments on this thread.
Oogst has a clever trick he calls interior mapping, for rendering the walls, floors, and ceilings of buildings seen from the outside. Define a texture to be used for each interior element, and have the pixel shader compute from the eye direction what would be seen inside. There’s no actual geometry; it’s all just computing the ray intersection using (wait for it) a floor function. Humus has demo code available for this technique, using DirectX 10. Admittedly, the various tiles repeat and there are other limits, but actual interiors are vastly superior to the usual dirty or reflective windows currently used in games, and no extra geometry is added.
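To make the floor-function bit concrete, here is a stripped-down sketch of just the ceiling/floor part of the idea, written CPU-side in C++ rather than as a shader. This is my reading of the technique, not Oogst’s or Humus’s actual code, and all the names are illustrative:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// The floor/ceiling part of interior mapping, in the building's local space,
// where rooms repeat every roomHeight units along y. The wall planes along
// x and z are handled the same way; the real shader takes the closest of
// all three intersections and samples the matching ceiling, floor, or wall
// texture.
float interiorCeilingHit(const Vec3& enterPos,  // where the view ray enters the facade
                         const Vec3& viewDir,   // normalized view direction
                         float roomHeight)
{
    // Pick the next ceiling plane above (or floor plane below) the entry
    // point, depending on whether the ray heads up or down. This is where
    // the floor()/ceil() trick comes in: the planes sit at integer
    // multiples of roomHeight.
    float planeY = (viewDir.y > 0.0f)
        ? std::ceil(enterPos.y / roomHeight) * roomHeight
        : std::floor(enterPos.y / roomHeight) * roomHeight;

    // Ray/plane intersection distance along viewDir; the hit point's x and z
    // become the texture coordinates for the ceiling or floor texture.
    return (planeY - enterPos.y) / viewDir.y;
}
```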
Bavoil and Sainz have a new approach for Screen-Space Ambient Occlusion, using a more elaborate form of horizon mapping: http://developer.nvidia.com/object/siggraph-2008-HBAO.html. Code’s available in NVIDIA’s DX 10 SDK.
If you missed Jon Olick’s talk at SIGGRAPH about voxel octree representation, Timothy Farrar has a summary. Personally, I think Jon’s work is very much that: research, not something that is immediately practical. But I love seeing how changing capabilities and increased flexibility can lead to different approaches.
On Amazon: 4 graphics books for the price of 2, minus the papery bits. Pharr and Humphreys’ “Physically Based Rendering” (PBR) and Luebke’s “Level of Detail for 3D Graphics” are certainly worthwhile; the other two I don’t know about (though they look worthwhile and are well rated). I don’t know a thing about the electronic format used; I’m guessing the books are DRM’ed, not naked PDFs. Searchable is certainly nice. While it’s too bad you can’t just buy the ones you want (I smell a marketing department having some “what can we get them to pay for in which bundle?” meetings, given the negligible physical cost), I did notice something on Amazon I hadn’t seen before, offered for each book except PBR: “Upgrade this book for $18.39 more, and you can read, search, and annotate every page online.” You can also upgrade books you’ve previously purchased on Amazon.
On Gamasutra, an article summarizing DirectX 11. I liked it: to the point, and with some useful figures.
Every once in a while someone will say he has a new graphics rendering method that’s awesome, but won’t explain it for some reason (usually involving money or fame). Here’s one, from Sunfish Studio: no micropolygons, no point sampling. OK, so what does that leave? Voxels? If anyone knows what this is about, please comment; I’m curious.
GameDeveloperTools.com is a new site that tracks news and has users rate books. To be honest, a lot more voting needs to happen to make the ratings useful; I’d stick with Amazon for now. The main draw is that you can browse specific categories, which are a bit better organized than Amazon’s somewhat random sorting of graphics books (e.g., our book is in three categories on Amazon, competing against artists’ books on using mental ray and RenderMan).
Finally, this, well, this is not interactive graphics, but is just so cool: parking signs understandable from only certain locations.
Latency
Herb Sutter’s site has some interesting material about CPU architectures. His article “The Free Lunch is Over” is a bit dated (everyone should know by now that multicore is upon us), but it does a good job of pounding home that concurrency is the way of the future (i.e., like, now). It also has some memorable lines, like “Cache is King” and “Andy Giveth, and Bill Taketh Away.” I contemplate the latter every time I open up a Word document and it takes 25 seconds to appear.
What I noticed today, via Eric Preisz’s new indexbuffer site, was that Herb has a newer presentation available, “Machine Architecture: Things Your Programming Language Never Told You.” It covers the topic of latency, and how the CPU attempts to hide it, in depth. It’s worth a look if you’re at all interested in the topic; there’s material here that I hadn’t seen presented before. I must admit I skimmed over the odd things that compilers might do to code, but overall I found it worthwhile.
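If you want to feel the “Cache is King” point in your bones, here is a toy C++ experiment (not from Herb’s talk, just something to try yourself): it sums the same array twice, once sequentially and once with a page-sized stride, and the strided version is typically several times slower because nearly every access misses the cache.

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Walking the same array sequentially vs. with a large stride touches the
// same number of elements, but the strided walk defeats the cache and the
// prefetcher. Numbers will vary by machine.
int main()
{
    const size_t count  = 1 << 24;             // 16M ints, well beyond cache size
    const size_t stride = 4096 / sizeof(int);  // roughly one page per step
    std::vector<int> data(count, 1);

    auto timeSum = [&](size_t step) {
        auto start = std::chrono::steady_clock::now();
        long long sum = 0;
        for (size_t offset = 0; offset < step; ++offset)
            for (size_t i = offset; i < count; i += step)
                sum += data[i];
        auto stop = std::chrono::steady_clock::now();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
        std::printf("step %zu: sum=%lld, %lld ms\n", step, sum, (long long)ms);
    };

    timeSum(1);       // sequential: cache- and prefetch-friendly
    timeSum(stride);  // strided: nearly every access is a cache miss
    return 0;
}
```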
New stuff from NVIDIA
NVIDIA have finally finished posting all of the chapters of GPU Gems 2 online (the first GPU Gems is available as well). This is a great resource with many useful and interesting articles. NVIDIA have also been posting many of the presentations from their NVISION conference, which can be found on their news page.
Face and Skin Papers at SIGGRAPH Asia 2008
Ke-Sen Huang has recently added three papers relating to human face and skin rendering to his excellent list of SIGGRAPH Asia 2008 papers. Human faces are among the hardest objects to render realistically, since people are used to examining faces very closely.
The first two papers focus on modeling the effect of the layers of human skin on reflectance. The authors of the first paper, “Practical Modeling and Acquisition of Layered Facial Reflectance,” work in Paul Debevec’s group at the USC Institute for Creative Technologies, which has done a lot of influential work on acquiring reflectance from human faces (the results of which are now being offered as a commercial product). Previous work used polarization to separate reflectance into specular and diffuse; here diffuse is further separated into single scattering, shallow multiple scattering, and deep multiple scattering (using structured light). Specular and diffuse albedo are captured per-pixel. Unfortunately, specular roughness (lobe width) is only captured for each of several regions and not per-pixel, but since normals are captured at very high resolution, they could presumably be used to generate per-pixel roughness values, which could be useful when rendering at lower resolutions, as we discuss in Section 7.8.1 of Real-Time Rendering. The scattering model is based on the dipole approximation of subsurface scattering introduced by Henrik Wann Jensen and others. NVIDIA have shown real-time rendering of such models using multiple texture-space diffusion passes.
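For reference, the dipole model Jensen and colleagues introduced boils down to a closed-form diffusion profile R_d(r): the fraction of light re-emitted at distance r from where it entered the surface. Here is a straightforward C++ transcription of the published formula (per color channel; this is not code from either paper, and the parameters you would feed it are measured per material):

```cpp
#include <cmath>

// The dipole diffusion profile R_d(r) of Jensen et al., which the layered
// models above build on. sigmaSPrime is the reduced scattering coefficient,
// sigmaA the absorption coefficient, eta the relative index of refraction,
// and r the distance along the surface from the entry point.
float dipoleDiffuseReflectance(float r, float sigmaSPrime, float sigmaA, float eta)
{
    const float pi = 3.14159265f;

    float sigmaTPrime = sigmaSPrime + sigmaA;                    // reduced extinction
    float alphaPrime  = sigmaSPrime / sigmaTPrime;               // reduced albedo
    float sigmaTr     = std::sqrt(3.0f * sigmaA * sigmaTPrime);  // effective transport coefficient

    // Diffuse Fresnel reflectance approximation and the boundary term A.
    float Fdr = -1.440f / (eta * eta) + 0.710f / eta + 0.668f + 0.0636f * eta;
    float A   = (1.0f + Fdr) / (1.0f - Fdr);

    float zr = 1.0f / sigmaTPrime;             // depth of the real (positive) source
    float zv = zr * (1.0f + 4.0f / 3.0f * A);  // height of the virtual (negative) source

    float dr = std::sqrt(r * r + zr * zr);     // distance to the real source
    float dv = std::sqrt(r * r + zv * zv);     // distance to the virtual source

    float real = zr * (sigmaTr * dr + 1.0f) * std::exp(-sigmaTr * dr) / (dr * dr * dr);
    float virt = zv * (sigmaTr * dv + 1.0f) * std::exp(-sigmaTr * dv) / (dv * dv * dv);

    return alphaPrime / (4.0f * pi) * (real + virt);
}
```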
The authors of the second paper, “A Layered, Heterogeneous Reflectance Model for Acquiring and Rendering Human Skin,” have also written several important papers on skin reflectance, focused more on simulating the physical processes from first principles. They model human skin as a collection of heterogeneous scattering layers separated by infinitesimally thin heterogeneous absorbing layers. They design their model for efficient GPU evaluation, similar to NVIDIA’s approach mentioned above (one of this paper’s authors also worked on the NVIDIA skin demo). “Efficient” here is a relative term, since their model is too complex to be real-time on current hardware, and as presented it is probably too complicated for game use. However, ideas gleaned from this paper are likely to be useful for skin rendering in games. The authors also present a protocol for measuring the parameters of their model.
The third paper, “Facial Performance Synthesis Using Deformation-Driven Polynomial Displacement Maps,” is also from Debevec’s USC group and focuses on animation rather than reflectance. They use the same facial capture setup, but with different software that captures animated facial deformations instead of reflectance (this too has been turned into a commercial product). This paper is interesting because it extends previous coarse/fine deformation approaches to multiple scales, and it uses a novel method to relate the different scales to each other. They use a polynomial displacement map, which has the same form as Polynomial Texture Mapping (an interesting technique in its own right) but is used for deformation rather than shading. This method also bears some resemblance to the wrinkle-map approach AMD used for their Ruby Whiteout demo, which they presented at SIGGRAPH 2007.
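Since the paper leans on the Polynomial Texture Mapping form, it is worth spelling out what that form is: six coefficients stored per texel and a biquadratic polynomial evaluated in two input parameters (the projected light direction for PTMs; coarse deformation parameters, per my reading, for the displacement maps). A tiny C++ sketch of just the evaluation:

```cpp
// The biquadratic form used by Polynomial Texture Maps and, per the paper,
// reused by polynomial displacement maps. Six coefficients are stored per
// texel; the interesting (and paper-specific) parts are how the coefficient
// textures are fit and what the two inputs u and v mean.
struct BiquadraticCoeffs { float a0, a1, a2, a3, a4, a5; };

float evalBiquadratic(const BiquadraticCoeffs& c, float u, float v)
{
    return c.a0 * u * u + c.a1 * v * v + c.a2 * u * v
         + c.a3 * u     + c.a4 * v     + c.a5;
}
```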
Tim Sweeney Interview
Tim Sweeney is a cofounder of Epic Games and lead developer behind the graphics engines for the Unreal series of games. Jon Stokes has a meaty interview with him, up on Ars Technica; go read it!
Tim talks about how the GPU has become general enough that we will soon be able to get away from rasterization as the only rendering algorithm. Ten years ago, dealing with a fixed API to do all interactive graphics was limiting. Widening it out with programmable shaders gives more flexibility, but at the cost of the complexity of managing the programming environment: nowadays you’re programming two separate computers that talk to each other. The shift to parallel programming is already a major change in how we need to think about computers, one that hasn’t become a core concept for most of us yet (myself included; I’m doing my best to wrap my head around Intel’s Threading Building Blocks, for example). Doing such programming in a few different languages is a “feature” we’d all love to see go away.
With Larrabee, CUDA, and compute shaders, the trends of more flexibility continue, though in different flavors. It seems unlikely to me that the pipeline model itself for rendering will fade in popularity any time soon, though rasterization (traditional GPUs) vs. tiling (Larrabee, handhelds) will continue to be a debate. Tim mentions voxel rendering techniques (really, heightfield, in the old games) as something that died once the GPU took over. True. Such techniques are making a return on the GPU even today, via relief mapping and adaptive tessellation. We’re also seeing volume rendering by marching along rays; if an algorithm can be refit to work on a GPU, it will find some use.
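As an example of the “marching along rays” style of technique, here is the bare-bones linear-search step of the kind relief mapping uses, written CPU-style in C++ for clarity. Real shader versions refine the hit with a binary search and handle silhouettes; heightAt() stands in for a texture fetch:

```cpp
// March a view ray through a heightfield stored in texture space until the
// ray dips below the stored height, then report the hit texcoords.
float heightAt(float u, float v);  // returns height in [0, 1], defined elsewhere

struct Hit { float u, v; bool found; };

Hit marchHeightfield(float u, float v,              // entry texcoords at the top of the volume
                     float du, float dv, float dh,  // per-step ray motion in texture space (dh < 0)
                     int numSteps)
{
    float h = 1.0f;  // the ray starts at the top of the height volume
    for (int i = 0; i < numSteps; ++i) {
        u += du; v += dv; h += dh;
        if (heightAt(u, v) >= h)  // the ray has gone below the surface
            return {u, v, true};
    }
    return {u, v, false};
}
```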
So I agree, the increase in flexibility will be all to the good in letting programmers again do much more than render textured opaque triangles via a Z-buffer really fast and most everything else not-so-fast. Frankly, I believe much of the buzz about interactive ray tracing is more an expression of yearning by us graphics programmers that we could actually program again, vs. calling an API. The April Fool’s Day spoof about ray tracing in DirectX 11 fooled a number of people I know, I believe because they wished it were true. Having hacked my fair share of rendering algorithms, I certainly see the appeal.
I think Tim’s a bit overoptimistic on the time frame in which such changes will occur. First, everyone needs to get this future hardware. Sure, NVIDIA points out there are 70 million CUDA-capable graphics cards out there today, but no one is floating CUDA-based programs as alternative interactive renderers at this point (though NVIDIA’s experiments with CUDA ray tracing are wonderful to see). DirectX 9 graphics cards will be around for years to come. Just as significant, making such techniques part of the normal development toolchain also takes a while. I think of how long normal (dot-product) bump mapping, introduced around 2001, took to become a feature that was used in games: first most GPUs had to support it, then tools had to generate and manage the maps, then artists had to be trained to use the tools, and so on.
When the second edition of our book came out, it was a few hundred pages longer than the first. I held out the hope to Tomas that our third edition would be shorter. My logic was that, with programmable shaders coming to the fore, we wouldn’t have to cover all the little variants that were possible, but rather could just present pure algorithms and not worry about the implementation details.
This came true to some extent. For example, we could cut out chunks of text about extremely specific ways to efficiently compute the Fresnel term, or give examples showing how assembly instructions are packed together in a pixel shader. There was now plenty of space on the GPU for shader instructions, so such detail was nonsensical. It would be like a programming languages book listing all the programs that could be written in the language. We still do have to spend time dealing with the vagaries of the APIs, such as the relatively space-inefficient ways in which triangles are fed through the pipeline (e.g., a “compact” representation of a cube must use 24 separate vertices, when all that is really needed are 8 points and 6 normals).
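To spell out the cube example in code terms, here is roughly what the vertex data looks like (illustrative only):

```cpp
// Why a "compact" cube still takes 24 vertices in a standard vertex stream:
// each corner is shared by three faces with three different normals, and the
// API wants one normal per vertex, so every corner has to be duplicated once
// per face (8 corners x 3 faces = 24 vertices), even though only 8 positions
// and 6 normals are actually distinct.
struct Vertex { float position[3]; float normal[3]; };

// 8 distinct corner positions of a unit cube ...
const float cubeCorners[8][3] = {
    {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},
    {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1},
};
// ... and only 6 distinct face normals ...
const float cubeNormals[6][3] = {
    {0,0,-1}, {0,0,1}, {0,-1,0}, {0,1,0}, {-1,0,0}, {1,0,0},
};
// ... yet the vertex buffer the pipeline actually consumes needs
// 6 faces x 4 vertices = 24 Vertex entries (plus an index buffer),
// because position and normal are welded together per vertex.
Vertex cubeVertexBuffer[24];
```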
Counterbalancing such cuts in text, we found we had many more algorithms to write about. With the increase in abilities of each successive generation of GPUs and APIs, coupled with research into ways of efficiently mapping algorithms onto the new architectures, the book became considerably longer (and certainly heavier, since each illustration’s atoms now needed 3 bytes instead of 1). So I’m not holding out much hope for a shorter edition next time around; there’s just so much cool stuff that we can now do, and more yet to come.
Incidentally, we had asked Tim for a pithy quote for our new Hardware chapter. He said he didn’t have anything, but passed on one from Billy Zelsnack. This quote was tempting, but we instead used it in our last chapter: “Pretty soon, computers will be fast,” which I just love for some reason. It may sometimes take 20 seconds to open a file folder on Windows today, but I remain hopeful that someday, someday…
More on Disk-Based Global Illumination
Bunnell’s disk-based global illumination algorithm has been discussed on this blog before, and it is an interesting example of GPU-oriented global illumination. I just read (on the Level of Detail blog) that Bunnell’s company, Fantasy Lab, is now selling an SDK incorporating an advanced version of this algorithm.
GPU REYES Implementation
Pixar’s RenderMan rendering package is based on the REYES rendering pipeline (the acronym stands for the humble phrase “Renders Everything You Ever Saw”). Most film studios use Pixar’s RenderMan, and many others use renderers operating on similar principles. A close reading of the original REYES paper shows a pipeline that was designed to be extremely efficient (it had to be, to run on 1980s hardware!) and to produce very high-quality images. I have long thought that this pipeline is a good fit for graphics hardware (given some minor changes or an increase in generality), and it is perhaps a better fit to today’s dense scenes than the traditional triangle pipeline. A paper to be published at SIGGRAPH Asia this year describes a GPU implementation of the subdivision stages of the REYES pipeline, a key step toward a full GPU REYES implementation. The authors use CUDA for the subdivision stages, and then pass the resulting micropolygons to a traditional rendering pass. Although combining CUDA and traditional rendering in this manner introduces performance problems, newer APIs such as DX11 compute shaders have been designed to perform well under such conditions. Of course, this algorithm would be a great fit for Larrabee.
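For those who haven’t read the REYES paper, the heart of it is a simple bound-and-split loop: test whether a patch’s screen-space bound is small enough; if so, dice it into a grid of micropolygons, otherwise split it and reconsider the pieces. A skeletal C++ version with the geometric details stubbed out (none of this is the SIGGRAPH Asia paper’s code) looks something like this:

```cpp
#include <vector>

// Skeletal REYES bound-and-split. The irregular, data-dependent nature of
// this recursion is exactly what makes mapping it onto CUDA interesting.
struct Patch { /* control points, etc. */ };
struct MicropolygonGrid { /* diced vertices ready for shading and sampling */ };

float screenBoundSize(const Patch& p);                 // projected bound, in pixels
void  split(const Patch& p, std::vector<Patch>& out);  // e.g., split in half along u or v
MicropolygonGrid dice(const Patch& p);                 // tessellate into roughly pixel-sized quads

void boundAndSplit(const Patch& root, float maxGridSize,
                   std::vector<MicropolygonGrid>& grids)
{
    std::vector<Patch> work = { root };
    while (!work.empty()) {
        Patch p = work.back();
        work.pop_back();
        if (screenBoundSize(p) <= maxGridSize) {
            grids.push_back(dice(p));   // small enough: dice into micropolygons
        } else {
            split(p, work);             // too big: split and reconsider the pieces
        }
    }
}
```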
Anyone interested in the implementation details of the REYES algorithm should also read “How PhotoRealistic RenderMan Works,” which is available as a chapter in the book Advanced RenderMan and in the SIGGRAPH 2000 RenderMan course notes.
I found this paper on Ke-Sen Huang’s SIGGRAPH Asia preprint page. Ke-Sen performs an invaluable service to the community by providing links to preprints of papers from all the major graphics-related conferences. This preprint page is all the more impressive when you realize that SIGGRAPH Asia has not even published a list of accepted papers yet!
At long last, in stock
Lately I’ve been looking at Amazon’s listing of our book daily, to see if it’s in stock. Finally, today, it is, for the first time ever, a mere 40 days after its release. This is not our publisher’s fault at all (A.K. Peters rules, OK?), and the book’s not that popular (AFAIK); it evidently just takes a while for the delivered books to percolate out into Amazon’s system. Amazon under-ordered, so I believe that by the time the books they first ordered made it to the distribution centers, they were already sold out, making the book out of stock again. Lather, rinse, repeat. So maybe I should be sad that it’s now in stock.
Anyway, the amusing part of visiting each day has been looking at the discount given on the book. It’s nice to see a discount at all, as Amazon didn’t discount our previous book for the first few years. With the current 28% discount, it means our new edition is effectively $5 less than the previous edition’s original price. Which cheers me up, as I like to imagine that students are saving money; my older son will be in college next year, and any royalties I make from our book will effectively get recycled over the next four years in buying his texts. His one book for a summer course this year was a black & white softbound book, 567 pages, and cost an astounding (to me) $115, and that was “discounted” from $128.95. I’m now encouraging my younger son to skip college and go into the lucrative field of transistor repair.
Amazon’s discount has varied like a random walk among four values: 0%, 22%, 28%, and 33%. Originally, in July, it was list price, then the discount was set at 33% (so Amazon was paying more for the book than they were selling it for), then back to normal, then 33%. Around August 14th I started checking once a week or so and also looking at Associates sales (a program I recommend if you’re a book author, as it’s found money – it pays for this website). Again the book went back to no discount, then on August 20th started at 0%, went to 22% off, then 33% off, all in the same day. The next day there was no discount, then the day after it went back to 33%. August 28, when I checked again, it was at 22%, and this discount held through the end of the month. On September 1st it went up to 28% off, and there it’s been for a whole 9 days.
The oddest bit was that, in searching around for prices (Amazon’s is indeed the best, at least as of today), I noticed that the first edition of our book, from 1999, sells used for twice as much or more than our new book. Funny world.
By the way, if you are looking to write a book and want to understand royalties and going rates a little bit better, see my old article on this topic. Really, it’s not my article, it’s a collection of responses from authors I know. Some of it’s a bit confrontational and might make you a little paranoid, but I think it’s worth a read. If you’re writing technical books to get rich, you’re fooling yourself, but on the other hand there’s no reason to let someone take advantage of you. My favorite author joke, from Michael Cohen via John Wallace, is that there are dozens of dollars to be made writing a book, dozens I tell you. It can be a bit better than that if you’re lucky, but still comes out to about minimum wage when divided by the time spent. But for me it’s a lot more fun and educational work than flipping burgers, and the money is not why we wrote our book. We did it for the wild parties and glamorous lifestyle.
Update: heh, that didn’t last long. I wrote this entry Sept. 9th. As of the 10th, the book is (a) out of stock again and (b) down to a 2% discount. 2%?! Truly obscure.