Tim Sweeney is a cofounder of Epic Games and lead developer behind the graphics engines for the Unreal series of games. Jon Stokes has a meaty interview with him, up on Ars Technica; go read it!
Tim talks about how the GPU has become general enough that we will soon be able to get away from rasterization as the only rendering algorithm. Ten years ago, having to go through a fixed API for all interactive graphics was limiting. Widening it out with programmable shaders gives more flexibility, but at the cost of the complexity of managing the programming environment. Nowadays you’re programming two separate computers that talk to each other. The shift to parallel programming is already a major change in how we need to think about computers, one that hasn’t become a core concept for most of us yet (myself included; I’m doing my best to wrap my head around Intel’s Threading Building Blocks, for example). Doing such programming in a few different languages is a “feature” we’d all love to see go away.
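For the curious, here is roughly what that style of task-parallel code looks like with Threading Building Blocks. This is just a quick sketch, not anything from Tim's interview; the buffer and scale factor are made up for illustration.

```cpp
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include <vector>

// Scale every element of a buffer in parallel. TBB splits the index range
// into chunks and schedules them across its worker threads for us.
void scale_buffer(std::vector<float>& data, float factor)
{
    tbb::parallel_for(
        tbb::blocked_range<size_t>(0, data.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                data[i] *= factor;
        });
}
```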
With Larrabee, CUDA, and compute shaders, the trends of more flexibility continue, though in different flavors. It seems unlikely to me that the pipeline model itself for rendering will fade in popularity any time soon, though rasterization (traditional GPUs) vs. tiling (Larrabee, handhelds) will continue to be a debate. Tim mentions voxel rendering techniques (really, heightfields in the old games) as something that died once the GPU took over. True. Such techniques are making a return on the GPU even today, via relief mapping and adaptive tessellation. We’re also seeing volume rendering by marching along rays; if an algorithm can be refit to work on a GPU, it will find some use.
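To make the ray-marching idea concrete, here is a small CPU-side sketch of accumulating opacity along one ray through a volume. The density() callback, step size, and exit threshold are all placeholders of mine; a real version would of course live in a shader or CUDA kernel and fetch from a 3D texture.

```cpp
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

// March along a ray from 'origin' in unit direction 'dir', sampling a scalar
// density field and compositing front-to-back. 'density' stands in for a 3D
// texture fetch; 'step' is the march step length.
float march_ray(Vec3 origin, Vec3 dir, float step, int max_steps,
                const std::function<float(Vec3)>& density)
{
    float transmittance = 1.0f;  // fraction of light still unblocked
    float result = 0.0f;         // accumulated (monochrome) brightness
    Vec3 p = origin;
    for (int i = 0; i < max_steps && transmittance > 0.01f; ++i) {
        float d = density(p);                      // sample the volume
        float alpha = 1.0f - std::exp(-d * step);  // opacity of this segment
        result += transmittance * alpha;           // front-to-back compositing
        transmittance *= 1.0f - alpha;
        p.x += dir.x * step;
        p.y += dir.y * step;
        p.z += dir.z * step;
    }
    return result;
}
```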
So I agree, the increase in flexibility will be all to the good in letting programmers again do much more than render textured opaque triangles via a Z-buffer really fast and most everything else not-so-fast. Frankly, I believe much of the buzz about interactive ray tracing is more an expression of yearning by us graphics programmers that we could actually program again, vs. calling an API. The April Fool’s Day spoof about ray tracing in DirectX 11 fooled a number of people I know, I believe because they wished it were true. Having hacked my fair share of rendering algorithms, I certainly see the appeal.
I think Tim’s a bit overoptimistic on the time frame in which such changes will occur. First, everyone needs to get this future hardware. Sure, NVIDIA points out there are 70 million CUDA-capable graphics cards out there today, but no one is floating CUDA-based programs as alternative interactive renderers at this point (though NVIDIA’s experiments with CUDA ray tracing are wonderful to see). DirectX 9 graphics cards will be around for years to come. Just as significant, making such techniques part of the normal development toolchain also takes a while. I think of how long normal (dot-product) bump mapping, introduced around 2001, took to become a feature that was used in games: first most GPUs had to support it, then tools had to generate and manage the maps, then artists had to be trained to use the tools, etc.
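For anyone who hasn’t run into the term, the “dot-product” part really is just an N·L evaluation per pixel, with the normal fetched from a texture. Here is a toy sketch of that evaluation; the [0,1]-to-[-1,1] unpacking and the tangent-space light direction are the usual conventions, not anything tied to a particular engine.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Diffuse term for dot-product bump mapping: the surface normal comes from a
// normal map (RGB stored in [0,1]), gets remapped to [-1,1], and is dotted
// with the light direction expressed in the same (tangent) space.
float bump_diffuse(Vec3 packed_normal, Vec3 light_dir_tangent)
{
    Vec3 n = normalize({ packed_normal.x * 2.0f - 1.0f,
                         packed_normal.y * 2.0f - 1.0f,
                         packed_normal.z * 2.0f - 1.0f });
    Vec3 l = normalize(light_dir_tangent);
    float n_dot_l = n.x * l.x + n.y * l.y + n.z * l.z;
    return std::max(n_dot_l, 0.0f);  // back-facing light contributes nothing
}
```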
When the second edition of our book came out, it was a few hundred pages longer than the first. I held out the hope to Tomas that our third edition would be shorter. My logic was that, with programmable shaders coming to the fore, we wouldn’t have to cover all the little variants that were possible, but rather could just present pure algorithms and not worry about the implementation details.
This came true to some extent. For example, we could cut out chunks of text about extremely specific ways to efficiently compute the Fresnel term, or give examples showing how assembly instructions are packed together in a pixel shader. There was now plenty of space on the GPU for shader instructions, so such detail was nonsensical. It would be like a programming languages book listing all the programs that could be written in the language. We still do have to spend time dealing with the vagaries of the APIs, such as the relatively space-inefficient ways in which triangles are fed through the pipeline (e.g., a “compact” representation of a cube must use 24 separate vertices, when all that is really needed is 8 points and 6 normals).
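As one example of the sort of shortcut that used to get pages of discussion, Schlick’s approximation to the Fresnel reflectance now just gets written down and evaluated in a shader; a one-liner sketch:

```cpp
// Schlick's approximation to the Fresnel reflectance. f0 is the reflectance
// at normal incidence; cos_theta is the cosine of the angle between the view
// (or half) vector and the surface normal.
float fresnel_schlick(float f0, float cos_theta)
{
    float m = 1.0f - cos_theta;
    return f0 + (1.0f - f0) * m * m * m * m * m;  // F0 + (1 - F0)(1 - cos)^5
}
```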
Counterbalancing such cuts in text, we found we had many more algorithms to write about. With the increase in abilities in each successive generation of GPUs and APIs, coupled with research into ways to efficiently map algorithms onto new architectures, the book became considerably longer (and certainly heavier, since each illustration’s atoms now needed 3 bytes apiece instead of 1). So, I’m not holding out much hope for a shorter edition next time around; there’s just so much cool stuff that we can now do, and more yet to come.
Incidentally, we had asked Tim for a pithy quote for our new Hardware chapter. He said he didn’t have anything, but passed on one from Billy Zelsnack. This quote was tempting, but instead we used it in our last chapter: “Pretty soon, computers will be fast,” which I just love for some reason. It may sometimes take 20 seconds to open a file folder on Windows today, but I remain hopeful that someday, someday…