Author Archives: Eric

Ray Tracing News v. 21 n. 1 is out

I’ve put out the Ray Tracing News for more than 20 years now. New issues come maybe once a year, but there you have it. There’s a little overlap with this blog, but not that much. Find the latest issue here. Now that I’m finally done with this issue I can imagine blogging again (it wasn’t just I3D that was holding me back).

Best Conference Ever

I’ve been busy cochairing the papers program for I3D 2009 (he said casually, knowing he’ll probably never get an opportunity to do so again, being a working stiff and not an academic), but I hope to get back to blogging soon. In the meantime, here’s the best conference ever: “Foundations of Digital Games”. I3D’s at the end of February in Boston, vs. April on a cruise ship between Florida and the Bahamas. Why don’t I get invited to help out at conferences like this?

Corrigenda

“Corrigenda” is a classy publisher’s word for “bugs,” but it also means listing fixes for these errors. Morgan McGuire and his students have been reading our book closely, and have found the first two significant errors in the 3rd edition. These errors and their corrections can be found on our corrigenda page.

Donald Knuth sends checks for $2.56 for each error found in his classic (but still being written) series “The Art of Computer Programming”; Sir James Murray, the editor of the first Oxford English Dictionary, was perhaps the first to reward readers in this way. Knuth has an even more lucrative/costly reward doubling scheme for errors found in his software, with the prize now locked at $327.68.

Tomas offered his students a piece of candy for each error they found in our second edition. I like the idea of rewarding readers in some way, beyond naming them on the page. We’ll think of something; suggestions? More important, have you found any bugs, large or small? Please do pass them on, as it helps everyone.

Interesting bits

I’ve been collecting links for the blog via del.icio.us. Let’s go:

Antialiasing thick lines by using textures is an old technique. Areakkusu’s site is nice in that it has good examples and code.
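
Since the trick may not be obvious from the description alone, here’s a minimal C++ sketch of the setup (my own illustration, not Areakkusu’s code): expand each segment into a quad wide enough to cover the line plus a filter radius, and give each vertex a coordinate across the width; the pixel shader then uses that coordinate to look up a prefiltered 1D falloff texture, which provides the smooth edge.

    #include <cmath>

    struct Vec2 { float x, y; };

    struct LineVertex {
        Vec2  position;  // screen-space position
        float u;         // across-width coordinate, -1..1 at the quad edges
    };

    // Fills quad[0..3] (a triangle strip) covering the segment a-b, widened by
    // halfWidth plus filterRadius so the falloff texture has room to fade out.
    void BuildThickLineQuad(Vec2 a, Vec2 b, float halfWidth, float filterRadius,
                            LineVertex quad[4])
    {
        float dx = b.x - a.x, dy = b.y - a.y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len == 0.0f) len = 1.0f;             // guard against a degenerate segment
        float extent = halfWidth + filterRadius; // half-width of the expanded quad
        float nx = -dy / len * extent;           // normal to the segment,
        float ny =  dx / len * extent;           // scaled to the full extent

        quad[0] = { { a.x + nx, a.y + ny }, +1.0f };
        quad[1] = { { a.x - nx, a.y - ny }, -1.0f };
        quad[2] = { { b.x + nx, b.y + ny }, +1.0f };
        quad[3] = { { b.x - nx, b.y - ny }, -1.0f };
        // The 1D texture sampled with u holds the filtered line cross-section;
        // texels near |u| = 1 are transparent, giving antialiased edges without
        // multisampling.
    }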

The Level of Detail blog has a great pointer to Iñigo Quilez’s amazing “Slisesix” demo. “Demo” as in “demoscene,” where his program is a mere 4 KB in size. It’s not animated and not real-time, but it shows how distance fields can be used for ambient occlusion approximation. Definitely check out all the links: Alex Evans (of LittleBIGPlanet) has a worthwhile talk, and Iñigo’s presentation is even better: good technical content and real-time programs running inside the slides.
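
To give a feel for how a distance field can drive ambient occlusion, here’s a rough C++ sketch of the kind of estimate Iñigo describes (my paraphrase, with an assumed scene distance function, not code from the demo): step outward along the surface normal and compare how far you stepped with how much empty space the field reports; any shortfall means geometry is closing in, so darken.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Stand-in scene: signed distance to a unit sphere at the origin. A real
    // renderer would use its full scene distance function here.
    float sdScene(const Vec3 &p)
    {
        return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
    }

    // Approximate ambient occlusion at point p with surface normal n.
    float AmbientOcclusion(const Vec3 &p, const Vec3 &n,
                           float stepSize = 0.1f, int numSamples = 5)
    {
        float occlusion = 0.0f;
        float weight    = 1.0f;                      // nearer samples count more
        for (int i = 1; i <= numSamples; ++i) {
            float d = i * stepSize;                  // distance stepped along n
            Vec3  q = { p.x + n.x * d, p.y + n.y * d, p.z + n.z * d };
            // If the field reports less free space than d, something is nearby.
            occlusion += weight * std::max(0.0f, d - sdScene(q));
            weight    *= 0.5f;
        }
        return std::max(0.0f, 1.0f - occlusion);     // 1 = open, 0 = fully occluded
    }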

I’d rather avoid logrolling in this blog, but did want to mention enjoying Christer Ericson’s post on graphical shader systems. I have to agree that such systems are bad for creating efficient shaders, but these tools do at least allow a wider range of people to experiment and explore. There are a lot of worthwhile followup comments on this thread.

Oogst has a clever trick he calls interior mapping, for rendering the walls, floors, and ceilings of buildings seen from the outside. Define a texture to be used for each interior element, and have the pixel shader compute from the eye direction what would be seen inside. There’s no actual interior geometry; it’s all just computing the ray intersection using (wait for it) a floor function. Humus has demo code available for this technique, using DirectX 10. Admittedly, the various tiles repeat and there are other limits, but actual interiors are vastly superior to the usual dirty or reflective windows currently used in games, with no extra geometry added.
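
Here is roughly the per-pixel math, written as a CPU-side C++ sketch of my reading of the idea rather than Oogst’s or Humus’s shader code; the axis-aligned rooms and the roomSize spacing are assumptions for illustration. Given the facade point and the direction from the eye, the floor function finds the next ceiling/floor, side-wall, and back-wall planes the ray would hit, and the nearest one decides what interior surface is seen.

    #include <cfloat>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Distance along dir from pos to the next plane perpendicular to this axis,
    // with planes spaced 'spacing' apart at integer multiples.
    static float DistanceToNextPlane(float pos, float dir, float spacing)
    {
        if (dir == 0.0f) return FLT_MAX;                     // ray parallel to the planes
        float plane = (dir > 0.0f)
            ? (std::floor(pos / spacing) + 1.0f) * spacing   // next plane ahead
            :  std::floor(pos / spacing) * spacing;          // next plane behind
        return (plane - pos) / dir;
    }

    // Point inside the building seen through the facade at 'pos' along 'dir'.
    Vec3 InteriorHit(const Vec3 &pos, const Vec3 &dir, float roomSize)
    {
        float tx = DistanceToNextPlane(pos.x, dir.x, roomSize);  // side walls
        float ty = DistanceToNextPlane(pos.y, dir.y, roomSize);  // floors/ceilings
        float tz = DistanceToNextPlane(pos.z, dir.z, roomSize);  // back walls
        float t  = std::fmin(tx, std::fmin(ty, tz));             // nearest plane wins
        // Which of tx/ty/tz won selects the interior texture (wall, floor, or
        // ceiling); the hit point's other two coordinates become the UVs.
        return { pos.x + dir.x * t, pos.y + dir.y * t, pos.z + dir.z * t };
    }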

Bavoil and Sainz have a new approach for Screen-Space Ambient Occlusion, using a more elaborate form of horizon mapping: http://developer.nvidia.com/object/siggraph-2008-HBAO.html. Code’s available in NVIDIA’s DX 10 SDK.

If you missed Jon Olick’s talk at SIGGRAPH about voxel octree representation, Timothy Farrar has a summary. Personally, I think Jon’s work is very much that: research, not something that is immediately practical. But I love seeing how changing capabilities and increased flexibility can lead to different approaches.

On Amazon: 4 graphics books for the price of 2, minus the papery bits. Pharr and Humphreys’ “Physically Based Rendering” (PBR) and Luebke’s “Level of Detail for 3D Graphics” are certainly worthwhile; the other two I don’t know about (though they look worthwhile and are well-rated). I don’t know a thing about the electronic media used; I’m guessing the books are DRM’ed, not naked PDFs. Being searchable is certainly nice. While it’s too bad you can’t just buy the ones you want (I smell a marketing department having some “what can we get them to pay for what bundle?” meetings, given the negligible physical cost), I did notice an interesting thing on Amazon I hadn’t seen before, for each book except PBR: “Upgrade this book for $18.39 more, and you can read, search, and annotate every page online.” You can also upgrade books you’ve previously purchased on Amazon.

On Gamasutra, an article summarizing DirectX 11. I liked it: to the point, and with some useful figures.

Every once in a while someone will say he has a new graphics rendering method that’s awesome, but won’t explain it for some reason (usually involving money or fame). Here’s one, from Sunfish Studio: no micropolygons, no point sampling. OK, so that leaves, what, voxels? If anyone knows what this is about, please comment; I’m curious.

GameDeveloperTools.com is a new site that tracks news and has users rate books. To be honest, a lot more voting needs to happen to make the ratings useful; I’d stick with Amazon for now. The main use is that you can look at specific categories, which are a bit better than Amazon’s somewhat random sorting of graphics books (e.g., our book is in three categories on Amazon, competing against artists’ books on using mental ray and RenderMan).

Finally, this, well, this is not interactive graphics, but is just so cool: parking signs understandable from only certain locations.

Latency

Herb Sutter’s site has some interesting material about CPU architectures. His article “The Free Lunch is Over” is a bit dated (everyone should know by now that multicore is upon us), but it does a good job pounding home that concurrency is the way of the future (i.e., like, now). It also has some memorable lines, like “Cache is King” and “Andy Giveth, and Bill Taketh Away.” I contemplate the latter every time I open up a Word document and it takes 25 seconds to appear.

What I noticed today, via Eric Preisz’s new indexbuffer site, was that Herb has a newer presentation available, “Machine Architecture: Things Your Programming Language Never Told You.” This covers in depth the topic of latency and how the CPU attempts to hide it. It’s worth a look if you’re at all interested in the topic; there’s material here that I hadn’t seen presented before. I must admit I skimmed over the odd things that compilers might do to code, but overall I found it worthwhile.

Tim Sweeney Interview

Tim Sweeney is a cofounder of Epic Games and lead developer behind the graphics engines for the Unreal series of games. Jon Stokes has a meaty interview with him, up on Ars Technica; go read it!

Fourth edition might be C++?

Tim talks about how the GPU has become general enough that we will soon be able to get away from rasterization as the only rendering algorithm. Ten years ago, dealing with an API to do all interactive graphics was limiting. Widening it out with programmable shaders gives more flexibility, but at the cost of added complexity in managing the programming environment. Nowadays you’re programming two separate computers that talk to each other. The shift to parallel programming is already a major change in how we need to think about computers, one that hasn’t become a core concept for most of us yet (myself included; I’m doing my best to wrap my head around Intel’s Threading Building Blocks, for example). Doing such programming in a few different languages is a “feature” we’d all love to see go away.

With Larrabee, CUDA, and compute shaders, the trends of more flexibility continue, though in different flavors. It seems unlikely to me that the pipeline model itself for rendering will fade in popularity any time soon, though rasterization (traditional GPUs) vs. tiling (Larrabee, handhelds) will continue to be a debate. Tim mentions voxel rendering techniques (really, heightfield, in the old games) as something that died once the GPU took over. True. Such techniques are making a return on the GPU even today, via relief mapping and adaptive tessellation. We’re also seeing volume rendering by marching along rays; if an algorithm can be refit to work on a GPU, it will find some use.

So I agree, the increase in flexibility will be all to the good in letting programmers again do much more than render textured opaque triangles via a Z-buffer really fast and most everything else not-so-fast. Frankly, I believe much of the buzz about interactive ray tracing is more an expression of yearning by us graphics programmers that we could actually program again, vs. calling an API. The April Fool’s Day spoof about ray tracing in DirectX 11 fooled a number of people I know, I believe because they wished it were true. Having hacked my fair share of rendering algorithms, I certainly see the appeal.

I think Tim’s a bit overoptimistic on the time frame in which such changes will occur. First, everyone needs to get this future hardware. Sure, NVIDIA points out there are 70 million CUDA-capable graphics cards out there today, but no one is floating CUDA-based programs as alternative interactive renderers at this point (though NVIDIA’s experiments with CUDA ray tracing are wonderful to see). DirectX 9 graphics cards will be around for years to come. Just as significant, making such techniques part of the normal development toolchain also takes a while. I think of how long normal (dot-product) bump mapping, introduced around 2001, took to become a feature that was used in games: first most GPUs had to support it, then tools had to generate and manage the maps, then artists had to be trained to use the tools, etc.

When the second edition of our book came out, it was a few hundred pages longer than the first. I held out the hope to Tomas that our third edition would be shorter. My logic was that, with programmable shaders coming to the fore, we wouldn’t have to cover all the little variants that were possible, but rather could just present pure algorithms and not worry about the implementation details.

This came true to some extent. For example, we could cut out chunks of text about extremely specific ways to efficiently compute the Fresnel term, or about how assembly instructions are packed together in a pixel shader. There was now plenty of space on the GPU for shader instructions, so that level of detail no longer made sense to cover; it would be like a programming languages book listing all the programs that could be written in the language. We still do have to spend time dealing with the vagaries of the APIs, such as the relatively space-inefficient ways in which triangles are fed through the pipeline (e.g., a “compact” representation of a cube must use 24 separate vertices, when all that is really needed are 8 points and 6 normals; see the sketch below).
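
To make that cube example concrete, here’s a schematic C++ sketch (an illustration of the point, not code from the book): with the usual vertex-array model each vertex is one indivisible bundle of attributes, so a corner shared by three faces with three different normals must be stored three times.

    struct Vec3 { float x, y, z; };

    // One vertex as the API sees it: position and normal travel together.
    struct Vertex { Vec3 position; Vec3 normal; };

    // What the pipeline requires: 8 corners, each appearing in 3 faces with
    // 3 different normals, so 24 distinct vertices.
    Vertex cubeVertices[24];

    // What would suffice if positions and normals could be indexed separately
    // (as a format like OBJ allows): 8 positions, 6 normals, and small indices.
    Vec3 cubePositions[8];
    Vec3 cubeNormals[6];
    struct CornerRef { int positionIndex; int normalIndex; };
    CornerRef cubeCorners[24];   // 6 faces x 4 corners referencing the tables above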

Counterbalancing such cuts in text, we found we had many more algorithms to write about. With the increase in abilities of each successive generation of GPUs and APIs, coupled with research into ways to efficiently map algorithms onto the new architectures, the book became considerably longer (and certainly heavier, since each illustration’s atoms now needed 3 bytes instead of 1). So I’m not holding out much hope for a shorter edition next time around; there’s just so much cool stuff that we can now do, and more yet to come.

Incidentally, we had asked Tim for a pithy quote for our new Hardware chapter. He said he didn’t have anything, but passed on one from Billy Zelsnack. This quote was tempting, but instead we used it in our last chapter: “Pretty soon, computers will be fast,” which I just love for some reason. It may sometimes take 20 seconds to open a file folder on Windows today, but I remain hopeful that someday, someday…

At long last, in stock

Lately I’ve been looking at Amazon’s listing of our book daily, to see if it’s in stock. Finally, today, it is, for the first time ever, a mere 40 days after its release. Not our publisher’s fault at all (A.K. Peters rules, OK?), and the book’s not that popular (AFAIK); it evidently just takes a while for delivered books to percolate out into Amazon’s system. Amazon under-ordered, so I believe that by the time the books they first ordered made it to the distribution centers, they were already sold out, making the book again out of stock. Lather, rinse, repeat. So maybe I should be sad that it’s now in stock.

Anyway, the amusing part of visiting each day has been looking at the discount given on the book. It’s nice to see a discount at all, as Amazon didn’t discount our previous book for its first few years. With the current 28% discount, our new edition is effectively $5 less than the previous edition’s original price. Which cheers me up, as I like to imagine that students are saving money; my older son will be in college next year, and any royalties I make from our book will effectively get recycled over the next four years into buying his texts. His one book for a summer course this year was a black & white softbound book, 567 pages, and cost an astounding (to me) $115, and that was “discounted” from $128.95. I’m now encouraging my younger son to skip college and go into the lucrative field of transistor repair.

Amazon’s discount has varied like a random walk among four values: 0%, 22%, 28%, and 33%. Originally, in July, it was list price, then the discount was set at 33% (so Amazon was paying more for the book than they were selling it for), then back to normal, then 33%. Around August 14th I started checking once a week or so and also looking at Associates sales (a program I recommend if you’re a book author, as it’s found money – it pays for this website). Again the book went back to no discount, then on August 20th started at 0%, went to 22% off, then 33% off, all in the same day. The next day there was no discount, then the day after it went back to 33%. August 28, when I checked again, it was at 22%, and this discount held through the end of the month. On September 1st it went up to 28% off, and there it’s been for a whole 9 days.

The oddest bit was that, in searching around for prices (Amazon’s is indeed the best, at least as of today), I noticed that the first edition of our book, from 1999, sells used for twice as much or more than our new book. Funny world.

By the way, if you are looking to write a book and want to understand royalties and going rates a little bit better, see my old article on this topic. Really, it’s not my article, it’s a collection of responses from authors I know. Some of it’s a bit confrontational and might make you a little paranoid, but I think it’s worth a read. If you’re writing technical books to get rich, you’re fooling yourself, but on the other hand there’s no reason to let someone take advantage of you. My favorite author joke, from Michael Cohen via John Wallace, is that there are dozens of dollars to be made writing a book, dozens I tell you. It can be a bit better than that if you’re lucky, but still comes out to about minimum wage when divided by the time spent. But for me it’s a lot more fun and educational work than flipping burgers, and the money is not why we wrote our book. We did it for the wild parties and glamorous lifestyle.

Update: heh, that didn’t last long. I wrote this entry Sept. 9th. As of the 10th, the book is (a) out of stock again and (b) down to a 2% discount. 2%?! Truly obscure.

Bézier, Gouraud, Fresnel

Vincent Scheib’s terminology rant included how to pronounce “SIGGRAPH” (i.e., like “sigma”), a pet peeve of mine. This reminded me of the following.

While writing the book, we wanted to give phonetic spellings of various common graphics terms – after hearing someone pronounce “Gouraud” as “Goo-raude” I thought it worth the time. In searching around, I realized that people seemed to somewhat disagree about Bézier, so finally I asked a few people from France. Frédo Durand gave the best response, sending an audio clip of him pronouncing the words. So, without further ado, here’s the audio clip. Now you know.

Update: here’s a nice article about pronunciation of many other computer graphics related terms.

I3D 2009 CFP

I3D is a symposium (fancy word for “small conference”) that is a great way to spend three days with academics and industry people in the fields of interactive 3D graphics and games. The program is a single track, consisting primarily of original research papers, but with other events such as keynotes, posters, roundtables, etc. See 2008’s program for an example.

I’m papers cochair this year with Morgan McGuire, so I’m biased, but I love the venue. Instead of the 28,000+ people of SIGGRAPH, you have around 100 people at a hotel or on a campus (2008’s was at EA’s headquarters). The best part is that you are likely to have a good conversation with anyone you meet, since they’re all there for the same reason. There is also plenty of time in the evening to socialize. I thoroughly enjoyed organizing the pub quiz for 2008.

If a paper is too daunting a task, consider submitting a poster or demo; it’s an easy way to get your idea out there and get feedback from others in the field.

The call for participation (CFP) is at http://i3dsymposium.org.

The quick summary:

Conference: Feb. 27 – March 1, 2009
Location: Radisson Hotel Boston, Boston, MA

Paper submissions: October 24, 2008
Poster and demo submissions: December 19, 2008

A cool stat I learned of recently is that I3D ranks #25 among all computer science publication venues in impact; the only graphics venue with more impact is SIGGRAPH itself, at #9: http://citeseer.ist.psu.edu/impact.html (the study’s from 2003; I’d love to find a newer one if anyone knows of it).

Hope to see you there.

Portal adds

No, not that Portal (which if you haven’t played, you should, even if you have no time; it’s short! For NVIDIA card owners the first slice is free). I’ve updated our portal page with a few additions.

New blogs added: Pandemonium, C0DE517E, Gates 381, GameDevKicks, Chris Hecker’s, and Beyond3D. Being a trailing-edge adoption kind of a guy (I’ve kept my TiVo 1 alive by replacing the disk drives three times so far, and my cell phone’s $90 from Indonesia via eBay), I ignored blogs for the most part until last year, when I finally learned how simple it was to use an RSS reader (I like Google’s). My philosophy since then is that if a blog has any articles relevant to interactive rendering techniques, I’ll subscribe. Since most graphics blogs don’t post daily, traffic is low, so checking new postings takes a minute or two a day. That said, if I had to pick just one, it would probably be GameDevKicks, since it’s an aggregator, similar to Digg (though the low counts on the digs, excuse me, kicks, mean that some things may fall through the cracks). This service means I’m off the hook in noting new Gamasutra articles on this blog, since these usually get listed there.

The Ogre Forums have been added to the list of developer sites. Ogre is a popular free game development platform. I can’t say I frequent the forums, but on the strength of this article on using the pixel shader to generate the illusion of geometry, there are obviously good things happening there.

The Unity Web Player Hardware Statistics page is similar to the well-known Steam survey, but for machines used by casual gamers.

A site that’s been around a long while, and should have been on the portal from the start, is the Virtual Terrain Project, a constantly expanding repository of algorithms for, and models of, terrain, vegetation, natural phenomena, etc.

… and that’s it for now.