Visual Treats

I usually end each “7 Things” post with a lighter item. Having worked through my backlog of resources, I’m left with a bunch of visual links. So, here’s a post of pure fluffy desserts. All images are clickable for more information.

First, camels:

and zebras:

These are now a part of the “Too True to be Good” gallery.

3D fractals:

More information about these on Geeks3D.

Crayola’s Law is that the number of crayon colors doubles every 28 years:

I just plain liked this animated Sierpinski triangle someone used as a profile image (thanks, Evan):

Nice concept, the equations of art:

Here’s an art something (not embedded here; you actually have to click), but I don’t have the OCD needed to see the hidden image.

Perhaps not massively visual, but amazing nonetheless, various prime number calculators run using the Game of Life. Here’s the Mersenne prime generator:

I’m starting to believe everything can make a pixel. Here’s coffee:

And rice plants (thanks, Doug):

For a finish, here’s a history of 100 years of film VFX in five minutes:

Well, wait, there’s one more thing… Naty and I love the realistic CG in this piece, The Third & The Seventh. I’m not going to embed it here; follow the link and definitely watch it fullscreen. More amazing still, it’s the work of one guy, Alex Roman. The only elements that are not CG are the photographer, the pigeons, the time-lapsed sky and growing flowers, and the jet. There’s a compositing breakdown video of various scenes, showing the techniques used. That said, Naty likes it but for my tastes it’s pretty boring to watch for more than half a minute, as most CG demos are (jaded? Maybe; mostly, I just like plot).

If your interest wanes, skip to 8 minutes in (well, you can’t skip ahead with Vimeo; just let it load and come back later). Perhaps that’s the best way to appreciate the clip: play it as a loop and look at it now and then, in small doses. There’s a heavy hand with the focus/depth-of-field effect at times, but in one sense I do like seeing this effect overused: it’s like watching CPU cycles burn before my very eyes, knowing how much the algorithm costs. Last niggle (and I should probably be soundly thrashed with a riding crop for noting these things, but it stuck out for me): the wind turbines turn backwards. Quibbles aside, the images here are so much better than any I will ever make that I’m a total admirer of it on the “technical chops” and “incredible dedication” levels.

7 Things for February 10

  • The first three are from Geeks3D, which is a worthwhile site I frequently reference. First: some noise textures, in case you don’t feel like making some yourself.
  • Next, a night-vision filter in GLSL, developed with their GeeXLab tool for prototyping shaders.
  • Finally, PyOpenGL_Lab, which calls OpenGL from Python. Interpreted languages like Python are lovely in that there’s no compilation step, making experimentation much more rapid. If you’re a Perl person, there’s this module.
  • Daniel Rákos has an article about how to perform instance culling on the GPU with OpenGL 3.2. The basic idea is to run the bounding volumes through the geometry shader for frustum culling and pipe out the results via transform feedback; a second pass then uses those results to decide which instances to actually render (a rough host-side sketch follows this list). This type of technique has been done using DirectX (e.g., Froblins); Daniel shows how to do it in OpenGL and provides source.
  • Aras Pranckevičius has a worthwhile post on deferred rendering and mipmap bugs, along with some good follow-up comments.
  • John Ratcliff’s Code Suppository has lots of little handy graphics code tidbits and chunks. It’s moving here and here on Google Code, but the original page is much easier to skim.
  • Wolfgang Engel provides a nice little page of books and resources he recommends for upcoming graphics programmers, with some good follow-up comments. I hadn’t heard of the 3D Math Primer before. It gets high ratings on Amazon, and you can use Look Inside. Skimming it over, it does look like a good book, covering many topics with the space they deserve (vs. our sometimes quick zoom through them in our own book). Code snippets are also given throughout. The book mentions “The First Law of Computer Graphics,” but unfortunately the pages explaining it are blocked. Happily, I found it on Google Books: “If it looks right, it is right”. Whew, good, I honestly was concerned there was some law I had been breaking all these years.
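
Since the two-pass flow in Daniel’s article is easy to get turned around, here’s a rough host-side sketch of the idea. To be clear, this is not his code: the object names are made up, an OpenGL loader is assumed, and the frustum-testing geometry shader itself is not shown.

    // Rough host-side sketch of GPU instance culling via transform feedback.
    // Assumes an OpenGL 3.2 context with a loader (e.g., GLEW) supplying the
    // entry points; all program/VAO/buffer/query objects are created elsewhere.
    #include <GL/glew.h>

    // Pass 1: draw one point per instance carrying its bounding volume. A
    // geometry shader (not shown) emits the point only if the volume intersects
    // the frustum; survivors are captured into visibleBuf via transform feedback.
    // Pass 2: draw only the surviving instances.
    void drawWithGPUCulling(GLuint cullProgram, GLuint drawProgram,
                            GLuint boundsVAO, GLuint sceneVAO,
                            GLuint visibleBuf, GLuint visibleQuery,
                            GLsizei instanceCount, GLsizei indexCount)
    {
        // Pass 1: cull bounding volumes; nothing gets rasterized.
        glEnable(GL_RASTERIZER_DISCARD);
        glUseProgram(cullProgram);
        glBindVertexArray(boundsVAO);
        glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, visibleBuf);
        glBeginQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, visibleQuery);
        glBeginTransformFeedback(GL_POINTS);
        glDrawArrays(GL_POINTS, 0, instanceCount);   // one point per instance
        glEndTransformFeedback();
        glEndQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN);
        glDisable(GL_RASTERIZER_DISCARD);

        // Pass 2: render only the survivors. The captured per-instance data in
        // visibleBuf feeds the draw pass (e.g., as an instanced attribute or via
        // a texture buffer); here we just fetch the survivor count and draw.
        GLuint visibleCount = 0;
        glGetQueryObjectuiv(visibleQuery, GL_QUERY_RESULT, &visibleCount);
        glUseProgram(drawProgram);
        glBindVertexArray(sceneVAO);
        glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0,
                                (GLsizei)visibleCount);
    }

The query readback is the only CPU round trip here; in practice you can hide the stall by using the previous frame’s count.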

… and I’m all caught up, my queue is empty! Well, there will be a special post tomorrow.

7 Things for February 9

Some news, and some olds.

  • HPG has a CFP. In slow motion,  this means the High Performance Graphics conference, June 25-27 in Saarbrucken, Germany, has a call for participation. Naty talked about this conference in his post two months ago; now the HPG website and CFP are up. In case you don’t recognize the conference’s name, this is the combination of the Graphics Hardware and Interactive Ray Tracing symposia. HPG was fantastic last year, with more useful (to me) papers than SIGGRAPH (where it was co-located). Potential submitters please note: because HPG 2010 is co-located with EGSR this year, the deadlines are very tight after SIGGRAPH notification and quite rigid. In other words, if your SIGGRAPH submission is rejected, you will have a very short time to revise and submit to HPG (i.e., by April 2nd).
  • NVIDIA has put up a list of the GDC talks it is participating in; the talks will undoubtedly appear on the web soon after the conference. In other NVIDIA news, there’s an interesting press release about NVIDIA and Avatar, and how GPUs were used to precompute ray-traced occlusion for scenes with billions of polygons.
  • A handy tool for showing frame rate and capturing screenshots and video that is worth a mention again (it’s buried on the Resources page): FRAPS. It’s been around forever, continues to improve, and the basic version is free.
  • Crytek made an updated version of the famous Sponza model (used in many global illumination papers) available in OBJ and 3DS Max formats, along with textures. If you have the time, in theory 99 lines of code will make a picture for you.
  • Stefan Gustavson has a nice little demo of using distance fields for “perfect” text rendering. This type of technique has been used for a number of years in various games, such as Valve’s Team Fortress 2. The demo unfortunately falls apart when you rotate the scene off-axis, but otherwise is lovely.
  • SUBSTANCE is an application for making 3D evolutionary art. I really need more time on my hands to check this sort of tool out…
  • Theory for the day: we don’t have fur because our skin can show our emotions, which we pick up with our improved color perception.

New Books and Reworked Books Pages

We’ve been reworking our books page to take longer to download, I mean, to be more visually interesting and readable. Honestly, the old one was a dense, hard to view pile of book titles. Just adding whitespace between titles is a plus. We’ve also added one book to the recommended list, Eric Lengyel’s math book. Anyway, go check it out. On our main resources page we’ve put all the free books online into one section.

There are some new books coming out that look interesting. For those of you going to GDC, there should be some worthwhile offerings to check out on the floor.

The Programming Massively Parallel Processors book by Kirk (Chief Scientist at NVIDIA) and Hwu (professor at U. of Illinois) came out three days ago and is currently sold out on Amazon. It’s undoubtedly derived from the course they co-taught at Illinois. CUDA and Tesla are the keywords here. Hwu’s current course lectures are here and here; I don’t know how they compare to the book, but these newer (non-Kirk) lectures seem more general. I look forward to learning more about this volume—if you have it, please do leave a comment (or better yet, a review on Amazon).

Wolfgang Engel et al. have a new book out, GPU Pro. He’s using a new publisher, so it does not have the ShaderX name, but it effectively is ShaderX 8. Finally, the book is in color throughout, vs. previous ShaderX volumes. I’ve skimmed some of the articles, and it’s in the same vein as others in the series: a range from practical advice to wild ideas. I can just about guarantee that professional interactive graphics programmers will find something of interest—I found about five articles right off the bat that I want to read through, and plenty of others I should at least skim. More info at the blog for this book.

Game Programming Gems 8 adds to this long-lived series. I haven’t seen it yet, so no comments; Adam Lake’s blog may give updates on status, contents, etc. This series has slowly drifted to including much more non-graphical material over the years. Understandable, but Adam’s someone I think of as a graphics guy, so I’m selfishly hoping for more graphics and less of the other stuff. My view on collection books like ShaderX and this one is simple: an hour of a programmer’s time costs about the same as a book, so if the book saves an hour, it’s paid for itself. Of course, there’s the time cost of reading the articles of interest, but still…

Second editions have been announced for Physically Based Rendering and High Dynamic Range Imaging. PBRT is more offline rendering oriented, but is a great book because it takes a stand; the authors say what they do for a real system and why they made that choice, vs. listing all possible techniques. It is also about the longest literate programming work ever published. I have a short review of the first edition. The HDRI book is nice in that it pulls together the various research articles out there into one place, with a coherent thread to it all. The second edition’s new material is described on its Amazon page.

7 Things for February 8

I use a LIFO stack for these link collections, so we’re starting to get into older news. Olds? Still good stuff, though.

  • I hadn’t noticed this set of notes from Valve before, “Post Processing in the Orange Box.” It’s about sRGB (think, gamma correction), tone mapping (think, rescaling using the histogram), and motion blur (think, types of blur); a quick sRGB conversion reminder follows this list. Interesting that a variable frame rate combined with blur made people sick. They’d also turn blur off if a single frame was taking too long. (from Morgan)
  • Wolfgang Engel has posted DirectX 11 and DirectX 10 pipeline overview charts. In a similar vein, Mark Kilgard has a talk about the changes from OpenGL 1.0 to 3.2 with some worthwhile data flow diagrams and other material.
  • openSourceVFX.org is a catalog of open source projects that are particularly suited for film visual effects and animation work. It is maintained by professionals in the field, so the resources listed are those known to actually be used and production-worthy. (thanks, Larry)
  • Here’s another PhysX demo, of water—a little jelly-like (good spray is hard, since it’s so fine-grained), but pretty amazing to see happen at interactive rates.
  • One resource I didn’t recall for my blog entry about tools for teaching about graphics and game creation: Kodu, from Microsoft. For grade schoolers, it uses a visual language. Surprisingly, it’s in 3D, with a funky chiclet terrain system. For still more tools, check the comments on the original blog entry—some great additions there. (pointed out by Mark DeLoura)
  • Another interesting graphics programming tool is NodeBox 2, now in beta. It uses a node-graph-based approach; see some examples here.
  • The story of Duke Nukem in Wired is just fascinating. We all like to tell and listen to stories, so it’s hard to know how true any narrative is, but this one seems reasonably on the mark. A little balance is provided by Raphael van Lierop.
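
Since sRGB keeps coming up, here’s the standard sRGB-to-linear conversion (and its inverse) as a reminder. This is just the stock formula, nothing taken from the Valve notes:

    // Standard sRGB <-> linear conversions (the piecewise IEC 61966-2-1 curve,
    // often approximated by a plain 2.2 power). Inputs are assumed in [0,1].
    #include <cmath>

    float srgbToLinear(float s)
    {
        return (s <= 0.04045f) ? s / 12.92f
                               : std::pow((s + 0.055f) / 1.055f, 2.4f);
    }

    float linearToSrgb(float l)
    {
        return (l <= 0.0031308f) ? l * 12.92f
                                 : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
    }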

Game developers – submit a SIGGRAPH Talk before February 18!

The deadline for submitting a Talk to SIGGRAPH is February 18 – less than two weeks away as I’m writing this.  Although the time is short, all game developers working in graphics should seriously consider submitting one; it’s not a lot of work, and the potential benefits are huge.  As a member of the 2010 conference committee, I thought I’d take a little time to elucidate.

SIGGRAPH 2010 is in Los Angeles this summer.  Although most people think of SIGGRAPH in connection with academic papers, it is also where film production people share practical tips and tricks, show off cool things they did on their last film, learn from their colleagues, and make professional connections.  Over the last few years, there has been a steadily growing game developer presence as well, which is exciting because SIGGRAPH is a unique opportunity for these two graphics communities to meet and learn from each other. The convergence between the technology, production methods, and artistic vision of film and games is a critical trend in both industries, and SIGGRAPH is where the rubber meets the road.

In 2010, SIGGRAPH is making a big push to increase the amount of game content.  Stop and think for a minute; isn’t there something you’ve done over the past year or two that’s just wicked awesome?  Wouldn’t it be cool to show it off not just to your fellow game developers, but to people from companies like ILM, Pixar and Sony Pictures Imageworks?  Imagine the conversations you could have, about adapting your technique for film use or improving it with ideas taken from film production!

Most film production content is presented as 20-minute Talks (formerly called Sketches); this makes the most sense for game developers as well.  Submitting a Talk requires only a one-page abstract and takes little time.  If you happen to have some video or additional documentation ready you can attach those as supplementary material.  This can help the reviewers assess your technique, but is not required.  If your talk is accepted, you have until the day of your presentation in late July to prepare slides (just 20 minutes worth).

To give a sense of the level of detail expected in the one-page abstract, here are three examples.

A little time invested in submitting a Talk for SIGGRAPH 2010 can pay back considerable dividends in career development and advancement, so go for it!

7 Things for February 7

Comin’ at ya, lots of one-liners, vs. yesterday’s verbose posting.

7 Things for February 6

With the excitement of Groundhog Day and James Joyce’s birthday over, it’s time to take off the silly paper hats and get back to writing “7 things” columns. Here goes:

  • Jeremy Shopf gives a nice summary of recent ambient occlusion papers. AO is becoming the new Shadows—every conference must have a paper on the topic. Honestly, it’s amazing that some of these ideas haven’t popped up earlier, like the line integral method. If you accept the basic approximation of AO from the start, then it’s a matter of how best to integrate the hemisphere around the point (a generic version of that integral is sketched after this list). I’m not downplaying the contribution of this research. Just the opposite: it’s more along the lines of “d’oh, brilliant, and why didn’t anyone think of that earlier?” The answer is both “because those guys are smart” and “because they actually tried it out, vs. thinking of an idea and not pursuing it.”
  • Thinking about C++ and looking at my old utilities post, I realized I forgot an add-on I use just about every day: Visual Assist X. This product makes Visual Studio much more usable for C++. Over the years it’s become indispensable to me, as more and more features get integrated into how I work. I started off small: there’s a great button that simply switches you between the .cpp and .h versions of a file. Then I noticed the button that takes a set of lines I’ve selected and comments them out with a single mouse press, and its companion that uncomments them. Then I found I could add a control that lets me type in a few characters to find a code file, or find a class. On and on it goes… Anyway, there’s a free trial, and for individuals it’s an entirely reasonable (for what you get) $99 license. By the way, you really don’t need to get the maintenance renewal every year.
  • As you may know, MIT has had a mandate for a number of years to put all of its courses online in some form—there are now 1900 of them. The EE & CS department, naturally enough, has quite a selection. The third most visited course on the whole site is Introduction to Computer Science and Programming, from Fall 2008 (and I approve: they use Python!). There’s only one computer graphics course, from 2003, but it covers unchanging principles and concepts so the “ancient” date is a minor problem.
  • Naty pointed out this article about deferred rendering. He notes, “A nice description of a deferred rendering system used in a demo—of particular interest is the use of raytraced distance fields for rendering fluids, and the integration of this into the overall deferred system.”
  • A month and a half ago I listed some articles about reconstructing the position or linear z-depth in a shader. Here’s another; for reference, the core bit of math is sketched after this list.
  • It’s the ongoing debate, back again. No, not dark vs. milk chocolate, nor Ferrari vs. Porsche, but DirectX vs. OpenGL. My own feeling is “whatever, we support both”. By the way, the upcoming book GPU PRO (which also has a blog, and has just been listed on Amazon) includes an in-depth article on porting from DX9 to OpenGL 2.0. Mark Kilgard’s presentation also discusses the differences, including the coordinate space and window space conventions.
  • I love human pixels. The Arirang Festival in North Korea is a famous example; check out Google Images. But that’s just a card stunt, impressive as it is. This video shows a technique I hadn’t seen before (note that some of it is sped up—check the speed of the people on the field—but still fantastic). There are other videos, such as this and this.
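
To make the “integrate the hemisphere” comment in the first item concrete, here’s a generic Monte Carlo ambient occlusion estimator. This is the textbook formulation, not the line integral method (or any other method) from the papers Jeremy covers, and the visibility test is a hypothetical callback you’d supply:

    // Generic AO at a point: the average visibility over the hemisphere around
    // the normal, cosine-weighted. visible(p, dir) is a stand-in for whatever
    // occlusion test an implementation uses (ray cast, depth-buffer check, ...).
    #include <cmath>
    #include <random>

    struct Vec3 { float x, y, z; };

    static Vec3 normalize(Vec3 v)
    {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / len, v.y / len, v.z / len };
    }

    // Cosine-weighted direction in the hemisphere around the unit normal n.
    static Vec3 cosineSample(Vec3 n, float u1, float u2)
    {
        Vec3 t = std::fabs(n.x) > 0.5f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
        Vec3 b = normalize({ n.y * t.z - n.z * t.y,     // b = normalize(n x t)
                             n.z * t.x - n.x * t.z,
                             n.x * t.y - n.y * t.x });
        Vec3 c = { b.y * n.z - b.z * n.y,               // c = b x n
                   b.z * n.x - b.x * n.z,
                   b.x * n.y - b.y * n.x };
        float r = std::sqrt(u1), phi = 6.2831853f * u2, z = std::sqrt(1.0f - u1);
        float x = r * std::cos(phi), y = r * std::sin(phi);
        return { x * b.x + y * c.x + z * n.x,
                 x * b.y + y * c.y + z * n.y,
                 x * b.z + y * c.z + z * n.z };
    }

    // Returns 1 for a fully open hemisphere, 0 for a fully blocked one. Since
    // the samples are cosine-distributed, the estimator is just the visible
    // fraction.
    template <typename VisibilityFn>
    float ambientOcclusion(Vec3 p, Vec3 n, int samples, VisibilityFn visible)
    {
        std::mt19937 rng(1234);
        std::uniform_real_distribution<float> uni(0.0f, 1.0f);
        int open = 0;
        for (int i = 0; i < samples; ++i)
            if (visible(p, cosineSample(n, uni(rng), uni(rng))))
                ++open;
        return float(open) / float(samples);
    }

The screen-space papers are essentially about approximating that loop far more cheaply per pixel.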
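And for the depth reconstruction item: here’s the usual bit of math such articles cover, assuming a standard OpenGL-style perspective projection (the function and variable names are mine):

    // Turn a stored depth-buffer value back into positive, linear view-space
    // depth, assuming a standard OpenGL perspective projection. d01 is the
    // depth-buffer value in [0,1]; nearZ/farZ are the clip plane distances.
    #include <cassert>

    float linearViewDepth(float d01, float nearZ, float farZ)
    {
        assert(nearZ > 0.0f && farZ > nearZ);
        float zNdc = d01 * 2.0f - 1.0f;                  // remap [0,1] to [-1,1]
        return (2.0f * nearZ * farZ) /
               (farZ + nearZ - zNdc * (farZ - nearZ));   // nearZ at d01=0, farZ at 1
    }

A DirectX-style projection (clip z in [0,1]) changes the constants a bit, which is part of why these articles keep getting written.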

C++, Baby

I was catching up on the Communications of the ACM, and noticed this article, Computer Science in the Conceptual Age. The “catch your eye” text on one page was: “Programming interns/job seekers from our program Spring 2009 (35 interviewed in the game industry) found no companies administering programming tests in Java.”

There are other chewy bits, such as: “The USC experience is that 100% of its students interviewed for programming positions are given three-to-four-hour-long programming tests, with almost all companies administering the tests in C++.”

Also, this: “The game industry will also tell you that it wants the first four programming classes in C++, not Java, according to M.M. McGill and my own private communications with directors of human resources in major game-development companies.”

One final morsel: “Many game companies say they will not interview or hire someone whose first programming language is Java.”

Wow, that last one’s harsh, especially since my experience with two teenage sons (one in high school, the other a freshman computer science major) is that Java is the norm for the first “real” language taught (I don’t count Scheme as a real, “you’ll get paid programming in it” type of language). I don’t think I’d rule someone out for knowing Java first, though having gone from C++ to Java and then back, the transition from Java to C++ is like being thrown out of the promised land: you suddenly again spend half your time messing with memory in one form or another. C# and Java are darn productive in that way. And, no, for me at least, those auto-pointer classes in C++ never quite seem to work—they need a different sort of discipline I don’t appear to have (the little example below shows the classic gotcha). I also love that my first Java program, from 1997, still works on the web; some of my C++ programs from back then won’t run on Vista or Windows 7 because the WinG DLLs are not a part of those operating systems (thanks, Microsoft).
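
For anyone who hasn’t been bitten, here’s the classic gotcha I mean, as a made-up minimal example. std::auto_ptr was the standard “auto-pointer” of the day (it has since been deprecated and replaced by std::unique_ptr, so compile this with a pre-C++17 setting): copying one silently hands over ownership.

    // std::auto_ptr's copy transfers ownership, so what looks like a harmless
    // copy leaves the original pointer null. (auto_ptr was removed in C++17;
    // std::unique_ptr with an explicit std::move is the modern replacement.)
    #include <iostream>
    #include <memory>

    struct Mesh { int triangleCount; };

    int main()
    {
        std::auto_ptr<Mesh> a(new Mesh);
        a->triangleCount = 42;

        std::auto_ptr<Mesh> b = a;   // the "copy" actually moves ownership to b

        std::cout << (a.get() ? "a still owns the mesh\n" : "a is now null\n");
        std::cout << "b has " << b->triangleCount << " triangles\n";
        return 0;
    }

This prints “a is now null”, and any later use of a through -> is a crash waiting to happen.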

Nonetheless, the article’s right: at Autodesk we’ve dabbled with Java and C#, and I’ve seen Python used for UI control around the fringes of a program, but the heart of client-side graphical programs is almost always C++ (or isn’t, with regrets and cancellation often soon following—been there myself, though Java was only a little bit to blame, to be fair). Also, XNA, which uses C#, does not have a 64-bit version. In addition, Microsoft’s managed code support usually lags behind the “real” DirectX, i.e., the one for C++.

Looking around, I did find an open-source project, SlimDX, that does support 64-bit assemblies for interfacing with DirectX. Interestingly, they claim one AAA game title shipped using SlimDX, but don’t say which. So I asked. They’re keeping the information confidential, which is fine, but the other comment sounds about right: “The large majority of professional commercial PC/console games are still developed in C++ because of the sheer amount of legacy code the studios developing those games have that is already in C++ (and because of the generally poor support from major console vendors for languages other than C or C++, which contributes to lock-in).”

Long and short: it’s C++, baby.

I3D 2010 Papers

The Symposium on Interactive 3D Graphics and Games (I3D) has been a great little conference since its genesis in the mid-80s, featuring many influential papers over this period.  You can think of it as a much smaller SIGGRAPH, focused on topics of interest to readers of this blog.  This year, the I3D papers program is especially strong.

Most of the papers have online preprints (accessible from Ke-Sen Huang’s I3D 2010 paper page), so I can now do a proper survey.  Unfortunately, I was able to read two of the papers only under condition of non-disclosure (Stochastic Transparency and LEAN Mapping).  Both papers are very good; I look forward to being able to discuss them publicly (at the latest, when I3D starts on February 19th).

Other papers of interest:

  • Fourier Opacity Mapping riffs off the basic concept of Variance Shadow Maps, Exponential Shadow Maps (see also here) and Convolution Shadow Maps.  These techniques store a compact statistical depth distribution at each texel of a shadow map; here, the quantity stored is opacity as a function of depth, similarly to the Deep Shadow Maps technique commonly used in film rendering.  This is applied to shadows from volumetric effects (such as smoke), including self-shadowing.  This paper is particularly notable in that the technique it describes has been used in a highly regarded game (Batman: Arkham Asylum).
  • Volumetric Obscurance improves upon the SSAO technique by making better use of each depth buffer sample; instead of treating them as point samples (with a simple binary comparison between the depth buffer and the sampled depth), each sample is treated as a line sample (taking full account of the difference between the two values).  It is similar to a concurrently developed paper (Volumetric Ambient Occlusion); the techniques from either of these papers can be applied to most SSAO implementations to improve quality or increase performance.  The Volumetric Obscurance paper also includes the option to extend the idea further and perform area samples; this can produce a simple crease shading effect with a single sample, but does not scale well to multiple samples.
  • Spatio-Temporal Upsampling on the GPU – games commonly use cross-bilateral filtering to upsample quantities computed at low spatial resolutions (a minimal spatial-only version of the weighting is sketched after this list).  There have also been several recent papers about temporal reprojection (reprojecting values from previous frames for reuse in the current frame); Gears of War 2 used this technique to improve the quality of its ambient occlusion effects. The paper Spatio-Temporal Upsampling on the GPU combines both of these techniques, filtering samples across both space and time.
  • Efficient Irradiance Normal Mapping – at GDC 2004, Valve introduced their “Irradiance Normal Mapping” technique for combining a low-resolution precomputed lightmap with a higher-resolution normal map.  Similar techniques are now common in games, e.g. spherical harmonics (used in Halo 3), and directional lightmaps (used in Far Cry).  Efficient Irradiance Normal Mapping proposes a new basis, similar to spherical harmonics (SH) but covering the hemisphere rather than the entire sphere.  The authors show that the new basis produces superior results to previous “hemispherical harmonics” work.  Is it better than plain spherical harmonics?  The answer depends on the desired quality level; with four coefficients, both produce similar results.  However, with six coefficients the new basis performs almost as well as quadratic SH (nine coefficients), making it a good choice for high-frequency lighting data.
  • Interactive Volume Caustics in Single-Scattering Media – I see real-time caustics as more of an item to check off a laundry list of optical phenomena than something that games really need, but they may be important for other real-time applications.  This paper handles the even more exotic combination of caustics with participating media (I do think participating media in themselves are important for games).  From a brief scan of the technique, it seems to involve drawing lines in screen space to render the volumetric caustics.  They do show one practical application for caustics in participating media – underwater rendering.  If this case is important to your application, by all means give this paper a read.
  • Parallel Banding Algorithm to Compute Exact Distance Transform with the GPU – I’m a big fan of Valve’s work on using signed distance fields to improve font rendering and alpha testing.  These distance fields are typically computed offline (a process referred to as “computing a distance transform”, sometimes “a Euclidean distance transform”).  For this reason, brute-force methods are commonly employed (the baseline version is sketched after this list), though there has been a lot of work on more efficient algorithms.  This paper gives a GPU-accelerated method which could be useful if you are looking to speed up your offline tools (or if you need to compute alpha silhouettes on the fly for some reason).  Distance fields have other uses (e.g. collision detection), so there may very well be other applications for this paper.  Notably, the paper project page includes links to source code.
  • A Programmable, Parallel Rendering Architecture for Efficient Multi-Fragment Effects – one of the touted advantages of Larrabee was the promise of flexible graphics pipelines supporting stuff like multi-fragment effects (A-buffer-like things like order independent transparency and rendering to deep shadow maps).  Despite a massive software engineering effort (and an instruction set tailored to help), Larrabee has not yet been able to demonstrate software rasterization and blending running at speeds comparable to dedicated hardware.  The authors of this paper attempt to do the same on off-the-shelf NVIDIA hardware using CUDA – a very aggressive target!  Do they succeed?  It’s hard to say.  They do show performance which is pretty close to the same scene rendering through OpenGL on the same hardware, but until I have time to read the paper more carefully (with an eye on caveats and limitations) I reserve judgment.  I’d be curious to hear what other people have to say on this one.
  • On-the-Fly Decompression and Rendering of Multiresolution Terrain (link is to an earlier version of the paper) – the title pretty much says it all.  They get compression ratios between 3:1 and 12:1, which isn’t bad for on-the-fly GPU decompression.  A lot of water has gone under the terrain rendering bridge since I last worked on one, so it’s hard for me to judge how it compares to previous work; if you’re into terrain rendering give it a read.
  • Radiance Scaling for Versatile Surface Enhancement – this could be thought of as an NPR technique, but it’s a lot more subtle than painterly techniques.  It’s more like a “hyper-real” or “enhanced reality” technique, like ambient occlusion (which darkens creases a lot more than a correct global illumination solution, but often looks better; 3D Unsharp Masking achieves a more extreme version of this look).  Radiance Scaling for Versatile Surface Enhancement is a follow-on to a similar paper by the same authors, Light Warping for Enhanced Surface Depiction.  Light warping changes illumination directions based on curvature, while radiance scaling scales the illumination instead, which enables cheaper implementations and increased flexibility.  With some simplifications and optimizations, the technique should be fast enough for most games, making this paper useful to game developers trying to give their game a slightly stylized or “hyper-real” look.
  • Cascaded Light Propagation Volumes for Real-time Indirect Illumination – this appears to be an updated (and hopefully extended) version of the CryEngine 3 technique presented by Crytek at a SIGGRAPH 2009 course (see slides and course notes).  This technique, which computes dynamic approximate global illumination by propagating spherical harmonics coefficients through a 3D grid, was very well-received, and I look forward to reading the paper when it is available.
  • Efficient Sparse Voxel Octrees – there has been a lot of excited speculation around raycasting sparse voxel octrees since John Carmack first hinted that the next version of id software‘s rendering engine might be based on this technology.  A SIGGRAPH 2008 presentation by Jon Olick (then at id) raised the excitement further (demo video with unfortunate soundtrack here).  The Gigavoxels paper is another example of recent work in this area.  Efficient Sparse Voxel Octrees promises to extend this work in interesting directions (according to the abstract – no preprint yet unfortunately).
  • Assisted Texture Assignment – the highly labor-intensive (and thus expensive) nature of art asset creation is one of the primary problems facing game development.  According to its abstract (no preprint yet), this paper proposes a solution to part of this problem – assigning textures to surfaces.  There is also a teaser posted by one of the authors, which looks promising.
  • Epipolar Sampling for Shadows and Crepuscular Rays in Participating Media with Single Scattering – volumetric effects such as smoke, shafts of light (also called “god rays” or crepuscular rays) and volumetric shadows are important in film rendering, but usually missing (or coarsely approximated) in games.  Unfortunately, nothing is known about this paper except its title and the identities of its authors.  I’ll read it (and pass judgement on whether the technique seems practical) when a preprint becomes available (hopefully soon).
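
An aside on the spatio-temporal upsampling entry: the spatial-only cross-bilateral upsample that games already use boils down to depth-aware (and often normal-aware) blending of the nearby low-resolution samples. Here’s a CPU-side sketch of that building block (not the paper’s method, and the sharpness constant is made up):

    // Cross-bilateral upsampling of one low-resolution value (e.g., SSAO) to a
    // full-resolution pixel: weight the four surrounding coarse texels by their
    // bilinear weights, down-weighted when their depth disagrees with the
    // full-resolution pixel so values don't bleed across silhouettes. A normal
    // similarity term is often folded in the same way.
    #include <cmath>

    struct CoarseSample {
        float value;           // low-res quantity being upsampled
        float depth;           // low-res depth for this texel
        float bilinearWeight;  // standard bilinear weight for this texel
    };

    float crossBilateralUpsample(const CoarseSample s[4], float fullResDepth,
                                 float depthSharpness = 32.0f)
    {
        float sum = 0.0f, weightSum = 0.0f;
        for (int i = 0; i < 4; ++i) {
            float depthDiff = std::fabs(s[i].depth - fullResDepth);
            float w = s[i].bilinearWeight * std::exp(-depthSharpness * depthDiff);
            sum += w * s[i].value;
            weightSum += w;
        }
        // If every weight collapsed (a depth discontinuity), fall back to a
        // plain average rather than dividing by (nearly) zero.
        return weightSum > 1e-5f
            ? sum / weightSum
            : 0.25f * (s[0].value + s[1].value + s[2].value + s[3].value);
    }

As noted above, the paper combines this kind of spatial weighting with temporal reprojection, filtering across both space and time.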
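And for the distance transform entry, here’s the brute-force baseline mentioned there: for every pixel, scan the whole binary image for the nearest “inside” pixel. It’s quadratic in the number of pixels, which is fine for small offline jobs but exactly the kind of cost a GPU method like the paper’s aims to eliminate:

    // Brute-force Euclidean distance transform of a binary image: for each
    // pixel, the distance to the nearest pixel marked "inside". O(n^2) in the
    // pixel count, so strictly an offline baseline.
    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <vector>

    std::vector<float> bruteForceDistanceTransform(const std::vector<bool>& inside,
                                                   int width, int height)
    {
        std::vector<float> dist(width * height, 0.0f);
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                float best = std::numeric_limits<float>::max();
                for (int v = 0; v < height; ++v)
                    for (int u = 0; u < width; ++u)
                        if (inside[v * width + u]) {
                            float dx = float(x - u), dy = float(y - v);
                            best = std::min(best, dx * dx + dy * dy);
                        }
                dist[y * width + x] = std::sqrt(best);   // huge if nothing is inside
            }
        return dist;
    }

(Roughly speaking, a signed distance field, as used in the Valve font work, is this distance for pixels outside the shape, minus the same computation with “inside” inverted for pixels inside it.)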

The remaining papers are outside my area of expertise, so it’s hard for me to judge their usefulness: