ShaderX7 has been out for a few months now, but due to its size (at 773 pages, it is by far the largest of the series) I haven’t been able to finish going through it until recently. Here are the chapters I found most interesting:
EGSR and HPG 2009 papers
Ke-Sen Huang has what looks like the full lists of papers for both HPG 2009 and EGSR 2009. Both of these lists are only available on Ke-Sen’s site at the moment; presumably they will appear on the HPG and EGSR websites soon. I have had high hopes for these conferences, especially given the somewhat disappointing real-time content of the SIGGRAPH 2009 papers program. EGSR has historically had some good real-time stuff in it, and the new HPG (High-Performance Graphics) conference has a highly relevant area of focus. So how do the paper lists stack up?
EGSR 2009 has a bunch of potentially interesting papers, including some on GPU-accelerated ray-tracing and photon mapping. Some have intriguing titles (but no other information, so it’s hard to guess how relevant they are): Fast Global Illumination on Dynamic Height Fields, Efficient and Accurate Rendering of Complex Light Sources. One paper I found particularly interesting is Hierarchical Image-Space Radiosity for Interactive Global Illumination (available here): This paper extends an I3D 2009 paper (Multiresolution Splatting for Indirect Illumination) which described an “Instant Radiosity”-type approach (using lots of point light sources to simulate indirect bounces), rendering into a pyramid of frame buffers at different resolutions. The pyramid was finally collapsed into a single frame buffer to generate the final frame. I found the multiresolution rendering approach interesting, but the implementation was very slow. The EGSR 2009 paper speeds this part of the algorithm up significantly, and adds some other extensions and improvements. I wouldn’t run off and implement this technique in a game engine (it has some significant limitations, and is not nearly fast enough on current hardware), but it does suggest some interesting research directions.
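To make the “Instant Radiosity” idea concrete, here is a minimal CPU sketch of gathering one indirect bounce from a set of virtual point lights (VPLs). The function names, the diffuse-only shading, and the distance clamp are my own illustrative choices, not details from either paper:

```python
import math

def vpl_contribution(p, n, vpl_pos, vpl_normal, vpl_flux):
    """Light arriving at surface point p (normal n) from one virtual point
    light: clamped cosine at both ends, inverse-square falloff.  The 1e-4
    distance clamp (an assumption) tames the usual VPL singularity."""
    d = [vpl_pos[i] - p[i] for i in range(3)]
    dist2 = sum(c * c for c in d)
    dist = math.sqrt(dist2)
    w = [c / dist for c in d]                       # direction p -> VPL
    cos_p = max(0.0, sum(n[i] * w[i] for i in range(3)))
    cos_l = max(0.0, -sum(vpl_normal[i] * w[i] for i in range(3)))
    return vpl_flux * cos_p * cos_l / max(dist2, 1e-4)

def indirect_light(p, n, vpls):
    """One indirect bounce: sum the contributions of all VPLs.
    Each entry of vpls is (position, normal, flux)."""
    return sum(vpl_contribution(p, n, pos, nrm, flux)
               for pos, nrm, flux in vpls)
```

The multiresolution part of the papers is about evaluating this sum cheaply per pixel; the sum itself is this simple.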
What about HPG 2009, the new kid on the block? Given the partial descent of this conference from the Interactive Ray-Tracing symposium, one would expect a fair number of ray-tracing-related papers, but there aren’t that many: out of 21 papers, 4 papers explicitly mention ray-tracing, and 3 more deal with dynamic construction of bounding volume hierarchies (a particular concern of ray-tracing algorithms). Many of the remaining papers deal with other (and to my mind, more interesting) rendering algorithms. Data-Parallel Rasterization of Micropolygons With Defocus and Motion Blur appears to describe an algorithm similar to REYES (which powers Pixar’s Renderman). There are two papers on image space techniques (Hardware-Accelerated Global Illumination by Image Space Photon Mapping and Image Space Gathering), which is a “hot” area right now following the popularity of SSAO and related techniques. There are two papers relating to the important topic of antialiasing (A Directionally Adaptive Edge Anti-Aliasing Filter and Morphological Antialiasing). One paper (Stream Compaction for Deferred Shading) relates to deferred shading, which is also a “hot” topic in game rendering at the moment.
I look forward to the preprints becoming available, so we can see if these papers live up to the promise of their titles (and perhaps discover some surprises among the more ambiguously-titled papers).
Bits of News
Just some quick bits to chew on for breakfast:
- Microsoft announced Project Natal at E3; the (simulated) video is entertaining. Lionhead Studios’ demo is also worth a look. Somehow a little creepy, and I suspect in practice there’s a high likelihood that a user will quickly run off the rails and not do what’s expected, but still. Considering how limited the Eye Toy is compared to its hype, I’m not holding my breath, but it’s interesting to know & think about. (thanks to Adam Felt for the link)
- New book out, Graphics Shaders: Theory and Practice. It’s about GLSL; you can find the Table of Contents and other front matter at the book’s site (look to the right side). I hope to get a copy and give a review at some point.
- I mentioned Mark Haigh-Hutchinson’s Real-Time Cameras book in an earlier post. The honestly touching story of its history is republished on Mark DeLoura’s blog at Gamasutra.
- Nice history of graphics cards, with many pictures.
- Humus describes a clever particle rendering optimization technique (update), and provides a utility. Basically, make the polygon fit the visible part of the particle to save on fill rate. It’s one of those ideas that I suspect many of us have had while wondering if it’s worth doing. It is, and it’s great to have someone actually test it out and publish the results.
- This is an interesting concept: with an NVIDIA card and their new driver you can now turn on ambient occlusion for 22 games that don’t actually use this technique in their shipped shaders. In itself, this feature is a minor plus, but it brings up all sorts of questions, such as buying into a particular brand to improve quality, who controls and who can modify the artistic look of a game, etc. (thanks to Mauricio Vives for the link)
- Old, but if you haven’t seen it before, it’s a must: transparent screens.
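Humus’s particle trimming idea from the list above is easy to sketch. The version below fits only an axis-aligned rectangle to the visible texels (his utility fits a general convex polygon, which trims more); the function names and alpha threshold are illustrative:

```python
def trim_particle(alpha, threshold=0.0):
    """Tight bounding rectangle of the visible (alpha > threshold) texels
    of a sprite, given as a 2D list of alpha values.  Returns an inclusive
    (x0, y0, x1, y1) rectangle, or None if the sprite is fully transparent."""
    h, w = len(alpha), len(alpha[0])
    xs = [x for y in range(h) for x in range(w) if alpha[y][x] > threshold]
    ys = [y for y in range(h) for x in range(w) if alpha[y][x] > threshold]
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

def fill_saving(alpha, rect):
    """Fraction of the full quad's texels the trimmed quad no longer
    rasterizes -- the fill-rate saving the technique is after."""
    h, w = len(alpha), len(alpha[0])
    x0, y0, x1, y1 = rect
    trimmed = (x1 - x0 + 1) * (y1 - y0 + 1)
    return 1.0 - trimmed / (w * h)
```

A particle texture with a small visible blob in the middle of a large quad is exactly where this pays off: most of the quad’s fill cost simply disappears.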
Deferred lighting approaches
In Section 7.9.2 of Real-Time Rendering, we discussed deferred rendering approaches, including “partially-deferred” methods where some subset of shader properties are written to buffers. Since publication, a particular type of partially-deferred method has gained some popularity. There are a few different variants of this approach that are worth discussing; more details “under the fold”.
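As a rough illustration of what “partially deferred” means, here is a toy CPU mock of a light pre-pass style pipeline for a single pixel. The three-pass split is one common variant of the idea; the attribute names and the diffuse-only lighting are placeholders, not any particular engine’s layout:

```python
def geometry_pass(scene_pixel):
    """Pass 1: write only the small subset of shader properties that
    lighting needs (here just normal and depth), not full material data."""
    return {"normal": scene_pixel["normal"], "depth": scene_pixel["depth"]}

def lighting_pass(gbuf, lights):
    """Pass 2: accumulate lighting using only the thin buffer.  Each light
    is a dict with a (unit) direction and an intensity; diffuse only."""
    n = gbuf["normal"]
    return sum(max(0.0, n[0] * l["dir"][0]
                        + n[1] * l["dir"][1]
                        + n[2] * l["dir"][2]) * l["intensity"]
               for l in lights)

def material_pass(scene_pixel, lit):
    """Pass 3: re-render the geometry with full materials, combining the
    albedo with the accumulated lighting from pass 2."""
    return tuple(c * lit for c in scene_pixel["albedo"])
```

The appeal is that pass 2 touches a much smaller buffer than a fully deferred G-buffer, at the cost of rendering the geometry twice.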
Game Engine Gems CFP
As I mentioned in a previous post, Eric Lengyel is heading up a new project, a book series called “Game Engine Gems”. It turns out that we ran across the website before it was announced (moral: there’s no hiding on the internet). He’s sent out an official call for papers today – see the book’s website for basic information.
I’m posting today to mention a few dates not currently shown on the website (though I expect this will change soon):
- August 1 – Final day to submit article proposals
- August 15 – Authors notified of acceptance
- October 15 – Final day to submit completed articles
Contact Eric for more information.
Full SIGGRAPH 2009 paper list
Can be found here. Although it has only been up for a few days, Ke-Sen Huang already has 72 out of 97 (74%) papers linked from his SIGGRAPH 2009 papers page. The total of 97 papers presented at SIGGRAPH this year includes 78 actual SIGGRAPH papers, and 19 Transactions on Graphics papers which were selected to be presented at the conference.
SIGGRAPH 2009 courses
No sooner had I written about the full SIGGRAPH 2009 course list not being up yet, and bam! there it is. As I hinted at, there is a lot of stuff there for real-time rendering folks. Some highlights:
Advances in Real-Time Rendering in 3D Graphics and Games (two-parter; second part here): This somewhat awkwardly-named course has been my favorite thing at SIGGRAPH for the past three years. Each year it presents all-new material. Previous courses have seen the debut of important rendering algorithms like SSAO, signed distance-field vector texturing, and wrinkle mapping, as well as details on the rendering secrets behind games like Halo 3, Crysis, Starcraft 2, Half-Life 2, Team Fortress 2, and LittleBigPlanet. Not much is known about this year’s content, except that it will include details on Crytek’s new global illumination algorithm; but this is one course I know I’m going to!
Beyond Programmable Shading (another two-parter): GPGPU was promoted by GPU manufacturers in an attempt to find non-graphics uses for their products, and then turned full circle as people realized that drawing pretty pictures is really the best way to use massive general-purpose computing power. Between CUDA, OpenCL, and Larrabee, this has been a pretty hot topic. This is the second year that this course has been presented; last year had information on all the major APIs, and some case studies including a presentation on id’s Voxel Octree research. A subsequent SIGGRAPH Asia presentation added some new material, such as a presentation on real-time implementations of Renderman‘s REYES algorithm. This year, presenters include people from NVIDIA, AMD, Intel and Stanford; I expect this course to add significant new material, given the rapid development of the field.
Efficient Substitutes for Subdivision Surfaces: Tessellation is another hot topic; Direct3D 11 is mostly designed around this and Compute Shaders (the topic of the previous course). There has been a lot of work on mapping subdivision surfaces and other types of high-order surface representations to D3D11 hardware. Including presenters from ILM, Valve, and NVIDIA, this course promises to be a great overview of the state of the art.
Color Imaging: Color is one of the most fundamental topics in rendering. This course is presented by some of the leading researchers on color and HDR imaging, and should be well worth attending.
Advanced Material Appearance Modeling: Previous versions of this course were the basis for an excellent book on material appearance modeling. This is a great overview of an important rendering topic, and well worth attending if you haven’t seen it in previous years.
Visual Algorithms in Post-Production: It is well-known that you can find the latest academic rendering research at SIGGRAPH, but there is always a lot of material from the trenches of film effects and animation production as well. A surprisingly large percentage of this is relevant for real-time and game graphics. This course has presenters from film production as well as graphics academia describing ways in which academic research is used for film post-production. I think our community can learn a lot from the film production folks; this course is high on my list.
The Digital Emily Project: Photoreal Facial Modeling and Animation: Last year, Digital Emily was one of the most impressive technology demonstrations; it was the result of a collaboration between Paul Debevec‘s group at USC and Image Metrics, a leading facial capture company. In this course, presenters from both ICT and Image Metrics describe how this was done, as well as more recent research.
Real-Time Global Illumination for Dynamic Scenes: Real-time global illumination is another active topic of research. This course is presented by the researchers who have done some of the best (and most practical-looking) work in this area. It will be interesting to compare the techniques from this course with Crytek’s technique (presented in the “Advances in Real-Time Rendering” course).
SIGGRAPH 2009 papers
Ke-Sen Huang has had a SIGGRAPH 2009 papers page up for a while, and this weekend he’s added a bunch of new papers.
I found two of these to be of direct relevance to real-time rendering:
Gaussian KD-Trees for High-Dimensional Filtering: This paper generalizes the approach used in the SIGGRAPH 2007 bilateral grid paper. Large-scale image filters are typically performed on downscaled buffers for better performance, but this cannot normally be done for bilateral filters (which are used in real-time rendering for things like filtering SSAO). The bilateral grid is a three-dimensional low-resolution grid, where the third dimension is the image intensity (it could also be depth or some other scalar quantity). However, bilateral grids cannot be used to accelerate bilateral filters based on higher-dimensional quantities like RGB color or depth + surface normal; this paper addresses that limitation.
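For readers unfamiliar with bilateral filtering, a brute-force 1D version shows the combined spatial and range weighting that makes the filter edge-preserving, and why it can’t simply be run on a downscaled buffer; the parameter values here are illustrative:

```python
import math

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Brute-force 1D bilateral filter.  Each neighbor's weight combines a
    spatial Gaussian (distance in samples) with a range Gaussian (difference
    in value), so averaging does not cross strong edges."""
    out = []
    for i, v in enumerate(signal):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2.0 * sigma_s ** 2)
                         - ((v - signal[j]) ** 2) / (2.0 * sigma_r ** 2))
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out
```

On a step edge, samples on the far side of the step get near-zero range weight, so the edge survives; a plain Gaussian would blur it. The bilateral grid (and this paper’s kd-tree generalization) is about evaluating this kind of filter without the brute-force cost.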
Modeling Human Color Perception under Extended Luminance Levels: An understanding of human color perception is fundamental to computer graphics; many rendering processes are perceptual rather than physical (such as tone mapping and color correction), but even physical computations are affected by the properties of human vision (such as the range of visible wavelengths and the fact that human color perception is trichromatic, or three-dimensional). Most computer graphics people are familiar with color spaces such as CIE XYZ, but color appearance models such as CIECAM02 are less familiar. These are used to model the effects of adaptation, background, etc. on color perception. Current color appearance models are based on perceptual experiments performed under relatively low luminance values; this paper extends the experiments to high values, up to about 17,000 candelas per square meter (white paper in noon sunlight), and proposes a new color appearance model based on its findings. I also found the background and related work sections illuminating for their succinct overview of the current state of color science.
Two more papers, although not directly relevant to real-time rendering, are interesting and thought-provoking:
Single Scattering in Refractive Media with Triangle Mesh Boundaries: This paper finds a rapid (although not quite real-time) solution to refraction in objects composed of faceted or smooth triangle meshes. The methods described here are interesting and look like they could inspire some real-time techniques, perhaps on the next generation of graphics hardware.
Fabricating Microgeometry for Custom Surface Reflectance: This one is not useful for rendering, but is just plain cool. Instead of using the microfacet model to predict the appearance of surfaces based on their structure, they turn the idea around and construct surfaces so that they have a desired appearance. One of the examples they show (inspired by Figure 1 in this paper), is a material with a teapot-shaped highlight! Well, with their current fabrication methods it is really a teapot-shaped reflection cast on a wall, but once manufacturers get their hands on this, all kinds of weird and wonderful materials will start showing up.
So far the yield of papers relevant to real-time rendering practitioners is disappointingly low; perhaps more relevant papers will show up when the official list is published. In any case, the early list of courses has a lot of relevant material, and I have reason to believe the final list will have even more good stuff on it. In addition, the Talks (formerly Sketches) program always has useful stuff, Will Wright is giving a keynote speech, and the Electronic Theater (which is back, renamed as the Evening Theater) now has real-time content, so there are more than enough reasons to attend SIGGRAPH this year (and it’s in New Orleans!). Registration has already started!
CryEngine3 presentation
This detailed presentation on Crytek’s latest engine at the regional Triangle Game Conference slipped completely under my radar, but Wolfgang Engel just pointed it out to me. It’s on Crytek’s presentations page, which has a bunch of other good stuff on it as well.
The presentation includes lots of great information on their new deferred lighting system, which is timely since I am just working on a lengthy blog post on this very subject (hopefully to be finished soon). They also tease about their new dynamic global illumination system, to be presented at SIGGRAPH 2009.
Odds and Ends
It’s 5/7/09, a nice odd sequence, so time for a few odds and ends I’ve collected.
OK, this is worth a few minutes of your life: the elevated demo is awe-inspiring. Terrain generation (be patient when you start it), fly-bys, and music, all in less than 4096 bytes. By way of comparison, an empty MS Word document is 9834 bytes. (thanks to Steve Worley)
Google has put out a browser-based low-level 3D graphics API called O3D. API here. Demos here. Some initial impressions here. It will be interesting to see if they succeed where so many others have failed.
There is a call for participation out for a new book series called “Game Engine Gems“, edited by Eric Lengyel. (thanks to Marwan Ansari)
The main thing I look at on the SIGGRAPH exhibition floor is the book booths. Good books are such a ridiculous bargain: if a book like Geometric Tools saves a programmer 2 hours of time, it’s paid for itself. One new book that I want to see is Real-Time Cameras, by Mark Haigh-Hutchinson, which came out this April. Looking around for more info, I noticed this sad note. I never met Mark, but we corresponded a few times. He came up with a clever idea to avoid performing division when doing a point in polygon test; I folded this into the CrossingsMultiplyTest Graphics Gems code here, crediting him.
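For the curious, the divide-free trick looks roughly like this. This is my Python paraphrase of the idea behind the CrossingsMultiplyTest code, not the Graphics Gems C source itself:

```python
def point_in_polygon(px, py, poly):
    """Crossings (even-odd) point-in-polygon test, with the edge/ray
    intersection computed by a sign-aware cross-multiply instead of a
    divide.  poly is a list of (x, y) vertices."""
    inside = False
    x0, y0 = poly[-1]
    yflag0 = y0 >= py
    for x1, y1 in poly:
        yflag1 = y1 >= py
        if yflag0 != yflag1:
            # The edge straddles the test point's horizontal line.  Instead
            # of dividing to find the intersection x and comparing it with
            # px, cross-multiply; because the y-flags differ, yflag1 carries
            # the sign of (y0 - y1), so comparing against it corrects for
            # the inequality flipping when multiplying by a negative.
            if (((y1 - py) * (x0 - x1) >=
                 (x1 - px) * (y0 - y1)) == yflag1):
                inside = not inside
        x0, y0, yflag0 = x1, y1, yflag1
    return inside
```

Replacing a divide per crossing edge with a multiply was a real win on the hardware of the time, and it also sidesteps any divide-by-zero concerns for horizontal edges.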
I’ve been looking at GPU capabilities and benchmarking information lately. Some nice resources:
- You probably know about the benchmarking group Futuremark. Me, I hadn’t realized they had useful stats at their site: see the Futuremark ORB links at the bottom of the page and start clicking.
- Two applications that tell you a ton about your card’s capabilities: GPU-Z, with a ton of information and a statistics page & cute map of downloads at their site, and GPU Caps, which also includes CUDA-related information and some nice little OpenGL benchmarks.
- Chris Dragan has a web database that provides a fair amount of data on card support for DirectX capabilities and OpenGL extensions.
- The Notebook Check site has way too much information about many laptop graphics accelerators.
- nHancer is a utility for NVIDIA cards. It lets you get at all sorts of different capabilities on your GPU, on a per-game basis. There are also interesting antialiasing and anisotropic filtering comparison pages (click on the radio buttons). (thanks to Mauricio Vives)
Coincidental world: it turns out there’s a different “Eric Haines” out there that made a well-received 3D graphics game for the iPhone, Realmaze 3D. I’m not sure how it compares to his The Magical Flying Pink Pony Game, which looks awesome. (thanks to Nikolai Sander)
I’ve seen similar real-world illusions, but still thought this one was pretty great. (Addendum: Morgan McGuire found this even-better video of the effect.)