The SIGGRAPH Course “Advances in Real-Time Rendering for 3D Graphics and Games” has been held since 2006 with a consistently high level of quality. However, the hosting of the materials is scattered across a few different websites, and the older years suffer from broken links and other issues. We are happy to host the course’s new home on a subdomain of this site: http://advances.realtimerendering.com/. At the moment only the SIGGRAPH 2010 course materials are present, but previous years will go up shortly.
SIGGRAPH 2010 Game Content Roundup
With less than two weeks until the conference, here’s my final pre-SIGGRAPH roundup of all the game development and real-time rendering content. This is either to help convince people who are still on the fence about attending (unlikely at this late date) or to help people who are trying to decide which sessions to go to (more likely). If you won’t be able to attend SIGGRAPH this year, this might at least help you figure out which slides, videos, and papers to hunt for after the conference.
First of all, the SIGGRAPH online scheduler is invaluable for helping to sort out all the overlapping sessions (even if you just “download” the results into Eric’s lower-tech version). The iPhone app may show up before the conference, but given the vagaries of iTunes app store approval, I wouldn’t hold my breath.
The second resource is the Games Focus page, which summarizes the relevant content for game developers in one handy place. It makes a good starting point for building your schedule; the rest of this post goes into additional detail.
My previous posts about the panels and the talks, as well as several posts about the courses, go into more detail on the content available in these programs.
Exhibitor Tech Talks are sponsored talks by various vendors, and are often quite good. Although the Games Focus page links to the Exhibitor Tech Talk page, for some reason that page has no information about the AMD and NVIDIA tech talks (the Intel talk on Inspecting Complex Graphics Scenes in a Direct X Pipeline, about their Graphics Performance Analyzer tool, could be interesting). NVIDIA does have all the details on their tech talks at their SIGGRAPH 2010 page; the ones on OpenGL 4.0 for 2010, Parallel Nsight: GPU Computing and Graphics Development in Visual Studio, and Rapid GPU Ray Tracing Development with NVIDIA OptiX look particularly relevant. AMD has no such information available anywhere: FAIL.
One program not mentioned in the Games Focus page is new this year: SIGGRAPH Dailies!, in which artists show a specific piece of artwork (animation, cutscene sequence, model, lighting setup, etc.) and discuss it for two minutes. This is a great program, giving artists a unique place to showcase the many bits of excellence that go into any good film or game. Although no game pieces got in this year, the show order includes great work from films such as Toy Story 3, Tangled, Percy Jackson, A Christmas Carol, The Princess and The Frog, Ratatouille, and Up. The show is repeated on Tuesday and Wednesday, overlapping the Electronic Theater (which also should not be missed; note that it is shown on Monday evening as well).
One of my favorite things about SIGGRAPH is the opportunity for film and game people to talk to each other. As the Game-Film Synergy Chair, my primary responsibility was to promote content of interest to both. This year there are four such courses (two of which I am organizing and speaking in myself): Global Illumination Across Industries, Color Enhancement and Rendering in Film and Game Production, Physically Based Shading Models in Film and Game Production, and Beyond Programmable Shading I & II.
Besides the content specifically designed to appeal to both industries, a lot of the “pure film” content is also interesting to game developers. The Games Focus page describes one example (the precomputed SH occlusion used in Avatar), and hints at a lot more. But which?
My picks for “film production content most likely to be relevant to game developers”: the course Importance Sampling for Production Rendering, the talk sessions Avatar in Depth, Rendering Intangibles, All About Avatar, and Pipelines and Asset Management, the CAF production sessions Alice in Wonderland: Down the Rabbit Hole, Animation Blockbuster Breakdown, Iron Man 2: Bringing in the “Big Gun”, Making “Avatar”, The Making of TRON: LEGACY, and The Visual Style of How To Train Your Dragon, and the technical papers PantaRay: Fast Ray-Traced Occlusion Caching, An Artist-Friendly Hair Shading System, and Smoothed Local Histogram Filters. (Unlike much of the other film production content, paper presentations are always recorded on video, so if a paper presentation conflicts with something else you can safely skip it.)
Interesting, but more forward-looking film production stuff (volumetric effects and simulations that aren’t feasible for games now but might be in future): the course Volumetric Methods in Visual Effects, the talk sessions Elemental Training 101, Volumes and Precipitation, Simulation in Production, and Blowing $h!t Up, and the CAF production session The Last Airbender: Harnessing the Elements: Earth, Air, Water, and Fire.
Speaking of forward-looking content, SIGGRAPH papers written by academics (as opposed to film professionals) tend to fall in this category (in the best case; many of them are dead ends). I haven’t had time to look at the huge list of research papers in detail; I highly recommend attending the Technical Papers Fast-Forward to see which papers are worth paying closer attention to (it’s also pretty entertaining).
Some other random SIGGRAPH bits:
- Posters are of very mixed quality (they have the lowest acceptance bar of any SIGGRAPH content) but quickly skimming them doesn’t take much time, and there is sometimes good stuff there. During lunchtime on Tuesday and Wednesday, the poster authors are available to discuss their work, so if you see anything interesting you might want to come back then and ask some questions.
- The Studio includes several workshops and presentations of interest, particularly for artists.
- The Research Challenge has an interesting interactive haunted house concept (Virtual Flashlight for Real-Time Scene Illumination and Discovery) presented by the Square Enix Research and Development Division.
- The Geek Bar is a good place to relax and watch streaming video of the various SIGGRAPH programs.
- The SIGGRAPH Reception, the Chapters Party, and various other social events throughout the week are great opportunities to meet, network, and talk graphics with lots of interesting and talented people from outside your regular circle of colleagues.
I will conclude with the list of game studios presenting at SIGGRAPH this year: Activision Studio Central, Avalanche Software, Bizarre Creations, Black Rock Studio, Bungie, Crytek, DICE, Disney Interactive Research, EDEN GAMES, Fantasy Lab, Gearbox, LucasArts, Naughty Dog, Quel Solaar, tri-Ace, SCE Santa Monica Studio, Square Enix R&D, Uber Entertainment, Ubisoft Montreal, United Front Games, Valve, and Volition. I hope for an even longer list in 2011!
Another SIGGRAPH Scheduler
I’ve messed around with various scheduling methods over the years for SIGGRAPH, but find I dislike the form factor of PDA-like devices: you can see a few hours, or maybe a day’s activities at best. Taking notes can be tiresome, finding anything takes lots of clicks, and sometimes the battery dies.
So for the past few years I’ve locked onto classic graphite stick & cellulose technology. Honestly, I like it a lot: it folds up and fits in my pocket, it’s easy to see conflicts among events, I can instantly figure out when I’m free, and there’s lots of room on the back for notes and whatnot. At the end of the conference I automatically have a hardcopy, no printing necessary. I mention it here as an honestly useful option, as this low-tech approach works for me. The main drawback is that you look like a nerd to other nerds. Hey, I like my iPod Touch, and I’ll put the SIGGRAPH Advance Program on it with Discover, but the sheet o’ paper will be my high-level quick & dirty way to navigate and write down information. It’s sort of like how I prefer RememberTheMilk to Google Calendar for reminders: I can enter data very simply, without time wasted navigating the UI. Now if only the sheet of paper would automatically unfold when I take it out of my pocket, I could increase efficiency by 0.43 seconds.
More SIGGRAPH Course Updates
After my last SIGGRAPH post, I spent a little more time digging around in the SIGGRAPH online scheduler, and found some more interesting details:
Global Illumination Across Industries
This is another film-game crossover course. It starts with a 15-minute introduction to global illumination by Jaroslav Křivánek, a leading researcher in efficient GI algorithms. It continues with six 25-30 minute talks:
- Ray Tracing Solution for Film Production Rendering, by Marcos Fajardo, Solid Angle. Marcos created the Arnold ray tracer, which was adopted by Sony Pictures Imageworks for all of their production rendering (including CG animation features like Cloudy with a Chance of Meatballs and VFX for films like 2012 and Alice in Wonderland). This is unusual in film production; most VFX and animation houses use rasterization renderers like RenderMan.
- Point-Based Global Illumination for Film Production, by Per Christensen, Pixar. Per won a Sci-Tech Oscar for this technique, which is widely used in film production.
- Ray Tracing vs. Point-Based GI for Animated Films, by Eric Tabellion, PDI/Dreamworks. Eric worked on the global illumination (GI) solution which Dreamworks used in Shrek 2; it will be interesting to hear what he has to say on the differences between the two leading film production GI techniques.
- Adding Real-Time Point-based GI to a Video Game, Michael Bunnell, Fantasy Lab. Mike was also awarded the Oscar for the point-based technique (Christophe Hery was the third winner). He actually originated it as a real-time technique while working at NVIDIA; while Per and Christophe developed it for film rendering, Mike founded Fantasy Lab to further develop the technique for use in games (a minimal sketch of the gather step at the heart of the point-based approach appears after this course description).
- Pre-computing Lighting in Games, David Larsson, Illuminate Labs. Illuminate Labs makes very good prelighting tools for games; I used their Turtle plugin for Maya when working on God of War III and was impressed with its speed, quality, and robustness.
- Dynamic Global Illumination for Games: From Idea to Production, Anton Kaplanyan, Crytek. Anton developed the cascaded light propagation volume technique used in CryEngine 3 for dynamic GI; the I3D 2010 paper describing the technique can be found on Crytek’s publication page.
The course concludes with a 5-minute Q&A session with all speakers.
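For readers who haven’t seen the point-based approach before, here is a minimal sketch of the gather step at its heart: scene surfaces are approximated by a cloud of “surfels” (oriented disks carrying reflected radiance), and each receiver point sums their contributions using an approximate disk-to-point form factor. This is purely my own illustration with hypothetical names throughout, and it ignores visibility; the production systems discussed in the course add hierarchical clustering of surfels, occlusion handling, and multiple bounces.

```cpp
#include <cmath>
#include <vector>

// Minimal, illustrative sketch of the gather step in point-based GI.
// Scene geometry is approximated by "surfels": oriented disks storing
// the radiance they reflect. Names and layout are hypothetical.

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  scale(Vec3 a, float s){ return {a.x * s, a.y * s, a.z * s}; }

struct Surfel {
    Vec3  position;
    Vec3  normal;    // unit length
    float area;      // disk area
    Vec3  radiance;  // outgoing radiance stored on the disk
};

// One-bounce indirect light arriving at receiver point p with normal n,
// ignoring visibility. Uses the classic disk-to-point form factor
// approximation F ~= (A * cosThetaE * cosThetaR) / (pi * d^2 + A),
// which stays bounded when the receiver is very close to a disk.
Vec3 GatherIndirect(Vec3 p, Vec3 n, const std::vector<Surfel>& surfels)
{
    const float kPi = 3.14159265f;
    Vec3 sum = {0.0f, 0.0f, 0.0f};
    for (const Surfel& s : surfels) {
        Vec3  d     = sub(s.position, p);
        float dist2 = dot(d, d);
        if (dist2 <= 0.0f) continue;
        Vec3  dir  = scale(d, 1.0f / std::sqrt(dist2));       // receiver -> emitter
        float cosR = std::fmax(0.0f,  dot(n, dir));            // at the receiver
        float cosE = std::fmax(0.0f, -dot(s.normal, dir));     // at the emitter
        float form = (s.area * cosE * cosR) / (kPi * dist2 + s.area);
        sum.x += s.radiance.x * form;
        sum.y += s.radiance.y * form;
        sum.z += s.radiance.z * form;
    }
    return sum;
}
```

The reason the technique scales to film (and potentially game) scenes is the hierarchy: distant surfels are gathered as aggregated clusters, so the cost per receiver grows far more slowly than the raw surfel count.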
An Introduction to 3D Spatial Interaction With Videogame Motion Controllers
This course is presented by Joseph LaViola (director of the University of Central Florida Interactive Systems and User Experience Lab) and Richard Marks from Sony Computer Entertainment (principal inventor of the EyeToy, PlayStation Eye, and PlayStation Move). Richard Marks gives two 45-minute talks, one on 3D Interfaces With 2D and 3D Cameras and one on 3D Spatial Interaction with the PlayStation Move. Prof. LaViola discusses Common Tasks in 3D User Interfaces, Working With the Nintendo Wiimote, and 3D Gesture Recognition Techniques.
Recent Advances in Real-Time Collision and Proximity Computations for Games and Simulations
After an introduction to the topic of collision detection and proximity queries, this course goes over recent research in collision detection for games including articulated, deformable and fracturing models. It concludes with optimization-oriented talks such as GPU-Based Proximity Computations (presented by Dinesh Manocha, University of North Carolina at Chapel Hill, one of the most prominent researchers in the area of collision detection), Optimizing Proximity Queries for CPU, SPU and GPU (presented by Erwin Coumans, Sony Computer Entertainment US R&D, primary author of the Bullet physics library, which is widely used for both games and feature films), and PhysX and Proximity Queries (presented by Richard Tonge, NVIDIA, one of the architects of the AGEIA physics processing unit – the company was bought by NVIDIA and their software library formed the basis of the GPU-accelerated PhysX library).
Advanced Techniques in Real-Time Hair Rendering and Simulation
This course is presented by Cem Yuksel (Texas A&M University) and Sarah Tariq (NVIDIA). Between them, they have done a lot of the recent research on efficient rendering and simulation of hair. The course covers all aspects of real-time hair rendering: data management, the rendering pipeline, transparency, antialiasing, shading, shadows, and multiple scattering. It concludes with a discussion of real-time dynamic simulation of hair.
(Schedule for the Global Illumination Across Industries course: Fajardo first, then Christensen at 2:40 pm, Tabellion at 3:05 pm, a break at 3:30 pm, Bunnell at 3:45 pm, Larsson at 4:15 pm, Kaplanyan at 4:45 pm, and conclusions and Q&A at 5:10 pm.)
SIGGRAPH Scheduler & Course Update
For anyone still working on their SIGGRAPH 2010 schedule, SIGGRAPH now has an online scheduler available. They are also promising an iPhone app, but this has not yet materialized. Most courses (sadly, only one of mine) now have detailed schedules. These reveal some more detail about two of the most interesting courses for game and real-time rendering developers:
Advances in Real-Time Rendering in 3D Graphics and Games
The first half, Advances in Real-Time Rendering in 3D Graphics and Games I (Wednesday, 28 July, 9:00 AM – 12:15 PM, Room 515 AB) starts with a short introduction by Natalya Tatarchuk (Bungie), and continues with four 45 to 50-minute talks:
- Rendering techniques in Toy Story 3, by John Ownby, Christopher Hall and Robert Hall (Disney).
- A Real-Time Radiosity Architecture for Video Games, by Per Einarsson (DICE) and Sam Martin (Geomerics)
- Real-Time Order Independent Transparency and Indirect Illumination using Direct3D 11, by Jason Yang and Jay McKee (AMD)
- CryENGINE 3: Reaching the Speed of Light, by Anton Kaplanyan (Crytek)
The second half, Advances in Real-Time Rendering in 3D Graphics and Games II (Wednesday, 28 July, 2:00 PM – 5:15 PM, Room 515 AB) continues with five more talks (these are more variable in length, ranging from 25 to 50 minutes):
- Sample Distribution Shadow Maps, by Andrew Lauritzen (Intel)
- Adaptive Volumetric Shadow Maps, by Marco Salvi (Intel)
- Uncharted 2: Character Lighting and Shading, by John Hable (Naughty Dog)
- Destruction Masking in Frostbite 2 using Volume Distance Fields, by Robert Kihl (DICE)
- Water Flow in Portal 2, by Alex Vlachos (Valve)
It concludes with a short panel (Open Challenges for Rendering in Games and Future Directions) and a Q&A session with all the course speakers.
Beyond Programmable Shading
The first half, Beyond Programmable Shading I (Thursday, 29 July, 9:00 AM – 12:15 PM, Room 515 AB) includes seven 20-30 minute talks:
- Looking Back, Looking Forward, Why and How is Interactive Rendering Changing, by Mike Houston (AMD)
- Five Major Challenges in Interactive Rendering, by Johan Andersson (DICE)
- Running Code at a Teraflop: How a GPU Shader Core Works, by Kayvon Fatahalian (Stanford)
- Parallel Programming for Real-Time Graphics, by Aaron Lefohn (Intel)
- DirectCompute Use in Real-Time Rendering Products, by Chas. Boyd (Microsoft)
- Surveying Real-Time Beyond Programmable Shading Rendering Algorithms, by David Luebke (NVIDIA)
- Bending the Graphics Pipeline, by Johan Andersson (DICE)
The second half, Beyond Programmable Shading II (Thursday, 29 July, 2:00 PM – 5:15 PM, Room 515 AB) starts with a short “re-introduction” by Aaron Lefohn (Intel) and continues with five 20-35 minute talks:
- Keeping Many Cores Busy: Scheduling the Graphics Pipeline, by Jonathan Ragan-Kelley (MIT)
- Evolving the Direct3D Pipeline for Real-Time Micropolygon Rendering, by Kayvon Fatahalian (Stanford)
- Decoupled Sampling for Real-Time Graphics Pipelines, by Jonathan Ragan-Kelley (MIT)
- Deferred Rendering for Current and Future Rendering Pipelines, by Andrew Lauritzen (Intel)
- PantaRay: A Case Study in GPU Ray-Tracing for Movies, by Luca Fascione (Weta) and Jacopo Pantaleoni (NVIDIA)
and closes with a 15-minute wrap-up (What’s Next for Interactive Rendering Research?) by Mike Houston (AMD), followed by a 45-minute panel (What Role Will Fixed-Function Hardware Play in Future Graphics Architectures?) with course speakers Mike Houston, Kayvon Fatahalian, and Johan Andersson, joined by Steve Molnar (NVIDIA) and David Blythe (Intel). (Thanks to Aaron Lefohn for the update.)
Both of these courses look extremely strong, and I recommend them to any SIGGRAPH attendee interested in real-time rendering (I definitely plan to attend them!).
Four presentations by DICE is an unusually large number for a single game developer, but that isn’t the whole story; they are actually doing two additional presentations in the Stylized Rendering in Games course, for a total of six!
SIGGRAPH 2010 Panels
I don’t often go to SIGGRAPH panels, but this year’s list includes three that look very tempting. Here they are, sorted by date:
Future Directions in Graphics Research
Sunday, 25 July, 3:45 PM – 5:15 PM
The SIGGRAPH website description says, “This panel presents the results of an NSF-funded workshop on defining broader, fundamental long-term research areas for potential funding opportunities in medical imaging and device design, manufacturing, computational photography, scientific visualization, and many other emerging areas in graphics research.” It’s important to know where the funding for computer graphics research is going, and what the researchers think the most promising future directions are. The panelists include some of the most prominent and influential computer graphics professors: Jessica Hodgins from Carnegie Mellon, James Foley (first author of “Computer Graphics: Principles and Practice”) from Georgia Tech, Pat Hanrahan (who probably has his name on more SIGGRAPH papers than anyone in the world) from Stanford University, and Donald P. Greenberg (whose list of former students would make a great first draft for a “who’s who” of computer graphics) from Cornell.
CS 292: The Lost Lectures; Computer Graphics People and Pixels in the Past 30 Years
Monday, 26 July, 3:45 PM – 5:15 PM
This is a unique idea for a panel – in the 1980s, Ed Catmull and Jim Blinn taught a hugely influential course on computer graphics. Among many others, it inspired Richard Chuang, who went on to co-found PDI. While teaching the course, Ed Catmull was building Lucasfilm’s computer graphics group, which later became Pixar. The panelists are Ed Catmull and Richard Chuang, who according to the website description “use video from the course to reflect on the evolution of computer graphics – from the genesis of Pixar and PDI to where we are today.” Catmull in particular is an amazing speaker – this looks well worth attending.
Large Steps Toward Open Source
Thursday, 29 July, 9:00 AM – 10:30 AM
Several influential film industry groups have open-sourced major bits of internal technology recently. This panel discusses why they did it, what the benefits were, and where the challenges lay. This is definitely relevant to the game industry – would it make sense for us to do the same? (Insomniac is already leading the way – I wish they had a representative on this panel). Panelists include Rob Bredow (CTO of Sony Pictures Imageworks, which has recently launched several important open source initiatives), Andy Hendrickson (CTO of Walt Disney Animation Studios, which has recently done the same, most notably including the Ptex texture mapping system), Florian Kainz (Principal R&D Engineer at Industrial Light & Magic and the key individual behind OpenEXR, which ILM open-sourced in 2003), and Bill Polson (Lead of Production Engineering at Pixar Animation Studios). Pixar doesn’t currently have any open-source initiatives that I know of – does Bill’s participation mean that they are about to announce one?
SIGGRAPH 2010 Talks
After the courses, the next best source of good SIGGRAPH material for games and real-time graphics professionals is the Talks (formerly called Sketches), and this year is no exception. The final list of Talks can be found on the SIGGRAPH Talks webpage, as well as in the Advance Program PDF. I will summarize the most relevant sessions here, sorted by date:
Avatar for Nerds
Sunday, 25 July, 2-3:30 pm
- A Physically Based Approach to Virtual Character Deformations (Simon Clutterbuck and James Jacobs from Weta Digital Ltd.) – I saw an early version of this presentation at Digital Domain a few weeks ago – although they use an expensive physical muscle simulation, they bake the results into a pose-space deformation-like representation; this kind of approach could work for games as well (pose-space deformation approaches in general offer a useful way to “bake” expensive deformations; their use in games should be further explored).
- Rendering “Avatar”: Spherical Harmonics in Production (Nick McKenzie, Martin Hill and Jon Allitt from Weta Digital Ltd.) – The website says “Application of spherical harmonics in a production rendering environment for accelerated final-frame rendering of complex scenes and materials.” This sounds very similar to the way spherical harmonics are used in games, making this talk likely to yield applicable ideas (a minimal SH irradiance sketch follows this list).
- PantaRay: Directional Occlusion for Fast Cinematic Lighting of Massive Scenes (Jacopo Pantaleoni, Timo Aila, and David Luebke from NVIDIA Research; Luca Fascione, Martin Hill and Sebastian Sylwan from Weta Digital Ltd.) – the website mentions “…a novel system for precomputation of ray-traced sparse, directional occlusion caches used as a primary lighting technology during the making of Avatar.” Like the previous talk, this sounds very game-like; these are interesting examples of the convergence between graphics techniques in film and games going in the less common direction, from games to film rather than vice-versa. Note that several of the authors of this talk are speaking at the “Beyond Programmable Shading” course, and there is also a paper about PantaRay (called “A System for Directional Occlusion for Fast Cinematic Lighting of Massive Scenes”).
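To make the “spherical harmonics in production” connection concrete, here is a minimal sketch of the classic 9-coefficient SH irradiance evaluation from Ramamoorthi and Hanrahan, which is the flavor of SH machinery games (and, by the sound of these talks, Weta) typically rely on. The coefficient ordering and function name below are my own; this is not code from any of the talks.

```cpp
#include <array>

// Minimal sketch: evaluating diffuse irradiance in a unit direction (x, y, z)
// from 9 spherical-harmonic coefficients (one set per color channel), using
// Ramamoorthi & Hanrahan's quadratic-form constants. The coefficient ordering
// is an assumption: index 0 = L00; 1..3 = L1-1, L10, L11; 4..8 = L2-2, L2-1,
// L20, L21, L22.
float IrradianceSH9(const std::array<float, 9>& L, float x, float y, float z)
{
    const float c1 = 0.429043f, c2 = 0.511664f;
    const float c3 = 0.743125f, c4 = 0.886227f, c5 = 0.247708f;
    return c1 * L[8] * (x * x - y * y)
         + c3 * L[6] * z * z
         + c4 * L[0]
         - c5 * L[6]
         + 2.0f * c1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z)
         + 2.0f * c2 * (L[3] * x + L[1] * y + L[2] * z);
}
```

Evaluated per vertex or per pixel against baked or streamed SH probes, this costs only a handful of multiply-adds, which is a big part of why the representation crosses so easily between offline and real-time rendering.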
Split Second Screen Space
Monday, 26 July, 2-3:30 pm
- Screen Space Classification for Efficient Deferred Shading (Neil Hutchinson, Jeremy Moore, Balor Knight, Matthew Ritchie and George Parrish from Black Rock Studio) – website sez, “This talk introduces a general, extendible method for screen classification and demonstrates how its use accelerated shadowing, lighting, and post processing in Disney’s Split/Second video game.” This sounds like a useful extension of SPU-based screen tile classification methods; I wonder if it is cross-platform (a minimal sketch of this kind of tile classification appears after this list).
- How to Get From 30 to 60 Frames Per Second in Video Games for “Free” (Dmitry Andreev from LucasArts) – well, this title is promising a lot! The website description doesn’t say much more than the title, but if LucasArts actually uses it in production this might be useful.
- Split-Second Motion Blur (Kenny Mitchell, Matt Ritchie and Greg Modern from Black Rock Studio) – the description mentions “image and texture-space sampling techniques”, so this is probably a combination of blurring road textures in the direction of motion with screen-space techniques. Split-Second looks good; an overall description of their motion blur system should be interesting to hear.
- A Deferred-Shading Pipeline for Real-Time Indirect Illumination (Cyril Soler and Olivier Hoel from INRIA Rhone-Alpes; Frank Rochet from EDEN GAMES) – there have been screen-space indirect illumination (approximation) techniques published before, but none used in games that I know of; there could be some useful ideas here.
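As a rough illustration of what screen-space classification means in a deferred renderer, here is a minimal sketch: each screen tile accumulates a bitmask of the shading features its pixels need, and the renderer then runs the cheapest shader variant that covers each tile’s mask. The property set and names below are hypothetical and certainly not Black Rock’s actual categories.

```cpp
#include <cstdint>
#include <vector>

// Minimal sketch of screen-tile classification for deferred shading
// (illustrative only). Each tile gets a bitmask of the shading features
// any of its pixels need; a tile's mask selects a specialized shader.

enum : uint32_t {
    kTileSky      = 1u << 0,  // at least one sky pixel
    kTileShadowed = 1u << 1,  // at least one pixel inside a shadow frustum
    kTileEmissive = 1u << 2,  // at least one emissive pixel
};

struct GBufferSample {
    bool isSky;
    bool inShadowRange;
    bool isEmissive;
};

// Classify an image into (width/tileSize) x (height/tileSize) tiles;
// width and height are assumed to be multiples of tileSize.
std::vector<uint32_t> ClassifyTiles(const std::vector<GBufferSample>& gbuffer,
                                    int width, int height, int tileSize)
{
    const int tilesX = width / tileSize;
    const int tilesY = height / tileSize;
    std::vector<uint32_t> tileMask(tilesX * tilesY, 0u);

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const GBufferSample& s = gbuffer[y * width + x];
            uint32_t bits = 0;
            if (s.isSky)         bits |= kTileSky;
            if (s.inShadowRange) bits |= kTileShadowed;
            if (s.isEmissive)    bits |= kTileEmissive;
            tileMask[(y / tileSize) * tilesX + (x / tileSize)] |= bits;
        }
    }
    return tileMask;  // e.g. a tile with only kTileSky can skip lighting entirely
}
```

A pass like this maps naturally onto the SPUs on PS3 or onto a compute shader on DirectX 11 hardware, with the per-tile masks used to build lists of tiles for each specialized lighting pass.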
APIs for Rendering
Wednesday, 28 July, 2-3:30 pm
- Open Shading Language (Larry Gritz, Clifford Stein, Chris Kulla and Alejandro Conty from Sony Pictures Imageworks) – this Open-Source project from Sony Pictures Imageworks is interesting in that it is a shading language designed from the ground up for ray-tracing renderers. Probably not of immediate relevance to games, but some day…
- REYES using DirectX 11 (Andrei Tatarinov from NVIDIA Corporation) – the website summary claims that this REYES implementation uses “not only the compute power of GPU, but also the fixed-function stages of the graphics pipeline.” This is something I have wanted to see someone try for a long time; the typical pure-Compute approaches to GPU-accelerated REYES seem wasteful, given the similarities between the existing fixed function units and some of the operations in the REYES algorithm. It will be interesting to see how efficient this implementation ends up being.
- WebGLot: High-Performance Visualization in the Browser (Dan Lecocq, Markus Hadwiger, and Alyn Rockwood from King Abdullah University of Science and Technology) – although anything that makes it easier for browser-based games to use the GPU is interesting, I’m not familiar enough with the existing approaches to judge how new this stuff is.
Games & Real Time
Thursday, 29 July, 10:45 am-12:15 pm
- User-Generated Terrain in ModNation Racers (James Grieve, Clint Hanson, John Zhang, Lucas Granito and Cody Snyder from United Front Games) – from all accounts, the system for user-generated tracks and terrain in ModNation Racers is impressive; a talk on this system by its developers is well worth attending.
- Irradiance Rigs (Hong Yuan from University of Massachusetts Amherst; Derek Nowrouzezahrai from University of Toronto; Peter-Pike Sloan from Disney Interactive Studios) – this looks like an extension of light-probe lighting techniques; it promises better results for large objects and / or near lighting. These techniques are very common in games, and this talk looks likely to be useful.
- Practical Morphological Anti-Aliasing on the GPU (Venceslas Biri and Adrien Herubel from Université Paris-Est; Stephane Deverly from Duran Duboi Studio) – since God of War III produced great visuals from an SPU implementation of morphological antialiasing, there has been much interest in the games industry in a more GPU-friendly version of the algorithm, for use on Xbox 360 or high-end PCs. It’s hard to tell from the short description on the website whether the version in this talk is any good, but it might well be worth attending the talk to find out.
- Curvature-Dependent Reflectance Function for Rendering Translucent Materials (Hiroyuki Kubo from Waseda University; Yoshinori Dobashi from Hokkaido University; Shigeo Morishima from Waseda University) – this sounds similar to the paper Curvature-Based Shading of Translucent Materials, such as Human Skin by Konstantin Kolchin (we discuss it in the section on “Wrap Lighting” in RTR3, since it is essentially an attempt to put wrap lighting on a physically sound footing). Since in most cases curvature can be precomputed, this could be a cheap way to get more accurate subsurface scattering effects; a minimal wrap-lighting sketch follows below.
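Since wrap lighting comes up in that last item, here is a minimal sketch of the basic trick and of how a precomputed curvature value might drive it. The curvature-to-wrap mapping below is a purely hypothetical placeholder, not the mapping from the talk or from Kolchin’s paper.

```cpp
#include <algorithm>

// Minimal sketch of the "wrap lighting" trick that curvature-based
// translucency shading builds on (my illustration, not the talk's code).
// Standard wrap lighting replaces saturate(N.L) with
//     saturate((N.L + w) / (1 + w)),
// letting light "wrap" past the terminator. Driving the wrap amount w
// from precomputed curvature (thin, highly curved regions scatter more)
// is the general idea; the mapping below is a hypothetical placeholder.

float WrapDiffuse(float NdotL, float wrap)
{
    float d = (NdotL + wrap) / (1.0f + wrap);
    return std::max(0.0f, std::min(1.0f, d));
}

float CurvatureToWrap(float curvature, float scatterScale)
{
    // Placeholder mapping: more curvature (1/radius) -> more wrap, clamped.
    return std::min(1.0f, curvature * scatterScale);
}

float SkinDiffuseTerm(float NdotL, float curvature)
{
    const float kScatterScale = 0.25f;  // hypothetical tuning constant
    return WrapDiffuse(NdotL, CurvatureToWrap(curvature, kScatterScale));
}
```

Because the curvature term can be baked per vertex or into a texture, the runtime cost is barely above a plain Lambert term, which is exactly why this family of approximations is attractive for games.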
A lot of the film production talk sessions also look interesting, even without an explicit game or real-time connection; I have often found useful information at such talks in previous years. These sessions include “Elemental Training 101”, “All About Avatar”, “Rendering Intangibles”, “Volumes and Precipitation”, “Simulation in Production”, “Blowing $h!t Up”, “Pipelines and Asset Management” and “Fur, Feathers and Trees”.
SIGGRAPH 2010 Courses Update
Since my original post about the SIGGRAPH 2010 courses, some of the courses now have updated speaker lists (including mine – regardless of what Eric may think, I’m not about to risk Hyper-Cerebral Electrosis by speaking for three hours straight). I’ll give the notable updates here:
Stylized Rendering in Games
Covered games will include:
- Borderlands (presented by Gearbox cofounder and chief creative officer Brian Martel as well as VP of product development Aaron Thibault)
- Brink (presented by lead programmer Dean Calver)
- The 2008 Prince of Persia (presented by lead 3D programmer Jean-François St-Amour)
- Battlefield Heroes (presented by graphics engineer Henrik Halén)
- Mirror’s Edge (also presented by Henrik Halén).
- Monday Night Combat (presented by art director Chandana Ekanayake) – thanks to Morgan for the update!
Physically Based Shading Models in Film and Game Production
- I’ll be presenting the theoretical background, as well as technical, production, and creative lessons from the adoption of physically-based shaders at the Activision studios.
- Also on the game side, Yoshiharu Gotanda (president, R&D manager, and co-founder of tri-Ace) will talk about some of the fascinating work he has been doing with physically based shaders.
On the film production side:
- Adam Martinez is a computer graphics supervisor at Sony Pictures Imageworks whose film work includes the Matrix series and Superman Returns; his talk will focus on the use of physically based shaders in Alice in Wonderland. Imageworks uses a ray-tracing renderer, unlike the micropolygon rasterization renderers used by most of the film industry; I look forward to hearing how this affects shading and lighting.
- Ben Snow is a visual effects supervisor at Industrial Light and Magic who has done VFX work on numerous films (many of them as CG or VFX supervisor) including Star Trek: Generations, Twister, The Lost World: Jurassic Park, The Mummy, Star Wars: Episode II – Attack of the Clones, King Kong, and Iron Man. Ben has pioneered the use of physically based shaders in Terminator Salvation and Iron Man 2, which I hope to learn more about from his talk.
Color Enhancement and Rendering in Film and Game Production
The game side of the course has two speakers in common with the “physically-based shading” course:
- Yoshiharu Gotanda will talk about his work on film and camera emulation at tri-Ace, which is every bit as interesting as his physical shading work.
- I’ll discuss my experiences introducing filmic color grading techniques at the Activision studios.
And one additional speaker:
- While working at Electronic Arts, Haarm-Pieter Duiker applied his experience from films such as the Matrix series and Fantastic Four to game development, pioneering the filmic tone-mapping technique recently made famous by John Hable (a minimal sketch of a filmic-style curve appears after this list). He then moved back into film production, working on Speed Racer and 2012 (for which he won a VES award). Haarm-Pieter also runs his own company which makes tools for film color management.
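For readers who haven’t run into it, here is a minimal sketch of a filmic-style tone mapping curve of the kind Hable popularized; the constants are the example values from his public write-ups as best I recall them, so treat the whole block as an illustration rather than anyone’s production code.

```cpp
#include <algorithm>

// Minimal sketch of a filmic ("Uncharted 2"-style) tone mapping curve.
// Operator shape and example constants follow Hable's public write-ups
// as best I recall them; illustrative only. Input is linear scene color.

static float FilmicCurve(float x)
{
    // Shoulder strength, linear strength, linear angle,
    // toe strength, toe numerator, toe denominator.
    const float A = 0.15f, B = 0.50f, C = 0.10f;
    const float D = 0.20f, E = 0.02f, F = 0.30f;
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

float TonemapFilmic(float linearValue, float exposure)
{
    const float W = 11.2f;  // linear white point
    float x = linearValue * exposure;
    float mapped = FilmicCurve(x) / FilmicCurve(W);
    // Result is display-referred in [0,1]; gamma encoding would follow.
    return std::max(0.0f, std::min(1.0f, mapped));
}
```

Color grading, the actual subject of this course, is typically applied on top of (or folded together with) a curve like this, often via 3D lookup tables.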
The theoretical background and film production side will be covered by a roster of speakers which (although I shouldn’t say this since I’m organizing the course) is nothing less than awe-inspiring:
- Dominic Glynn is lead engineer of image mastering at Pixar Animation Studios. He has worked on films including Cars, The Wild, Ratatouille, Up and Toy Story 3. Dominic will talk about how color enhancement and rendering is done at different stages of the Pixar rendering pipeline.
- Joseph Goldstone (Lilliputian Pictures LLC) is a prominent consulting color scientist; his film credits include Terminator 2: Judgment Day, Batman Returns, Apollo 13, The Fifth Element, Titanic, and Star Wars: Episode II – Attack of the Clones. He has contributed to industry standards committees such as the International Color Consortium (ICC) and the Academy of Motion Picture Arts and Sciences’ Image Interchange Framework.
- Joshua Pines is vice president of color imaging R&D at Technicolor; between his work at Technicolor, ILM and other production companies he has over 50 films to his credit, including Star Wars: Return of the Jedi, The Abyss, Terminator 2: Judgment Day, Jurassic Park, Schindler’s List, Forrest Gump, Twister, Mission: Impossible, Titanic, Saving Private Ryan, The Mummy, Star Wars: The Phantom Menace, The Aviator, and many others. Joshua led the development of ILM’s film scanning system and has a Technical Achievement Award from the Academy of Motion Picture Arts and Sciences for his work on film archiving.
- Jeremy Selan is the color pipeline lead at Sony Pictures Imageworks. He has worked on films including Spider-Man 2 and 3, Monster House, Surf’s Up, Beowulf, Hancock, and Cloudy with a Chance of Meatballs. Jeremy has contributed to industry standards committees such as the Digital Cinema Initiative (DCI), SMPTE, and the Academy of Motion Picture Arts and Sciences’ Image Interchange Framework. At the course, Jeremy will unveil an exciting new initiative he has been working on at Imageworks.
- The creative aspects of color grading will be covered by Stefan Sonnenfeld, senior vice president at Ascent Media Group as well as president, managing director, and co-founder of Company 3. An industry-leading DI colorist, Stefan has worked on almost one hundred films including Being John Malkovich, the Pirates of the Caribbean series, War of the Worlds, Mission: Impossible III, X-Men: The Last Stand, 300, Dreamgirls, Transformers, Sweeney Todd, Cloverfield, The Hurt Locker, Body of Lies, The Taking of Pelham 1 2 3, Transformers: Revenge of the Fallen, Where the Wild Things Are, Alice in Wonderland, Prince of Persia: The Sands of Time, and many others, as well as numerous high-profile television projects.
SIGGRAPH 2010 Course Scheduling
One of the challenges of SIGGRAPH is doing it all. My own method is to take a sheet of lined paper (remember that stuff?) and make columns for the days, each line being a half hour. One whole sheet holds it all, vs. me dorking around with my Palm/phone/Touch/whatever, scrolling around to see what’s what. Old school, but it works great.
Anyway, Naty’s recent summary of courses didn’t have course times. Here goes, mostly for my own benefit, in time order. Bolded are the ones I personally plan to attend and why, FWIW:
Perceptually Motivated Graphics, Visualization, and 3D Displays – Sunday afternoon
Physically Based Shading Models in Film and Game Production – Sunday afternoon. Toss up for me between this and the previous course. Naty’s the only speaker for this one, so it’s tempting to go, just to see his head explode after lecturing for 3+ hours.
Stylized Rendering in Games – Monday morning. I’m particularly pumped for this one, having done NPR work this last year.
Recent Advances in Real-Time Collision and Proximity Computations for Games and Simulations – Monday afternoon
Color Enhancement and Rendering in Film and Game Production – Tuesday morning. Naty’s a speaker.
Filtered Importance Sampling for Production Rendering – Tuesday morning
An Introduction to 3D Spatial Interaction With Videogame Motion Controllers – Tuesday afternoon
Advances in Real-Time Rendering in 3D Graphics and Games – all Wednesday. Traditional course, usually quite good.
Volumetric Methods in Visual Effects – Wednesday morning
Gazing at Games: Using Eye Tracking to Control Virtual Characters – Wednesday afternoon
Beyond Programmable Shading – all Thursday. The DICE talk last year was amazing, the others were also worthwhile.
Advanced Techniques in Real-Time Hair Rendering and Simulation – Thursday morning
Global Illumination Across Industries – Thursday afternoon
The “Advances” course used to always be Monday. Which was terrible last year, as it was scheduled against the last day of the colocated HPG conference (not a problem this year, since HPG is in Europe alternate years). I suspect someone realized that putting Advances and Beyond next to each other, and alongside the exhibition floor days, was good for pulling in game devs. Anyway, looks to be a great set of courses, other than the risk of head explosion.
If you want something lighter to start with on Sunday, try Glassner’s “Processing for Visual Artists and Designers” course. The Processing language is easy to learn and fun for quick bit-twiddling or other 2D effects, with all the usual 2D primitives and mouse support (and much else) built in.
SIGGRAPH 2010 Courses
This year, SIGGRAPH is making a very strong push to include more game and real-time content. A lot of the programs are yet to be published, but the full list of courses is now up on the conference website, and many of them are of interest. The courses have always been the SIGGRAPH program with the most relevant material for film and game production; this year the game side is particularly strong. If you are doing game graphics, the courses by themselves are reason enough to attend the conference.
Full disclosure – I am organizing two of these courses, so my description of them may not be fully objective 🙂
The courses which are most directly relevant to game developers:
- Advances in Real-Time Rendering in 3D Graphics and Games – this full-day course, organized by Natasha Tatarchuk, has been a highlight of SIGGRAPH since it was first presented in 2006 (the name’s a bit clunky, though). Each year Natasha solicits top-notch game and real-time rendering content for her course. SSAO was first presented at this course, as were cascaded light volumes and many other important techniques. This year includes presentations from game powerhouses Bungie, Naughty Dog, Crytek, DICE, and Rockstar, among others.
- Beyond Programmable Shading – another very strong full-day course, now in its third year. Like Natasha’s course, this course includes brand-new material every year. Focusing on GPU compute APIs such as CUDA, DirectCompute and OpenCL, the presentations tend to skew towards GPU vendors but have also included some groundbreaking game developer talks on topics like sparse voxel octrees (by id Software) and parallelism in graphics engines (DICE). This year, besides the usual suspects (NVIDIA, AMD, Intel, Microsoft), there will be a talk by Johan Andersson from DICE (he gave the parallelism talk last year and I can’t wait to hear what he’s been up to since), one from Kayvon Fatahalian from Stanford (who has been doing some fascinating research on GPU-accelerated micropolygon rendering), and finally one from Luca Fascione of Weta. Hopefully Luca will be talking about the GPU-accelerated PantaRay system he helped design to render the jungles in Avatar. PantaRay is used to precompute occlusion, a very game-like thing to do.
- Stylized Rendering in Games – in recent years, games have started to explore the universe of possible styles beyond photorealism. The course is organized by Morgan McGuire, who is also chairing this year’s NPAR conference, and includes presentations by the developers of some of the most prominent stylized games.
- Physically Based Shading Models in Film and Game Production – this is one of two courses I am organizing. This topic has fascinated me for years and was a major focus of my work on RTR3. Physically based shading is currently a hot topic in film production, making this a natural film-games crossover topic (my primary focus on the conference committee). I’ve been able to get speakers with really strong film production backgrounds, so I’m optimistic that this course will turn out well.
- Color Enhancement and Rendering in Film and Game Production – this is my other course. Most of my work in this area is more recent than the physical shader stuff so RTR3 doesn’t have as much material on it; perhaps I can remedy this in RTR4. Although this topic is well-established in film production (a field from which I’ve been able to get good speakers for this course as well), it is still an area of active development in games, as attested by the excellent GDC 2010 talk by John Hable.
- Global Illumination Across Industries – this is another film-games crossover course, with presentations by top people working on global illumination in both industries (the games side is represented by Illuminate Labs for precomputed GI and Crytek for dynamic GI).
- An Introduction to 3D Spatial Interaction With Videogame Motion Controllers – between Microsoft’s Project Natal, Sony’s PlayStation Move, and the Wii MotionPlus, motion controllers are an extremely timely topic. The speakers include Richard Marks, the brains behind the EyeToy, PlayStation Eye, and PlayStation Move.
- Recent Advances in Real-Time Collision and Proximity Computations for Games and Simulations – this is an important area, and the speakers are leading researchers in the field. Among other topics, the course will cover the collision detection systems in the PhysX and Bullet libraries.
- Advanced Techniques in Real-Time Hair Rendering and Simulation – while this topic is a bit more of a niche, it is of interest for many games and the speakers have done some of the leading work in this area.
- Volumetric Methods in Visual Effects – one of the main differences between game and film graphics is the amount and quality of atmospheric effects. Film VFX houses have been actively developing their own systems for modeling and rendering clouds, fog, fire, ocean spray, etc. This course includes a stellar cast of speakers from Digital Domain, Sony Pictures Imageworks, Rhythm & Hues, Side Effects (developers of Houdini), PDI/DreamWorks and Double Negative; anything these people don’t know about volumetric effects isn’t worth knowing. This course is likely to have lots of good ideas for stuff that isn’t possible in real-time yet, but will be in the near future.
- Filtered Importance Sampling for Production Rendering – another film rendering course which is likely to yield good medium- and long-term real-time ideas. Importance sampling is crucial for efficient, high-quality reflections from arbitrary BRDFs and lighting; it can be used with environment maps as well as ray tracing. Filtered importance sampling is a more general, more correct, and more expensive version of the common game trick of prefiltering cubemaps for glossy reflections. It has recently found wide use in film production, a topic about which the speakers (from major visual effects houses such as ILM, Image Movers Digital and MPC) are well-qualified to speak (a minimal importance-sampling sketch appears after this course list).
- Perceptually Motivated Graphics, Visualization, and 3D Displays – Understanding human visual perception and how it relates to graphics is important for knowing which corners can be safely cut and which ones will yield distracting artifacts; 3D displays are a timely topic for game developers as well, now that TV and console manufacturers are getting into the act.
- Gazing at Games: Using Eye Tracking to Control Virtual Characters – I’m not aware of any commercial games that use gaze tracking as an input method (the course is presented by academic researchers). If existing cameras such as the PlayStation Eye and Project Natal can track eyes with sufficient precision, this may be an important trend going forward, but if new equipment is needed this might not be relevant for a long time (if ever).
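Returning to the importance sampling course a few items up: the core idea is compact enough to show in a few lines. To estimate an integral of f, draw samples from a density p that roughly follows f’s shape and average f(x)/p(x); for glossy reflections, f is the BRDF times incoming radiance times the cosine term, and p is chosen proportional to the BRDF lobe. The sketch below is a generic, hypothetical illustration of the estimator, not code from the course; filtered importance sampling additionally prefilters the environment map so that far fewer samples are needed.

```cpp
#include <cmath>
#include <cstdio>
#include <functional>
#include <random>

// Minimal sketch of the Monte Carlo importance-sampling estimator:
// draw x_i from density p, average f(x_i) / p(x_i). Names are mine,
// for illustration only.
double EstimateIntegral(const std::function<double(double)>& f,
                        const std::function<double(double)>& pdf,
                        const std::function<double(std::mt19937&)>& drawSample,
                        int numSamples)
{
    std::mt19937 rng(1234);
    double sum = 0.0;
    for (int i = 0; i < numSamples; ++i) {
        double x = drawSample(rng);
        double p = pdf(x);
        if (p > 0.0) sum += f(x) / p;
    }
    return sum / numSamples;
}

int main()
{
    // Example: integrate f(x) = x * exp(-x) over [0, inf). Sampling from the
    // exponential density p(x) = exp(-x), which follows f's shape, converges
    // far faster than uniform sampling would. (Exact answer: 1.)
    double estimate = EstimateIntegral(
        [](double x) { return x * std::exp(-x); },   // f
        [](double x) { return std::exp(-x); },       // p
        [](std::mt19937& g) {
            std::exponential_distribution<double> d(1.0);
            return d(g);
        },
        100000);
    std::printf("estimate = %f\n", estimate);
    return 0;
}
```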
Although not as directly relevant, some of the other courses appear to be informative and fun, such as Andrew Glassner’s course about the Processing graphics programming language, and the course on how to Build Your Own 3D Display.