OK, so I like the publisher A.K. Peters, for obvious reasons. They’re also kind/smart enough to send me review copies of upcoming graphics-related books. I’ve received two recently, with one of particular interest:
Update on SIGGRAPH 2011 Beyond Programmable Shading Course
I have recently been notified by Aaron Lefohn that there have been some changes to the Beyond Programmable Shading course since I last described it here.
The new schedule is below. I’m especially interested to see the presentation by Raja Koduri (former CTO of AMD’s graphics division and now a graphics architect at Apple) – according to Aaron, it’s “an introduction to reasoning about power for rendering researchers”. Power is a very important constraint that is little understood by most algorithm researchers and software developers. We are not far from having to routinely take power consumption into account in graphics algorithm design, since an algorithm that causes the GPU to burn too much power may force a clock speed reduction, hurting performance. The topic of the closing panel is also an interesting one – graphics APIs have undergone some interesting changes, and I suspect will undergo more profound ones in the near future.
Beyond Programmable Shading I
9:00 Introduction [Aaron Lefohn, Intel]
9:20 Research in Games [Peter-Pike Sloan, Disney Interactive]
9:45 The “Power” of 3D Rendering [Raja Koduri, Apple]
10:15 Real-Time Rendering Architectures [Mike Houston, AMD]
10:45 Scheduling the Graphics Pipeline [Jonathan Ragan-Kelley, MIT]
11:15 Parallel Programming for Real-Time Graphics [Aaron Lefohn, Intel]
11:45 Software Rasterization on GPUs [Samuli Laine and Jacopo Pantaleoni, NVIDIA]
Beyond Programmable Shading II
14:00 Welcome and Re-Introduction [Mike Houston, AMD]
14:05 Toward a Blurry Rasterizer (State of the Art) [Jacob Munkberg, Intel]
14:45 Order-Independent Transparency (State of the Art) [Marco Salvi, Intel]
15:15 Interactive Global Illumination (State of the Art) [Chris Wyman, Univ. of Iowa]
15:45 User-Defined Pipelines for Ray Tracing [Steve Parker, NVIDIA]
16:30 Panel: “What Is the Right Cross-Platform Abstraction Level for Real-Time 3D Rendering?”
- Peter-Pike Sloan, Disney Interactive (Moderator)
- David Blythe, Intel (Panelist)
- Raja Koduri, Apple (Panelist)
- Henry Moreton, NVIDIA (Panelist)
- Mike Houston, AMD (Panelist)
- Chas Boyd, Microsoft (Panelist)
… and free to veterans and unemployed professionals
Mauricio Vives pointed out that the Autodesk program I mentioned yesterday, where students and educators can get Autodesk products and training for free, also applies to veterans and “displaced professionals.” See this page for the logic. The fine print on the registration page is:
An Autodesk Assistance Program participant is either a veteran or unemployed individual who has (a) previously worked in the architecture, engineering, design or manufacturing industries, has completed the online registration for the Autodesk Assistance Program, and upon request by Autodesk is able to provide proof of eligibility for that program.
This is a nice thing.
All Autodesk software free to students and educators, and betas for everyone
I think I need to pop my head out of my gopher-hole more often and see what my company’s doing. It turns out Autodesk software – including Maya, 3ds Max, Mudbox, AutoCAD, and everything else – is now free to students and educators. Just register and you’re good to go. Wow, this is a big change from the old system, and is definitely great to see.
There are also a number of betas from Autodesk free to anyone. One is 123D, a modeler aimed at the Maker crowd and 3D printing. I’ve installed it but haven’t played with it yet.
Another project is Photofly 2.0, where you upload a number of images and it makes a 3D model from the data (i.e., photogrammetry). This is similar to My3dScanner. I tried these two out, along with Photosynth, on a set of photos of a bunch of bananas, some taken with a flash and some without – a hard test case, and I definitely didn’t follow the guidelines. My3dScanner threw up its hands, Photosynth’s point cloud was incomprehensible, and Photofly gave it a sporting chance, getting a cloud and making a mesh – no magic bullet yet, but fun to try. I’m now even tempted to RTFM, as results were better than I expected.
Photosynth (examine set of photos here):
Photofly’s cubist rendering – it did output an interesting Wavefront OBJ model:
Some Info on the SIGGRAPH 2011 Papers
The SIGGRAPH 2011 papers were recently made available in the ACM Digital Library (see here). Although I recommend using Ke-Sen’s excellent papers links page when possible (it links to the freely accessible author copies and often to additional information provided by the authors), not all authors have chosen to make their papers available in this way. The Digital Library itself is pretty expensive (unless you’re a full-time student – see below), but if all you want is the SIGGRAPH stuff (including other conferences sponsored by SIGGRAPH), then a SIGGRAPH membership can get you access. SIGGRAPH memberships are only $42 ($30 for students, but students can get full Digital Library access with an ACM student membership for $42, which looks like a better deal).
In addition, for the first time ACM has published a document that includes the first page of each paper – kind of a paper version of the SIGGRAPH Papers Fast-Forward. This document is freely accessible here, and should be useful for people who just want to skim the papers program and decide which papers to read in full. Be warned that the document is quite large (184 MB). The video preview of the papers program might also be of interest (note that it covers only a few of the papers).
[Eric chiming in: here’s the link for how to access the Digital Library if you’re a SIGGRAPH member. Note that you can access all issues of ACM TOG as well as all SIGGRAPH-sponsored conference material and journals, not just SIGGRAPH, which is just about everything you’d want for graphics conferences: I3D, HPG, EGSR, NPAR, etc. Also, remember that the cool kids say SIG-GRAPH, not SEE-GRAPH.]
Seven Things for July 26th, 2011
- First, if you’re going to HPG 2011, I’ll save you five minutes of searching for where it is: it’s at the Goldcorp Centre for the Arts, Google map here. Note also that things don’t start until 1:30 on Friday.
- SIGGRAPH parties? I know nothing, except that the official SIGGRAPH reception is 9 to 11 PM Monday at the convention center, and the ACM SIGGRAPH Chapters Party is 8:30 PM to 2 AM on, oh, Monday again. Odd scheduling.
- Timothy Lottes cannot be stopped: FXAA 3.11 is out (with improvements for thin lines), and 3.12 will soon appear. Note that the shader has a signature change, so your calling shader code will have to change, too.
- At the Motorola developer site there’s a quick summary of various image compression types used for mobile phones and PCs.
- Sebastien Hillaire implemented the God Rays effect from GPU Gems 3, showing results and problems. Code and executable are available for download.
- I’ve been enjoying some worthwhile articles on patents and copyrights lately, both new and old. Worth a mention: Myhrvold madness; a comic on copyright (a bit old, but a good overview); The Public Domain, a free book by a law professor who helped establish Creative Commons; and a nice article in the July 2011 CACM (behind the paywall, though) on why the U.S. dropped “opt-in” copyright back in 1989 (blame Europe). Best idea gleaned from The Public Domain: the length of copyright is meant to motivate people to create works for payment, so a retroactive increase in the length of copyright (e.g., to protect Mickey Mouse) makes no sense – it creates no motivation for works already created.
- Polygon Pictures’ office corridor would be a bad place to be if you worked way too many hours. Otherwise, nice!
Seven Things for July 24th, 2011
Eric has done these until now, but I find myself with a few small things that fit well into such a post.
- Older SIGGRAPH Courses often have great material in them, but are tough to track down. This website has a bunch of links to course notes from 1999 to 2007.
- The SIGGRAPH Education Committee has a page with links to a few even older courses, going back to 1996. The “Pixel Cinematography” course from 1996 looks especially interesting.
- Fabian “ryg” Giesen is doing a great series of posts (as yet unfinished) on his blog, which take the reader on A trip through the graphics pipeline. He recently started reposting a slightly cleaned up version of the series on AltDevBlogADay.
- A variant of a previously published paper (video here), Deferred Screen-Space Directional Occlusion by Yuriy O’Donnell offers increased performance and plugs relatively easily into deferred shading pipelines.
- Emil “Humus” Persson has recently released a demo of his Geometry Buffer Anti-Aliasing technique, which he will also be presenting at an upcoming SIGGRAPH course.
- I’ve long been interested in the problem of filtering normals in a way that correctly accounts for surface appearance; we also discuss this in Section 7.8.1 of Real-Time Rendering. Stephen Hill has kicked off his new blog with an excellent post summarizing various solutions to the problem, including his own solution as well as a WebGL demo. The comments to the post are also well worth reading; a lively discussion has developed, with Brian Karis of Human Head Studios describing the solution used on the upcoming game Prey 2.
- One of the techniques discussed in the aforementioned post was LEAN mapping and its lighter-weight variant CLEAN mapping. Inspired by that post, Marc Olano (first author on the LEAN mapping paper) has posted some of his own thoughts on those techniques. A minimal sketch of one classic approach to the filtering problem appears after this list.
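To make the normal-filtering problem concrete, here is a minimal sketch of one classic fix summarized in Stephen’s post – Toksvig’s method: an averaged (mipmapped) normal gets shorter as the normals it averages disagree more, and that shortening can be used to widen the specular lobe. The formula follows Toksvig’s “Mipmapping Normal Maps” note; the code and names are my own illustration, written in C++ for clarity rather than as shader code.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float length(const Vec3& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Toksvig's observation: a filtered (mipmapped) normal shortens as the
// underlying normals diverge, so |n_avg| measures normal variance.
// Use it to reduce a Blinn-Phong exponent and widen the highlight.
// n_avg is the *unnormalized* normal fetched from the mip chain.
float toksvigAdjustedPower(const Vec3& n_avg, float specPower) {
    float len = length(n_avg);                 // 1.0 = flat, < 1.0 = bumpy
    float ft  = len / (len + specPower * (1.0f - len));
    return ft * specPower;                     // wider lobe where bumpy
}
```

LEAN mapping goes further, filtering the first and second moments of the normal distribution so that the resulting roughness can remain anisotropic.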
“OpenGL Insights” CFP Reminder
The call for participation for the “OpenGL Insights” book ends in a month. If you have a good tutorial or technique about OpenGL that you’d like to publish, please send in a proposal for consideration.
SIGGRAPH 2011 Talks – Part 3
This is the third and last in a series of posts about the SIGGRAPH 2011 Talk program – see Part 1 and Part 2. If you found these useful you may also want to check out my previous series of posts about the SIGGRAPH 2011 Courses program (see Part 1, Part 2, Part 3, and Part 4). These posts are not intended as a general SIGGRAPH survey – they are focused on content related to real-time rendering and game development.
Show Me The Pixels
Three of the talks in this session have possibly relevant content:
- Slow Art With a Trillion Frames Per Second Camera – I guess this one stretches the definition of “relevant” somewhat, but I just find it extremely cool and interesting. The talk describes some research done at MIT (in collaboration between the Media Lab and Department of Chemistry) in which a “trillion frames per second camera” captures how pulses of light travel within a scene, including bouncing off surfaces and scattering inside objects. Besides the general coolness factor, this may impart some insight into light behavior which could be useful when working on shading and lighting models.
- Device-Independent Imaging System for High-Fidelity Colors – color management (including display calibration, color space management of data, etc.) is important for both game and film production. It turns out that getting good device-independent color reproduction is far from simple. This talk covers some advances in this field by SHARP Corporation and Shizuoka University.
- Who Do You Think You Really Are? – augmented reality is becoming an important technology for handheld games (see examples on the Nintendo 3DS and iPhone); this talk discusses an interactive media installation at London’s Natural History Museum (in partnership with BBC Television) which includes augmented reality elements.
Hiding Complexity
This entire session is made up of game industry talks:
- Occlusion Culling in Alan Wake – occlusion culling is a key technology for many games, especially first-person shooters. This talk discusses the occlusion culling system (developed by Umbra Software) used in the game Alan Wake by Remedy Entertainment. Topics include visibility culling as well as shadow-caster culling for dynamic light sources.
- Increasing Scene Complexity: Distributed Vectorized View Culling – another talk on visibility culling, this time focusing on the technical issues involved in parallelizing culling computations on current game platforms. The talk is given by EA Black Box.
- Practical Occlusion Culling in Killzone 3 – the third occlusion culling talk of the session focuses on the implementation used by Guerrilla Games for the game Killzone 3. This implementation uses PlayStation 3 SPUs to rasterize a conservative depth buffer, against which occlusion queries are performed (a minimal sketch of this kind of query appears after this list).
- High-Quality Previewing of Shading and Lighting for Killzone 3 – another Killzone 3 talk but unrelated to occlusion culling, this talk by Guerrilla Games covers a content creation framework which supports high-fidelity previews of assets in Autodesk Maya.
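To illustrate the general idea behind software occlusion queries (this is not Guerrilla’s code – just a minimal sketch with names of my own), the query side of such a system tests each object’s screen-space bounding rectangle against a small, conservatively rasterized depth buffer:

```cpp
#include <algorithm>
#include <vector>

// Occluders have already been rasterized into a low-resolution depth
// buffer that stores, per pixel, the farthest depth of the occluders
// covering it (so the test errs on the side of "visible").
struct DepthBuffer {
    int width, height;
    std::vector<float> depth;                  // smaller = nearer
    float at(int x, int y) const { return depth[y * width + x]; }
};

struct ScreenRect {                            // object's projected bounds
    int x0, y0, x1, y1;                        // pixel bounds, inclusive
    float minDepth;                            // nearest depth of the volume
};

bool isVisible(const DepthBuffer& buf, const ScreenRect& r) {
    int x0 = std::max(r.x0, 0),             y0 = std::max(r.y0, 0);
    int x1 = std::min(r.x1, buf.width - 1), y1 = std::min(r.y1, buf.height - 1);
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            if (r.minDepth < buf.at(x, y))     // nearer than occluders here
                return true;                   // some pixel may show through
    return false;                              // fully behind occluders
}
```

The interesting engineering in the talks above lies in everything this sketch glosses over: building the conservative depth buffer quickly, parallelizing the tests across cores or SPUs, and handling shadow-caster culling.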
Smokin’ Fluids
The talks in this session (three from the film industry and one from the academic research community) cover topics related to smoke and fluid simulation. Such simulations are currently too costly to be feasible for most games, though games such as the LittleBigPlanet and PixelJunk Shooter series (both featured at SIGGRAPH this year) include two-dimensional versions. In VFX and CG animation work, smoke, fluid, and fire simulations are common, forming one of the key elements differentiating film and game visuals. I firmly believe that as game platforms increase in computational power, we will start seeing full 3D simulations of this kind in games.
- DB+Grid: A Novel Dynamic Blocked Grid For Sparse High-Resolution Volumes and Level Sets – The author, Ken Museth, has a history of developing novel data structures for level set and volumetric data and applying them for VFX, first at Digital Domain and now at DreamWorks Animation. His data structures have been constantly improving, from DT-Grid to DB-Grid and now DB+Grid, which is described in this talk.
- Capturing Thin Features in Smoke Simulations – In production simulation work, there is a constant tension between the need to speed up simulation times for faster iteration (which implies reducing the resolution of the simulation grid) and the desire to simulate finer detail (which implies increasing the resolution). This talk covers a system developed by Sony Pictures Imageworks that allows thin smoke features to be captured even with low resolution simulation grids.
- Implicit FEM and Fluid Coupling on GPU for Interactive Multiphysics Simulation – typically distinct simulation methods are used for fluids, rigid objects, deformable objects, etc. This can pose problems when different types of objects can affect each other, which requires coupling different simulation methods. This talk from INRIA and Université de Grenoble covers a GPU-based method for coupled simulation of deformable objects and fluids – interestingly “screen-space collision” is mentioned as one of the techniques employed.
- Correcting Low-Frequency Impulses in Distributed Simulations – production rendering is typically distributed over a large number of machines. It is desirable to do the same for simulations, but this is often difficult since the simulation domain is not easily separable – each part of the simulation affects all other parts. This talk from Side Effects Software (developers of Houdini) describes a method for distributing level-set fluid simulations while keeping them coupled via a shared low-resolution pressure projection (a sketch of what a pressure projection involves appears after this list).
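For readers who haven’t written a fluid solver, the “pressure projection” mentioned in the last talk is the step that makes the velocity field divergence-free, and it is global – which is exactly why distributing it is hard. A deliberately simplified 2D sketch (uniform grid, unit spacing, boundaries ignored, all names mine):

```cpp
#include <algorithm>
#include <vector>

// Minimal 2D pressure projection: (1) measure the divergence of the
// velocity field, (2) solve the Poisson equation laplacian(p) = div
// with Gauss-Seidel relaxation sweeps, (3) subtract the pressure gradient.
struct Grid {
    int n;
    std::vector<float> u, v, p, div;           // velocities, pressure, divergence
    float& at(std::vector<float>& f, int x, int y) { return f[y * n + x]; }
};

void pressureProject(Grid& g, int iterations = 40) {
    int n = g.n;
    for (int y = 1; y < n - 1; ++y)            // central-difference divergence
        for (int x = 1; x < n - 1; ++x)
            g.at(g.div, x, y) = 0.5f * (g.at(g.u, x + 1, y) - g.at(g.u, x - 1, y)
                                      + g.at(g.v, x, y + 1) - g.at(g.v, x, y - 1));

    std::fill(g.p.begin(), g.p.end(), 0.0f);
    for (int k = 0; k < iterations; ++k)       // in-place relaxation sweeps
        for (int y = 1; y < n - 1; ++y)
            for (int x = 1; x < n - 1; ++x)
                g.at(g.p, x, y) = 0.25f * (g.at(g.p, x - 1, y) + g.at(g.p, x + 1, y)
                                         + g.at(g.p, x, y - 1) + g.at(g.p, x, y + 1)
                                         - g.at(g.div, x, y));

    for (int y = 1; y < n - 1; ++y)            // make velocity divergence-free
        for (int x = 1; x < n - 1; ++x) {
            g.at(g.u, x, y) -= 0.5f * (g.at(g.p, x + 1, y) - g.at(g.p, x - 1, y));
            g.at(g.v, x, y) -= 0.5f * (g.at(g.p, x, y + 1) - g.at(g.p, x, y - 1));
        }
}
```

Because every cell’s pressure depends, through the solve, on every other cell, a naive domain split decouples the domains; sharing a low-resolution version of this solve, as in the Side Effects talk, is one way to keep them globally consistent.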
Volumes and Rendering
All four talks in this session (three from the film industry and one from the academic research community) contain potentially relevant content:
- Gaussian Quadrature for Photon Beams in “Tangled” – rendering lighting effects in participating media (often called “light beams” or “god rays”) is a common problem in games and film, typically solved with various hacks. A recent Transactions on Graphics (ToG) paper presented a comprehensive analysis of the problem as well as a new rendering approach called “photon beams” which is both physically correct and efficient – it appears potentially feasible for real-time implementation. This talk (with authors from the University of Central Florida, Disney Research Zürich, and Walt Disney Animation Studios – including the first author of the aforementioned ToG paper) presents an efficient implementation of the photon beams technique in RenderMan, extending it to artist-specified non-physical light attenuation curves. A broader overview of the artist-driven volumetric lighting in Tangled (of which this work is a part) is given in a Technical Paper.
- Importance Sampling of Area Lights in Participating Media – in principle, ray tracers like the Arnold rendering engine (developed by Solid Angle SL and used by Sony Pictures Imageworks, among others) solve the participating media lighting problem in a straightforward manner by sampling the underlying integrals. In practice, achieving noise-free images in reasonable time requires a lot of engineering effort, mostly relying on various forms of importance sampling. This talk (with authors from both Solid Angle and Imageworks) presents an importance sampling method for single scattering of light from arbitrary area lights in homogeneous participating media.
- Decoupled Ray Marching of Heterogeneous Participating Media – after two talks on the relatively easy problem of lighting homogeneous participating media, this talk (also from Sony Pictures Imageworks) covers heterogeneous media such as smoke. It presents a method for speeding up ray marching by decoupling lighting calculations from the sampling of volume properties. Ray marching is amenable to real-time implementation since it is easy to scale down (albeit with reduced visual quality) by reducing the number of samples – several companies have demonstrated real-time implementations (though I’m not sure if any shipping games yet use it). The technique presented in this talk can make ray marching for volumetric lighting even faster, so it is definitely of interest (a minimal ray marching sketch appears after this list).
- Demand-Driven Volume Rendering of Terascale EM Data – unlike the other talks in this session, which focus on volumetric lighting, this talk (from King Abdullah University of Science and Technology and Harvard University) focuses on a different issue – rendering volumetric datasets that are too large to fit in memory. Given a good solution to this problem, games should be able to precompute volumetric effects in certain situations and stream them from disk, so this looks interesting.
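A bit of background for the first three talks: the quantity they all approximate is the single-scattering integral along a view ray, roughly L = ∫ T(t) · σ(t) · L_in(t) dt, where T is the transmittance from the eye to distance t, σ the scattering coefficient, and L_in the light arriving at that point. Ray marching simply discretizes this integral. Here is a minimal sketch that also mimics the “decoupling” idea by evaluating the expensive lighting term more sparsely than the cheap density term – an illustration of the concept, not the Imageworks method; all names and the reuse policy are mine:

```cpp
#include <cmath>
#include <functional>

// Ray march single scattering through a heterogeneous medium.
// density   : extinction/scattering coefficient at distance t (cheap).
// inscatter : lit in-scattering at distance t (expensive: shadows, etc.).
// The lighting term is sampled only every lightingStride steps and
// reused in between; density is sampled every step.
// (One coefficient serves for both scattering and extinction, for brevity.)
float marchSingleScatter(const std::function<float(float)>& density,
                         const std::function<float(float)>& inscatter,
                         float rayLength, int numSteps, int lightingStride) {
    float dt = rayLength / numSteps;
    float transmittance = 1.0f, radiance = 0.0f, cachedLight = 0.0f;
    for (int i = 0; i < numSteps; ++i) {
        float t = (i + 0.5f) * dt;
        float sigma = density(t);                  // sampled every step
        if (i % lightingStride == 0)
            cachedLight = inscatter(t);            // sampled sparsely
        radiance += transmittance * sigma * cachedLight * dt;
        transmittance *= std::exp(-sigma * dt);    // Beer-Lambert attenuation
        if (transmittance < 1e-4f) break;          // early out when opaque
    }
    return radiance;
}
```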
Heads or Tails
Rigging game or movie characters for animation is a very tricky problem – the rig needs to be powerful enough to handle all needed motions and deformations, while also being easy to control either via hand-keying or motion capture. This session includes two CG feature animation talks and one research talk, all covering the rigging problem from different angles (note that the game industry talk Modular Rigging in Battlefield 3 has been cancelled). Character rigging is one of the areas where film and game production are quite similar – there are differences in scale and complexity, but even these are not as large as the differences in, say, triangle count or shader instructions.
- Building the Birds of “Rio” – this talk covers the process and technology used at Blue Sky Studios to build control systems for the bird characters in the movie Rio – using the main character “Blu” as a case study.
- “Kung Fu Panda 2”: Rigging a Peacock Tail – this talk describes the approach DreamWorks Animation used to create the tail rig for the peacock character in the film Kung Fu Panda 2.
- Optimized Local Blendshape Mapping for Facial-Motion Retargeting – this talk from the Graphics Lab at the USC Institute for Creative Technologies details an automatic facial-motion retargeting method for blendshapes.
Speed of Light
Three of the talks in this session contain potentially relevant content:
- Run-Time Implementation of Modular Radiance Transfer – Precomputed Radiance Transfer is a powerful rendering technique which has spun off many variations. This talk from Disney Interactive Studios, Disney Research Zürich, the University of Utah and the University of North Carolina at Chapel Hill covers a modular variant which enables warping and combining precomputed transport from a small library of simple shapes. The technique was implemented for platforms from mobile devices to high-end GPUs – the talk discusses various implementation issues involved.
- Next-Generation Image-Based Lighting Using HDR Video – image-based lighting is becoming a key rendering technique in both film and games. This talk from Linköping University and Spheron VR describes a system for high-dynamic-range video capture, reconstruction, and modeling of real-world scenes for use in image-based lighting of synthetic objects placed in the scene.
- Triple Depth Culling – real-time rendering applications such as games rely heavily upon hardware features such as hierarchical Z-culling for performance. However, this has some drawbacks – it requires either depth sorting or a previous depth prepass, and it doesn’t work well with shaders that modify depth. This talk proposes a technique to avoid these drawbacks – the authors show a pixel shader implementation, though for best performance they suggest that the technique be implemented in hardware. The talk abstract and video are both available online.
Capture and Construction
This session has one film talk of relevance: Building and Animating Cobwebs for Antique Sets. It describes a workflow used at DreamWorks Animation to model and animate cobwebs, including a specialized modeling tool, a physics-based solver, and a procedural-modeling engine. These types of specialized asset workflows can be extremely effective for games or movies which require many examples of a given kind of asset.
Light My Fire
This session has one game talk, as well as three relevant film talks:
- Simulating Massive Dust in “Megamind” – in film production, there is a constant push for fluid simulations to increase in size and complexity, but the need for fine control by artists implies fast turnaround times. For this reason, a lot of research and development is spent on making these simulations faster – research that I hope will eventually benefit real-time applications as well. This talk from DreamWorks Animation covers a fast fluid simulation framework used for the movie Megamind. The presentation covers the specific numerical methods used to ensure efficiency and quality, as well as the setup and control framework that allowed artists to work efficiently.
- “Megamind”: Fire, Smoke, and Data – another Megamind talk, this time focusing on the specific case study of an especially large and involved explosion effect. I like attending such “war story” talks – the most interesting film and game work is done when trying to push boundaries, and the solutions are often a mixture of technical cleverness and artistic inspiration.
- Volumetric Effects in a Snap – grid-based simulation and volumetric rendering frameworks have become a staple of VFX and CG feature animation work; every studio has its own system with different strengths. I suspect similar systems will start cropping up in game studios when the hardware becomes a bit faster and memory capacities increase a bit more. This talk describes the creation of the “Snap” system developed at Animal Logic and used in the films Legend of the Guardians: The Owls of Ga’Hoole and Sucker Punch.
- Fluid Dynamics and Lighting Implementation in PixelJunk Shooter 2 – games rarely incorporate fluid simulations – even 2D games, though current platforms can run two-dimensional simulations quite quickly. LittleBigPlanet notably incorporated 2D fluid simulations in its fire and smoke effects, but these did not affect gameplay. The game PixelJunk Shooter incorporated some very nice fluid-simulation-driven gameplay, including several types of fluids and gases that affected each other in different ways. The recent sequel expanded this gameplay element, adding some novel light/darkness gameplay as well. This talk from independent developer Q-Games covers the technical aspects of these elements.
Now that I’ve finished the courses and talks, my next few blog posts will cover the remaining SIGGRAPH 2011 programs.
SIGGRAPH 2011 Talks – Part 2
This is the second in a series of posts on the SIGGRAPH 2011 Talk program – Part 1 can be found here. These posts focus on talks with relevant content for real-time rendering researchers and practitioners, including game developers.
Building Blocks
One of the talks in this session looks relevant – KinectFusion: Real-Time Dynamic 3D Surface Reconstruction and Interaction describes the use of a Kinect camera to acquire real-time dense 3D models of an entire room and its contents, enabling some interesting augmented reality and interaction possibilities. The reconstruction appears to require a high-end GPU to achieve real-time performance so this isn’t something for current generation consoles, but it definitely could be feasible on future platforms. It may also be interesting in the context of digitizing real-world objects as part of the film or game modeling process. The authors are from Microsoft Research Cambridge, except for one from Imperial College London.
Walk the Line
Two academic research talks in this session are potentially relevant for games and other real-time applications that use stylized rendering or deformations:
- Parameterizing Animated Lines for Stylized Rendering – this talk describes a paper from the 2011 NPAR (Non-Photorealistic Animation and Rendering) conference (which is co-located with SIGGRAPH this year). It shows a way to have details along an outline track the geometry cleanly as the scene animates in 3D. Material from the NPAR paper can be found here. The authors are from École d’Ingénieurs Télécom ParisTech, except for one from Adobe.
- Multiperspective Rendering for Anime-Like Exaggeration of Joint Models – this talk describes a more unusual type of stylization, where the model deforms in a stylized way as it animates, inspired by anime visual conventions. The authors are from Hitachi, except for one from The University of Tokyo.
1000 Points of Light
This session contains one game talk, as well as two relevant CG feature animation talks:
- Lighting Tokyo for Pixar’s “Cars 2” – rendering cities at night is challenging (definitely for games, but even for CG feature animation) due to the extreme dynamic range and large number of lights. Tokyo, with its massive quantities of illuminated billboards and neon signs, is one of the most famous and extreme examples of this type of lighting situation. This talk covers the techniques used by Pixar Animation Studios to light a stylized version of nighttime Tokyo for the movie Cars 2 – note that the speaker will also present a Studio Talk on a similar topic.
- “Megamind” – Lighting Metro City at Night – this talk covers a similar challenge to the previous one, but with a distinct set of solutions from a different company (DreamWorks Animation) for a different film (Megamind).
- Deferred Shading Techniques Using Frostbite in Need for Speed The Run – this talk will cover the tile-based deferred lighting architecture used in the Frostbite 2 engine, with emphasis on the PS3 implementation as used in the Electronic Arts game Need for Speed The Run (the talk was originally intended to cover the Xbox 360 and Battlefield 3 as well, but has been refocused – the removed material will be covered in more depth in this course). It makes for an interesting combination with the previous two talks, since it shows how the current state of the art in game technology solves a similar problem as film (albeit at a smaller scale) in real time. A minimal sketch of the tile-based light culling idea appears after this list.
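For those who haven’t seen the technique, the heart of tile-based deferred shading is a light-culling pass: split the screen into small tiles, find each tile’s depth bounds, and intersect every light’s bounding volume with each tile so the per-pixel shading loop touches only the short per-tile light list. A much-simplified CPU-side sketch (real engines run this in a compute shader or on SPUs and use proper sub-frustum tests; here each tile is treated as a view-space box, and all names are mine):

```cpp
#include <vector>

struct Sphere { float x, y, z, radius; };      // light bounds, view space

struct Tile {
    float minX, maxX, minY, maxY;              // tile extent in view space
    float minZ, maxZ;                          // from the tile's depth bounds
    std::vector<int> lightIndices;             // culling result
};

// Sphere-vs-box test: squared distance from sphere center to the box.
static bool sphereTouchesTile(const Sphere& s, const Tile& t) {
    auto axisDist = [](float v, float lo, float hi) {
        return v < lo ? lo - v : (v > hi ? v - hi : 0.0f);
    };
    float dx = axisDist(s.x, t.minX, t.maxX);
    float dy = axisDist(s.y, t.minY, t.maxY);
    float dz = axisDist(s.z, t.minZ, t.maxZ);
    return dx * dx + dy * dy + dz * dz <= s.radius * s.radius;
}

// Build the per-tile light lists; shading then loops over only these.
void cullLights(const std::vector<Sphere>& lights, std::vector<Tile>& tiles) {
    for (Tile& tile : tiles) {
        tile.lightIndices.clear();
        for (int i = 0; i < (int)lights.size(); ++i)
            if (sphereTouchesTile(lights[i], tile))
                tile.lightIndices.push_back(i);
    }
}
```

The depth-bounds part is what makes this so effective in practice: a tile whose geometry spans only a thin depth range rejects most lights immediately.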
Fur and Feathers
The three CG feature animation talks in this session cover fur and feather techniques which are too computationally costly to be feasible for most real-time applications today. They also don’t seem amenable to “animation baking” precomputation approaches since the resulting data would most likely be too heavy. However, these techniques should be able to run in real-time on future hardware platforms, making these talks of interest to forward-looking real-time researchers:
- Quill: Birds of a Feather Tool – this talk describes a specialized pipeline developed by Animal Logic to procedurally model, animate and simulate feathers while avoiding intersections and rendering at various levels of detail.
- Dynamic, Penetration-Free Feathers in “Rango” – somewhat similar to the previous talk, but focusing more narrowly on interpenetration avoidance and from the perspective of a different company (Industrial Light and Magic).
- Accurate Contact Resolution for Interpolated Hairs – another ILM / Rango talk, but focusing on a different problem – handling collision between hairs and other geometry. The solution needed to be very fast and cheap, since it was intended for use on interpolated hairs (it is common in CG feature animation and VFX to fully simulate a relatively small number of “guide hairs” and then interpolate a much larger number of cheap “interpolated hairs” between them; a minimal sketch of this interpolation appears below).
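Since the guide/interpolated split comes up in nearly every production hair system, here is a minimal sketch of the interpolation step itself – each cheap render hair is a fixed barycentric blend of its three surrounding guide strands. This is the generic scheme, not ILM’s system; all names are mine:

```cpp
#include <array>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
};

using Strand = std::vector<Vec3>;              // vertices along one hair

// One interpolated hair: blend the corresponding vertices of three
// simulated guide strands with fixed barycentric weights (w sums to 1).
// All guides are assumed to have the same vertex count.
Strand interpolateHair(const std::array<const Strand*, 3>& guides,
                       const std::array<float, 3>& w) {
    size_t numVerts = guides[0]->size();
    Strand out(numVerts);
    for (size_t i = 0; i < numVerts; ++i)
        out[i] = (*guides[0])[i] * w[0]
               + (*guides[1])[i] * w[1]
               + (*guides[2])[i] * w[2];
    return out;
}
```

The catch, and the subject of the talk, is that a blend of three non-colliding guides can still pass through geometry – hence the need for a fast per-interpolated-hair contact fix.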
Mixed Grill
This session contains two film talks, one game talk, and one academic research talk; all four are relevant:
- The Power of Atomic Assets: An Automated Approach to Pipeline on “Legend of the Guardians: The Owls of Ga’Hoole” – games and movies share the challenge of structuring a production pipeline (software tools as well as workflow practices) to handle large numbers of assets. This talk will describe the system used at Animal Logic to handle the assets for the film Legend of the Guardians: The Owls of Ga’Hoole.
- Animation Workflow in Killzone 3: A Fast Facial Retargeting System for Game Characters – handling facial motion capture data is tricky, especially retargeting to (possibly multiple) in-game models. This talk describes a technique used by Guerrilla Games to animate a large number of different faces for the extensive cut-scenes in the game Killzone 3.
- Adaptive Importance Sampling for Multi-Ray Gathering – importance sampling (basically, sampling a function more densely in areas that are estimated to have higher impact on the result) has recently become a key technology for production rendering. There was a whole SIGGRAPH course about it last year, and Pixar has added native support to the latest version of RenderMan. Importance sampling is typically thought of as a ray tracing technique, but it is also important for image-based lighting (IBL) sources such as environment maps. Importance-sampled IBL is currently useful for game light baking tools, and is likely to be done in real time on future platforms (the sketch after this list shows the basic, non-adaptive scheme). This talk describes importance sampling improvements developed at Rhythm & Hues. Talk materials including an abstract and movie are available here.
- High-Resolution Relightable Buildings From Photographs – efficient digitization of real-world scenes and objects is useful for both film and game development. Tools such as CrazyBump are widely used in the game industry to infer relightable surface details from photographs, but do not always work as well as could be hoped. This research talk looks like it could offer some improvements in this area, making it of wide interest. The authors are from The University of Manchester, Loughborough University, and Dolby Canada.
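As background for the importance sampling talk: the standard, non-adaptive starting point for image-based lighting is to pick environment map texels in proportion to their luminance, by building a running CDF over the texels and inverting it with a uniform random number. A minimal sketch, flattened to 1D and ignoring solid-angle weighting for brevity (the adaptive scheme in the talk goes well beyond this); all names are mine:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Luminance-proportional sampling of an environment map: bright texels
// are chosen more often, and each sample carries its probability so the
// estimator can be weighted by 1/pdf and stay unbiased.
struct EnvSampler {
    std::vector<float> cdf;                    // running sum of luminance
    float total = 0.0f;

    explicit EnvSampler(const std::vector<float>& luminance) {
        cdf.reserve(luminance.size());
        for (float lum : luminance) {
            total += lum;
            cdf.push_back(total);
        }
    }

    // u01 in [0,1); returns (texel index, probability of picking it).
    std::pair<int, float> sample(float u01) const {
        auto it  = std::lower_bound(cdf.begin(), cdf.end(), u01 * total);
        int  idx = (int)(it - cdf.begin());
        float lum = (idx == 0) ? cdf[0] : cdf[idx] - cdf[idx - 1];
        return {idx, lum / total};
    }
};
```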
From the Ground Up
All three CG feature animation talks in this session are relevant for game developers:
- We Built This City: Big City Design and Implementation in “Kung Fu Panda 2” – games and movies sometimes contain large urban environments, which are very difficult to construct within reasonable time and staffing constraints. This talk will detail how DreamWorks Animation solved this problem for the film Kung Fu Panda 2.
- The Visual Style of “Legend of the Guardians: The Owls of Ga’Hoole” – finding a good visual style is another difficult task shared by film and games; my feeling is that films tend to have more established processes for look and style development. This talk will detail the visual style established by Animal Logic for the movie Legend of the Guardians: The Owls of Ga’Hoole. I saw a similar presentation at FMX 2011, and it was full of interesting and relevant content.
- Clouds in the Skies of Rio – in most games and films, clouds are off in the distance and can be handled with straightforward methods. But sometimes the camera needs to get up close and personal with the clouds, which can pose some interesting modeling and rendering challenges. Although cloud rendering techniques used in film can rarely run in real-time on current platforms, the way in which the clouds are art-directed and authored can be of interest. This talk discusses how Blue Sky Studios handled cloud authoring and rendering for the movie Rio.
Directing Destruction
Mixing simulation with manual control to create large, physically-believable and art-directed effects is a tough challenge which VFX and CG feature animation professionals have been focusing on for some time. The techniques used rarely lend themselves to real-time computation on current hardware. However, in many cases these effects can be pre-computed, and on future hardware they are likely to run in real time (perhaps with some reduction in scale). The four talks in this session discuss various case studies of this type:
- End of Line: Character Destruction in “Tron: Legacy” – this talk discusses the tools developed by Digital Domain for the character destruction effects in Tron: Legacy.
- Kali: High-Quality FEM Destruction in Zack Snyder’s “Sucker Punch” – in this talk, The Moving Picture Company discusses a finite-element simulation toolkit developed in partnership with Pixelux, with examples of its use in the film Sucker Punch. It is interesting to note that the tool is based on the same Digital Molecular Matter technology used in the games Star Wars: The Force Unleashed and Star Wars: The Force Unleashed II.
- Directing Hair Motion on “Tangled” – this talk discusses the system developed by Walt Disney Animation Studios to animate the main character’s hair (almost a character in itself) in the movie Tangled.
- Choreographing Destruction: Art Directing a Dam Break in “Tangled” – another Tangled talk from Walt Disney Animation, this one describes the way in which a complex water and rigid body simulation was art-directed for the “dam break” sequence.
Crowds
Scenes with large crowds are another differentiating factor between film and games. Sufficiently large crowds pose authoring and rendering challenges even for film; the solutions to these may be of interest to game developers working with smaller real-time crowds on next generation platforms. The three talks in this session discuss crowd case studies from three CG animated feature films:
- Crowds on “Cars 2” – this talk discusses how Pixar Animation Studios improved their production pipeline to enable higher productivity when managing assets and controlling agent behaviors for Cars 2 crowd shots.
- Synthesizing Complexity for Characters and Landscapes in “Rio” – this talk covers the systems used at Blue Sky Studios to procedurally generate large varied crowds of people and flocks of birds for the movie Rio, as well as the renderer enhancements done to efficiently ray-trace the resulting massive geometric detail.
- Staging Carnival: Ray Tracing Crowds in “Rio” – another Blue Sky Studios talk about Rio, this time focusing on a specific case study (the carnival crowds).
There are eight more Talk sessions with relevant content, which I will cover in a subsequent blog post.