FXAA Rules, OK?

So there are those people out there who punch other people’s punchlines. Someone’s three quarters of the way through telling a joke, and a listener says, “oh, right, this one ends ‘to get to the other side'”. You don’t want to be that guy, but that’s a little bit how I feel writing about FXAA, given that there’s a whole course at SIGGRAPH next month about these sorts of antialiasing techniques. I blame Morgan McGuire’s Twitter feed, as he (and 17 others) retweeted Timothy Lottes’ posting that he had released shader code for FXAA. I’d seen FXAA mentioned before; NVIDIA put it in their DirectX 11 SDK, which, frankly, is sadly misleading – the implication is that it works only on GTX 200-level hardware and above, when in fact it works on DirectX 9 shader model 3.0 hardware, GLSL 1.20, the Xbox 360, and the Playstation 3, to name a few, and is optimized in various ways for newer GPUs. Anyway, seeing this shader code available, I was interested to try it out. Morgan mentioning that he liked it got me a lot more interested. A few hours later…

So what the heck am I blathering about? To start, there are a number of these ??AA methods that are based on post-processing a color (and sometimes also normal and depth) buffer. MLAA, morphological antialiasing, was the first one used for 3D images, back in 2009. The basic idea is “find edges and smooth them”. The devil’s in the details, which is what the SIGGRAPH course will delve into (and I’ll certainly attend): How wide an area do you search to try to find a straight edge? How do you deal with curves and corners? How do you avoid oversmoothing thin edges by blurring them twice? How does it look frame to frame? And, most important if you want to use it interactively, how do you do this efficiently?
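
To make the “find edges and smooth them” idea concrete, here is a tiny sketch of the edge-finding half – my own Python/NumPy illustration, not anyone’s shipping code, with an arbitrary threshold standing in for the more careful edge classification the real methods do:

```python
import numpy as np

def detect_edges(luma, threshold=0.1):
    """'Find edges': flag pixel boundaries where the luma (grayscale)
    difference between neighbors exceeds a threshold.
    luma: float array of shape (height, width), values in [0, 1]."""
    horiz = np.abs(luma[:, :-1] - luma[:, 1:]) > threshold  # left/right neighbor differences
    vert = np.abs(luma[:-1, :] - luma[1:, :]) > threshold   # up/down neighbor differences
    return horiz, vert

# "Smooth them": a real MLAA/FXAA pass then walks along each detected edge to
# estimate its length and orientation, and blends each edge pixel toward a
# neighbor using a weight derived from the reconstructed edge coverage.
```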

I’ve wanted an MLAA-like solution for two years, since before HPG 2009 when I noticed the MLAA paper on Ke-Sen’s pages and talked to Alexander Reshetov (who was very helpful and forthcoming) about it. I even got a junior programmer to attempt to implement it in a shader, but the implementation was quite slow (due to a very wide search area) and ultimately flawed, and we didn’t have time to get back to it. Last year at SIGGRAPH there was a talk by a group in France, led by Venceslas Biri and Adrien Herubel, about implementing MLAA on the GPU, and they released source code. I spent a bit of time with their code, but it was developed on Linux and I had some problems getting it to work properly on Windows. My “I’ll just take a few hours and see where I get” time was gone, and still no easy solution. There were some other interesting bits out there, like the article in GPU Pro 2, Practical Morphological Anti-Aliasing, which even has a GitHub project, but there were different versions for DX9 and DX10 (and none for OpenGL), lots of files involved, and I didn’t want to get involved. Even Humus had a code sample, but I was still a bit shy about committing more time. (Also, his approach needs geometric information, and I wanted to antialias NPR edges formed by dilation – i.e., image processing – which have no underlying geometry.)

Then the FXAA shader code was released: well-commented, with clear integration instructions, needing just a color buffer, and all in one shader file. FXAA is not the solution to all of life’s problems (or is it?), but for me, it’s wonderful. It took me all of an hour to fold it into our system as a shader (and then another three hours debugging why the heck it wasn’t registering properly – our shader system turns out to be very particular about path names). The code runs on just about everything. There are control knobs for the fiddlers out there, but I haven’t messed with them – it looks great out of the box to me.

So, after all that breathless buildup, here’s the punchline:

On the left is your typical jaggy image, on the right is FXAA. Sure, it’s not perfect – nearly-vertical lines can look considerably better with a wider edge search area (as seen in MLAA), dropouts could be picked up by supersampling or MSAA, thin lines can have problems – but this shader gives a huge improvement with no extra samples, and just one pretty-quick pass (plus – full disclosure – a preprocess of computing the luminance/luma (grayscale) and shoving it in the alpha channel). Less than 1 millisecond cost per frame on a GTX 480? Works on sRGB and linear? Code’s in the public domain? Sign me up!
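
In case you’re wondering what that luma preprocess amounts to, here is a minimal sketch. It’s Python/NumPy rather than the actual shader, and the Rec. 601 weights are my assumption – not necessarily the exact constants the FXAA source suggests:

```python
import numpy as np

def write_luma_to_alpha(color_rgba):
    """Store luma (grayscale) in the alpha channel ahead of the FXAA pass.
    color_rgba: float array of shape (height, width, 4), values in [0, 1].
    In practice this is a one-line addition to the shader that writes the
    color buffer; the weights here are standard Rec. 601 luma weights."""
    out = color_rgba.copy()
    out[..., 3] = color_rgba[..., :3] @ np.array([0.299, 0.587, 0.114])
    return out
```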

See lots more examples on Timothy Lottes’ page. Read his whitepaper for algorithm details and his newer posts for tweaks and improvements. An easy-to-use demo of an earlier version of his shader can be downloaded here – just hit the space bar to toggle FXAA on and off. Enjoy!

SIGGRAPH 2011 Courses – Part 3

Third post in a series about the SIGGRAPH 2011 courses (Part 1 and Part 2).

Stereoscopy From XY to Z

Although there had been fits and starts since the mid-1950s, stereoscopic (“3D”) feature films really kicked off in 2009. This was primarily due to the convergence of two factors: CG animation and Avatar. CG animated features are easier for stereoscopy since they don’t require bulky and expensive stereoscopic cameras; Disney Animation had been doing all their CG animated films in 3D since Chicken Little (2005), joined in 2009 by Pixar and Dreamworks with Up and Monsters vs. Aliens, respectively. Avatar‘s huge box-office success in the same year goosed studio executives into mandating stereoscopic releases of VFX-heavy live-action films as well. Although somewhat controversial among experts (mostly due to brightness issues), the increase in stereoscopic theatrical content resulted in a push for compatible televisions, Blu-ray players and game consoles at home. Around the same time, the PC side of the game market also saw an increase in stereoscopic support (mostly led by NVIDIA). By 2011, stereoscopy had become a dominant trend in computer graphics, with implications in areas ranging from videogame user interfaces to feature shot editing. Many of these implications are not yet commonly understood, which increases the need for courses like this one.

The course is presented by Samuel Gateau (3D Software Engineer, NVIDIA) and Robert Neuman (Stereoscopic Supervisor, Walt Disney Animation Studios) who have presented earlier versions of it at SIGGRAPH Asia 2010 and at FMX 2011. This time Samuel and Robert are joined by Marc Salvati (R&D Software Engineer, OLM Digital). It appears that the course will cover both the technical and aesthetic aspects of stereoscopy, for games as well as film. The speaker lineup is well-suited for this scope; Samuel has helped many game developers integrate stereoscopy into their titles, Marc has worked on tools for converting Japanese animation to 3D (the topic of a separate talk this year), and Robert has supervised stereoscopy for several films at Disney Animation, most recently working on the stereoscopic conversions of classic hand-animated Disney films (also the topic of a separate talk).

Production Volume Rendering (Part I and Part II)

The SIGGRAPH 2010 course Volumetric Methods in Visual Effects was a great look into an important and little-understood area of production rendering, so I was happy to see that an updated and expanded version will be presented this year. Both courses are organized by Magnus Wrenninge (Senior Technical Director, Sony Pictures Imageworks) and Nafees bin Zafar (Senior Production Engineer, Dreamworks Animation). Magnus has been working on visual effects software at Imageworks (and previously at Digital Domain) for almost a decade, in later years mostly focusing on volumetric modeling and rendering. He is currently in the process of writing a book on the topic, which will include source code for a fully functional volume renderer. Nafees has worked on simulation and volumetrics tools (at Dreamworks and previously at Digital Domain) for over ten years, winning a Scientific and Engineering Academy Award in 2007. The course is divided into two parts. Part I (“Fundamentals”) is presented by Magnus and Nafees, and is an overview of the fundamental technologies behind computer generated volumetric elements such as clouds, fire, and whitewater. At 90 minutes, Part I is an expansion of the first hour of last year’s course, and includes an introduction to the subject, followed by in-depth explanations of how volumetric effects are modeled and rendered.

Over three hours long, Part II (“Systems”) is a greatly expanded version of the second half of last year’s course. It will focus on specific VFX volumetric technologies, tools, workflows and case studies. Nafees and Magnus will each give a presentation on the systems used at their respective studios. In addition, there will be presentations by speakers from the following companies:

  • Double Negative: presented by Ollie Harding (R&D Programmer) and Gavin Graham (CG Supervisor). I wasn’t able to find out much about Ollie; Gavin has worked at Double Negative for over ten years, during which he did various shot-based effects work, assisted R&D in battle-testing in-house volumetric rendering and fluid simulation tools, and CG-supervised several effects-heavy feature films.
  • Rhythm & Hues: Jerry Tessendorf (former Principal Graphics Scientist) and Victor Grant (FX Supervisor). Jerry Tessendorf is currently Director of the Digital Production Arts Program at Clemson University, following an extensive and highly influential body of work in simulation and VFX production spanning three decades. Notable achievements include a Technical Achievement Academy Award and a series of hugely influential SIGGRAPH presentations on ocean wave simulation (the latest version of the notes and slides are well worth reading). Victor Grant has worked on VFX for many feature films over the past decade, specializing in volumetric modeling and rendering as well as particle and fluid simulation.
  • Side Effects Software: Andrew Clinton (Software Developer). Side Effects’ Houdini software is used extensively in the VFX industry; Andrew is responsible for the research and development of Houdini’s Mantra renderer. He has worked on improvements to the volumetric rendering engine, a micropolygon-like approach to volume rendering, a physically-based renderer, and a port of the renderer to the Cell processor.
  • Weta Digital: Antoine Bouthors (R&D Engineer). Weta is a new addition over last year’s course. Before joining Weta, Antoine worked on research topics including realistic rendering of clouds in real time.

Volumetric effects are one of the areas where the gap between game and film visuals is biggest; as game platforms become more powerful, game developers will start focusing R&D efforts on this topic. In parallel, VFX houses will develop ways to rapidly previsualize feature film volumetric effects, to allow for better artist control and directability. I predict that in the next few years these converging lines of research will “meet in the middle”, enabling unprecedented scale and quality of volumetric effects in games. Attending this course is a good way for game developers and real-time rendering researchers to get a head start on this process.

Compiler Techniques for Rendering

This course is a bit more specialized than the others I’ve discussed. It is focused on the uses of advanced compiler technology for rendering, covering five different projects which are on the cutting edge of this technology trend. Most of the techniques use LLVM and/or involve the compilation of shading languages. The course comprises five talks:

  • Intro to LLVM, and Native RSL Shader Compilation, presented by Mark Leone (Researcher, Weta Digital):  Before joining Weta, Mark led development at Intel of a new shading language for native rendering on Larrabee, and previously worked on the RenderMan shading system at Pixar. His talk will begin with an overview of LLVM (useful background for several of the other talks), and continue with a description of the implementation of the PostHaste system, which analyzes RenderMan shaders and automatically identifies kernels within them that can be compiled for x86 native execution using LLVM.
  • Open Shading Language, presented by Larry Gritz (Principal Engineer, Sony Pictures Imageworks): Larry Gritz is the chief architect of the Imageworks in-house renderer, as well as the designer and open source administrator of the Open Shading Language (OSL) and OpenImageIO projects. Other rendering systems for which he’s had a leading architectural role include NVIDIA’s Gelato GPU-accelerated film-quality renderer, Exluna’s Entropy renderer, Pixar’s PhotoRealistic RenderMan, and BMRT. Larry’s talk describes the design and implementation of OSL, which was developed by Imageworks for use in its in-house renderer, and released as open source software. OSL is specifically designed for advanced rendering algorithms and has a number of key technologies whose implementations will be discussed: radiance closures, light path expressions, automatic differentiation, and LLVM just-in-time compilation.
  • AnySL: Efficient Portable Multi-Language Shading, presented by Philipp Slusallek (Scientific Director, German Research Center for Artificial Intelligence – DFKI): Philipp leads the “Agents and Simulated Reality” research lab at DFKI. He is also a full professor for Computer Graphics at Saarland University, where he holds the additional positions of Director of Research at the Intel Visual Computing Institute, principal investigator at the Cluster of Excellence in Multimodal Computing and Interaction, and founding speaker of the Competence Center for Computer Science. Philipp’s talk will describe the AnySL system, which compiles shaders from different languages into a common, portable representation, using a generic shading library. AnySL also incorporates an embedded compiler based on LLVM that instantiates this generic code in terms of the renderer’s native types and operations. In addition, AnySL supports programmable kernels for tasks other than shading – such as animation, geometry processing, tessellation, and image processing.
  • Automatic Shader Bounding for Efficient Global Illumination, presented by Bruce Walter (Research Associate, Cornell University Program of Computer Graphics): Bruce’s research focuses on expanding the capabilities of physically-based rendering and global illumination algorithms with respect to robustness, scalability, and generality. He has published many related research papers at SIGGRAPH and elsewhere, including my favorite BRDF paper. This talk will discuss research that was published in a SIGGRAPH Asia 2009 paper, which uses a compiler to automatically generate interval versions of programmable shaders. These interval versions can be used to provide the high level query functions needed by physically-based rendering systems (such as ray tracers).
  • Compilation for GPU Accelerated Ray Tracing in OptiX, presented by Steven Parker (Director of HPC & Computational Graphics, NVIDIA): Steven also leads the OptiX ray tracing team; prior to joining NVIDIA he developed a long history of research and publication in interactive ray tracing and scientific computing. Steven’s talk will discuss the domain-specific just-in-time compiler that lies at the core of the NVIDIA OptiX ray tracing engine. This compiler generates custom ray tracing kernels by combining user-supplied programs for ray generation, material shading, object intersection, and scene traversal. The CUDA C compiler is used for writing shader programs with function overloading, templates, and full pointer support, while the just-in-time compiler provides ray tracing-specific optimizations. Steven will discuss some of the compiler analysis techniques that enable a natural programming model, support a rich object model designed for compact scene representation, and provide dynamic dispatch for complex scenes and continuations for recursion, all while executing efficiently on a CUDA-enabled GPU.

Another project which seems to fit in with this “compilers for rendering” trend (though not covered in this course) is Microsoft’s recent work to enable symbolic differentiation in HLSL.

SIGGRAPH 2011 Courses – Part 2

This is a continuation of the series of posts started here.

Character Rigging, Deformations, and Simulations in Film and Game Production

Character animation is one of those areas where film and game production have intriguing similarities as well as differences, especially in the ways that character meshes deform in response to animation and simulation. This course includes three talks, each covering a different application domain: games, visual effects, and feature animation. These talks will be presented by:

  • David Coleman (Senior CG Supervisor, Electronic Arts Canada). David (who has worked at Electronic Arts for 15 years and is currently responsible for the central team that provides rigging for many of EA’s sports titles) will present the games portion of the course. He will discuss character rigging, deformations and simulations in game production, emphasizing the technical restrictions imposed due to the real-time and interactive nature of games. This talk will also cover some strategies for setting up procedural secondary rigging systems in Maya, MotionBuilder and at run-time in games.
  • Tim McLaughlin (Department Head and Associate Professor, Department of Visualization at Texas A&M University). Tim (who had 13 years of experience at ILM – most of it on digital creatures – before heading the Texas A&M Department of Visualization) will discuss rigging for visual effects. He will cover the unique requirements brought on by integration with live action, but also the affordances offered by the limited scope of performance requirements relative to feature animation and games. Tim will discuss rigging modularity, provisions for animator control, non-linear deformations, areas of highest importance for deformations, and the efficient use of muscle systems.
  • Larry Cutler (Supervising Character TD, DreamWorks Animation). Larry (who has worked at DreamWorks Animation for 10 years, after four years at Pixar) will be discussing rigging issues for feature animation. Larry’s talk will deal with the impact that character design, modeling, and scalability to thousands of shots have on rigging, deformation, and simulation. He will discuss the issues arising from the unique needs of feature animation: accommodating an extreme range of motion, and an increased emphasis on art directability and animator control. Larry will also cover hair, cloth, and facial animation systems.

Destruction and Dynamics for Film and Game Production

Another “X for film and games” course, this time focusing on rigid body dynamics and destruction / fracturing methods. The course will cover production aspects such as authoring tools and game engine integration, in addition to the computational and algorithmic aspects. Like the last course, this one will highlight interesting commonalities and differences between film and game practice. There are areas where each can learn from the other: the film techniques can point the way to future methods for games running on more powerful platforms; and the efficient game methods are useful for fast prototyping, previsualization and even speeding up final shots in film.

The course will start with a 30-minute presentation by the course organizer, Erwin Coumans (Principal Physics Engineer, AMD). Erwin has worked on physics in games for over a decade, and is also the main author of the open-source Bullet Physics Library. Although Bullet was originally designed for game use, it has been used on many films as well, including big-budget Hollywood blockbusters such as How to Train Your Dragon, Sherlock Holmes and 2012. Erwin will give an overview of the course, as well as a brief introduction to the basic theory of rigid body dynamics and destruction/fracturing methods. He will also cover collision detection and handling contacts, approximate methods for the modeling of stress and strain, and how to decide when and where to break rigid bodies into several parts. The course will continue with the following talks:

  • Authoring Destruction With the Dynamica Bullet Maya Plugin (15 minutes), by Michael Baker (Faculty, Art Institute of Las Vegas): Michael has worked on Las Vegas casino games, visual effects for various short films and games, and the Bullet Physics Library (in particular the Dynamica Maya plugin which is the primary topic of his talk). Michael will discuss the development and use of Dynamica to support choreographed rigid body behavior such as progressive crumbling of pre-shattered objects, sequential structural failure and timed directional explosions.
  • Destruction and Dynamics Artist Tools for Film (45 minutes), by Nafees Bin Zafar (Senior Production Engineer, Dreamworks Animation) and Mark T. Carlson (Lead Engineer, Dreamworks Animation): Nafees has worked on simulation and volumetrics tools (both at Dreamworks and in his previous job at Digital Domain) for over ten years, winning a Scientific and Engineering Academy Award in 2007. Mark has worked on cloth, fluid and crowd simulation for six years at DNA Productions, Walt Disney Animation and Dreamworks Animation. This talk will cover third-party software integration in the movie pipeline, building artist tools with Bullet, and authoring of destruction using Maya and Houdini. Examples from recent Dreamworks Animation movies will showcase the techniques described.
  • Deformable Rigid Bodies and Fragment Clustering for Film (45 minutes), by Brice Criswell (Senior Software Engineer, Industrial Light & Magic): Brice has been developing production-related software for 12 years with ILM, and specializes in rigid body and crowd dynamics. Brice’s talk is divided into three presentations. The first discusses a deformable rigid system which efficiently simulates on-impact bending and denting of normally rigid bodies. The second covers a fragment clustering system which allows artists to initialize sets of geometry as a single rigid body, then dynamically break the objects during the progression of the simulation. The third presentation covers the challenges involved in animating, simulating, and deforming the tentacle beard of the Davy Jones character in the Pirates of the Caribbean movies. Each of the talks will detail algorithms as well as production issues, and will include VFX production examples from prominent feature films.
  • Procedurally Generating Fragmented Meshes for Games (15 minutes), by Phil Knight (Lead Programmer, Avalanche Software – a division of Disney Interactive Studios): Phil has 13 years of game development experience, working most recently on Cars 2, Toy Story 3, and Bolt, and previously on the Links and Amped series. His talk will cover a procedural technique for automatically generating fragmented meshes, especially useful for modeling large explosions with lots of fragmentation and debris. Besides detailing the technique itself, Phil will also describe the fragmentation tool (‘Frag’) which implements it, and its use in game production at Disney Interactive Studios.
  • Accelerating Rigid Body Simulation and Fracture Using the GPU (30 minutes), by Takahiro Harada (Researcher, AMD): Takahiro Harada has performed research and development into physics simulation at The University of Tokyo and Havok as well as his current position at AMD (where he focuses on the use of GPU computing for physics simulation). He will present a GPU-based rigid body simulation which can be used to quickly simulate the large numbers of rigid bodies typically created by object destruction. The talk starts with an overview of the simulation and proceeds to the detailed GPU implementation of each stage of the simulation.

PhysBAM: Physically Based Simulation

Similarly to the previous course, this is targeted at physics simulation and has strong ties to film production. However, its structure is very different; instead of covering a variety of production examples, it focuses on one code library – PhysBAM, initially developed by Ronald Fedkiw and continued by him and many others at Stanford. PhysBAM is used by many VFX and feature animation houses including ILM, Disney Animation, and Pixar; large portions were recently released under an open-source license. This course is presented by Craig Schroeder (PhD Student, Stanford Computer Science Department); it will cover information on the PhysBAM library release: how to obtain the source code, set up the library, and use it to run example smoke and water simulations, as well as descriptions of visualization and rendering tools included in the release. In addition to the PhysBAM library, the course will explain the underlying techniques that make these simulations possible, in particular level set methods such as fast marching, fast sweeping, and the particle level set method. It will also address the important aspects of a fluid simulation, including advection, viscosity, and projection.

There are 12 courses left to cover; I’ll do so over my next few blog posts.

SIGGRAPH 2011 Courses – Part 1

At 18 courses, the SIGGRAPH 2011 course program is smaller than it has been in previous years, but what it lacks in size it more than makes up for in quality. I’ll go over the list with a focus on courses of interest to game developers and/or real-time rendering researchers. If you are going to be attending SIGGRAPH, this should help you decide which courses to attend – if not, you’ll at least know which course notes and Encore videos to hunt down after the conference. Since this post is turning out to be quite long, I’ll split it up into several parts, spread out over the next few days.

UPDATES:

  • 6/20/2011: Added details to Beyond Programmable Shading regarding Peter-Pike Sloan’s talk and the Software Rasterization on GPUs talk, as well as correcting the titles of several of the speakers.
  • 6/21/2011: Added links to the papers High-Performance Software Rasterization on GPUs and VoxelPipe: A Programmable Pipeline for 3D Voxelization (the second link is the paper webpage – no PDF yet).
  • 6/24/2011: Removed an incorrect detail about the DEAA technique.
  • 7/10/2011: Updated the description of the Battlefield 3 / Need for Speed: The Run talk in the Advances in Real-Time Rendering in Games course.

Advances in Real-Time Rendering in Games (Part I & Part II)

Since 2006, this course series (organized by Natalya Tatarchuk, formerly at AMD and now at Bungie) has been my favorite thing to see at SIGGRAPH. Each year it has showcased new content from the cutting edge of game and IHV graphics development. Since Natalya joined Bungie, the emphasis has been less on IHV demos and more on games, which in my opinion makes the course even better – this year looks like the best yet! Part I starts with a  brief introduction by Natalya and continues with four talks, each between 45 and 60 minutes in length:

  • Bungie’s Graphics Secret Sauce, by Natalya Tatarchuk (Senior Graphics Researcher, Bungie) and Hao Chen (Engineering Lead, Bungie): Bungie’s talk will cover the graphics techniques developed for the award-winning game Halo: Reach,  along with some new research undertaken for Bungie’s next title.
  • Rendering in Cars 2, by Christopher Hall, Robert Hall, and David Edwards (Programmers at Avalanche Software): this talk will describe rendering techniques used in Cars 2: The Video Game, including offloading of post-processing and stereoscopy computations onto the Playstation 3’s SPUs. Other topics covered will include new developments in color precision, post processing effects, shadows, and the use of light probes.
  • Secrets of CryENGINE 3 Graphics Technology, by Tiago Sousa (Principal R&D Graphics Engineer, Crytek), Nickolay Kasyan (Senior Rendering Engineer, Crytek), and Nicolas Schulz (Graphics Engineer, Crytek): an overview of the novel deferred lighting approach used in CryENGINE 3, along with an in-depth description of optimization techniques (both general and platform-specific), as well as stereoscopic 3D rendering and shadowing techniques.
  • Two Uses of Voxels in LittleBigPlanet 2’s Graphics Engine, by Alex Evans (CTO & Co-Founder, Media Molecule) and Anton Kirczenow (Senior Programmer, Media Molecule): this talk will describe a PlayStation 3-centric implementation of real-time dynamic scene voxelization and demonstrate two ways this voxel representation was used for rendering and special effects in the game LittleBigPlanet 2.

Part II also starts with a short introduction by Natalya; this is followed by five 30-50 minute talks:

  • More Performance! Five Rendering Ideas from Battlefield 3 and Need For Speed: The Run, by John White (Senior Rendering Engineer, NFS) and Colin Barré-Brisebois (Rendering Engineer, BF3): this talk will cover several techniques from Battlefield 3 and Need for Speed: The Run designed to increase performance without sacrificing visual quality. These will include chroma sub-sampling for faster full-screen effects, a novel DirectX 9+ scatter-gather approach to bokeh rendering, improved temporally-stable dynamic ambient occlusion, HiZ reverse-reload for faster shadow and tile-based deferred shading on Xbox 360 (the last topic is a good complement to Christina Coffin’s GDC 2011 presentation giving Playstation 3 implementation details).
  • Physically-based Lighting in Call of Duty: Black Ops, by Dimitar Lazarov (Lead Graphics Engineer, Treyarch): Dimitar will give an overview of the lighting architecture used in the Call of Duty games to achieve competitive visual quality at 60 frames per second. He will then describe the process of introducing a physically-based lighting model to the series in Call of Duty: Black Ops, from the premise behind the model to the specific benefits and issues encountered when integrating it into the game.
  • Real-time Image Quilting: Arbitrary Material Blends, Invisible Seams, and No Repeats, by Hugh Malan (Graphics Programmer, CCP Games): A pixel shader-based image quilting technique which handles situations where standard environment texturing has problems: transitions between arbitrary neighbor materials, localized texture features due to custom geometry, and geometry-dependent edge effects. Production details such as vertex sharing and compaction techniques, texture storage options, and implementation issues for PC and console will also be covered.
  • Dynamic Lighting in God of War III, by Vassily Filippov (Lead Game Programmer, SCEA Santa Monica): this talk will cover a novel forward lighting approach used in God of War III to create rich dynamically lit environments with dozens of light sources applied to a single pixel. The description will include a complete mathematical explanation of the algorithm, as well as implementation details such as the combination of multiple lights into a single aggregate light per vertex on the Playstation 3’s SPUs, a new light interpolation approach which improved lighting accuracy, and the application of the aggregate lights per pixel on the GPU. Usability constraints, edge cases and ways to reduce artifacts will be covered in detail.
  • Pre-Integrated Skin Shading, by Eric Penner (Rendering Engineer, Electronic Arts Vancouver): Eric will describe a technique for rendering realistic skin in games, where rather than gathering neighboring light to simulate subsurface scattering, the effects of scattered light are pre-integrated. This allows for achieving the non-local effects of subsurface scattering using only locally stored information and a custom shading model.

Filtering Approaches for Real-Time Anti-Aliasing

From a theoretical standpoint, performing anti-aliasing as a post-process is locking the barn door after the horses have bolted. However, such techniques have recently proven to be surprisingly effective in practice – a flurry of algorithms, implementations, and variants have created one of the most important real-time rendering trends. For this course, the organizers – Jorge Jimenez (Real-Time Graphics Researcher, University of Zaragoza) and Diego Gutierrez (Associate Professor, University of Zaragoza) – have tracked down the inventors of pretty much every important technique in this area and recruited them to present their work:

  • Morphological Antialiasing (MLAA), presented by Alexander Reshetov (Senior Staff Researcher, Intel) – this technique was presented as a paper at the High Performance Graphics (HPG) conference in 2009; the impressive results shown sparked most of the current interest in this general approach.
  • A Directionally Adaptive Edge Anti-Aliasing Filter, presented by Jason Yang (Principal Member of Technical Staff, AMD). This technique was also presented as an HPG 2009 paper, and was influential as well.
  • A GPU-friendly variant of MLAA, presented by Jorge Jimenez (Real-Time Graphics Researcher, University of Zaragoza). This variant was published in the book GPU Pro 2; the talk will also cover recent developments not included in the book.
  • A hybrid CPU/GPU MLAA variant implemented for Costume Quest on the Xbox 360, presented by Pete Demoreuille (Lead Programmer, Double Fine).
  • The Playstation 3/SPU MLAA implementation first used in God of War III and subsequently made available to all Playstation 3 developers as part of the EDGE library. Tobias Berghoff (Senior Programmer, SCEE) will detail the implementation (including recent improvements), and Cedric Perthuis (Senior Staff Graphics Engineer, SCEA Santa Monica) will talk about how the technique was integrated into the God of War III engine.
  • The SPU-based Anti-Aliasing technique (SPUAA) used on the Playstation 3 version of The Saboteur, presented by Henry Yu (Founder and CEO, Kalloc Studios). This technique has long been a topic of speculation among game developers, and will be discussed here for the first time.
  • Subpixel Reconstruction Antialiasing (SRAA), presented by Morgan McGuire (Assistant Professor, Williams College and Visiting Scientist, NVIDIA). This technique was presented as a paper in the 2011 Symposium on Interactive 3D Graphics and Games (I3D).
  • Fast approXimate Anti-Aliasing (FXAA), presented by Timothy Lottes (Developer Technology, NVIDIA). Fast and effective, this technique is currently being evaluated by many developers for inclusion in their games.
  • Distance-to-Edge Anti-Aliasing (DEAA), presented by Hugh Malan (Graphics Programmer, CCP Games – formerly at Realtime Worlds).
  • Geometry Buffer Anti-Aliasing (GBAA), presented by Emil Persson (also known as “Humus” – Graphics Programmer, Avalanche Studios).
  • The Directionally Localized Anti-Aliasing (DLAA) technique used in Star Wars: The Force Unleashed 2, presented by Dmitry Andreev (Senior Rendering Engineer, Visceral Games – formerly at LucasArts).
  • The temporal filtering anti-aliasing technique used in Crysis 2, presented by Tiago Sousa (Principal R&D Graphics Engineer, Crytek).

Beyond Programmable Shading (Part I & Part II)

Similarly to the “Advances in Real-Time Rendering” course, “Beyond Programmable Shading” is an “ensemble” course which has been presented annually at SIGGRAPH for several years running. As its name reflects, it deals with GPU-based graphics that go beyond the traditional graphics pipeline. This course has had uniformly high quality each year, and 2011 appears to be no exception. Part I starts with a 20-minute introduction by the course organizers – Aaron Lefohn (Lead Research Scientist, Intel) and Mike Houston (Fellow, AMD) – and continues with six 25-30 minute talks:

  • Peter-Pike Sloan (Research & Development Lead, Disney Interactive Studios) will give a talk (title to be determined) about the applicability of current graphics research to games, discussing examples of research that works in games today, as well as research that does not work  – and why.
  • GPU Architecture, by Mike Houston (Fellow, AMD): an overview talk covering GPU architecture – unlike similar talks in previous iterations of the course, the architecture talk is extended this year to include heterogeneous architectures.
  • Scheduling the Graphics Pipeline, by Jonathan Ragan-Kelley (PhD Student, MIT): this is an extension of a talk given by Jonathan in last year’s course – it will include significant new material, including more specifics on how scheduling works in particular GPU architectures.
  • Parallel Programming for Real-Time Graphics, by Aaron Lefohn (Lead Research Scientist, Intel): compared to the talk of the same name in last year’s course, this talk will be significantly re-written and updated, including an increase in the number of concrete examples.
  • Software Rasterization on GPUs, by Samuli Laine (Senior Research Scientist, NVIDIA) and Jacopo Pantaleoni (Senior Architect, NVIDIA): software rasterization on GPUs can be an effective way to bypass the limitations of the GPU’s fixed-function rasterizer. Each of the speakers will be discussing papers they will publish at HPG 2011 – in Samuli’s case, High-Performance Software Rasterization on GPUs and in Jacopo’s case, VoxelPipe: A Programmable Pipeline for 3D Voxelization.
  • The course organizers are still in the process of finalizing the topic and speaker of the last talk.

Part II starts with a brief welcome and re-introduction by Mike Houston. This is followed by four 30-40 minute talks, all new to this course series:

  • Toward a Blurry Rasterizer, by Jacob Munkberg (Research Scientist, Intel): this talk will cover the current state of the art in rasterizing triangles with motion and defocus blur – this is a very active area of research, which I suspect will yield some important GPU advances in the near future. Jacob has co-authored several important papers in this area – most notable are the Graphics Hardware 2007 paper Stochastic Rasterization using Time-Continuous Triangles and the HPG 2011 paper Hierarchical Stochastic Motion Blur Rasterization.
  • Order-Independent Transparency, by Marco Salvi (Research Scientist, Intel): similarly to the previous talk, this covers the current state of the art in an important topic on which the speaker has considerable expertise. Of Marco’s work on the topic, most notable is the HPG 2011 paper Adaptive Transparency (not yet available online but his GDC 2011 talk on the topic – including source code – is available).
  • Interactive Global Illumination, by Chris Wyman (Associate Professor, University of Iowa): this is the third “state-of-the-art talk” covering a relatively broad topic. Chris’ publications page includes numerous papers on this topic, some including source code.
  • User-Defined Pipelines for Ray Tracing, by Steven Parker (Director of High Performance Computing and Computational Graphics, NVIDIA): this is a more tightly focused talk than the previous three. It has the potential to be quite interesting, given the speaker’s central role in the development of the OptiX ray tracing system (he was the first author on the SIGGRAPH 2010 paper) as well as his area of responsibility at NVIDIA.

The course closes with a 15-minute wrap-up talk by the organizers (on the topic “What’s Next for Interactive Rendering Research?”), followed by a 45-minute panel discussion between the various course speakers.

I’ll continue going over the remaining SIGGRAPH 2011 courses in my next few blog posts.

Quick SIGGRAPH Roundup

I’m planning a series of more extensive posts on SIGGRAPH content (starting with the courses), but I’ll start with a quick roundup to help people decide on their attendance before the early-bird registration expires at the end of this week. The roundup is focused on those sessions of potential interest to professional game artists, professional game programmers, real-time rendering researchers and real-time rendering students. I’m not listing paper sessions – I typically skip those in favor of other sessions since the papers themselves tend to be readily available. I’ve also skipped the Reception and the various Birds of a Feather sessions for brevity since those tend to be more social (some Birds of a Feather sessions do have presentations, and others might be of particular interest, so it’s probably a good idea to check the BoF list). More information can be found on the individual SIGGRAPH web pages (linked where available) as well as the SIGGRAPH Advance Program.

UPDATES:

  • June 15, 2011: Added SIGGRAPH Dailies! and relevant Exhibitor Tech Talks; added links to individual Panels.
  • June 20, 2011: Added links to individual CAF Production Sessions, The Studio Workshops, The Studio Digital Artistry Sessions, and the Keynote.
  • June 23, 2011: Added links to remaining sessions, and corrected the classification of some of The Studio presentations.
  • June 24, 2011: Removed Reception and Birds of a Feather sessions for brevity; also corrected times of some Studio Talks.
  • July 15, 2011: Added individual NVIDIA Exhibitor Tech Talks.

Multiple Days

  • Electronic Theater (6:00-8:00 on August 8, 9, and 10)
  • Emerging Technologies (2:00-5:30 on August 7; 9:00-5:30 on August 8, 9, and 10; 9:00-1:00 on August 11; also open during Reception)
  • Exhibition (9:30-6:00 on August 9 and 10; 9:30-3:30 on August 11)
  • Posters (12:00-5:30 on August 7; 9:00-5:30 on August 8, 9, 10, and 11)
  • Real-Time Live! (4:30-5:15 on August 8, 9, and 10)
  • The Sandbox (12:00-5:30 on August 7; 9:00-5:30 on August 8, 9, and 10; 9:00-1:00 on August 11; also open during Reception)
  • There are also several co-located conferences which may be of interest

Sunday, August 7th

12:00-1:45:

12:30-1:45:

2:00-3:30:

2:00-5:15:

3:00-3:30:

3:45-4:15:

3:45-5:15:

4:30-5:00:

5:00-5:30:

6:00-8:00:

Monday, August 8th

9:00-9:30:

9:00-10:00:

9:00-10:30:

9:00-12:15:

9:30-10:30:

10:15-11:15:

10:40-12:10:

11:00-1:00:

11:30-12:30:

12:00-1:00:

12:45-1:30:

1:45-3:00:

2:00-2:30:

2:00-3:30:

2:00-5:15:

3:15-4:15:

3:45-4:15:

3:45-5:00:

3:45-5:15:

4:30-5:00:

4:30-5:30:

5:00-5:30:

Tuesday, August 9th

9:00-9:30:

9:00-10:30:

9:00-12:15:

10:30-11:30:

10:40-12:15:

10:45-12:15:

12:30-1:45:

1:15-1:45:

2:00-3:30:

2:00-3:30:

2:00-5:15:

3:00-3:30:

3:45-4:15:

3:45-4:40:

3:45-5:00:

3:45-5:15:

4:30-5:00:

Wednesday, August 10th

9:00-9:30:

9:00-10:30:

9:00-12:15:

9:45-10:45:

10:30-11:30:

10:40-12:10:

10:45-12:15:

11:15-12:15:

11:30-12:00:

12:30-1:45:

2:00-3:30:

2:00-5:15:

2:15-3:15:

3:45-5:00:

3:45-5:15:

4:30-5:00:

  • The Visual Style of “Legend of the Guardians: The Owls of Ga’Hoole” (The Studio Talk)

6:00-7:30:

Thursday, August 11th

9:00-10:30:

9:00-12:15:

10:40-12:15:

10:45-12:15:

2:00-3:30:

2:00-5:15:

3:45-5:15:

How to make money with your GPU

You’ve probably heard about bitcoins by now, the currency of cryptoanarchist libertarian computer geeks or something. It turns out that GPUs are particularly good at mining bitcoins, compared to CPUs: check out this chart – the key factor is Mhash/sec (though Mhash/Joule is also an entertaining concept). The most interesting page (for me) at the site is their explanation of why a GPU is (so much) faster than a CPU for this task. Not a shocker for anyone reading this blog; we all know that GPGPU can rip through certain tasks at amazing speeds. What’s more interesting to me is how and why one IHV’s GPUs are considerably faster than the other’s. I won’t spoil the surprise here, see the page to learn more.
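
For the curious, here is roughly what one “hash” in Mhash/sec means: a double SHA-256 of an 80-byte block header containing a candidate nonce, compared against a target. The sketch below is my own illustrative Python – the header prefix and target are placeholders, not real chain data – and a real miner runs this search on the GPU across huge batches of nonces in parallel, which is the whole point.

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    # Bitcoin hashes everything with SHA-256 applied twice.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, target: int, max_nonce: int = 2**32):
    """Brute-force search for a nonce that makes the block-header hash fall
    below the target. header_prefix stands in for the 76 header bytes that
    precede the nonce (version, previous block hash, merkle root, time,
    difficulty bits) -- a placeholder here, not real chain data."""
    for nonce in range(max_nonce):
        header = header_prefix + struct.pack("<I", nonce)
        h = double_sha256(header)
        # Bitcoin interprets the hash as a little-endian 256-bit integer.
        if int.from_bytes(h, "little") < target:
            return nonce, h
    return None, None

# Each loop iteration is one "hash" in Mhash/sec terms. Every nonce can be
# tried independently, so the search is embarrassingly parallel -- which is
# why GPUs, with their thousands of simple ALUs, leave CPUs far behind.
```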

Loosening of ACM’s copyright policy

We’ve talked about this before: how ACM’s copyright policy stated that they, not you, controlled the copyright of any images you published in their journals, proceedings, or other publications. For example, if your hometown newspaper wanted to publish a “local boy makes good” story and wished to include samples of your work, they needed to ask the ACM for permission (and pay the ACM $28 per image). Not a huge problem, but it’s a bureaucratic roadblock for a reasonable request. Researchers were usually surprised to hear they had lost this right.

While it was possible to be assertive and push to retain copyright to your images (or even your article) and just grant ACM unlimited permission – certainly firms such as Pixar and Disney have done so with their content – the default was to give the ACM this copyright control.

James O’Brien brought it to our attention that this policy has been revised, and I asked Stephen Spencer (SIGGRAPH’s Director of Publications) for details. His explanation follows.

ACM has recently changed its copyright policy to include the option, under certain circumstances, of retaining copyright on embedded content in material published by ACM. Embedded content can now fall into one of three categories: copyright of the content is transferred to ACM as part of the rest of the paper (the default), the content is “third-party” material (not created by the author(s)), or the content is considered an “artistic image.”

The revised copyright form includes this definition of “artistic images”:

“An exception to copyright transfer is allowed for images or figures in your paper which have ‘independent artistic value.’ You or your employer may retain copyright to the artistic images or figures which you created for some purpose other than to illustrate a point in this paper and which you wish to exploit in other contexts.”

The ACM Copyright Policy page also documents this change in policy.

ACM’s electronic copyright system is being updated to implement this change; authors who wish to declare one or more pieces of embedded content in their papers as “artistic images” should contact Stephen Spencer (at <spencer@cs.washington.edu>) to receive a PDF version of the revised copyright form.

The copyright form includes instructions for declaring embedded content as “artistic images,” both in your paper and on the copyright form.

—-

Note that this change is “going forward”; if you have already given ACM the copyright, you cannot get it back. Understandable, as otherwise there could be a flood of requests for recategorization.

I’m happy to see this change; it is a good step in the right direction.