Seven Things for May 4th, 2011

Seven things:

  • There’s a post on speculative contacts by Paul Firth, a way of simplifying and stabilizing collision detection that has been used in Little Big Planet. Particularly nice is that demos are built into the page, so you can try the various methods out and see the problems and performance for yourself. This author has followed up with “Collision Detection for Dummies”, a great overview, and “Physics Engines for Dummies”, again with interactive demos.
  • The Gamedev Coder Diary has a worthwhile summary of the current state of deferred shading vs. deferred lighting (aka “light pre-pass”) techniques, discussing problems and strengths of each.
  • The CODE517E blog has had a number of good posts lately, including an article on deferred rendering myths, another on stable cascaded shadow maps, an accumulation-buffer-like way of making super-high resolution images for printing (with some worthwhile analysis of problems it engenders with mipmap sampling and with view shifting – fun to think about), an extensive rundown of programming languages for videogames, and a summary of tools he uses (quite the long list – I’m still working through those I hadn’t seen before).
  • On the topic of languages, Havok put together a page collecting the Lua tutorial talks at GDC 2011.
  • The Boeing 777 model (almost 400 million polygons) ray traced at interactive rates on a consumer-level PC, using CUDA. CentiLeo is an out-of-core GPU ray tracer; see this page for some of the slides from the (rather long) video. That said, don’t be fooled by the start of the video: those sequences are generated at 15 seconds a frame and played back at 60 FPS (so 500-1000x from being real-time). Still, the preview mode is indeed interactive, and the Boeing is a huge model. On the other end of things, here’s a fun demoscene ray trace. By the way, Ray Tracey’s blog is good for keeping up on new ray tracing videos and demos and other related topics.
  • A poster accepted to SIGGRAPH 2011 by Ohlarik and Cozzi gives a clever little method of properly drawing lines on surfaces for GIS applications. It converts lines to “walls”, then marks those pixels where there is a visibility change of the wall (i.e., one pixel of the wall is visible, a neighboring pixel is not), with a correction for terrain silhouette edges. One more trick for the bag.
  • More about the look and feel of games than the technical nerdy stuff I cover here, Topi Kauppinen’s blog pointed me to Susy Oliveira’s sculptures, which are pretty amusing (finally, perfect models for 3D web browsers). There have been similar works by other artists (e.g. Eric Testroete’s head), but the more the merrier.

I3D 2011 Report

This report about I3D 2011 is from Mauricio Vives, a coworker at Autodesk; he also has photos from the symposium. The papers listing can be found at Ke-Sen Huang’s page and at the ACM Digital Library’s page (tabbed version here). The ACM site has some interesting info, by the way, such as acceptance rates and “most downloaded, most cited” for all I3D articles. I should also remind people: if you’re a member of ACM SIGGRAPH, you automatically have access to all SIGGRAPH-sponsored ACM Digital Library materials (which is essentially all of their graphics papers).


In case you missed them, Naty’s reports (so far) about I3D are also on this blog: Keynote, Industry Session, and Banquet Talk.

Papers


Session: Filtering and Reconstruction

A Local Image Reconstruction Algorithm for Stochastic Rendering

In a stochastic rasterizer, effects like defocus (depth of field) and transparency need a very large number of samples (64 – 256) to remove noise and look decent. This paper describes a technique to remove the noise using far fewer samples, through sorting to selectively blur samples. The results look quite good, though it does have the disadvantage of blurring transparency.

Subpixel Reconstruction Antialiasing for Deferred Shading

The post-process MLAA antialiasing technique works to find sharp edges in an image and smooth them. It provides pretty good results. This technique (SRAA) is similar, but operates with supersampled depth and normal data to produce even better results with predictable performance. For example, it looks comparable to 16x SSAA (super-sampled antialiasing) in just 2 ms on a GTX 480 at HD resolution. As with MLAA, only single-sample shading is used, and it is compatible with deferred lighting. [The use of depth and normals means it is focused on geometric edge finding, so the technique is not applicable to shadow edges and NPR post-process thickened edges, for example. – Eric]

High Quality Elliptical Texture Filtering on GPU

Elliptical filtering is necessary for removing artifacts from some high-frequency textures, but is too slow for real-time applications. This is a technique for approximating true elliptical filtering using hardware anisotropic filtering. They claim it is easy to integrate, and they provide drop-in GLSL code to replace existing texture lookups.

Session: Lighting in Participating Media

Transmittance Function Mapping

This is about rendering light and shadow in participating media (e.g. fog, smoke) with single scattering. Two techniques are described: a faster one called volumetric shadow mapping for homogeneous media, and a slower one called transmittance function mapping for heterogeneous media. This is suitable for both offline rendering and interactive applications, but it is not quite real-time (> 30 fps) yet.

Real-Time Volumetric Shadows using 1D Min-Max Mipmaps

This is again about volumetric shadows, but with real-time results for homogeneous media, using only pixel shaders. A combination of epipolar warping on the shadow map, and a 1D min-max mipmap for storage, makes this possible. This is a pretty clever combination of techniques. [This paper was the “NVIDIA Best Paper Presentation”.]
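
To give a feel for the core data structure (a rough C++ sketch of my own, not the authors’ code): level 0 of the 1D min-max mipmap holds one row of shadow-map depths along an epipolar slice, and each higher level stores the min and max of two children, so a ray march can skip whole spans whose depth range cannot affect visibility.

    #include <algorithm>
    #include <vector>

    struct MinMax { float mn, mx; };

    // Build a full min-max mipmap over one row of depths.
    std::vector<std::vector<MinMax>> buildMinMaxMipmap(const std::vector<float>& depths)
    {
        std::vector<std::vector<MinMax>> levels(1);
        levels[0].reserve(depths.size());
        for (float d : depths) levels[0].push_back({ d, d });
        while (levels.back().size() > 1) {
            const std::vector<MinMax>& prev = levels.back();
            std::vector<MinMax> next((prev.size() + 1) / 2);
            for (size_t i = 0; i < next.size(); ++i) {
                const MinMax& a = prev[2 * i];
                const MinMax& b = prev[std::min(2 * i + 1, prev.size() - 1)];
                next[i] = { std::min(a.mn, b.mn), std::max(a.mx, b.mx) };
            }
            levels.push_back(std::move(next));
        }
        return levels;
    }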

Real-Time Volume Caustics with Adaptive Beam Tracing

This is an extension of previous beam-tracing work to allow for real-time caustics, with receiving surfaces and volumes handled separately. A coarse grid of beams is refined based on the geometry and the viewer to put detail where it is needed most. This enhancement is something an effect like caustics certainly needs to look good.

Session: Collision & Sound

Sound Synthesis for Impact Sounds in Video Games

This paper was motivated by having very little memory on a game console to store physics-based sounds, e.g. for footsteps or colliding objects. This uses a combination of modal synthesis and spectral modeling synthesis to produce extremely small sound definitions that could be varied on demand to provide a variety of believable effects. I don’t know much about sound generation, but the audio results (from the only audio paper in the conference) were very good.
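
I’m no audio person either, but the modal synthesis half of the idea is easy to sketch (my own toy illustration, not the paper’s system): an impact is modeled as a sum of exponentially decaying sinusoids, so a material only needs a handful of (frequency, damping, amplitude) triples instead of a stored waveform.

    #include <cmath>
    #include <vector>

    struct Mode { float freqHz, damping, amplitude; };

    // Generate mono samples for one impact from its modes.
    std::vector<float> synthesizeImpact(const std::vector<Mode>& modes,
                                        float seconds, float sampleRate = 44100.0f)
    {
        std::vector<float> samples(static_cast<size_t>(seconds * sampleRate), 0.0f);
        const float twoPi = 6.2831853f;
        for (size_t n = 0; n < samples.size(); ++n) {
            float t = static_cast<float>(n) / sampleRate;
            for (const Mode& m : modes)
                samples[n] += m.amplitude * std::exp(-m.damping * t)
                                          * std::sin(twoPi * m.freqHz * t);
        }
        return samples;
    }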

Collision-Streams: Fast GPU-based Collision Detection for Deformable Models

Fast Continuous Collision Detection using Parallel Filter in Subspace

Both of these papers address the same problem: how to quickly perform continuous collision detection, which works by interpolating motion between time steps. I am a novice when it comes to collision detection, so I didn’t take away much from these.

Session: Shadows

This is the session that was most interesting to me, as I work on shadow algorithms.

Shadow Caster Culling for Efficient Shadow Mapping

This paper has a simple idea: when rendering a shadow map, cull objects that don’t produce a visible shadow to improve performance. Here occlusion culling is first used to determine which shadow receivers are visible to the camera; these receivers are then rasterized into the light’s view to produce a receiver mask in the stencil buffer. Occlusion queries against that mask can then be used to skip irrelevant casters. There are a few methods to render the receiver mask, trading off complexity and accuracy. This was mostly tested with city scenes, where it produced the same images at about 5x the speed.
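
As a rough illustration of the flow (my own OpenGL-flavored sketch, not the paper’s code; it ignores the depth comparison against receiver depths and the different mask-generation variants the authors describe), the culling pass might look like this, where Object, drawBounds(), and drawCaster() are hypothetical helpers:

    #include <GL/glew.h>
    #include <vector>

    struct Object;                      // hypothetical scene object
    void drawBounds(const Object&);     // hypothetical: rasterize its bounding box
    void drawCaster(const Object&);     // hypothetical: render it into the shadow map

    // Assumes the receiver mask (stencil = 1 wherever a camera-visible receiver
    // lands in light space) is already in the light-space render target.
    void renderShadowMapWithCulling(const std::vector<const Object*>& casters)
    {
        std::vector<GLuint> queries(casters.size());
        glGenQueries((GLsizei)queries.size(), queries.data());

        // Pass 1: test each caster's bounds against the receiver mask.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_EQUAL, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        for (size_t i = 0; i < casters.size(); ++i) {
            glBeginQuery(GL_ANY_SAMPLES_PASSED, queries[i]);
            drawBounds(*casters[i]);
            glEndQuery(GL_ANY_SAMPLES_PASSED);
        }

        // Pass 2: render only casters whose bounds touched a masked texel.
        // (A real implementation would avoid stalling on query results here.)
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        glDisable(GL_STENCIL_TEST);
        for (size_t i = 0; i < casters.size(); ++i) {
            GLuint covered = 0;
            glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT, &covered);
            if (covered) drawCaster(*casters[i]);
        }
        glDeleteQueries((GLsizei)queries.size(), queries.data());
    }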

Colored Stochastic Shadow Maps

This included a nice overview of the problem, terminology, and existing techniques: how do you render colored shadows from translucent objects? This extends existing techniques for rendering stochastic transparency to also support colored transparent shadows. Because it is based on stochastic techniques, it has a filter that can be adjusted to trade performance against quality. The paper has simple explanations of the algorithm, and it seems from the presentation that this is fairly easy to integrate into a system that already handles basic stochastic shadow maps.

Sample Distribution Shadow Maps

This was the paper I was most interested in seeing, since it is an extension to cascaded shadow maps (CSM), a popular shadowing technique. The idea here is quite simple: reduce perspective aliasing with shadow maps by analyzing the distribution of samples visible to the viewer. This can be used to determine tight near/far planes for CSM partitions, and for tightly fitting the light frustum to just the visible shadow receivers.

The analysis requires a reduction operation, which can be done with rasterization (D3D9/10) or compute shaders (D3D11). Using this algorithm can result in sub-pixel shadow sampling in many cases with just a 2K shadow map.
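
Once the reduction has produced tight zMin/zMax values, the partition placement itself is trivial; here is a minimal sketch of the logarithmic-split step (my code, not the authors’):

    #include <cmath>
    #include <vector>

    // Place cascade split planes logarithmically inside the tight [zMin, zMax]
    // range found by the depth reduction, instead of the full camera near/far.
    std::vector<float> cascadeSplits(float zMin, float zMax, int numCascades)
    {
        std::vector<float> splits(numCascades + 1);
        for (int i = 0; i <= numCascades; ++i) {
            float t = static_cast<float>(i) / numCascades;
            splits[i] = zMin * std::pow(zMax / zMin, t);
        }
        return splits;
    }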

Session: Refraction & Global Illumination

Voxel-based Global Illumination

It wouldn’t be an interactive graphics conference without a paper about interactive global illumination, and here is one. The idea behind this technique is to create an atlas-based voxelization of the scene (every frame), and then perform fast ray casting into that grid. Direct lighting can be stored in the grid, or reflective shadow maps can be used; the latter seems to be preferred.

The technique is of course able to capture lighting and details that screen-space methods can’t. However, it can have artifacts that require a denser grid and a tuned offset to avoid self-intersections. Also, as noted earlier, it requires a texture atlas for generating the voxel grid. The results are interactive, if not quite real-time.
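
The ray casting step is essentially a 3D DDA walk through the grid. A bare-bones version of that idea (my sketch, not the paper’s code) looks like this, with occupied() standing in for a lookup into the per-frame voxelization; it assumes the ray starts inside the grid, in grid coordinates, with nonzero direction components:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    bool occupied(int x, int y, int z);   // hypothetical voxel-grid lookup

    // March cell by cell until an occupied voxel is hit or the grid is left.
    bool castRay(Vec3 o, Vec3 d, int gridSize, int& hx, int& hy, int& hz)
    {
        int ix = (int)o.x, iy = (int)o.y, iz = (int)o.z;
        int sx = d.x > 0 ? 1 : -1, sy = d.y > 0 ? 1 : -1, sz = d.z > 0 ? 1 : -1;
        float tdx = std::fabs(1.0f / d.x), tdy = std::fabs(1.0f / d.y), tdz = std::fabs(1.0f / d.z);
        float tx = ((sx > 0 ? ix + 1 : ix) - o.x) / d.x;   // distance to next x boundary
        float ty = ((sy > 0 ? iy + 1 : iy) - o.y) / d.y;
        float tz = ((sz > 0 ? iz + 1 : iz) - o.z) / d.z;
        while (ix >= 0 && iy >= 0 && iz >= 0 && ix < gridSize && iy < gridSize && iz < gridSize) {
            if (occupied(ix, iy, iz)) { hx = ix; hy = iy; hz = iz; return true; }
            if (tx < ty && tx < tz)  { ix += sx; tx += tdx; }
            else if (ty < tz)        { iy += sy; ty += tdy; }
            else                     { iz += sz; tz += tdz; }
        }
        return false;
    }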

Real-Time Rough Refraction

Here the authors are solving the interesting, though somewhat specific, problem of rendering transparent materials (no scattering) with rough surfaces, which is different from translucent surfaces. Glossy reflection has been handled before; this is about glossy refraction. In this case, rays are refracted twice, entering and exiting the surface, and the paper provides a fast way to perform what would otherwise be a double integration.

I wasn’t able to follow all of the math in this one, but the results are indeed real-time (> 30 fps), and comparable to what you would get out of a ray-tracer in dozens of seconds or minutes. There is also a cheap approximation for total internal reflection. This paper was selected as one of the best papers of the conference. An example with increasing roughness is shown below.

Screen-Space Bias Compensation for Interactive High-Quality Global Illumination with Virtual Point Lights

One of the problems with using virtual point lights (VPLs) for global illumination is that you must clamp each light’s contribution above a certain level to avoid pinpoint light artifacts. This paper presents a fast way to compute the amount of light that was clamped away, which can then be added back to the VPL result. It is done in screen space, which can lead to some issues (as you might expect), but it is fully interactive and easy to integrate into renderers that already have VPLs.
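
For context, the clamp in question is the usual one applied to the VPL geometry term; a tiny sketch (mine, not the paper’s code) shows where the energy goes missing – the paper’s contribution is a fast screen-space estimate of exactly that lost portion:

    #include <algorithm>

    // Contribution of one VPL to a shaded point, with the standard clamp.
    // The 1/d^2 term explodes when the point is very close to the VPL,
    // producing bright splotches; clamping removes them but discards energy.
    float vplContribution(float cosAtSurface, float cosAtVPL, float dist, float clampBound)
    {
        float g = std::max(cosAtSurface, 0.0f) * std::max(cosAtVPL, 0.0f) / (dist * dist);
        return std::min(g, clampBound);   // everything above clampBound is the lost "bias"
    }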

Session: Human Animation

This is certainly not my area of expertise, so I only have a few comments about these papers.

Motion Rings for Interactive Gait Synthesis

This is human walking motion interpolation made more efficient, responding within a quarter-gait, which is necessary for interactive applications. It relies on a parameterized motion loop (called a “ring” here), and uneven terrain is handled with IK adjustments.

Realtime Human Motion Control with A Small Number of Inertial Sensors

This paper describes how to combine high-quality prerecorded motion capture data with live motion from just a few simple (noisy) sensors to enhance what would otherwise be very poor input. This is validated by comparing against the high-quality original data for motions like walking, boxing, golf swings, and jumping.

A Modular Framework for Adaptive Agent-Based Steering

Crowd simulations need hundreds of characters, but often lack local (per-person) intelligence. This paper presents a framework for dynamically choosing between one of several local steering strategies. This is able to handle fairly tight and deadlocked situations, such as two people walking toward each other down a narrow hall, though some of the resulting motion is awkward.

Session: Geometric and Procedural Modeling

Editable Polycube Map for GPU-based Subdivision Surfaces

This is an extension of previous “polycube map” work to allow transferring geometric detail from a high-resolution triangle mesh to subdivision surfaces. A simple modeling system was presented where the user creates a very coarse polycube and sketches a handful of correspondences between it and the high-resolution mesh. The results are really quite remarkable, and you can see the process below.

GPU Curvature Estimation on Deformable Meshes

It is not unusual to perform vertex skinning or iso-surface extraction on the GPU now. However, for some effects like ambient occlusion or NPR edge extraction, it is useful to have the mathematical curvature of the new surface, but this is slow or impossible to read back from the GPU. This paper presents a method for estimating curvature on the GPU in real-time, even for very detailed models, much faster than could be done on the CPU. [This paper was a “Best Paper – Honorable Mention”.]

Urban Ecosystem Design

I have seen several papers at SIGGRAPH and elsewhere about procedurally generating urban environments: basically streets and buildings. This paper procedurally adds plants (mostly trees) to such urban layouts. City blocks are assigned a level of human “manageability” which determines how organized or wild the plants in that block will be. From there, growth and competition rules are applied. Only the city geometry is taken into account, so this can be used with systems where other information (like land use) is not available. It was implemented with CUDA, and as an example, it can simulate 70 years of growth for 250,000 plants in about two minutes.

Session: Interactivity and Interaction

Data Management for SSDs for Large-Scale Interactive Graphics Applications

Here “SSD” refers to solid-state disks, so this is about organizing a graphics database to allow for efficient out-of-core rendering using SSDs instead of traditional hard disks with spinning platters. Since SSDs don’t suffer a seek penalty, the data doesn’t need a globally optimized ordering; locally optimized layouts work well with them. The presentation seemed to be lacking in detail, but the demo was fairly impressive: a very large scene on disk being displayed and edited very smoothly with very little RAM. For developers working with large graphics databases, I would certainly recommend reading this paper for ideas.

Coherent Image-Based Rendering of Real-World Objects

This paper attempts to generate a depth map from images captured from a few cameras. The goal is to build a virtual dressing room with a “mirror”: the mirror is a display that shows you with different virtual clothes on. The entire system runs with CUDA on a single machine with a single GPU, at interactive rates even with full body movement. It exploits frame coherence, i.e., reusing parts of previous frames, to reduce latency. This was surprisingly the only paper on image-based rendering, despite the first keynote (below) being entirely about IBR.

Slice WIM: A Multi-Surface, Multi-Touch Interface for Overview+Detail Exploration of Volume Datasets in Virtual Reality

Here WIM is “world in miniature,” a technique that presents a small version of a virtual environment to help the user navigate the environment. This paper extends the technique to aid in the visualization of complex volume data generated from slices, especially for medical imaging. The resulting system consists of a large main display, a smaller horizontal touch display through which the user can manipulate the view and slices, and a head-tracked VR display for the WIM. Better to just show you! [This paper was a “Best Paper – Honorable Mention”.]

TVCG Papers

A few papers from IEEE Transactions on Visualization and Computer Graphics were also presented as “guests” of the conference. They didn’t seem as relevant, but I list them here:

  • Interactive Visualization of Rotational Symmetry Fields on Surfaces
  • Real-Time Ray-Tracing of Implicit Surfaces on the GPU
  • Simulating Multiple Character Interactions with Collaborative and Adversarial Goals
  • Directing Crowd Simulations Using Navigation Fields

Posters

I didn’t spend much time looking at the posters this year, but I did make note of a few that I would like to investigate further; see below. The full poster list is here [ACM Digital Library subscribers and SIGGRAPH members can download from here].

  • gHull: A Three-dimensional Convex Hull Algorithm for Graphics Hardware
  • Interactive Indirect Illumination Using Voxel Cone Tracing  [This was “Best Poster – Winner”.]
  • Level-of-Detail and Streaming Optimized Irradiance Normal Mapping
  • Poisson Disk Ray-Marched Ambient Occlusion

Talks

Image-Based Rendering: A 15-Year Retrospective

This talk was given by Rick Szeliski from Microsoft Research. As the title implies, it was an overview of image-based rendering research, covering topics such as panoramas, image-based modeling, photo tourism, and (in particular) light fields. The fundamental problem of such research is determining how to make a new scene from an existing one. At one point he mentioned that Autodesk “probably” has something for image-based modeling, and indeed we do; I sent that link to him after the conference. Naty Hoffman of Activision has a longer discussion of this talk in an earlier blog posting.

From Papers to Pixels: How research finds its way into Games

This talk by Dan Baker of Firaxis was a light “rant” about what researchers should do to get their research into games. For example, they need to consider quality and ease of integration, and realize that hardware advances will make certain techniques obsolete quickly. Most of it was reasonable, but it included a number of controversial points, in my opinion – for example, that “nobody uses OpenGL” and that there is too much research into GPGPU.

He also started out by stating that the vast majority of the research for I3D is for games, and that everything else is “boring”… calling out “3D CAD” in particular as boring! By the end, I was quite tempted to provide an on-the-spot rebuttal from the Autodesk perspective.

A Game Developer’s Wishlist for Researchers

Chris Hecker, formerly of Maxis, presented his opinion on almost the same topic, i.e. what game developers want from researchers – I liked it better. His priorities are robustness, simplicity, and performance, in that order, but researchers often make the mistake of putting performance first. The nature of interactivity means that robustness is absolutely critical; for example, you can’t afford errors even once every few hundred frames when you are rendering at 30 fps. He also stated that papers frequently exclude negative results and worst-case scenarios, which would be helpful for assessing robustness.

I could say more, but Chris has the full talk available here. See for yourself why he says, “We are always about to fail.”

GPU Computing: Past, Present, and Future

This was the banquet talk, given by David Luebke of NVIDIA. It was a fairly light look at the state of GPU computing, where it has come and where it will go. A lot of it went against Dan Baker’s earlier comments about GPGPU, and David made sure to point that out (and I agree).

Some of this talk had the feel of a marketing presentation for NVIDIA’s CUDA and GPU computing products like Tesla, but it is hard to deny that this area is important. He cited several cases where GPU computing is saving lives, e.g. assisting in heart surgery, malaria treatment, and cancer detection. Of course, he also mentioned graphics, scientific computing, data mining, speech processing, etc. At one point he (amusingly) pointed out that all of this innovation and technology has its roots in humble VGA controller hardware.

Mobile Computational Photography

As someone who recently started going beyond point-and-shoot photography with a DSLR camera, this talk was quite interesting. Kari Pulli of Nokia described an API for controlling digital cameras called FCam… and it’s pretty cool what skilled hands are able to do with this. Note that this isn’t really about high-end cameras: the idea is to allow even cell phone cameras to take great photos.

FCam basically makes a supporting camera programmable, allowing you to do things that the camera normally can’t do. Most of it revolves around taking multiple images and combining them to produce a better result. For example, from his talk and the site:

… take two photos with different settings back-to-back. The first frame has a high ISO and short exposure time, and the second has a low ISO and longer exposure time. The first frame tends to be noisy, while the second instead exhibits blur due to camera shake. This application combines the two frames to create an output that’s better than either.

This is absolutely something I wish I could do with my current camera. Another example he showed was combining a few images with narrow depths of field into a single image with a wide depth of field (i.e. fully sharp). Another automatically took photos until one was captured with minimal camera shake, keeping only the “good” one. All of this is done right on the camera, which is a great improvement over the typical post-processing workflow, and it can leverage metadata that only the camera has.
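
That last trick is easy to picture in code. A toy version of the idea (my own sketch, nothing to do with the actual FCam API) scores each candidate frame by its gradient energy – shaky, blurred frames have weaker gradients – and keeps the best one:

    #include <cstdint>
    #include <vector>

    struct GrayImage { int width, height; std::vector<uint8_t> pixels; };   // hypothetical 8-bit frame

    // Higher score = sharper image (stronger horizontal gradients).
    double sharpness(const GrayImage& img)
    {
        double score = 0.0;
        for (int y = 0; y < img.height; ++y)
            for (int x = 0; x + 1 < img.width; ++x) {
                int dx = img.pixels[y * img.width + x + 1] - img.pixels[y * img.width + x];
                score += double(dx) * dx;
            }
        return score;
    }

    // Keep only the least-shaken capture from a burst.
    size_t pickSharpest(const std::vector<GrayImage>& frames)
    {
        size_t best = 0;
        for (size_t i = 1; i < frames.size(); ++i)
            if (sharpness(frames[i]) > sharpness(frames[best])) best = i;
        return best;
    }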

CFP: Game Development Tools 2

Passing this along, from Marwan Ansari. “Real” blogging again soon…

Now that the first volume of Game Development Tools has gone to the printers and will be available shortly, we invite you to submit a proposal for an innovative article to be included in a forthcoming book, Game Development Tools 2, which will be edited by Marwan Y. Ansari and published by CRC Press/A. K. Peters. We expect to publish the volume in time for GDC 2012.

We are open to any tools articles that you feel would make a valuable contribution to this book.

Some topics that would be of interest include:

· Content Pipeline tools (creation, streamlining, management)

· Graphics/Rendering tools

· Profiling tools

· Collada import/export/inspection tools

· Sound tools

· In-Game debugging tools

· Memory management & analysis tools

· Console tools (single and cross platform)

· Mobile Device (phone/tablet) tools

This list is not meant to be exclusive and other topics are welcome.

The schedule for the book is as follows:

July 1 – All proposals in.
July 18th – Authors are informed and begin writing articles.
Aug 19th – First draft in to editor.
Sept 16th – Drafts sent back to authors with notes for final draft.
Oct 15th – Final articles in to editor.
Dec 1st – Final articles to publisher (A K Peters).
GDC 2012 – Book is released.

Please send proposals using this form to: marwan at gamedevelopmenttools dot com.

CFP: IEEE CG&A special issue on material appearance

Passing on the word:

IEEE CGA special issue

Modeling and Rendering Material Appearance

Final submissions due: 1 July 2011
Publication date: March/April 2012

Modeling and rendering the appearance of materials is important in many computer graphics applications. Understanding material appearance draws on methods from diverse fields including the physics of light interaction with material (including models of BRDF, bidirectional reflectance distribution functions, and BSSRDF, bidirectional subsurface scattering reflection distribution functions), human perception of materials, and efficient data structures and algorithms.

This special issue will cover all aspects of material appearance in graphics, ranging from theory to application. Possible topics include (but are not limited to)

  • first-principle models for BRDF and BSSRDF;
  • procedural models of materials;
  • modeling of mesoscale material features including bumps, ridges, and so on;
  • measurement of material appearance including BRDF, BSSRDF, and BTF (bidirectional texture functions);
  • numerical simulation of material appearance;
  • new instruments for measuring appearance;
  • material-appearance models from photo collections;
  • new data structures for representing material appearance;
  • efficient rendering of BTF and BSSRDF;
  • new interfaces for designing material appearance;
  • methods for printing hard copies of material appearance;
  • psychophysics of material appearance with application to computer modeling;
  • material-appearance applications in industry such as the design of paints and coatings; and
  • nonphotorealistic rendering of material appearance.

Questions?

Contact Holly Rushmeier (holly@acm.org) or Pierre Poulin (poulin@iro.umontreal.ca)

Submission Guidelines

Articles should be no more than 8,000 words, with each figure counting as 200 words. Cite only the 12 most relevant references, and consider providing technical background in sidebars for nonexpert readers. Color images are preferable and should be limited to 10. Visit CG&A style and length guidelines at www.computer.org/cga/author.html.

Please submit your article using the online manuscript submission service at https://mc.manuscriptcentral.com/cs-ieee. When uploading your article, select the appropriate special-issue title under the category “Manuscript Type.” Also include complete contact information for all authors. If you have any questions about submitting your article, contact the peer review coordinator at cga-ma@computer.org.

Awards Season Roundup

There have been a lot of awards recently of interest to readers of this blog; I thought it would be useful to provide an overview, as well as to cover some of the more obscure awards.

The Oscars are by far the most well-known awards, bestowed annually since 1929 by the Academy of Motion Picture Arts and Sciences. Oscars are voted on by Academy voting members (who total about 6,000) in the relevant disciplines (e.g. directors vote for Best Director, actors for Best Actor, etc.). The Scientific and Technical Awards are especially notable; they are given in a separate ceremony in early February, two weeks before the main awards ceremony celeb-fest.

Although some of the Sci-Tech awards are still for “analog” stuff like camera mounts, lenses, and film emulsions, in recent times most of the honored developments have been digital. All except three of this year’s winners were for digital advances (the other three were for computer-controlled camera and prop cable suspension systems, so partially digital as well).

The most directly relevant award was for a development described in a SIGGRAPH paper (the 2004 paper, “An Approximate Global Illumination System for Computer Generated Films”, was even mentioned in the award text). The award was given to Eric Tabellion and Arnauld Lamorlette, “for the creation of a computer graphics bounce lighting methodology that is practical at feature film scale”. This technique (as described in the 2004 paper) is a fast one-bounce GI method that uses interesting approximations for both geometry and surface material. The paper is well worth reading; the technique was highly influential for film rendering and some of the ideas are relevant for real-time rendering as well.

Another computer graphics-related award was given to Dr. Mark Sagar “for his early and continuing development of influential facial motion retargeting solutions”. Dr. Sagar pioneered the use of the Facial Action Coding System (FACS) for film production, starting with Monster House and supervising its use on King Kong and Avatar; this system is now widely used, with growing adoption by the game industry as well. Dr. Sagar also won an Academy Sci-Tech award last year, for his work on Light Stage.

As seen in the cable suspension case, Academy Sci-Tech awards tend to come in “clumps”. As a particular technology area is recognized as important by the Academy, several different groups who did important work in that area receive awards in one year. For example, a bunch of last year’s Sci-Tech awards were related to the digital intermediate (DI) process. The biggest “clump” this year was for another graphics-related topic: render queues (software used to manage render farms, the earliest – and still most widespread in film production – form of parallel graphics processing).

The final computer graphics-related SciTech award was given to Tony Clark, Alan Rogers, Neil Wilson and Rory McGregor “for the software design and continued development of cineSync, a tool for remote collaboration and review of visual effects” (cineSync is developed and sold by Rising Sun Research).

Some of the main Academy Awards (announced in late February) are also of interest to readers of this blog; there’s a lot of information about these awards out there so I’ll just mention the winners for Visual Effects (Inception), Animated Feature Film (Toy Story 3), and Animated Short Film (The Lost Thing).

The closest video game equivalent to the Oscars are the Interactive Achievement Awards, bestowed annually by the Academy of Interactive Arts and Sciences at the annual D.I.C.E. Summit in early February. Similarly to the Oscars, they are voted for by registered AIAS members, who must be working in the appropriate game development discipline to vote on a given award. This year’s awards of interest to readers of this blog include: Outstanding Achievement in Animation (God of War III), Outstanding Achievement in Art Direction (Red Dead Redemption), and Outstanding Achievement in Visual Engineering (Heavy Rain).

The Game Developers Choice Awards are also prestigious, and are bestowed at the annual Game Developers Conference (which takes place in late February or early March). One must be a registered member of the Gamasutra website (owned by United Business Media, which also owns the Game Developers Conference) to nominate or vote, and the advisory committee which oversees the process is chosen by the editors of Game Developer Magazine (also owned by United Business Media) and Gamasutra. The Game Developers Choice Awards are thus unusual in being managed by a for-profit corporation rather than a nonprofit professional organization. This year’s awards of interest: Best Technology (Red Dead Redemption), and Best Visual Arts (Limbo).

Regarding video game awards, one notable event that happened this year (on February 12th) was the first Grammy award for music composed for a video game. The Grammys don’t have a dedicated award for video game music – this award was for a song, Baba Yetu, originally composed by Christopher Tin for Civilization IV and released on the 2009 album Calling All Dawns (which itself won a Grammy in addition to the award for Baba Yetu).

The awards given by the British Academy of Film and Television Arts (BAFTA) are almost as well-known in the UK as the Oscars are in the US. BAFTA gives awards for TV shows and video games as well as movies.

The British Academy Film Awards were held in mid-February. Awards of interest: Animated Film (Toy Story 3), Short Animation (The Eagleman Stag – which was stop-motion, not CG), and Special Visual Effects (Inception).

One of the British Academy Television Craft Awards was of interest: Visual Effects (The Day of the Triffids).

The British Academy Video Game Awards were held in mid-March. Awards of interest: Artistic Achievement (God of War III), and Technical Innovation (Heavy Rain). A minor controversy erupted after Red Dead Redemption did not win any awards – it turns out that it was not entered by the developers (Rockstar Games), most likely for reasons related to a perceived snub that Grand Theft Auto IV (also developed by Rockstar) received in the 2009 awards.

The last set of awards I will discuss are perhaps the most directly relevant for this blog, though not as well-known as the ones previously mentioned. The Visual Effects Society (VES) is a professional organization representing practitioners in visual effects and computer-generated animation for TV, film and video games. Among their other activities, they host the VES Awards every year in early February. Due to these awards’ focus, most of them are of interest – the full list can be found here. I’ll highlight some of the most interesting awards categories, but first I wanted to mention this year’s VES Lifetime Achievement Award recipient, Ray Harryhausen. Harryhausen is a giant in the field; his pioneering stop-motion effects work on many films, from Mighty Joe Young (1949) to Clash of the Titans (1981), inspired many of today’s most prominent filmmakers. I’ve been going through his films on recent weekends with the wife and kids; most of them are great fun and well worth seeing. I’m not sure why it took the VES nine years to recognize Harryhausen (even the Academy of Motion Picture Arts and Sciences, which snubbed him for the special effects Oscars throughout his career, finally awarded him the Gordon E. Sawyer Award in 1992).

This year’s VES video game award winners: Outstanding Real-Time Visual Effects in a Video Game (Halo: Reach; presented to Marcus Lehto, Joseph Tung, Stephen Scott, and CJ Cowan from Bungie – two clips related to the submission are available on YouTube: “work to be considered” clip, “before and after” clip), Outstanding Animated Character in a Video Game (StarCraft II – Sarah Kerrigan; presented to Fausto De Martini, Xin Wang, Glenn Ramos, and Scott Lange from Blizzard), Outstanding Visual Effects in a Video Game Trailer (World of Warcraft – for the Cataclysm cinematic; presented to Marc Messenger and Phillip Hillenbrand, Jr. from Blizzard).

Notable VES feature film-related awards: VES Visionary Award (Christopher Nolan), Outstanding Visual Effects in a Visual-Effects Driven Feature Motion Picture (Inception), Outstanding Supporting Visual Effects in a Feature Motion Picture (Hereafter), Outstanding Animation in an Animated Feature Motion Picture (How to Train Your Dragon), Outstanding Achievement in an Animated Short (Day & Night), Outstanding Animated Character in a Live Action Feature Motion Picture (Harry Potter and the Deathly Hallows: Part 1 – Dobby), Outstanding Animated Character in an Animated Feature Motion Picture (How to Train Your Dragon – Toothless), Outstanding Effects Animation in an Animated Feature Motion Picture (How to Train Your Dragon).

Recognition of exceptional work is an important part of the advancement of any professional field; it’s good to see that the field of computer graphics is so well-covered in this respect.

SIGGRAPH 2011 Preliminary Course List

A partial, early list of SIGGRAPH 2011 courses has recently been published. SIGGRAPH has published such preliminary lists in previous years, typically representing around a third to a half of the final course list.

The list includes six very promising courses:

  1. Advances in Real-Time Rendering in Games: Part I – this is the next iteration in a course series, organized by Natalya Tatarchuk, that has been presented at SIGGRAPH every year (with new content) since 2006. This course has been a highlight of every SIGGRAPH it has appeared in and I’m pleased to see it coming back. The instructors are not yet listed, but Natasha has always been able to round up a top-notch speaker roster, and I am confident she will do so again this year. “Advances…” has always been a full-day course, though since 2008 (when SIGGRAPH canceled the full-day course format) it’s been divided into two half-day courses. Only one of the two halves appears on this list; hopefully this is a simple oversight and SIGGRAPH didn’t reject the other half of the course!
  2. Character Rigging, Deformations, and Simulations in Film and Game Production – I’m always happy to see “X in film and games”-type courses. If well-organized and presented, such courses detail the current cutting-edge of actual production practice in both industries, emphasizing interesting differences and commonalities between the two. Such crossover content is an important feature of SIGGRAPH not found in industry-specific conferences like GDC. The topic is important; many games don’t put enough of an emphasis on animation quality. The speaker list is strong, including Tim McLaughlin (a graphics researcher at Texas A&M University who also has a nice body of film VFX work he did at ILM), Larry Cutler (a character technical director at Dreamworks Animation, formerly at Pixar), and David Coleman (a Senior CG Supervisor at Electronic Arts Canada, where he leads the EA Sports rigging team).
  3. Cinematography: The Visuals & the Story – I’m very happy to see this course on the list. I have become increasingly fascinated with cinematography over the last few years; there is a lot that video games can learn from cinematography, from creative topics like lighting and composition to technical ones such as depth of field and tone mapping. This course is taught by Bruce Block, a film producer and visual consultant who wrote a very well-regarded and influential book called The Visual Story, about how visual structure is used to present story in film. I’m trying to get a course put together for next year which would cover the topic from a different angle, as presented by working film cinematographers; the two courses should make a nicely complementary pair.
  4. Destruction and Dynamics for Film and Game Production – Another “X in film and games” course on a key topic, organized by Erwin Coumans (AMD; formerly at SCEA R&D, Havok and Guerrilla Games). Erwin is the creator of the open-source Bullet Physics engine, which has been used in many films and games. Other speakers include Takahiro Harada (a GPU physics researcher at AMD, formerly Havok and the University of Tokyo), Nafees Bin Zafar (a senior production engineer at DreamWorks Animation who won an Academy Scientific & Engineering Award for his fluid simulation work at Digital Domain), Mark Carlson (an FX R&D programmer at DreamWorks Animation, formerly at Disney Animation), Brice Criswell (a senior software engineer at ILM), Michael Baker (no affiliation listed – I’m guessing it’s the Michael Baker who teaches at the Art Institute of Las Vegas and develops tools for the Dynamica Bullet Maya plugin), and Erin Catto (a principal software engineer at Blizzard who also developed the very widely used Box2D open source 2D physics engine).
  5. PhysBAM: Physically Based Simulation – Another physics course, but with a different emphasis. It focuses on the PhysBAM simulation library developed at Stanford University and used by ILM, Disney Animation, and Pixar. Parts of PhysBAM are already open source – since the course webpage refers to “the soon-to-be-released simulation library PhysBAM”, presumably the rest will be available soon. The course is presented by Craig Schroeder (a PhD student at Stanford).
  6. Storytelling With Color – Anyone who saw my color course last year knows that I believe that getting the technical side of color right is important, for both film and games. But the reason it is important comes from the creative side – the way that a selection of colors can drive story or establish a mood. This course covers that topic, and should be of great interest to many game developers. It will be presented by Kathy Altieri (a production designer at DreamWorks Animation who worked on films including The Prince of Egypt, Over the Hedge, and How to Train Your Dragon, and previously at Disney Animation on The Little Mermaid, Aladdin, and The Lion King).

If the rest of the content is nearly as good as this preliminary set of courses appears to be, SIGGRAPH 2011 will be a conference to remember!

SIGGRAPH Asia 2011 Call for Submissions

The call for submissions for SIGGRAPH Asia 2011 has recently gone live. This fourth iteration of the SIGGRAPH Asia conference will take place in Hong Kong between December 12th and 15th. In previous years, the sketches and course programs have been of similar quality (if reduced quantity) compared to their North American counterparts. The SIGGRAPH Asia Technical Papers have been really good, better in my opinion than the relatively abstruse SIGGRAPH Technical Papers. If you want to see for yourself, the incomparable Ke-Sen Huang has your back, with paper link pages for SIGGRAPH Asia 2008, 2009 and 2010. Ke-Sen deserves an outstanding service award from ACM, instead of the more negative attention he has received from them.

Here is the 2011 CFS text (a slightly more detailed version can be found here):

SIGGRAPH Asia 2011 sees the return of the Art Gallery and Emerging Technologies programs. Also calling for submissions are: Computer Animation Festival, Courses, Technical Papers, Technical Sketches & Posters.

Submit your research, theories, and innovations and you might be the next to have the valuable opportunity to present your work to audience-packed halls at SIGGRAPH Asia 2011 conference in Hong Kong this December.

For more information on SIGGRAPH Asia 2011, please visit www.siggraph.org/asia2011/.

I3D 2011 Report – Part III: Banquet Talk

GDC has put a bit of a hiatus in my I3D posts; I’d better get them done soon so I can move on to the GDC posts.

This post describes a talk that David Luebke (Director of Research at NVIDIA) gave during the I3D banquet titled GPU Computing: Past, Present, and Future. Although the slides for the I3D talk are not available, parts of this talk appear to be similar to one David gave a few months ago, which does have video and slides available.

I’ll summarize the talk here; anyone interested in more detail can view the materials linked above.

The first part of the talk (which isn’t in the earlier version) covered the “New Moore’s Law”: computers no longer get faster, just wider, so algorithms must be re-thought to be parallel. David showed examples of several scientists who got profound speedups – from days to minutes. He covered several different techniques; I’ll summarize the four most notable:

  1. A “photonic fence” that zaps mosquitoes with lasers, to reduce the incidence of malaria in third world countries. This application needs fast computer vision combined with low power consumption, which was achieved by using GPUs.
  2. A military vehicle which detects Improvised Explosive Devices (IEDs) using computer vision techniques. The speedup afforded by using GPUs enables the vehicle to drive much faster (an obvious advantage when surrounded by hostile insurgents) while still reliably detecting IEDs.
  3. A method for processing CT scans that enables much reduced radiation exposure for the patient. When running on CPUs, the algorithm was impractically slow; GPUs enabled it to run fast enough to be used in practice.
  4. A motion compensation technique that enables surgery on a beating heart. The video of the heart is motion-compensated to appear static to the surgeon, who operates through a surgical robot that translates the surgeon’s motions into the moving frame of the heart.

David started the next part of the talk (which is very similar to the earlier version linked above) by covering the heritage of GPU computing, tracing three separate historical threads: graphics hardware, supercomputing, and finally GPU computing itself.

The “history of graphics hardware” section started with a brief mention of a different kind of hardware: Dürer‘s perspective machine. The history of electronic graphics hardware started with Ivan Sutherland’s SketchPad and continues through the development of the graphics pipeline by SGI: Geometry Engine (1982), RealityEngine (1993), and InfiniteReality (1997). In the early days, the graphics pipeline was an actual description of the physical hardware structure: each stage was a separate chip or board, with the data flow fixed by the routing of wires between them. Currently, the graphics pipeline is an abstraction; the stages are different threads running on a shared pool of cores, as seen in modern GPU designs such as the GeForce 8, GT200, and Fermi.

The second historical thread was the development of supercomputers. David covered the early development of three ways to build a parallel machine: SIMD (Goddard MPP, Maspar MP-1, Thinking Machines CM-1 and CM-2), hardware multithreading (Tera MTA) and symmetric multiprocessing (SGI Challenge, Sun Enterprise) before returning to Fermi as an example of a design that combines all three.

“GPU computing 1.0” was the use (or abuse) of graphics pipelines and APIs to do general-purpose computing, culminating with BrookGPU. CUDA ushered in “GPU computing 2.0” with an API designed for that purpose. The hardware supported branching and looping, and hid thread divergence from the programmer. David claimed that now GPU computing is in a “3.0” stage, supported by a full ecosystem (multiple APIs, languages, algorithms, tools, IDEs, production lines, etc.). David estimated that there are about 100,000 active GPU compute developers in the world. Currently CUDA includes features such as “GPU Direct” (direct GPU-to-GPU transfer via a unified address space), full C++ support, and a template library.

The “future” part of the talk discussed the workloads that will drive future GPUs. Besides current graphics and high performance computing workloads, David believes a new type of workload, which he calls computational graphics, will be important. In some cases this will be the use of GPU compute to improve (via better performance or flexibility) algorithms typically performed using the graphics pipeline (image histogram analysis for HDR tone mapping, depth of field, bloom, texture-space diffusion for subsurface scattering, tessellation), and in others it will be to perform algorithms for which the graphics pipeline is not well-suited: ray tracing, stochastic rasterization, or dynamic object-space ambient occlusion.

David believes that the next stage of GPU computing (“4.0”) poses challenges to APIs (such as CUDA), to researchers, and to the education community. CUDA needs to be able to elegantly express programming models beyond simple parallelism, it needs to better express locality, and the development environment needs to improve and mature. Researchers need to foster new high-level libraries, languages, and platforms, as well as rethinking their algorithms. Finally, computer science curricula need to start teaching parallel computing in the first year.

Seven Things for March 10th, 2011

I’m back from a NYC trip (highlight: went to the taping of the Jimmy Fallon show and saw Snooki & Laurie Anderson – now there’s a combo; if only they had collaborated) and a San Francisco trip (highlights: the Autodesk Gallery – open to the public Wednesday afternoons – plus the amusingly-large and glowing heatsink on a motherboard at the NVIDIA GDC reception). So, it’s time to write down seven other cool things.

  • A convincing translucency effect was presented at GDC by the DICE guys (there’s precomputation involved, but it looks wonderful); Johan Andersson has a rundown of other DICE presentations. Other presentation lists include ones from NVIDIA and Intel, which I need to chew through sometime soon.
  • Vincent Scheib has a quick GDC report, and a presentation on HTML 5 and other browser technologies (e.g. WebGL), with a particular interest in the handheld market. Vincent mentions the Unreal GDC demo, which is pretty amazing.
  • Intel has a nice shadows demo, showing the various tradeoffs with cascaded and exponential variance shadow maps. It compiled out of the box for me, and there’s lots to try out. My only disappointment was that Lauritzen et al.’s clever shadow tricks are not demonstrated in it! Their basic approach centers on a prepass of the scene: they get tight bounds on the near and far view planes by finding the min and max depths, and tighten the shadow maps’ frustums around the visible points. Simple and clever, it gives large improvements in shadow quality in real scenes, and is relatively easy to implement or add to existing systems. (thanks to Mauricio Vives)
  • Feed43: This is a nice little idea. It tracks any web page you want, and you specify what is considered a change to the page. When a change is detected, you’re given an RSS ping. Best part is, you can share any RSS feed created with everyone. Examples: Ke-Sen Huang’s great conference paper list, and The Ray Tracing News. If you make a good feed, let me know and I’ll pass it on here. (thanks to Iliyan Georgiev)
  • This one’s old, but it’s a great page and I found it worthwhile, a discussion of gamma correction and text rendering. The surprising conclusion is that gamma alone doesn’t work nicely for text (it does wonders for line antialiasing, as I hope you know: compare uncorrected vs. corrected, and see the little blending sketch after this list). It turns out that things like TrueType’s hinting have been tuned such that antialiasing and gamma correction can be detrimental.
  • An interesting tidbit from the government report “Designing a Digital Future”: page 71 has a striking section. A sample quote: “performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed.” They give a numerical algorithms example where hardware gave a 1000x gain and algorithms gave a 43,000x gain – 43 times as much. (thanks to Morgan McGuire)
  • My Minecraft addiction has died down a fair bit (“just one more project…”), but I was happy to see Notch make a blog post with some technical chew, with more posts to come. He talks about a problem many apps are starting to run into: how to deal with precision problems when the terrain space is large. His solution for now is “it’s a feature!”, which actually kinda makes sense for Minecraft. He also starts to describe his procedural terrain generation algorithm.
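
And here is the little gamma blending example promised above (my own numbers, using a plain 2.2 power curve rather than the exact sRGB formula): compositing a black line over white at 50% coverage directly on gamma-encoded values displays far too dark, while blending in linear light and re-encoding gives the intended half intensity.

    #include <cmath>

    float toLinear(float c)  { return std::pow(c, 2.2f); }         // decode (approximate)
    float toEncoded(float c) { return std::pow(c, 1.0f / 2.2f); }  // re-encode for display

    int main()
    {
        float coverage = 0.5f, line = 0.0f, background = 1.0f;

        // Wrong: blend the encoded values directly -> 0.5 encoded, only ~22% linear light.
        float naive = coverage * line + (1.0f - coverage) * background;

        // Right: blend in linear light, then re-encode -> ~0.73 encoded, 50% linear light.
        float correct = toEncoded(coverage * toLinear(line) + (1.0f - coverage) * toLinear(background));

        return (naive < correct) ? 0 : 1;   // the naive result is visibly darker
    }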