- RapidMind is now a part of Intel, which I think is a good fit; RapidMind has developed some interesting multicore techniques. Anything that makes multicore programming easier for a larger group of people is all to the good.
- I3D 2010 Call for Participation. October 23rd for papers, December 18th for posters – submit! I3D will be in Washington, DC, February 19-21.
- Dirty Coding Tricks in Gamasutra. A reprint of a Game Developer article, I’m glad to see it online – it’s pretty amusing in places. The hack at the top of page 2 is my favorite.
- Carmack on the iPhone 3GS. Now the iPhone begins to face the same problem PC developers have to deal with: different levels of GPU support. At least there will be only one line of Apple phones, vs. multiple IHVs with a wide range of offerings at different price points, etc.
- Mendeley Bibliographic Database. Manny Ko mentioned this one to me, and it looks interesting: a number of tools to pull in PDFs, get their references, let you annotate them, etc.
- O’Reilly’s author guide. A surprisingly open and honest guide to how you as an author deal with publishers, specifically O’Reilly. Many publishers consider their contracts and royalty terms to be trade secrets, so it’s refreshing to see this information given in a straightforward manner.
- aM Laboratory ToneMatrix. Pretty fun, and it reminds me of an experimental device seen at SIGGRAPH a few years ago. To see the most elaborate (and beautiful) web app ever, click on the AudioTool link, Start AudioTool, then pick DrumNBass. Scroll around. Click the menu in the lower left corner. Amazing.
Visual Stuff
After all the heavy lifting Naty’s been doing in covering conferences, I thought I’d make a light posting of fun visual stuff.
The first one’s not particularly visual; I include it because the cover and book description were put on the web just a few days ago:
In short, the ShaderX series has moved publishers, from Charles River Media to A.K. Peters. Unfortunately for everyone else in the world, CRM retains the rights to the ShaderX name, hence the confusing rename. This book is ShaderX8, under a new title.
This resource is possibly handy: a map of game studios and educational institutions, searchable by state, city, etc. That said, it’s a bit funky: search by “Massachusetts” and you get a few reasonable hits, plus the Bermuda Triangle. Search on “MA” for State and you get lots of additional hits, mostly mall stores. But, major developers like Harmonix (in Cambridge) don’t show up. So, take it with a grain of salt, but it might be handy in turning up a place or two you might not have found otherwise.
A few weeks back I passed on a link from Morgan McGuire’s worthwhile Twitter blog (the only good use I’ve seen for Twitter so far) for a business-card sized ray tracer created by Andrew Kensler. In case you were too busy to actually compile and run this tiny piece of code, here’s the answer, computed in about a minute, sent on to me by Mauricio Vives. Note the depth of field and soft shadows:
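For readers who would rather see the core ideas spelled out than decipher the densely packed original, the heart of any such minimal tracer is just ray-sphere intersection plus simple shading. Here is an illustrative Python sketch of those two pieces (my own code, not Kensler's; his adds the depth of field and soft shadows by jittering ray origins and light positions per sample, which I omit for brevity):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the ray, or None.
    Solves |o + t*d - c|^2 = r^2 for t; direction is assumed normalized."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None  # ray misses the sphere
    sq = math.sqrt(disc)
    for t in (-b - sq, -b + sq):  # nearest root first
        if t > 1e-4:              # epsilon avoids self-intersection
            return t
    return None

def trace(origin, direction, spheres, light):
    """Shade the nearest sphere hit with simple Lambertian lighting;
    returns 0.0 (background) on a miss."""
    best = None
    for center, radius in spheres:
        t = ray_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, radius)
    if best is None:
        return 0.0
    t, center, radius = best
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    n = tuple((h - c) / radius for h, c in zip(hit, center))
    l = tuple(lp - h for lp, h in zip(light, hit))
    norm = math.sqrt(sum(x * x for x in l))
    l = tuple(x / norm for x in l)
    return max(0.0, sum(a * b for a, b in zip(n, l)))
```

Wrapping trace in a loop that maps each pixel to a viewing ray produces an image; depth of field and soft shadows then fall out of averaging many jittered samples per pixel.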
Speaking of ray tracing, I noticed some GPU-side ray tracers are available for iPhone 3GS from Angisoft:
With the recent posting on Morphological Antialiasing, Matt Pharr pointed me at this cool Wikipedia page on scaling up pixel art. To whet your appetite, here’s an example from that page, the left side being the original image used to generate the right:
In a similar vein, I was highly impressed by the examples created by Potrace, a free, GPL’ed package for deriving Bézier curves from raster images. Here’s an example:
See more examples on Peter Selinger’s Potrace examples page. Doubly impressive is that Peter also carefully describes the algorithms used in the process.
I enjoy collecting images of reality that look like they have rendering artifacts. Here’s one from photos by Morgan McGuire, from the Seattle public library. The ground shadows look undersampled and banded, as if someone tried to get soft shadows by just adding a bunch of point light sources. What’s great is that reality is allowed to get away with artifacts – if this effect were seen in a synthetic image it would come across as unconvincing.
The best thing about reality is that it’s real, not photoshopped. I also enjoy photos where reality looks like computer graphics. Here’s a fine example by Benedict Radcliffe from this entertaining collection:
My one non-visual link for this posting is to Jos Stam’s essay on how photography and photorealism is not necessarily the best way to portray reality.
There are tons of visual toys on the web, a few in true 3D. Some (sent on by John McCormack) I played with for up to a whole minute or more: ECO ZOO – click on everything and know it’s all 3D, don’t forget to rotate around; the author’s bio and info are at ROXIK – needs more polygons, but click and drag on the face. In the end, give your eyes a rest with this instant screen saver (actually, it’s also a bit interactive). This last was done using Papervision3D, an open source library which controls 3D in Flash. More demos here. Maybe there’s actually something to this idea of 3D on the web after all… nah, crazy dream.
OK, I’m done with things that are in some way vaguely, almost educational. Here’s a video, 8 Bit Trip, that’s been making the rounds; a little more info here. Not fantastically entertaining, but I admire the amazing dedication to stop motion animation. 1500 hours?!
Art: Xia Xiaowan makes sculptures by a method reminiscent of volume rendering techniques:
More at Google Images.
The Mighty Optical Illusions blog is a great place to get a feed of new illusions. Here are two posts I particularly liked: spinning man (sorry, you’ll actually have to click that link to see it) and more from Kitaoka, e.g.
I love that new illusions are being developed all the time nowadays. I found this next one here; unfortunately, to quote Tom Parmenter, “digital technology is the universal solvent of intellectual property rights” (Copyright 1995). No credit is given at that site, so I don’t know who actually made this one, but it’s lovely:
One last illusion, from here (again, author unknown), included since it’s such a retina-burner:
If you hanker for something real and physical after all these, you might consider making a pseudoscope (instructions here). To be honest, I tried, and I’ll tell you that mirrors from the local craft store are truly bad for this project. So, I can’t say I’ve seen the effect desired yet. Next step for me is finding a good, cheap store for front surface mirrors (the link in the article is broken) – if anyone has suggestions, please let me know.
Pacific Graphics 2009 Papers
The incomparable Ke-Sen Huang has put up yet another graphics conference papers page, this time for Pacific Graphics 2009. The full papers list is up, with a good number of preprints linked. Pacific Graphics has had many interesting papers over the years; this year continues the tradition. HPCCD: Hybrid Parallel Continuous Collision Detection proposes an interesting combined CPU-GPU collision detection algorithm. Procedural Generation of Rock Piles using Aperiodic Tiling uses an aperiodic tiling method (similar to a 3D version of Wang Tiles) to enable the quick placement of large numbers of random-seeming rocks. The technique described in Fast, Sub-pixel Antialiased Shadow Maps is quite expensive (it improves quality and adds cost to the already-costly irregular z-buffer technique), but it is worth looking at for applications running on advanced hardware (in other words, not for current-gen games platforms). Finally, Interactive Rendering of Interior Scenes with Dynamic Environmental Illumination presents a PRT-like approach to render complex interiors lit by arbitrary environmental lighting coming in through windows.
Three more papers lack online preprints or abstracts (at the moment) but have promising titles: Procedural Synthesis using Vortex Particle Method for Fluid Simulation, Linkless Octree Using Multi-Level Perfect Hashing, and The Dual-microfacet Model for Capturing Thin Transparent Slabs.
SIGGRAPH 2009 Encore
SOMA Media has been providing (for a fee) video of SIGGRAPH presentations for the last few years; this service (SIGGRAPH Encore) is invaluable for people who could not attend the conference. Even if you did attend, it is great to be able to see presentations you had to miss (SIGGRAPH has a lot of overlap), or even to revisit presentations you were able to see. The videos include the speaker’s voice synchronized with their screen feed, including slides as well as any demos they were showing – it is almost as good as being in the room.
Encore has part of SIGGRAPH 2003, as well as most of 2004, 2005, 2007 and 2008 (I’m not sure what happened to 2006). As of yesterday, SIGGRAPH 2009 is available as well. This includes Courses, Talks, and Computer Animation Festival presentations, as well as Technical, Game and Art Paper presentations. However, not all sessions are available; as in previous years, some needed to be removed for copyright or other reasons. Unfortunately, some of the omissions include key content like the Beyond Programmable Shading course and the second half of the Advances in Real-Time Rendering in 3D Graphics and Games course. I will list the available presentations; if you see stuff you like, it might be worth purchasing the relevant presentation videos. Individual videos cost between $10 and $20; the entire 2009 set is $300. Presentations which I think are most relevant to readers of this site will be marked in bold.
The available courses are: “The Whys, How Tos, and Pitfalls of User Studies“, Introduction to Computer Graphics, Advances in Real-Time Rendering in 3D Graphics and Games I, Point Based Graphics – State of the Art and Recent Advances, Color Imaging, Real-Time Global Illumination for Dynamic Scenes, Acquisition of Optically Complex Objects and Phenomena, Creating New Interfaces for Musical Expression, An Introduction to Shader-Based OpenGL Programming, Next Billion Cameras, Advanced Illumination Techniques for GPU Volume Raycasting, Scattering, Visual Algorithms in Post-Production, Interaction: Interfaces, Algorithms, and Applications, Shape Grammars, and The Making of ‘Shade Recovered’: Networked Senses at Play.
The missing courses are: Advances in Real-Time Rendering in 3D Graphics and Games II, Build Your Own 3D Scanner: 3D Photography for Beginners, Interactive Sound Rendering, Efficient Substitutes for Subdivision Surfaces, Beyond Programmable Shading (I and II), Visual Perception of 3D Shape, The Digital Emily Project: Photoreal Facial Modeling and Animation, Realistic Human Body Movement for Emotional Expressiveness, Computation & Cultural Heritage: Fundamentals and Applications, and Advanced Material Appearance Modeling.
The available talks are: Tablescape Animation: A Support System for Making Animations Using Tabletop Physical Objects, Teaching Animation in Second Life, Collaborative Animation Productions Using Original Music in an Unique Teaching Environment, MyWorld4D: Introduction to Computer Graphics with a Modeling and Simulation Twist, GPSFilm: Location-Based Mobile Cinema, Applying Color Theory to Creating Scientific Visualizations, BiDi Screen, Karma Chameleon: Jacquard-woven photonic fiber display, Generalizing Multi-Touch Direct Manipulation, Non-Linear Aperture for Stylized Depth of Field, PhotoSketch: A Sketch Based Image Query and Compositing System, Automatic colorization of grayscale images using multiple images on the Web, 2D and 3D Facial Correspondences via Photometric Alignment, Estimating Specular Roughness from Polarized Second Order Spherical Gradient Illumination, Motion Capture for Natural Tree Animation, Connecting the dots: Discovering what’s important for creature motion, Surface Motion Graphs for Character Animation from 3D Video, Methods for Fast Skeleton Sketching, Animation and Simulation of Octopus Arms in ‘The Night at the Museum 2’, Wildfire forecasting using an open source 3D multilayer geographical framework, Innovation in Animation: Exiting the Comfort Zone, “Building Bridges, Not Falling Through Cracks: what we have learned during ten years of Australian Digital Visual Effects Traineeships”, Genetic Stair, Computational Thinking through Programming and Algorithmic Art, Visual Zen Art: Aesthetic Cognitive Dissonance in Japanese Dry Stone Garden Measured in Visual PageRank, Spore API: Accessing a Unique Database of Player Creativity, Results from the Global Game Jam, Houdini in a Games Pipeline, well-formed.eigenfactor: considerations in design and data analysis, Multi-Touch Everywhere!, The Immersive Computer-controlled Audio Sound Theater: History and current trends in multi-modal sound diffusion, Radially-Symmetric Reflection Maps, Smoother 
Subsurface Scattering, Painting with Polygons, Volumetric Shadow Mapping, Bucket Depth Peeling, BVH for efficient raytracing of dynamic metaballs on GPU, Normal Mapping with Low-Frequency Precomputed Visibility, RACBVHs: Random-Accessible Compressed Bounding Volume Hierarchies, Rendering Volumes With Microvoxels, Multi-Layer Dual-Resolution Screen-Space Ambient Occlusion, Beyond Triangles: Gigavoxels Effects In Video Games, Single Pass Depth Peeling via CUDA Rasterizer, Design and self-assembly of DNA into nanoscale three-dimensional shapes, Computer-Mediated Performance and Extended Instrument Design, InTune: A Musician’s Intonation Visualization System, Adaptive Coded Aperture Projection, Projected Light Microscopy, High-Tech Chocolate: Exploring mobile and 3D applications for factories, Non-Reflective Boundary Condition For Incompressible Free Surface Fluids, See What You Feel – A Study in the Real-time Visual Extension in Music, Designing Instruments for Abstract Visual Improvisation, 2009 Japan Media Arts Festival Review, “Model-Based Community Planning, Decision Support, and Collaboration”, and “Universal Panoramas: Narrative, Interactive Panoramic Universes on the Internet“.
The missing talks are: “Synchronous Objects for One Flat Thing, Reproduced”, GreenLite Dartmouth: Unplug or the Polar Bear Gets It, Shooting ‘UP’: A Trip Through the Camera Structure of ‘UP’, From Pythagoras to Pixels: The Ongoing Trajectory of Visual Music, Modulated Feedback: The Audio-Visual Composition ‘Mercurius’, Visual Music and the True Collaboration of Art Forms and Artists, What Sound Does Color Make?, Exploring Shifting Ground: Creative Intersections Between Experimental Animation and Audio, An Efficient Level Set Toolkit for Visual Effects, Water-Surface Animation for ‘Madagascar: Escape 2 Africa’, Underground Cave Sequence for ‘Land of the Lost’, Creativity in Videogame Design as Pedagogy, Geometric Fracture Modeling in ‘Bolt’, Simulating the Balloon Canopy in ‘Up’, Fight Night 4: Physics-Driven Animation and Visuals, B.O.B.: Breaking Ordinary Boundaries of Animation in ‘Monsters vs. Aliens’, Empowering Audiences Through User-Directed Entertainment, Educate the Educator: Lessons Learned From the Faculty Education Programs at Rhythm & Hues Studios Worldwide, Bringing the Studio to Campus: A Case Study in Successful Collaboration Between Academia and Industry, The Evolution of Revolution of Design: From Paper Models and Beyond, Green From the Ground Up: Infrastructure Rehabilitation and Sustainable Design, Model Rebuilding for New Orleans Transportation, Making Pixar’s ‘Partly Cloudy’: A Director’s Vision, Hatching an Imaginary Bird, Rhino-Palooza: Procedural Animation and Mesh Smoothing, It’s Good to be Alpha, Venomous Cattle for ‘Australia’, Applying Painterly Concepts in a CG Film, From Pitchvis to Postvis: Integrating Visualization Into the Production Pipeline, The Light Kit: HDRI-Based Area Light System for ‘The Curious Case of Benjamin Button’, Interactive Lighting of Effects Using Point Clouds In ‘Bolt’, Composite-Based Refraction for Fur and Other Complex Objects on ‘Bolt’, Dense Stereo Event Capture for the James Bond Film ‘Quantum of Solace’, 
ILM’s Multitrack: A New Visual Tracking Framework for High-End VFX Production, Immersive and Impressive: The Impressionistic Look of Flower on the PS3, “Universal Panoramas: Narrative, Interactive Panoramic Universes on the Internet“, The Blues Machine, Real Time Live, Clouds With Character: ‘Partly Cloudy’, The Hair-Motion Compositor: Compositing Dynamic Hair Animations in a Production Environment, iBind: Smooth Indirect Binding Using Segmented Thin-Layers, Concurrent Monoscopic and Stereoscopic Animated Film Production, Pushing Tailoring Techniques To Reinforce ‘Up’ Character Design, The Net Effect: Simulated Bird-Catching in ‘Up’, Destroying the Eiffel Tower: A Modular FX Pipeline, Building Story in Games: No Cut Scenes Required, Real-Time Design Review and Collaboration for Global Infrastructure Projects, Sound Scope Headphones, Medial Axis Techniques for Stereoscopic Extraction, Realistic Eye Motion Using Procedural Geometric Methods, Practical Uses of a Ray Tracer for ‘Cloudy With a Chance of Meatballs’, Making a Feature-Length Animated Movie With a Game Engine, and Practical Character Physics For Animators.
Almost all the Technical Papers presentations are available. The following are missing: Light Warping for Enhanced Surface Depiction, How Well Do Line Drawings Depict Shape?, Detail-Preserving Continuum Simulation of Straight Hair, and Generating Photo Manipulation Tutorials by Demonstration. Also, two of the ToG papers (2D Piecewise Algebraic Splines for Implicit Modeling and A BSP-Based Algorithm for Dimensionally Nonhomogeneous Planar Implicit Curves With Topological Guarantees) were not presented at SIGGRAPH due to last-minute visa or illness issues.
All seven of the Art Paper presentations are available, as well as most of the Game Paper presentations. The following are missing: Inferred Lighting: Fast Dynamic Lighting and Shadows for Opaque and Translucent Objects, Experimental Evaluation of Teaching Recursion in a Video Game, and Cardboard Semiotics: Reconfigurable Symbols as a Means for Narrative Prototyping in Game Design.
Finally, Encore has video for a single panel: “The Future of Teaching Computer Graphics for Students in Engineering, Science, and Mathematics”.
ShaderX^2 Code Available for Download
With Wolfgang Engel’s blessing, I’ve added the code samples from both ShaderX2 books’ CD-ROMs as zip files and put links in the ShaderX guide. The code is hardly bleeding edge at this point, of course, but code doesn’t rot – there are many bits that are still useful. I’ve also folded most of the code addenda into the distributions themselves. The only exception at this point is Thomas Rued’s stereographic rendering shaders; in any case, more up-to-date information (and an SDK) is available from the company he works with, ColorCode 3-D.
Our book’s figures now downloadable for fair use
A professor contacted us about whether we had digital copies of our figures available for use on her course web pages for students. Well, we certainly should (and our publisher agrees), and would have done this a while ago if we had thought of it. So, after a few hours of copying and saving with MWSnap, I’ve made an archive of most of the figures in Real-Time Rendering, 3rd edition. It’s a 34 MB download:
http://www.realtimerendering.com/downloads/RTR3figures.zip
Update: preview and download individual figures on Flickr
Update: figures for the Fourth Edition are here.
This archive should make preparation a lot more pleasant and less time-consuming for instructors, vs. scanning in pages of our book or redrawing figures from scratch. Here’s the top of the README.html file in this archive:
These figures and tables from the book are copyright A.K. Peters Ltd. We have provided these images for use under the United States Fair Use doctrine (or similar laws of other countries), e.g., by professors for use in their classes. Not all of the book’s figures are included; only those created by the authors (directly, or by use of free demonstration programs, as listed below) or taken from public sources (e.g., NASA) are available here. Other images in the book may be reused under Fair Use, but are not part of this collection. It is good practice to acknowledge the sources of any images reused – a link to http://www.realtimerendering.com would, we suspect, be useful to students, and we have listed relevant primary sources below for citation. If you have questions about reuse, please contact A.K. Peters at service@akpeters.com.
I’ve added a link to this archive at the top of our main page. I should also mention that Tomas’ Powerpoint slidesets for a course he taught based on the second edition of our book are still available for download. The slides are a bit dated in spots, but are a good place to start. If you have made a relevant teaching aid available, please do comment and let others know.
SIGGRAPH Asia 2009 Papers – Micro-Rendering, RenderAnts, and More
A partial papers list has been up on Ke-Sen Huang’s SIGGRAPH Asia 2009 page for a while now, but I was waiting until either the full list was up or an interesting preprint appeared before mentioning it. Well, the latter has happened – a preprint and video are now available for the paper Micro-Rendering for Scalable, Parallel Final Gathering. It shares many authors (including the first) with one of the most interesting papers from last year’s SIGGRAPH Asia conference, Imperfect Shadow Maps for Efficient Computation of Indirect Illumination. Last year’s paper proposed a way to efficiently compute indirect shadowing by rendering a very large number of very low-quality shadowmaps, using a coarse point-based scene representation and some clever hole-filling. This year’s paper extends this occlusion technique to support full global illumination. Some of the same authors were recently responsible for another notable extension of an occlusion method (SSAO in this case) to global illumination.
RenderAnts: Interactive REYES Rendering on GPUs is another notable paper at SIGGRAPH Asia this year; no preprint yet, but a technical report is available. A technical report is also available for another interesting paper, Debugging GPU Stream Programs Through Automatic Dataflow Recording and Visualization.
No preprint or technical report, but promising paper titles: Approximating Subdivision Surfaces with Gregory Patches for Hardware Tessellation and Real-Time Parallel Hashing on the GPU.
Looking at this list and last year’s accepted papers, SIGGRAPH Asia seems to be more accepting of real-time rendering papers than the main SIGGRAPH conference. Combined with the strong courses program, it’s shaping up to be a very good conference this year.
Fundamentals of Computer Graphics, 3rd Edition
One bit of deja vu for me at SIGGRAPH this year was another book signing at the A K Peters booth. Last year’s SIGGRAPH had the signing for Real-Time Rendering; this year I was at the book signing for the third edition of Fundamentals of Computer Graphics. My presence at the signing was due to the fact that I wrote a chapter on graphics for games (this edition also has new chapters on implicit modeling, color, and visualization, as well as updates to the existing chapters). As in the case of Real-Time Rendering, I was interested in contributing to this book as a fan of the previous editions. Fundamentals is targeted as a “first graphics book” so it has a slightly different audience than Real-Time Rendering, which is meant to be the reader’s second book on the subject.
At the A K Peters booth I also got to try out the Kindle edition of Fundamentals (the illustrations in Real-Time Rendering rely on color to convey information, so a Kindle edition will have to wait for color devices). I haven’t jumped on the Kindle bandwagon personally (the DRM bothers me; when I buy something I like to own it), but I know people who are quite pleased with their Kindle (or iPhone Kindle application).
HPG 2009 Report
I got to attend HPG this year, which was a fun experience. At smaller, more focused conferences like EGSR and HPG you can actually meet all the other attendees. The papers are also more likely to be relevant than at SIGGRAPH, where the subject matter has become so broad that many papers hardly seem to relate to graphics at all.
I’ve written about the HPG 2009 papers twice before, but six of the papers lacked preprints, so it was hard to judge their relevance. With the proceedings, I can take a closer look. The “Configurable Filtering Unit” paper is now available on Ke-Sen Huang’s webpage, and the rest are available at the ACM digital library. The presentation slides for most of the papers (including three of these six) are available at the conference program webpage.
A Directionally Adaptive Edge Anti-Aliasing Filter – This paper describes an improved MSAA mode AMD has implemented in their drivers. It does not require changing how the samples are generated, only how they are resolved into final pixel colors; this technique can be implemented on any system (such as DX10.1-class PCs, or certain consoles) where shaders can access individual samples. In a nutshell, the technique inspects samples in adjacent pixels to more accurately compute edge location and orientation.
Image Space Gathering – This paper from NVIDIA describes a technique where sharp shadows and reflections are rendered into offscreen buffers, upon which an edge-aware blur operation (similar to a cross bilateral filter) is used to simulate soft shadows and glossy reflections. The paper was targeted for ray-tracing applications, but the soft shadow technique would work well with game rasterization engines (the glossy reflection technique doesn’t make sense for the texture-based reflections used in game engines, since MIP-mapping the reflection textures is faster and more accurate).
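As a rough illustration of the edge-aware blur involved, here is a toy cross bilateral filter in Python: each output pixel is a weighted average whose weights combine a spatial Gaussian with a Gaussian on differences in a separate guide image (depth, say), so the blur stops at guide discontinuities. The function and parameter names are mine, and this brute-force version is for clarity only, nothing like a production implementation:

```python
import numpy as np

def cross_bilateral_blur(image, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Blur `image` with weights driven by a separate `guide` image:
    a spatial Gaussian times a Gaussian on guide differences, so the
    blur does not cross guide (e.g. depth) discontinuities."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            acc = 0.0
            wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # spatial falloff
                        ws = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
                        # range falloff on the guide, not the image itself
                        diff = guide[ny, nx] - guide[y, x]
                        wr = np.exp(-(diff * diff) / (2.0 * sigma_r ** 2))
                        acc += ws * wr * image[ny, nx]
                        wsum += ws * wr
            out[y, x] = acc / wsum  # center pixel guarantees wsum > 0
    return out
```

With a flat guide this degrades to an ordinary Gaussian blur; with a depth step aligned to a hard shadow edge, the shadow edge survives the blur, which is exactly the behavior the soft-shadow trick relies on.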
Scaling of 3D Game Engine Workloads on Modern Multi-GPU Systems – Systems with multiple GPUs used to be extremely rare, but they are becoming more common (mostly in the form of multi-GPU cards rather than multi-card systems). This paper appears to do a thorough analysis of the scaling of game workloads on these systems, but the workloads used are unfortunately pretty old (the newest game analyzed was released in 2006).
Bucket Depth Peeling – I’m not a big fan of depth peeling systems, since they invest massive resources (rendering the scene multiple times) to solve a problem which is pretty marginal (order-independent transparency). This paper solves the multi-pass issue, but is profligate with a different resource – bandwidth. It uses extremely fat frame buffers (128 bytes per pixel).
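To put that footprint in perspective, a quick back-of-envelope calculation (the 1920×1080 target resolution is my choice for illustration, not the paper's):

```python
# Memory footprint of a 128-byte-per-pixel frame buffer at 1080p.
bytes_per_pixel = 128
width, height = 1920, 1080
buffer_mb = bytes_per_pixel * width * height / (1024 ** 2)
# About 253 MB for a single frame buffer -- and every rendered fragment
# reads or writes into it, which is where the bandwidth cost comes from.
```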
CFU: Multi-purpose Configurable Filtering Unit for Mobile Multimedia Applications on Graphics Hardware – This paper proposes that hardware manufacturers (and API owners) add a set of extensions to fixed-function texture hardware. The extensions are quite useful, and enable accelerating a variety of applications significantly (around 2X). Seems like a good idea to me, but Microsoft/NVIDIA/AMD/etc. may be harder to convince…
Embedded Function Composition – The first two authors on this paper are Turner Whitted (inventor of recursive ray tracing) and Jim Kajiya (who defined the rendering equation). So what are they up to nowadays? They describe a hardware system where configurable hardware for 2D image operations is embedded in the display device, after the frame buffer output. The system is targeted to applications such as font and 2D overlays. The method in which operations are defined is quite interesting, resembling FPGA configuration more than shader programming.
Besides the papers, HPG also had two excellent keynotes. I missed Tim Sweeney’s keynote (the slides are available here), but I was able to see Larry Gritz’s keynote. The slides for Larry’s keynote (on high-performance rendering for film) are also available, but are a bit sparse, so I will summarize the important points.
Larry started by discussing the differences between film and game rendering. Perhaps the most obvious one is that games have fixed performance requirements, and quality is negotiable; film has fixed quality requirements, and performance is negotiable. However, there are also less obvious differences. Film shots are quite short – about 100-200 frames at most; this means that any precomputation, loading or overhead must be very fast since it is amortized over so few frames (it is rare that any precomputation or overhead from one shot can be shared with another). Game levels last for many tens of thousands of frames, so loading time is amortized more efficiently. More importantly, those frames are multiplied by hundreds of thousands of users, so precomputation can be quite extensive and still pay off. Larry makes the point that comparing the 5-10 hours/frame which is typical of film rendering with the game frame rate (60 or 30 fps) is misleading; a fair comparison would include game scene loading times, tool precomputations, etc. The important bottleneck for film rendering (equivalent to frame rate for games) is artist time.
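The amortization point is easy to make concrete. A sketch with hypothetical but plausible numbers (the exact values are mine, chosen to match the rough figures above):

```python
# One hour of precomputation, amortized over the frames that benefit from it.
precompute_seconds = 3600.0

film_shot_frames = 150            # a film shot is roughly 100-200 frames
game_frames = 50_000 * 200_000    # frames per level x number of players

film_overhead_per_frame = precompute_seconds / film_shot_frames  # 24 s/frame
game_overhead_per_frame = precompute_seconds / game_frames       # sub-microsecond
```

The same hour of precomputation costs film 24 seconds per rendered frame, but costs the game well under a microsecond per displayed frame, which is why games can afford extensive offline work.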
Larry also discussed why film rendering doesn’t use GPUs; the data for a single frame doesn’t fit in video memory, rooms full of CPU blades are very efficient (in terms of both Watts and dollars), and the programming models for GPUs have yet to stabilize. Larry then discussed the reasons that, in his opinion, ray tracing is better suited for film rendering than the REYES algorithm used in Pixar’s Renderman. As background, it should be noted that Larry presides over Sony Pictures Imageworks’ implementation of the Arnold ray tracing renderer, which they are using to replace Renderman. An argument for replacing Renderman with a full ray-tracing renderer is especially notable coming from Larry Gritz; Larry was the lead architect of Renderman for some time, and has written one of the more important books popularizing it. Larry’s main points are that REYES has inherent inefficiencies, it is harder to parallelize, effects such as shadows and reflections require a hodgepodge of tricks, and once global illumination is included (now common in Renderman projects) most of REYES’s inherent advantages go away. After switching to ray tracing, SPI found that they need to render fewer passes, lighting is simpler, the code is more straightforward, and the artists are more productive. The main downside is that displacing geometric detail is no longer “free” as it was with REYES.
Finally, Larry discussed why current approaches to shader programming do not work that well with ray tracing; they have developed a new shading language which works better. Interestingly, SPI is making this available under an open-source license; details on this and other SPI open-source projects can be found here.
I had a chance to chat with Larry after the keynote, so I asked him about hybrid approaches that use rasterization for primary visibility and ray tracing for shadows, reflections, etc. He said such approaches have several drawbacks for film production. Having two different representations of the scene introduces the risk of precision issues and mismatches, rays originating under the geometry, etc. Renderers such as REYES shade on vertices, and corners and crevices are particularly bad as ray origins. Having to maintain what are essentially two separate codebases is another issue. Finally, once you use GI, the primary intersections are a relatively minor part of the overall frame rendering time, so it’s not worth the hassle.
In summary, HPG was a great conference, well worth attending. Next year it will be co-located with EGSR. The combination of both conferences will make attendance very attractive, especially for people who are relatively close (both conferences will take place in Saarbrücken, Germany). In 2011, HPG will again be co-located with SIGGRAPH.
HPG and SIGGRAPH: pix and links
Seven links to keep you busy while we digest HPG and SIGGRAPH:
- Pictures of HPG and SIGGRAPH – even though just about everyone at these conferences carries a camera of some sort, we almost never take pictures. I decided to try to photograph anyone I recognized this year.
- Tim Sweeney’s HPG keynote slides – I didn’t attend the keynote, unfortunately, but heard about it. Main takeaway for me is that programming these highly parallel machines is hard, and the more that IHVs can do to ease the burden and remove limitations the more successful they will be.
- While waiting for our HPG reports, read Steve Worley’s.
- The course notes for “Advances in Real-Time Rendering in 3D Graphics and Games” will be up in a few weeks, if not sooner. Crytek’s presentation is available at their website.
- The “Beyond Programmable Shading” course notes are available now. I particularly liked Johan Andersson’s talk, partially for the sheer complexity of it all. The various factors that affect making a game engine fast are a bit mind-boggling.
- The place to go for interactive ray tracing development information is the ompf.org forum.
- This was the first year ever that I didn’t attend the Electronic Theater. Well, I did attend the first half-hour (live real-time demos), but then found myself looking at my watch as colorful but meaningless things occurred on the screen. I think the fact that we could attend the E.T. without needing a ticket meant that I could keep putting it off and also wouldn’t feel I lost anything if I missed it. If SIGGRAPH had issued me a ticket for a specific night, I suspect I would have willingly stayed for all of it, not wanting to lose the value of the ticket. Psychology. All that said, the best colorful but meaningless real-time demo I saw was “DT4 Identity SA“, freeware which runs on a Mac and is quite charming.