
SIGGRAPH 2009 Encore

SOMA Media has been providing (for a fee) video of SIGGRAPH presentations for the last few years; this service (SIGGRAPH Encore) is invaluable for people who could not attend the conference. Even if you did attend, it is great to be able to see presentations you had to miss (SIGGRAPH has a lot of overlap), or even to revisit presentations you were able to see.  The videos include the speaker’s voice synchronized with their screen feed, including slides as well as any demos they were showing – it is almost as good as being in the room.

Encore has part of SIGGRAPH 2003, as well as most of 2004, 2005, 2007 and 2008 (I’m not sure what happened to 2006).  As of yesterday, SIGGRAPH 2009 is available as well.  This includes Courses, Talks, and Computer Animation Festival presentations, as well as Technical, Game and Art Paper presentations.  However, not all sessions are available; as in previous years, some needed to be removed for copyright or other reasons.  Unfortunately, some of the omissions include key content like the Beyond Programmable Shading course and the second half of the Advances in Real-Time Rendering in 3D Graphics and Games course.  I will list the available presentations; if you see stuff you like, it might be worth purchasing the relevant presentation videos.  Individual videos cost between $10 and $20; the entire 2009 set is $300.  Presentations which I think are most relevant to readers of this site will be marked in bold.

The available courses are: “The Whys, How Tos, and Pitfalls of User Studies“, Introduction to Computer Graphics, Advances in Real-Time Rendering in 3D Graphics and Games I, Point Based Graphics – State of the Art and Recent Advances, Color Imaging, Real-Time Global Illumination for Dynamic Scenes, Acquisition of Optically Complex Objects and Phenomena, Creating New Interfaces for Musical Expression, An Introduction to Shader-Based OpenGL Programming, Next Billion Cameras, Advanced Illumination Techniques for GPU Volume Raycasting, Scattering, Visual Algorithms in Post-Production, Interaction: Interfaces, Algorithms, and Applications, Shape Grammars, and The Making of ‘Shade Recovered’: Networked Senses at Play.

The missing courses are: Advances in Real-Time Rendering in 3D Graphics and Games II, Build Your Own 3D Scanner: 3D Photography for Beginners, Interactive Sound Rendering, Efficient Substitutes for Subdivision Surfaces, Beyond Programmable Shading (I and II), Visual Perception of 3D Shape, The Digital Emily Project: Photoreal Facial Modeling and Animation, Realistic Human Body Movement for Emotional Expressiveness, Computation & Cultural Heritage: Fundamentals and Applications, and Advanced Material Appearance Modeling.

The available talks are: Tablescape Animation: A Support System for Making Animations Using Tabletop Physical Objects, Teaching Animation in Second Life, Collaborative Animation Productions Using Original Music in an Unique Teaching Environment, MyWorld4D: Introduction to Computer Graphics with a Modeling and Simulation Twist, GPSFilm: Location-Based Mobile Cinema, Applying Color Theory to Creating Scientific Visualizations, BiDi Screen, Karma Chameleon: Jacquard-woven photonic fiber display, Generalizing Multi-Touch Direct Manipulation, Non-Linear Aperture for Stylized Depth of Field, PhotoSketch: A Sketch Based Image Query and Compositing System, Automatic colorization of grayscale images using multiple images on the Web, 2D and 3D Facial Correspondences via Photometric Alignment, Estimating Specular Roughness from Polarized Second Order Spherical Gradient Illumination, Motion Capture for Natural Tree Animation, Connecting the dots: Discovering what’s important for creature motion, Surface Motion Graphs for Character Animation from 3D Video, Methods for Fast Skeleton Sketching, Animation and Simulation of Octopus Arms in ‘The Night at the Museum 2’, Wildfire forecasting using an open source 3D multilayer geographical framework, Innovation in Animation: Exiting the Comfort Zone, “Building Bridges, Not Falling Through Cracks: what we have learned during ten years of Australian Digital Visual Effects Traineeships”, Genetic Stair, Computational Thinking through Programming and Algorithmic Art, Visual Zen Art: Aesthetic Cognitive Dissonance in Japanese Dry Stone Garden Measured in Visual PageRank, Spore API: Accessing a Unique Database of Player Creativity, Results from the Global Game Jam, Houdini in a Games Pipeline, well-formed.eigenfactor: considerations in design and data analysis, Multi-Touch Everywhere!, The Immersive Computer-controlled Audio Sound Theater: History and current trends in multi-modal sound diffusion, Radially-Symmetric Reflection Maps, Smoother Subsurface Scattering, Painting with Polygons, Volumetric Shadow Mapping, Bucket Depth Peeling, BVH for efficient raytracing of dynamic metaballs on GPU, Normal Mapping with Low-Frequency Precomputed Visibility, RACBVHs: Random-Accessible Compressed Bounding Volume Hierarchies, Rendering Volumes With Microvoxels, Multi-Layer Dual-Resolution Screen-Space Ambient Occlusion, Beyond Triangles: Gigavoxels Effects In Video Games, Single Pass Depth Peeling via CUDA Rasterizer, Design and self-assembly of DNA into nanoscale three-dimensional shapes, Computer-Mediated Performance and Extended Instrument Design, InTune: A Musician’s Intonation Visualization System, Adaptive Coded Aperture Projection, Projected Light Microscopy, High-Tech Chocolate: Exploring mobile and 3D applications for factories, Non-Reflective Boundary Condition For Incompressible Free Surface Fluids, See What You Feel – A Study in the Real-time Visual Extension in Music, Designing Instruments for Abstract Visual Improvisation, 2009 Japan Media Arts Festival Review, “Model-Based Community Planning, Decision Support, and Collaboration”, and “Universal Panoramas: Narrative, Interactive Panoramic Universes on the Internet“.

The missing talks are:  “Synchronous Objects for One Flat Thing, Reproduced”, GreenLite Dartmouth: Unplug or the Polar Bear Gets It, Shooting ‘UP’: A Trip Through the Camera Structure of ‘UP’, From Pythagoras to Pixels: The Ongoing Trajectory of Visual Music, Modulated Feedback: The Audio-Visual Composition ‘Mercurius’, Visual Music and the True Collaboration of Art Forms and Artists, What Sound Does Color Make?, Exploring Shifting Ground: Creative Intersections Between Experimental Animation and Audio, An Efficient Level Set Toolkit for Visual Effects, Water-Surface Animation for ‘Madagascar: Escape 2 Africa’, Underground Cave Sequence for ‘Land of the Lost’, Creativity in Videogame Design as Pedagogy, Geometric Fracture Modeling in ‘Bolt’, Simulating the Balloon Canopy in ‘Up’, Fight Night 4: Physics-Driven Animation and Visuals, B.O.B.: Breaking Ordinary Boundaries of Animation in ‘Monsters vs. Aliens’, Empowering Audiences Through User-Directed Entertainment, Educate the Educator: Lessons Learned From the Faculty Education Programs at Rhythm & Hues Studios Worldwide, Bringing the Studio to Campus: A Case Study in Successful Collaboration Between Academia and Industry, The Evolution of Revolution of Design: From Paper Models and Beyond, Green From the Ground Up: Infrastructure Rehabilitation and Sustainable Design, Model Rebuilding for New Orleans Transportation, Making Pixar’s ‘Partly Cloudy’: A Director’s Vision, Hatching an Imaginary Bird, Rhino-Palooza: Procedural Animation and Mesh Smoothing, It’s Good to be Alpha, Venomous Cattle for ‘Australia’, Applying Painterly Concepts in a CG Film, From Pitchvis to Postvis: Integrating Visualization Into the Production Pipeline, The Light Kit: HDRI-Based Area Light System for ‘The Curious Case of Benjamin Button’, Interactive Lighting of Effects Using Point Clouds In ‘Bolt’, Composite-Based Refraction for Fur and Other Complex Objects on ‘Bolt’, Dense Stereo Event Capture for the James Bond Film ‘Quantum of Solace’, ILM’s Multitrack: A New Visual Tracking Framework for High-End VFX Production, Immersive and Impressive: The Impressionistic Look of Flower on the PS3, “Universal Panoramas: Narrative, Interactive Panoramic Universes on the Internet“, The Blues Machine, Real Time Live, Clouds With Character: ‘Partly Cloudy’, The Hair-Motion Compositor: Compositing Dynamic Hair Animations in a Production Environment, iBind: Smooth Indirect Binding Using Segmented Thin-Layers, Concurrent Monoscopic and Stereoscopic Animated Film Production, Pushing Tailoring Techniques To Reinforce ‘Up’ Character Design, The Net Effect: Simulated Bird-Catching in ‘Up’, Destroying the Eiffel Tower: A Modular FX Pipeline, Building Story in Games: No Cut Scenes Required, Real-Time Design Review and Collaboration for Global Infrastructure Projects, Sound Scope Headphones, Medial Axis Techniques for Stereoscopic Extraction, Realistic Eye Motion Using Procedural Geometric Methods, Practical Uses of a Ray Tracer for ‘Cloudy With a Chance of Meatballs’, Making a Feature-Length Animated Movie With a Game Engine, and Practical Character Physics For Animators.

Almost all the Technical Papers presentations are available.  The following are missing: Light Warping for Enhanced Surface Depiction, How Well Do Line Drawings Depict Shape?, Detail-Preserving Continuum Simulation of Straight Hair, and Generating Photo Manipulation Tutorials by Demonstration.  Also, two of the ToG papers (2D Piecewise Algebraic Splines for Implicit Modeling and A BSP-Based Algorithm for Dimensionally Nonhomogeneous Planar Implicit Curves With Topological Guarantees) were not presented at SIGGRAPH due to last-minute visa or illness issues.

All seven of the Art Paper presentations are available, as well as most of the Game Paper presentations.  The following are missing: Inferred Lighting: Fast Dynamic Lighting and Shadows for Opaque and Translucent Objects, Experimental Evaluation of Teaching Recursion in a Video Game, and Cardboard Semiotics: Reconfigurable Symbols as a Means for Narrative Prototyping in Game Design.

Finally, Encore has video for a single panel: “The Future of Teaching Computer Graphics for Students in Engineering, Science, and Mathematics”.

ShaderX2 Code Available for Download

With Wolfgang Engel’s blessing, I’ve added the code samples from both ShaderX2 books’ CD-ROMs as zip files and put links in the ShaderX guide. The code is hardly bleeding edge at this point, of course, but code doesn’t rot – there are many bits that are still useful. I’ve also folded most of the code addenda into the distributions themselves. The only exception at this point is Thomas Rued’s stereographic rendering shaders; more up-to-date information (and an SDK) is available from the company he works with, ColorCode 3-D.

Our book’s figures now downloadable for fair use

A professor contacted us to ask whether we had digital copies of our figures available for use on her course web pages. Well, we certainly should (and our publisher agrees), and would have done this a while ago if we had thought of it. So, after a few hours of copying and saving with MWSnap, I’ve made an archive of most of the figures in Real-Time Rendering, 3rd edition. It’s a 34 MB download:

http://www.realtimerendering.com/downloads/RTR3figures.zip

Update: preview and download individual figures on Flickr

Update: figures for the Fourth Edition are here.

This archive should make preparation a lot more pleasant and less time-consuming for instructors, vs. scanning in pages of our book or redrawing figures from scratch. Here’s the top of the README.html file in this archive:

These figures and tables from the book are copyright A.K. Peters Ltd. We have provided these images for use under United States Fair Use doctrine (or similar laws of other countries), e.g., by professors for use in their classes. Not all figures in the book are included; only those created by the authors (directly, or by use of free demonstration programs, as listed below) or taken from public sources (e.g., NASA) are available here. Other images in the book may be reused under Fair Use, but are not part of this collection. It is good practice to acknowledge the sources of any images reused – we suspect a link to http://www.realtimerendering.com would be useful to students, and we have listed relevant primary sources below for citation. If you have questions about reuse, please contact A.K. Peters at service@akpeters.com.

I’ve added a link to this archive at the top of our main page. I should also mention that Tomas’ PowerPoint slide sets for a course he taught based on the second edition of our book are still available for download. The slides are a bit dated in spots, but are a good place to start. If you have made a relevant teaching aid available, please do comment and let others know.

SIGGRAPH Asia 2009 Papers – Micro-Rendering, RenderAnts, and More

A partial papers list has been up on Ke-Sen Huang’s SIGGRAPH Asia 2009 page for a while now, but I was waiting until either the full list was up or an interesting preprint appeared before mentioning it.  Well, the latter has happened – a preprint and video are now available for the paper Micro-Rendering for Scalable, Parallel Final Gathering.  It shares many authors (including the first) with one of the most interesting papers from last year’s SIGGRAPH Asia conference, Imperfect Shadow Maps for Efficient Computation of Indirect Illumination.  Last year’s paper proposed a way to efficiently compute indirect shadowing by rendering a very large number of very low-quality shadow maps, using a coarse point-based scene representation and some clever hole-filling.  This year’s paper extends this occlusion technique to support full global illumination.  Some of the same authors were recently responsible for another notable extension of an occlusion method (SSAO in this case) to global illumination.

RenderAnts: Interactive REYES Rendering on GPUs is another notable paper at SIGGRAPH Asia this year; no preprint yet, but a technical report is available.  A technical report is also available for another interesting paper, Debugging GPU Stream Programs Through Automatic Dataflow Recording and Visualization.

No preprint or technical report, but promising paper titles: Approximating Subdivision Surfaces with Gregory Patches for Hardware Tessellation and Real-Time Parallel Hashing on the GPU.

Looking at this list and last year’s accepted papers, SIGGRAPH Asia seems to be more accepting of real-time rendering papers than the main SIGGRAPH conference.  Combined with the strong courses program, it’s shaping up to be a very good conference this year.

Morphological Antialiasing

An Intel research group has put their papers and code up for download. I had asked Alexander Reshetov about his morphological antialiasing scheme (MLAA), as it sounded interesting – it was! He generously sent a preprint, answered my many questions, and even provided source code for a demo of the method. What I find most interesting about the algorithm is that it is entirely a post-process. Given an image full of jagged edges, it searches for the stair-step patterns along those edges, estimates where the true edge lies, and blends the neighboring pixels accordingly. There are limits to such reconstruction, of course, but the idea is fascinating, and most of the time the resulting image looks much better. Anyway, read the paper.
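To give a flavor of how such a post-process can work – and this is only my own much-simplified sketch, not Reshetov’s actual algorithm, which classifies L-, Z-, and U-shaped edge patterns and blends by exact coverage areas – here is roughly what a single horizontal pass might look like, assuming a bitonal grayscale image stored as a 2D array of 0–255 values:

    // Simplified sketch of the morphological AA idea, not the real MLAA:
    // walk each horizontal "separation line" (a run of columns where two
    // adjacent rows differ) and blend the pixel pairs along it, with more
    // blending toward the ends of the run where the staircase steps occur.
    int[][] mlaaHorizontalPass(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][];
        for (int y = 0; y < h; y++) out[y] = img[y].clone();

        for (int y = 0; y + 1 < h; y++) {
            int x = 0;
            while (x < w) {
                if (img[y][x] == img[y + 1][x]) { x++; continue; }
                int start = x;                                     // start of a separation run
                while (x < w && img[y][x] != img[y + 1][x]) x++;   // advance to the end of the run
                int len = x - start;
                for (int i = 0; i < len; i++) {
                    double t = (i + 0.5) / len;                    // position along the run, 0..1
                    double a = 0.5 * Math.abs(1.0 - 2.0 * t);      // blend weight, largest at the ends
                    int top = img[y][start + i], bot = img[y + 1][start + i];
                    out[y][start + i]     = (int) Math.round((1 - a) * top + a * bot);
                    out[y + 1][start + i] = (int) Math.round((1 - a) * bot + a * top);
                }
            }
        }
        return out;  // a second, vertical pass over adjacent columns would follow, analogously
    }

Again, this is only meant to convey the basic idea of repairing edges purely from the final image; the real method’s pattern classification and coverage math are what make it look good.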

As an example, I took a public domain image from the web, converted it to a bitonal image so it would be jaggy, then applied MLAA to see how the reconstruction looked. The method works on full color images (though it has to deal with more challenges when detecting edges there). I’m showing a black and white version so that the effect is obvious. So, here’s a zoomed-in view of the jaggy version:

zoomed, no antialiasing (B&W)

And here are the two smoothed versions:

zoomed, original | zoomed, MLAA

Which is which? It’s actually pretty easy to figure: the original, on the left, has some JPEG artifacts around the edges; the MLAA version, to the right, doesn’t, since it was derived from the “clean” bitonal image. All in all, they both look good.

Here’s the original image, unzoomed:

original

The MLAA version:

MLAA

For comparison, here’s a 3×3 Gaussian blur of the jaggy image; blurring helps smooth edges (at a loss of overall crispness), but does not get rid of jaggies. Note that the horizontal vines in particular show poor quality:

3x3 Gaussian blur
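For the record, the 3×3 “Gaussian” in a comparison like this is usually the binomial kernel, 1/16 × [1 2 1; 2 4 2; 1 2 1]. Here is a minimal sketch of applying it, using the same 0–255 grayscale-array convention as the MLAA sketch above (and not necessarily the exact filter I used):

    // 3x3 binomial blur with clamping at the image border.
    int[][] blur3x3(int[][] img) {
        int[][] k = { {1, 2, 1}, {2, 4, 2}, {1, 2, 1} };  // weights sum to 16
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int yy = Math.min(Math.max(y + dy, 0), h - 1);  // clamp to the border
                        int xx = Math.min(Math.max(x + dx, 0), w - 1);
                        sum += k[dy + 1][dx + 1] * img[yy][xx];
                    }
                }
                out[y][x] = sum / 16;
            }
        }
        return out;
    }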

Here’s the jaggy version derived from the original, before applying MLAA or the blur:

jaggy B&W version

SIGGRAPH 2009 Posters

The complete list of accepted posters can be found here.  Posters are in a sense the “smallest” SIGGRAPH contribution – each one constitutes a single (large) page of description.  All the posters are in one large room, so it doesn’t take long to just walk past them and see what looks interesting.  There are also two sessions (each one hour long) where a presenter stands beside each poster and discusses it with anyone who is interested.

The poster list has no abstracts, just titles.  Judging from those, the ones that I find potentially interesting are:

  • Polygonal Functional Hybrids for Computer Animation and Games
  • The UnMousePad – The Future of Touch Sensing
  • Data-Driven Diffuse-Specular Separation of Spherical Gradient Illumination
  • Lace Curtain: Modeling and Rendering of Woven Structures Using BRDF/BTDF
  • Beyond Triangles: Gigavoxels Effects in Video Games
  • Cosine Lobe-Based Relighting From Gradient Illumination Photographs
  • Curvature-Dependent Local Illumination Approximation for Translucent Materials
  • Direct Illumination From Dynamic Area Lights
  • Gaussian Projection: A Novel PBR Algorithm for Real-Time Rendering
  • Interactive Lighting Manipulation Application on GPU
  • Reflection Model of Metallic Paints for Reflectance Acquisition
  • Variance Minimization Light-Probe Sampling

SIGGRAPH 2009 Birds of a Feather events

These events are proposed and organized by SIGGRAPH attendees, not the conference organizers (who simply approve them and provide rooms).  They range from large, elaborate presentations to small meetings.  The list of Birds of a Feather (BoF) events (with dates, times, and locations) is available here.

The OpenGL BoF is one of the largest and longest-running.  Each year, it is the premier event for hearing about the latest developments in OpenGL.  This year it is joined (actually preceded, by one day) by the OpenCL BoF, which covers this new API for general-purpose GPU computation.

Although most render farms are used for film production, many game developers also have render farms, which they use for lighting and visibility precomputation.  For this reason, some of them may wish to attend the Renderfarming, Job Queueing, and Distributed Rendering Performance event.

The Computer Graphics for Simulation BoF and the Dynamic Simulation Birds of a Feather also seem relevant for game developers, although they are not specifically concerned with rendering.

The Interactive Ray Tracing BoF has already been mentioned by Eric, and is of interest to many readers of this blog.

One of the uses of real-time graphics with which I have very little familiarity is visualization of molecules, which has its own BoF event (the Molecular Graphics BoF).

Besides BoFs for people interested in specific graphics topics or products, there are also BoF events for particular groups within the graphics community, such as the Women in Animation BoF and SIGGIG: Gays in Graphics.  One of the most interesting of these is the Computer Graphics Pioneers Reception, which is intended for people who have been contributing to computer graphics for at least 20 years (if you fit this description and are interested in being a registered Computer Graphics Pioneer, the membership details are here).

Other BoFs are intended for people from particular regions or who went to certain schools, like the Taipei ACM SIGGRAPH Reunion, the ACCAD/OSU Alumni Gathering, the Tokyo ACM SIGGRAPH Chapter Party, the UNC SIGGRAPH Alumni Reception, Reuniao dos Brasileiros – Brazilian BoF, Purdue University Reunion, RIT Alumni Reception, and Reunion de los Mexicanos – Mexican BoF.

Finally, many regular SIGGRAPH attendees like to go to the Sake Barrel Opening Party BoF.  I haven’t attended one yet, but perhaps I will this year.

SIGGRAPH 2009 Exhibitor Tech Talks

These are sponsored talks, each specific to a single company’s products.  Even so, they often have good information.  The complete list of exhibitor tech talks is here; NVIDIA is giving so many talks that it rates a separate page.

AMD has a talk called Next-Generation Graphics: The Hardware and the APIs, which, judging from the abstract, seems to be about AMD’s DirectX 11-level hardware and how to access its features using OpenGL extensions.  NVIDIA has two talks about using CUDA for non-traditional graphics: Alternative Rendering Pipelines on NVIDIA CUDA and Efficient Ray Tracing on NVIDIA GPUs.  Although both talks discuss ray tracing, the first also discusses a CUDA implementation of the REYES algorithm (which powers Pixar’s RenderMan).  I think REYES is far more interesting for real-time use than ray tracing; similar algorithms have dominated film rendering for many years (although ray tracing is slowly gaining).

Another interesting NVIDIA talk (3D Vision Technology – Develop, Design, Play in 3D Stereo) discusses stereo rendering.  This is an area that has had many false starts over the last ten years, but now it seems like it might actually make it into the mainstream, driven by stereo film content and advances in home television displays.

Although it is not strictly about real-time rendering, AMD’s GPU-Accelerated Production Rendering is part of an interesting trend where GPUs are used not for real-time rendering, but to accelerate offline rendering.  Some of the techniques used here may inform future high-quality real-time rendering.

7 Things for July 18th

Well, I have 69 links stored up; wade through them here if you want unedited content. I’ve decided that getting 7 links out per post is a good round number, so here’s the first batch.

  • This is my screen-saver du jour: Pixel City (put the .scr file in your Windows directory). It’s fully described (along with source) in this great set of articles; if you’re too busy to read it all (though you should: it’s a fun read and he has some interesting insights), watch the video summary on that page. If you feel like researching the area of procedural modeling of cities more thoroughly, start here.
  • The book Real-Time Cameras, which is about camera control for games, now has a sample excerpt on Gamasutra.
  • NPR: Forrester Cole has two worthwhile GPU methods for deriving visible line segments for a set of edges (e.g., computing partial visibility of geometric lines). He’s put source code for his methods up at his site, the program “dpix“. Note: you’ll need Qt to compile & link.
  • The author of the Legalize Adulthood blog has recently had a number of posts on using DirectX10.
  • DirectX9 is still with us. Richard Thomson has a free draft of his book about DirectX 9 online. He knows what he’s about; witness his detailed pipeline posters. The bad news is that the book’s coverage of shaders is mostly about 1.X shaders (a walk down memory lane, if by “lane” you mean “horrifically complex assembly language”). The good news is that there’s some solid coverage of the theory and practice of vertex blending, for example. Anyway, grist for the mill – you might find something of use.
  • Around September I have 6 weeks off, so like every other programmer on the planet I’ve contemplated playing around with making a program for the iPhone. The economics are terrible for most developers, but I’d do it just for fun. It’s also interesting to see people thinking about what this new platform means for games. Naturally, Wolfenstein 3D, the “Hello World” of 3D games, has been ported. Andrew Glassner recommended this book for iPhone development; he said it’s the best one he found for beginners.
  • Speaking of Andrew, he pointed me at an interesting little language he’s been messing with, Processing. It’s essentially Java with a lot of built-in 2D (and to a lesser extent, 3D) graphics support: color, primitives, transforms, mouse control, lerps, windowing, etc., all right there and trivial to use. You can make fun little programs in just a page or two of code. That said, there are some very minor inconsistencies, like transparency not working against the background fill color. Pretty elaborate programs can be made, and it’s also handy for just drawing stuff easily via a program. Here’s a simple image I did in just a few lines, based on mouse moves (an example sketch in the same spirit follows this list):
    Processing output
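To show how little code a Processing program needs, here is a hypothetical sketch in the same spirit as the image above (not my actual program) – translucent strokes that follow the mouse:

    // Hypothetical Processing sketch: accumulate translucent line segments
    // wherever the mouse moves. Not the actual code behind the image above.
    void setup() {
      size(500, 500);
      background(255);   // white canvas
      stroke(0, 30);     // black strokes, mostly transparent
    }

    void draw() {
      // pmouseX/pmouseY hold the mouse position from the previous frame
      line(pmouseX, pmouseY, mouseX, mouseY);
    }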
That’s seven – ship it.

Interactive Ray Tracing BOF at SIGGRAPH 2009

Pete Shirley’s organizing an interactive ray tracing Birds of a Feather meeting at SIGGRAPH 2009. The details, as copied from here:

Interactive Ray Tracing
A variety of academic and industry leaders provide presentations and demos, with questions and discussions encouraged.

Tuesday, 5 – 6 pm
Sheraton New Orleans
Waterbury Ballroom
Peter Shirley
pshirley (at) nvidia.com

I’ll be there to help out. Pete’s already lined up demos from NVIDIA, Intel, Mental, an Imageworks affiliate, Breda University (Arauna), and Caustic. Right now we’re searching out academic groups or anyone else who wants to show what they’re doing in the area. If you’ve got something to show or know someone who does, please contact Pete and me.