Today was the first day of I3D 2011. I3D (full name: ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games) is a great little conference – it’s small enough that you can talk to most of the people attending, and the percentage of useful papers is relatively high.
I’ll split my I3D report into several smaller blog posts to ensure timely updates; this one will focus on the opening keynote.
The keynote was titled “Image-Based Rendering: A 15-Year Retrospective”, presented by Richard Szeliski (Microsoft Research), who has been involved with much of the important research in this area. He started with a retrospective of the research, then discussed a specific application: panoramic image stitching. Early research in this area led to QuickTime VR, and panoramic stitching is now a basic feature of many cameras. Gigapixel panoramas are common – see Gigapan’s great example from Obama’s inaugural address (360 cities is another good panorama site). Street-level views such as Google Street View and Bing’s upcoming Street Slide feature use sequences of panoramas.
Richard then discussed the mathematical basis of image-based rendering: a 4D space of light rays (which relies on the fact that radiance is constant along a ray – in participating media this no longer holds and you have to go to the full 5D plenoptic field). Having some geometric scene information is important; it is hard to get good results with just a 4D collection of rays (this was the main difference between the first two implementations – lumigraphs and light fields). Several variants of these were developed over the years (e.g. unstructured lumigraphs and surface light fields).
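As a quick sketch of the dimensionality argument (my notation, not taken from the talk): the full plenoptic function gives radiance at every position and direction, and constancy along rays in free space lets you drop one dimension, e.g. via the two-plane parameterization used by light fields and lumigraphs.

```latex
% Full plenoptic field: radiance at position (x, y, z) in direction (\theta, \phi) -- 5D.
L_5 = L(x, y, z, \theta, \phi)

% In free space (no participating media) radiance is constant along each ray, so only
% the ray itself matters. Parameterizing a ray by where it crosses two parallel planes,
% (u, v) on one and (s, t) on the other, gives the 4D light field / lumigraph:
L_4 = L(u, v, s, t)
```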
Several successful developments used lower-dimensional “2.5D” representations such as layered depth images (a “depth image” is an image with a depth value as well as a color associated with each pixel). Richard remarked that Microsoft’s Kinect has revolutionized this area by making depth cameras (which used to cost many thousands of dollars) into $150 commodities; a lot of academic research is now being done using Kinect cameras.
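For concreteness, here is a minimal sketch (mine, not from the talk) of what a depth image gives you: with the camera intrinsics, each pixel can be reprojected to a 3D point, which is the geometric scaffolding that representations like layered depth images build on. The intrinsic values below are made-up, roughly Kinect-like numbers for illustration.

```python
import numpy as np

def depth_image_to_points(depth, fx, fy, cx, cy):
    """Reproject a depth image (depth in meters per pixel) into camera-space 3D points.

    depth: (H, W) array of depth values; fx, fy, cx, cy: pinhole camera intrinsics.
    Returns an (H, W, 3) array of (X, Y, Z) points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # back-project through the pinhole model
    y = (v - cy) * z / fy
    return np.dstack((x, y, z))

# Hypothetical example: a flat 640x480 depth map 2 m from the camera,
# with made-up Kinect-like intrinsics.
depth = np.full((480, 640), 2.0)
points = depth_image_to_points(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0)
print(points.shape)  # (480, 640, 3)
```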
Image-based modeling started primarily with Paul Debevec’s “Facade” work in 1996. The idea was to augment a very simple geometric model (which back then was created manually) with “view dependent textures” that combine images from several directions to remove occluded pixels and provide additional parallax. Now this can be done automatically at a large scale – Richard showed an aerial model of the city of Graz which was automatically generated in 2009 from several airplane flyovers – it allowed for photorealistic renders with free camera motion in any direction.
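A minimal sketch of the view-dependent texturing idea (my own simplification, not the actual Facade implementation): for each surface point, blend the source photographs with weights that favor capture directions closest to the current viewing direction; real systems also account for occlusion and image resolution.

```python
import numpy as np

def view_dependent_weights(view_dir, capture_dirs):
    """Blend weights for source images, favoring cameras whose direction to the
    surface point is closest to the current viewing direction.

    view_dir: (3,) unit vector from the surface point toward the current camera.
    capture_dirs: (N, 3) unit vectors from the point toward each source camera.
    """
    cosines = np.clip(capture_dirs @ view_dir, 0.0, None)  # ignore back-facing cameras
    total = cosines.sum()
    if total == 0:
        return np.full(len(capture_dirs), 1.0 / len(capture_dirs))
    return cosines / total

# Hypothetical example: three source cameras, with the current view almost
# aligned with the first one, so it should get the largest weight.
dirs = np.array([[0.0, 0.0, 1.0], [0.7, 0.0, 0.7], [-0.7, 0.0, 0.7]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
view = np.array([0.1, 0.0, 1.0])
view /= np.linalg.norm(view)
print(view_dependent_weights(view, dirs))
```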
Richard also discussed environment matting, as well as video-based techniques such as video rewrite, video matting, and video textures. This last paper in particular seems to me like it should be revisited for possible game applications – it’s a powerful extension of the common practice of looping a short sequence of images on a sprite or particle.
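In case someone does want to revisit it, here is a rough sketch of the core idea behind video textures (my paraphrase): measure how similar every pair of frames is, then allow a jump from frame i to frame j whenever frame j looks like the frame that would normally follow i, so the sequence can play indefinitely without a visible seam. The threshold and the synthetic frames below are made up, and the original work also filters the distance matrix to preserve dynamics.

```python
import numpy as np

def find_loop_transitions(frames, threshold=0.1):
    """Find pairs (i, j) where jumping from frame i to frame j is unlikely to be
    noticed -- the basic building block of a video texture.

    frames: (N, H, W) or (N, H, W, C) array of frames, values roughly in [0, 1].
    Returns a list of (i, j, distance) candidate transitions.
    """
    n = len(frames)
    flat = frames.reshape(n, -1).astype(np.float64)
    # RMS per-pixel distance between every pair of frames.
    d = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2) / flat.shape[1] ** 0.5
    transitions = []
    for i in range(n - 1):
        for j in range(n):
            # A jump from i to j is seamless if frame j looks like frame i+1,
            # the frame the viewer expects to see next.
            if j != i + 1 and d[i + 1, j] < threshold:
                transitions.append((i, j, d[i + 1, j]))
    return transitions

# Hypothetical example: a tiny synthetic sequence that nearly repeats every 4 frames,
# so the detector should find jumps back to the matching earlier/later frames.
rng = np.random.default_rng(0)
base = rng.random((4, 8, 8))
frames = np.concatenate([base, base + 0.01 * rng.random((4, 8, 8))])
print(find_loop_transitions(frames, threshold=0.05))
```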
Richard next talked about cases where showing the results of image-based rendering in a less realistic or lower-fidelity way can actually be better – similar to the “uncanny valley” in human animation.
The keynote ended by summarizing the areas where image-based rendering currently works well and the areas where more work is needed. Automatic 3D camera pose estimation from images is pretty robust, as are automatic aerial modeling and manually augmented ground-level modeling. However, some challenges remain: accurate boundaries and matting, reflections & transparency, integration with recognition algorithms, and user-generated content.
For anyone interested in digging deeper into these topics, Richard has written a computer vision book, which is also freely available online.