7 Things for July 18th

Well, I have 69 links stored up; wade through them here if you want unedited content. I’ve decided that getting 7 links out per post is a good round number, so here’s the first.

  • This is my screen-saver du jour: Pixel City (put the .scr file in your Windows directory). It’s fully described (along with source) in this great set of articles; if you’re too busy to read it all (though you should: it’s a fun read and he has some interesting insights), watch the video summary on that page. If you feel like researching the area of procedural modeling of cities more thoroughly, start here.
  • The book Real-Time Cameras, which is about camera control for games, now has a sample excerpt on Gamasutra.
  • NPR: Forrester Cole has two worthwhile GPU methods for deriving visible line segments for a set of edges (e.g., computing partial visibility of geometric lines). He’s put source code for his methods up at his site, the program “dpix”. Note: you’ll need Qt to compile & link.
  • The author of the Legalize Adulthood blog has recently had a number of posts on using DirectX 10.
  • DirectX 9 is still with us. Richard Thomson has a free draft of his book about DirectX 9 online. He knows what he’s about; witness his detailed pipeline posters. The bad news is that the book’s coverage of shaders is mostly about 1.X shaders (a walk down memory lane, if by “lane” you mean “horrifically complex assembly language”). The good news is that there’s some solid coverage of the theory and practice of vertex blending, for example. Anyway, grist for the mill – you might find something of use.
  • Around September I have 6 weeks off, so like every other programmer on the planet I’ve contemplated playing around with making a program for the iPhone. The economics are terrible for most developers, but I’d do it just for fun. It’s also interesting to see people thinking about what this new platform means for games. Naturally, Wolfenstein 3D, the “Hello World” of 3D games, has been ported. Andrew Glassner recommended this book for iPhone development; he said it’s the best one he found for beginners.
  • Speaking of Andrew, he pointed me at an interesting little language he’s been messing with, Processing. It’s essentially Java with a lot of built-in 2D (and, to a lesser extent, 3D) graphics support: color, primitives, transforms, mouse control, lerps, window, etc., all right there and trivial to use. You can make fun little programs in just a page or two of code. That said, there are some very minor inconsistencies, like transparency not working against the background fill color. Pretty elaborate programs can be made, and it’s also handy for just drawing stuff easily via a program. Here’s a simple image I did in just a few lines, based on mouse moves (a toy sketch in the same spirit appears just below the image):
    [Image: Processing output]
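
Since Processing programs really are that short, here is a minimal sketch in the same spirit – my own toy example, not the code behind the image above, assuming only the standard Processing environment – that leaves a trail of translucent dots wherever the mouse goes:

    // Toy sketch: translucent dots follow the mouse; the fill color is
    // lerped between blue and red based on the horizontal mouse position.
    void setup() {
      size(640, 480);
      background(255);   // white canvas, drawn once so the dots accumulate
      noStroke();
    }

    void draw() {
      float t = map(mouseX, 0, width, 0, 1);
      fill(lerpColor(color(30, 30, 200, 40), color(200, 30, 30, 40), t));
      ellipse(mouseX, mouseY, 20, 20);
    }

Paste it into the Processing IDE and hit Run; that is the whole program.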
That’s seven – ship it.

Interactive Ray Tracing BOF at SIGGRAPH 2009

Pete Shirley’s organizing an interactive ray tracing Birds of a Feather meeting at SIGGRAPH 2009. The details, as copied from here:

Interactive Ray Tracing
A variety of academic and industry leaders provide presentations and demos, with questions and discussions encouraged.

Tuesday, 5 – 6 pm
Sheraton New Orleans
Waterbury Ballroom
Peter Shirley
pshirley (at) nvidia.com

I’ll be there to help out. Pete’s already lined up demos from NVIDIA, Intel, Mental, an Imageworks affiliate, Breda University (Arauna), and Caustic. Right now we’re searching out academic groups or anyone else who wants to show what they’re doing in the area. If you’ve got something to show or know someone who does, please contact Pete and me.

SIGGRAPH 2009 Production Sessions

Another part of SIGGRAPH I like is the big film production sessions – they are like a DVD “behind the scenes” feature on steroids.  They do tend to have long lines, though.  This year, the SIGGRAPH production sessions have been brought under the wing of the Computer Animation Festival.  A full list of production sessions can be found here.  They all look pretty interesting, actually, but I think the following ones are most noteworthy:

Big, Fast and Cool: Making the Art for Fight Night 4 & Gears of War 2: This is the first SIGGRAPH production session discussing game production rather than film production, and I hope to see many more like it in future years.

The Curious Case of Benjamin Button marked a watershed in digital character technology – the first time anyone had successfully rendered a photorealistic human character with significant onscreen presence.  The production session for this film spends a fair amount of time discussing the character, and also touches upon some other interesting bits of tech used in the film.

ILM was heavily involved with three big, flashy effects shows this year: Transformers: Revenge of the Fallen, Terminator Salvation, and Star Trek.  The production session discussing all three is sure to be a lot of fun (unfortunately, there are also sure to be long lines).

Sony Pictures Imageworks’ Cloudy with a Chance of Meatballs has some very unusual scenes (including spaghetti twisters and Jell-O mountains); it is also unusual in being fully ray-traced.  The production session discusses both of these aspects.

Although not directly relevant to real-time rendering, I am fascinated by the way in which 3D modeling and rapid prototyping were used for facial expressions in the stop-motion film Coraline (and I wrote about it in a previous blog post).  There is a production session about this very topic – anyone else who thinks this is an interesting use of technology might want to attend this one.

SIGGRAPH 2009 Talks

The full list of SIGGRAPH 2009 talks is finally up here.

Talks (formerly known as sketches) are one of my favorite parts of SIGGRAPH.  They always have a lot of interesting techniques from film production (CG animation and visual effects), many of which can be adapted for real-time rendering.  There are typically some research talks as well; most are “teasers” for papers from recent or upcoming conferences, and some are of interest for real-time rendering.  This year, SIGGRAPH also has a few talks by game developers – hopefully next year will have even more.  Unfortunately, talks have the least documentation of all SIGGRAPH programs (except perhaps panels) – just a one-page abstract is published, so if you didn’t attend the talk you are usually out of luck.

The Cameras and Imaging talk session has a talk on the cameras used in Pixar’s “Up”, which may be relevant to developers of games with scripted cameras (such as God of War).

From Indie Jams to Professional Pipelines has two good game development talks.  Houdini in a Games Pipeline by Paulus Bannink of Guerrilla Games discusses how Houdini was used for procedural modeling in the development of Killzone 2.  Although this type of procedural modeling is fairly common in films, it is not typically employed in game development.  This is of particular interest since most developers are looking for ways to increase the productivity of their artists.  In the talk Spore API: Accessing a Unique Database of Player Creativity, Shodhan Shalin, Dan Moskowitz and Michael Twardos discuss how the Spore team exposed a huge database of player-created assets to external applications via a public API.

The Splashing in Pipelines talk session has a talk by Ken Museth of Digital Domain about DB-Grid, an interesting data structure for volumetric effects; a GPU implementation of this could possibly be useful for real-time effects.  Another talk from this session, Underground Cave Sequence for “Land of the Lost”, sounds like the kind of film talk that often has nuggets which can be adapted to real-time use.

Making it Move has another game development talk, Fight Night 4: Physics-Driven Animation and Visuals by Frank Vitz and Georges Taorres from Electronic Arts.  Fight Night 4 is a game with extremely realistic visuals; the physics-based animation system described here is sure to be of interest to many game developers.  The talk about rigging the “Bob” character from Monsters vs. Aliens also sounds interesting; the technical challenges behind the rig of such an amorphous – yet engaging – character must have been considerable.

Partly Cloudy was the short film accompanying Pixar’s 10th feature film, Up.  Like all of Pixar’s short films, Partly Cloudy was a creative and technical triumph.  The talk by the director, Peter Sohn, also includes a screening of the film.

Although film characters have more complex models, rigs, and shaders than game characters, there are many similarities in how a character translates from initial concept to the (big or small) screen.  The session Taking Care of Your Pet has two talks discussing this process for characters from the movie Up.  There is also a session dedicated to Character Animation and Rigging which may be of interest for similar reasons.

Another game development talk can be found in the Painterly Lighting session; Radially Symmetric Reflection Maps by Jonathan Stone of Double Fine Productions describes an intriguing twist on prefiltered environment maps used in the game Brutal Legend.  The two talks on stylized rendering methods (Applying Painterly Concepts in a CG Film and Painting with Polygons) also look interesting; the first of these discusses techniques used in the movie Bolt.

Real-time rendering has long used techniques borrowed from film rendering.  One way in which the field has “given back” is the increasing adoption of real-time pre-visualization techniques in film production.  In this talk, Steve Sullivan and Michael Sanders from Industrial Light & Magic discuss various film visualization techniques.

The session Two Bolts and a Button has two film lighting talks that look interesting; one on HDRI-mapped area lights in The Curious Case of Benjamin Button, and one on lighting effects with point clouds in Bolt.

The Capture and Display session has two research talks from Paul Debevec’s group.  As you would expect, they both deal with acquisition of computer models from real-world objects.  One discusses tracking correspondences between facial expressions to aid in 2D parametrization (UV mapping); the other describes a method for capturing per-pixel specular roughness parameters (e.g., Phong cosine power) and is more fully described in an EGSR 2009 paper.  Given the high cost of creating realistic and detailed art assets for games, model acquisition is important for game development and likely to become more so.

Flower is the second game from thatgamecompany (not a placeholder; that’s their real name), the creators of Flow.  Flower is visually stunning and thematically unusual; the talk describing the creation of its impressionistic rendering style will be of interest to many.

Flower was one of two games selected for the new real-time rendering section of the Computer Animation Festival’s Evening Theater (which used to be called the Electronic Theater and was sorely missed when it was skipped at last year’s SIGGRAPH).  Fight Night 4 was the other; these two are accompanied by real-time rendering demonstrations from AMD and Soka University.  Several other games and real-time demos were selected for other parts of the Computer Animation Festival, including Epic Games’ Gears of War 2 and Disney Interactive’s Split Second.  These are demonstrated (and discussed) by some of their creators in the Real Time Live talk session.

The Effects Omelette session has been presented at SIGGRAPH for a few years running; it traditionally has interesting film visual effects work.  This year two of the talks look interesting for game developers: one on designing the character’s clothing in Up, and one on a modular pipeline used to collapse the Eiffel Tower in G.I. Joe: The Rise of Cobra.

Although most of the game content at SIGGRAPH is targeted at programmers and artists, there is at least one talk of interest to game designers: in Building Story in Games: No Cut Scenes Required, Danny Bilson from THQ and Bob Nicoll from Electronic Arts discuss how interactive entertainment can be used to tell a story.

As one would expect, the Rendering session has at least one talk of interest to readers of this blog.  Multi-Layer, Dual-Resolution Screen-Space Ambient Occlusion by Louis Bavoil and Miguel Sainz of NVIDIA uses multiple depth layers and resolutions to improve SSAO.  Although not directly relevant to real-time rendering, I am also interested in the talk Practical Uses of a Ray Tracer for “Cloudy With a Chance of Meatballs” by Karl Herbst and Danny Dimian from Sony Pictures Imageworks.  For years, animation and VFX houses used rasterization-based renderers almost exclusively (Blue Sky Studios, creators of the Ice Age series, being a notable exception).  Recently, Sony Pictures Imageworks licensed the Arnold ray-tracing renderer and switched to using it for features; Cloudy with a Chance of Meatballs is the first result.  Another talk from this session I think is interesting: Rendering Volumes With Microvoxels by Andrew Clinton and Mark Elendt from Side Effects Software, makers of the procedural modeling tool Houdini.  The micropolygon-based REYES rendering system (on which Pixar’s Photorealistic Renderman is based) has fascinated me for some time; this talk discusses how to add microvoxels to this engine to render volumetric effects.

Above, I mentioned previsualization as one case where film rendering is informed by game rendering.  A more direct example is shown in the talk Making a Feature-Length Animated Movie With a Game Engine (by Alexis Casas, Pierre Augeard and Ali Hamdan from Delacave), in the Doing it with Game Engines session (which I am chairing).  They actually used a game engine to render their film, using it not as a real-time renderer, but as a very fast renderer enabling rapid art iteration times.

All of the talks in the Real Fast Rendering session are on the topic of real-time rendering, and are worth attending.  One of these is by game developers: Normal Mapping With Low-Frequency Precomputed Visibility by Michal Iwanicki of CD Projekt RED and Peter-Pike Sloan of Disney Interactive Studios describes an interesting PRT-like technique which encodes precomputed visibility in spherical harmonics.

Finally, the Rendering and Visualization session has a particularly interesting talk: Beyond Triangles: GigaVoxels Effects In Video Games by Cyril Crassin, Fabrice Neyret and Sylvain Lefebvre from INRIA, Miguel Sainz from NVIDIA and Elmar Eisemann from MPI Informatik.  Ray-casting into large voxel databases has aroused interest in the field since John Carmack made some intriguing comments on the topic (further borne out by Jon Olick’s presentation at SIGGRAPH last year).  The speakers at this talk have shown interesting work at I3D this year, and I look forward to seeing their latest advances.

Utilities

Three events have got me thinking about utilities: Christer Ericson’s post, getting a Mac laptop, and sending my older son off to college (to Northeastern, in Computer Science – not my doing, he just liked his high school courses in programming). There are tons of useful utilities, from file searchers to spyware detectors to sound editors, and plenty of pages covering these. Many I use, such as FileZilla, Picasa, MWSnap, GIMP. Some I’m undecided on, such as IrfanView vs. XnView for quick image viewing (XnView is currently winning, but what I really want is trivial individual pixel examination built in – just tell me the RGB(A) that the mouse is over and I’ll be happy forever). Update: XnView wins! In the View menu, Display Colour Information can be toggled on, doing exactly what I wanted. That said, see the Comments below; now I have another one to try out, ddsview.
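
As an aside, that kind of pixel readout is also only a few lines in Processing (the language mentioned in the first post above). Here is a toy sketch of my own – the image file name is a made-up placeholder – that prints the color under the mouse each frame:

    // Toy pixel inspector: load an image and print the RGBA under the mouse.
    // "picture.png" is a placeholder name; put your own file in the sketch's
    // data folder.
    PImage img;

    void setup() {
      size(512, 512);
      img = loadImage("picture.png");
    }

    void draw() {
      image(img, 0, 0);
      color c = img.get(mouseX, mouseY);   // returns black if the mouse is off the image
      println(red(c) + ", " + green(c) + ", " + blue(c) + ", " + alpha(c));
    }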

However, three stand out as just plain great – ones everyone should know about:

  • Beyond Compare 3: compares files, that’s it. I’d been using version 2 for years; 3’s seriously better, and I’m happy to pay for the upgrade. I’ve found that which “diff” program is best is a matter of religious debate among programmers. Most of us have a favorite and can’t understand why anyone would use anything else. Anyway, this is my choice – compare files or folders, copy differences from one to another, easily edit either file, create reports, compare images (though this feature needs more oomph), plus a great try-before-you-buy policy: 30 days of use before it expires, not 30 days from first use.
  • Dropbox: This is my new best friend. For a number of reasons, I found myself often moving files between various machines via a USB flash drive. Slow, and a giant pain. Dropbox makes life easy for this and 58 other tasks. Install it, create an account, and there’s now a folder on your machine. Install it on other machines. Now when you move a file to this folder, the file is automatically uploaded to their server, then downloaded to all your other machines, almost immediately available on them. You can also put files in a Public subfolder and right-click to get a URL for this file, allowing you to serve up files to the web – extremely easy to do, beats manually FTPing, and you get 2 Gigs of storage free. You can also make private folders that can be shared with others of your choosing over the web. My latest use is putting my bookmarks HTML file into Dropbox and pointing all my browsers on all my computers to it – update the file in one place and every machine then uses it automatically. Lovely. One caveat: when you move a file to your Dropbox folder, by default you’re really moving it, since the folder’s local – delete it from any machine and it’s gone (well, recycled, but only on that machine). I tend to copy files instead, to avoid surprises.
  • WinDirStat (Disk Inventory X on the Mac): This free utility does a great job showing you what’s taking up all that disk space. One key bit of info that’s not obvious from the interface: almost everything in the window can be clicked (and right-clicked) on, giving still more information. Plus, it’s the only utility in its class with Phong shading (I knew I could tie this post to graphics somehow).

JGT is now the Journal of Graphics, GPU, and Game Tools

The name change for the journal formerly known as Journal of Graphics Tools was announced today.  This does not reflect a change in focus; rather, the new name more accurately reflects the existing focus of the journal.  The Journal of Graphics Tools was originally conceived as an ongoing successor to the Graphics Gems book series, and it has since been a great place for practical “nuggets” of graphics tech.  Many of the best articles were also collected in a recent book.

I recommend subscribing to this journal, but contributing to it is even better.  JGT is an excellent choice for any game developer who would like to publish a bit of tech they have created; while it is a fully peer-reviewed and citable publication, the focus is squarely on practical applications.  Authors are not expected to spend a lot of time writing up previous research, summary, conclusion, future work, etc.  For more information see the online author’s guide.

In the interests of full disclosure, I should note that all three authors of Real-Time Rendering are on the JGT editorial board.

More Statistics

One followup to Naty’s article (below): Ke-Sen Huang’s page has submission and acceptance stats for many recent conferences.

If you have five minutes to kill, it’s fun to search on various phrases at the Google Trends site. Buzzwords like “cloud computing” have trackable data, but most graphics terms don’t have enough traffic to be worth recording. Here are some examples of graphics-related terms that have sufficient hit-counts:

  • Ray Tracing – I like how Google Trends points out relevant articles for various spikes.
  • SSAO – some definite spikes there, and what’s with all the traffic from Brazil? Is this the end of some word in Portuguese? But there aren’t really hits before 2007, so I guess it’s real…
  • Collision detection, SIGGRAPH, and computer graphics – is interest in these areas waning, or are they simply established and not newsworthy? But then, GPU is going up.
  • Companies and products are fun to try: Larrabee, NVIDIA, Crytek.
  • You can also compare various terms. Here’s “DirectX programming, OpenGL programming, iPhone programming“. Pretty easy to guess which one is going up. Surprisingly un-spikey for DirectX and OpenGL.
  • And of course, Real-Time Rendering – Various random spikes; South Korea loves us.
Happy hunting, and please do comment if you find any interesting results.

Graphics Conference Paper Acceptance Statistics

I recently ran across this link to acceptance rates for papers in graphics conferences.  The SIGGRAPH chart has some missing years (including the first four), presumably because data was not available.  Graphing the trends yields some interesting information:

Excluding years before 1985 (when the conference was still “finding its legs” and acceptance rates were very high), the acceptance rate has hovered between 14.9% (1998) and 23.7% (2007).  The long-term trend appears to be that the acceptance rate is flat, while the numbers of submitted and accepted papers steadily increase.  In the shorter term, submitted papers appear to be flat or even declining after 2003, with accepted papers following suit (2009 has the lowest number of accepted papers since 2002).  I’m not sure why that is; a 2003 flattening seems too late to be attributable to the dot-com collapse and too early to be related to the big graphics conference restructuring of 2008 (where Eurographics was moved to spring and SIGGRAPH Asia was introduced).  If anyone has a good guess, please leave a comment.

I didn’t bother graphing the other conferences.  The Eurographics table only has information from 1998 (the conference has existed since 1979, only five years less than SIGGRAPH).  From 2002 on the acceptance rate has been similar to SIGGRAPH (before that it was significantly higher).  The I3D table is pretty complete; it shows consistently high acceptance rates, between 25% (1999) and 42% (2008).  Graphics Interface and EGSR (EGWR in earlier years) have similarly high acceptance rates.

ShaderX7

ShaderX7 has been out for a few months now, but due to its size (at 773 pages, it is by far the largest of the series) I wasn’t able to finish going through it until recently.  Here are the chapters I found most interesting (click the link for the rest of this post): Continue reading

EGSR and HPG 2009 papers

Ke-Sen Huang has what looks like the full lists of papers for both HPG 2009 and EGSR 2009.  Both of these lists are only available on Ke-Sen’s site at the moment; presumably they will appear on the HPG and EGSR websites soon.  I have had high hopes for these conferences, especially given the somewhat disappointing real-time content of the SIGGRAPH 2009 papers program.  EGSR has historically had some good real-time stuff in it, and the new HPG (High-Performance Graphics) conference has a highly relevant area of focus.  So how do the paper lists stack up?

EGSR 2009 has a bunch of potentially interesting papers, including some on GPU-accelerated ray-tracing and photon mapping.  Some have intriguing titles (but no other information, so it’s hard to guess how relevant they are): Fast Global Illumination on Dynamic Height Fields and Efficient and Accurate Rendering of Complex Light Sources.  One paper I found particularly interesting is Hierarchical Image-Space Radiosity for Interactive Global Illumination (available here).  This paper extends an I3D 2009 paper (Multiresolution Splatting for Indirect Illumination) which described an “Instant Radiosity”-type approach (using lots of point light sources to simulate indirect bounces), rendering into a pyramid of frame buffers at different resolutions.  The pyramid was finally collapsed into a single frame buffer to generate the final frame.  I found the multiresolution rendering approach interesting, but the implementation was very slow.  The EGSR 2009 paper speeds this part of the algorithm up significantly, and adds some other extensions and improvements.  I wouldn’t run off and implement this paper in a game engine (it has some significant limitations, and is not nearly fast enough on current hardware), but it does suggest some interesting research directions.

What about HPG 2009, the new kid on the block?  Given the partial descent of this conference from the Interactive Ray-Tracing symposium, one would expect a fair number of ray-tracing-related papers, but there aren’t that many: out of 21 papers, 4 papers explicitly mention ray-tracing, and 3 more deal with dynamic construction of bounding volume hierarchies (a particular concern of ray-tracing algorithms).  Many of the remaining papers deal with other (and to my mind, more interesting) rendering algorithms.  Data-Parallel Rasterization of Micropolygons With Defocus and Motion Blur appears to describe an algorithm similar to REYES (which powers Pixar’s Renderman).  There are two papers on image space techniques (Hardware-Accelerated Global Illumination by Image Space Photon Mapping and Image Space Gathering), which is a “hot” area right now following the popularity of SSAO and related techniques.  There are two papers relating to the important topic of antialiasing (A Directionally Adaptive Edge Anti-Aliasing Filter and Morphological Antialiasing).  One paper (Stream Compaction for Deferred Shading) relates to deferred shading, which is also a “hot” topic in game rendering at the moment.

I look forward to the preprints becoming available, so we can see if these papers live up to the promise of their titles (and perhaps discover some surprises among the more ambiguously-titled papers).