7 Things for April 22

Quite the backlog, so let’s whip through some topics:

  • GDC: ancient news, I know, but here is a rundown from Vincent Scheib and a summary of trends from Mark DeLoura.
  • I like it when people revisit various languages and see how fast they now run on newer hardware and more efficient implementations. Case in point: Quake 2 runs in a browser using JavaScript and WebGL.
  • Morgan McGuire pointed out this worthwhile article on stereoscopic graphics programming. Quick bits: frame tearing is very noticeable since each torn frame is visible to only one eye, so vsync is important, which may in turn force lower-resolution rendering and make antialiasing all the more important. UI elements composited on top look terribly 2D, and aim-point UI elements need to be given 3D depths. For their game MotorStorm, going 3D meant a lot more people liked using the first-person view, and this view with stereo helped perception of depth, obstacles, etc. There are also some intriguing ideas about using a single 2D image and reprojecting it with the depth buffer to get the second eye's image (it mostly works…; see the sketch after this list).
  • I happened to notice ShaderX 7 is now available on the Kindle. Looking further, quite a few other recent graphics books are, too. What’s odd is that the price difference varies considerably: the Kindle ShaderX 7 is only $3.78 cheaper, while Fundamentals of Computer Graphics is $20 less.
  • Speaking of ShaderX, its successor GPU Pro is not out yet, but Wolfgang started a blog about it (really, just the Table of Contents), in addition to his other blog. The real news: you can use Amazon’s Look Inside feature to view the contents of the book right now!
  • Here are way too many multithreading resources.
  • In case you somehow missed it, you must see Pixels.
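
The depth-buffer reprojection idea from the stereoscopic article is easy to sketch. Below is a minimal illustration, not MotorStorm's implementation; the function name, parameters, and their values are all assumptions for the example. Each pixel of one eye's image is shifted horizontally by a disparity computed from its depth, and disocclusions are simply left unfilled, which is the "mostly works" caveat.

```python
import numpy as np

def reproject_second_eye(color, depth, eye_sep=0.065, focal_px=700.0, conv=2.5):
    """Synthesize the second eye's image from one eye's color + depth buffer.

    color: (h, w, 3) float image; depth: (h, w) positive view-space depths.
    Disparity is zero at the convergence distance `conv` and grows as a
    pixel moves away from it. Holes (disocclusions) stay black.
    """
    h, w = depth.shape
    out = np.zeros_like(color)
    # Disparity in pixels: interocular separation times focal length,
    # scaled by the pixel's deviation from the convergence plane.
    disp = np.round(eye_sep * focal_px * (1.0 / conv - 1.0 / depth)).astype(int)
    for y in range(h):
        order = np.argsort(-depth[y])      # far first, so near pixels win
        xt = order + disp[y, order]        # shifted column for each source pixel
        ok = (xt >= 0) & (xt < w)
        out[y, xt[ok]] = color[y, order[ok]]
    return out
```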

Three ways to show off your game at SIGGRAPH 2010

I recently spent a weekend in downtown LA helping the SIGGRAPH 2010 committee put together the conference schedule.  Looking at the end result from a game developer’s perspective, this is going to be a great conference! More details will be published in early May, but you can see the emphasis on games already; of the current (partial) list of courses, over half have high relevance to games.

If you are a game developer, we need your participation to help make this the biggest game SIGGRAPH ever! A few months ago I posted about the February 18th deadline. That deadline is long gone, but several venues are still open. This is your chance to show off not just in front of your fellow game developers, but also before the leading film graphics professionals and researchers. The most relevant venues for game developers are:

  1. Live Real-Time Demos. The Electronic Theater, a nightly showcase of the best computer graphics clips of the year, has long been a SIGGRAPH highlight and the tentpole event of the Computer Animation Festival (which is an official qualifying festival for the Academy Awards). The Electronic Theater is shown on a giant screen in the largest convention center hall, before an audience packed with the world’s top computer graphics professionals and researchers. Last year SIGGRAPH introduced a new event before the Electronic Theater to showcase the best real-time graphics of the year. The submission deadline for Live Real-Time Demos is April 28th (a week and a half away), so time is short! Submitting your game to Live Real-Time Demos is as simple as uploading about 5 minutes of captured game footage (all submitted materials are held in strict confidentiality) and filling out a short online form. If you want your game submitted, please let your producer know about this ASAP; it will likely take some time to get approval.
  2. SIGGRAPH Dailies! (new for 2010) is where the artists get to shine; details here, including cool example presentations from Pixar. Other SIGGRAPH programs present graphics techniques; ‘SIGGRAPH Dailies!’ showcases the craft and artistry with which these techniques are applied. All excellent production art is welcome: characters, animations, level lighting, particle effects, etc. Each artist whose work is selected will get two minutes at SIGGRAPH to show a video clip of their work and tell an interesting story about creating it. The submission deadline for ‘SIGGRAPH Dailies!’ is May 6th. Submitting art to Dailies is just a matter of uploading 60-90 seconds of video and filling out an online form. If your studio is planning to submit more than one or two Dailies, you should use the batch submission process: designate a representative (like an art director or lead) to recruit presentations and get producer approval. Once the representative has a tentative list of submissions, they should contact SIGGRAPH (click this link and select ‘SIGGRAPH Dailies’ from the drop down menu) to give advance warning of the expected submission count. After all entries have video clips and backstory text files, the studio representative contacts SIGGRAPH again to coordinate a batch submission.
  3. Late-Breaking Talks. Although the initial talk deadline is past, there is one more chance to submit talks: the late-breaking deadline on May 6th. SIGGRAPH talks are 20-minute presentations, typically about practical, down-to-earth film or game production techniques. If you are a graphics programmer or technical artist, you must have developed several such techniques while working on your last game. If there is one you are especially proud of, consider submitting a Talk about it; this only requires a one-page abstract (if you happen to have video or additional documentation, you can add them as supplementary material). To show the detail expected in the abstract and the variety of possible talks, here are five abstracts from 2009: a game production technique, a game system/API, a game rendering technique, a film effects shot, and a film character.

Presenting at one of these forums is a professional opportunity well worth the small amount of work involved. Forward this post to other people on your team so they can get in on the fun!

More on God of War III Antialiasing

Since my recent post discussing the antialiasing method used in God of War III, Cedric Perthuis (a graphics programmer on the God of War III development team) was kind enough to email some additional details on how the technique was developed, which I will quote here:

“It was extremely expensive at first. The first not-so-naive SPU version, which was considered decent, took more than 120 ms, at which point we had decided to pass on the technique. It quickly went down to 80 and then 60 ms, when some kind of bottleneck was reached. Our worst scene remained at 60 ms for a very long time, but simpler scenes got cheaper and cheaper. Finally, after many breakthroughs and long hours from our technology teams, especially our technology team in Europe, we shipped with the cheapest scenes around 7 ms, the average GoW3 scene at 12 ms, and the most expensive scene at 20 ms.

In terms of quality, the latest version is also significantly better than the initial 120+ ms version. It started with quality way lower than your typical MSAA2x on more than half of the screen; it was equivalent on a good 25% and already nicer on the rest. At that point we were only after speed. There could be a long post mortem, but it wasn’t immediately obvious that it would save us a lot of RSX time, if any, so it would have been a no-go if it hadn’t been optimized on the SPU. When it was clear that we were getting a nice RSX boost (2 to 3 ms at first, 6 or 7 ms in the shipped version), we actually focused on evaluating whether it was a valid option visually. Despite any great performance gain, the team couldn’t compromise on quality; there was a pretty high bar to reach to even consider the option. As with the speed, the improvements on the quality front were dramatic. A few months before shipping, we finally reached a quality similar to MSAA2x on almost the entire screen, and a few weeks later, all the pixelated edges disappeared and the quality became significantly higher than MSAA2x or even MSAA4x on all our still shots, without any exception. In motion it became globally better too; a few minor issues remained which just can’t be solved without sub-pixel sampling.

There would be a lot to say about the integration of the technique in the engine and what we did to avoid adding any latency. Contrary to what I have read on a few forums, we are not firing the SPUs at the end of the frame and then waiting for the results the next frame. We couldn’t afford to add any significant latency. For this kind of game, gameplay is first, then quality, then framerate. We had the same issue with vsync; we had to come up with ways to use the existing latency. So instead of waiting for the results next frame, we are using the SPUs as parallel coprocessors of the RSX, and we use the time we would have spent on the RSX to start the next frame. With 3 or 4 ms of SPU latency at most, we are faster than the original 6 ms of RSX time we saved. In the end it’s probably a wash in terms of latency, due to some SPU scheduling considerations. We had to make sure we could kick off the jobs as soon as the RSX was done with the frame, and likewise, when the SPUs are done, we need the RSX to pick up where it left off and finish the frame. Integrating the technique without adding any latency was really a major task; it involved almost half of the team, and a lot of SPU optimization was required very late in the game.”

“For a long time we worked with reference code: algorithm changes were made in the reference code, and in parallel the optimized code was being optimized further. The optimized version never deviated from the reference code. I assume that doing any kind of cheap approximation would have prevented any changes to the algorithm. There came a point, though, where the team got such a good grip on the optimized version that the slow reference code wasn’t useful anymore and got removed. We tweaked some values, made a few major changes to the edge detection code, and did a lot of testing. I can’t stress it enough: every iteration was carefully checked and evaluated.”
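
To make the scheduling Cedric describes more concrete, here is a toy model of the frame pipelining. This is not Sony's code: `gpu_render`, `mlaa`, and `present` are stand-in stubs, and the timings are placeholders. The point is that the AA job for frame N runs on worker threads (standing in for SPUs) while the "GPU" renders frame N+1, so the pass adds little or no latency.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in stubs; in the real engine these would be RSX and SPU work.
def gpu_render(frame):            # render frame N on the "GPU"
    time.sleep(0.010)
    return f"color buffer {frame}"

def mlaa(color):                  # post-process AA on a "SPU"
    time.sleep(0.004)
    return color + " + AA"

def present(image):
    print("present:", image)

def run_pipelined(num_frames=3, spu_count=5):
    with ThreadPoolExecutor(max_workers=spu_count) as spus:
        pending = None                        # AA job for the previous frame
        for frame in range(num_frames):
            color = gpu_render(frame)         # GPU finishes frame N
            job = spus.submit(mlaa, color)    # kick AA as soon as GPU is done
            if pending is not None:
                present(pending.result())     # display frame N-1, already AA'd
            pending = job
        present(pending.result())

run_pipelined()
```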

So it looks like my first impression of such techniques – that they are too expensive to be feasible on current consoles – was not that far off the mark; I just hadn’t accounted for what a truly heroic SPU optimization effort could achieve. I wonder what other graphics techniques could be made fast enough for games, given a similar effort?

ACM SIGGRAPH 2010 Election

I received my ACM SIGGRAPH 2010 Election form today; it provides some login info and a PIN. SIGGRAPH members can vote for up to three people for the Director-At-Large positions.

I have to admit I can be pretty apathetic about these sorts of elections, ACM and IEEE alike. Sometimes I’ll get inspired and read the statements, sometimes I’ll skim, sometimes I’ll just vote for names I know, sometimes I’ll ignore the whole thing. This year’s ACM SIGGRAPH election is different for me, because of issues brought up by the Ke-Sen Huang situation. Specifically, the ACM’s copyright policy is lagging behind the needs of its members.

For this SIGGRAPH election I was happy to see that James O’Brien is on the slate. In the past James has worked to retain the rights to his own images, so he’s aware of the issues. In his election statement he writes:

The ACM Digital Library has been a great success, but the move to digital publishing has created conflicts between ACM and member interests. ACM and SIGGRAPH are fundamentally member service organizations and I believe that through thoughtful and progressive copyright policies we can better align organization and member needs. Successful copyright policy has to work across formats, and SIGGRAPH is unique among ACM SIGs in that member-generated content spans a diverse range encompassing text, images, and video. Other organizations have embraced Open Access initiatives, but SIGGRAPH and ACM should be leading the way in this area.

He has my vote. He’s also the only candidate who addresses this area of concern, and in a thoughtful and professional manner. If you’re a SIGGRAPH member, I hope you’ll take the time this year to read over the statements, figure out your login ID and user number, and then go vote.

Thought for the Day

From here, no idea who made it; I’d like to shake his hand. Found here, but the poster found it via StumbleUpon.

HDR in games

Having seen way too much bloom on the Atacama Desert map for the Russian pilots in Battlefield: Bad Company 2 (my current addiction, e.g. this and this), I can relate.

Edit: please do read the comments for why both pairs of images are wrong. I’m so used to HDR == tone mapping that I pretty much forgot the top pair is also technically incorrect (HDR is the range of the data, tone mapping can take such data and map it nicely to a displayable 8-bit image without banding) – thank you, Jaakko.
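
To make the distinction concrete, here is a minimal tone-mapping sketch using the classic Reinhard operator (just one of many operators; the exposure and gamma values are illustrative). The HDR input can have any range; tone mapping is what compresses it into a displayable 8-bit image.

```python
import numpy as np

def tonemap_reinhard(hdr, exposure=1.0, gamma=2.2):
    """Compress HDR radiance values (any range >= 0) to an 8-bit image.

    HDR refers to the range of the input data; tone mapping is the step
    that maps that range to the display. Reinhard operator: x / (1 + x).
    """
    x = hdr * exposure
    ldr = x / (1.0 + x)                # compress [0, inf) into [0, 1)
    ldr = np.power(ldr, 1.0 / gamma)   # gamma-encode for the display
    return np.round(ldr * 255.0).astype(np.uint8)
```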

FMX and Triangle Game Conference 2010

This post is about two conferences that might not be as familiar to readers of this blog as SIGGRAPH and GDC.

FMX is an annual conference run by the Baden-Württemberg Film Academy, held in Stuttgart, Germany at the beginning of May.  This year marks the 15th FMX, and the content appears quite promising.

There are a bunch of game talks, including talks about Split/Second, Heavy Rain, Fight Night 4, Alan Wake, Habbo Hotel, two talks on God of War III and one on Arkham Asylum (the last three are repeats of GDC talks).  However, most of the talks relate to film production, including presentations on The Princess and The Frog, Tangled, 2012, Alice in Wonderland, Ice Age 3, Iron Man 2, Clash of the Titans, Sherlock Holmes, District 9, Shutter Island, Planet 51, Avatar, A Christmas Carol, The Imaginarium of Dr. Parnassus, How to Train Your Dragon, and Where the Wild Things Are.  FMX 2010 also has master classes on the use of various DCC applications and middleware libraries, recruiting talks, presentations of selected SIGGRAPH 2009 papers, and more.  Attendance fees are quite reasonable (200 Euros, 90 for students), so this conference should be good value for readers in Europe who can travel cheaply to Germany.

The Triangle Game Conference is held in Raleigh, North Carolina.  Its name comes from the “research triangle” defined by Raleigh, Durham, and Chapel Hill.  This area is home to several prominent game companies, such as Epic Games, Red Storm Entertainment, and branches of Electronic Arts and Insomniac Games.  I first heard of this conference last year, when it hosted a very good talk by Crytek on deferred shading in CryEngine 3.  This year, the content looks interesting, if a bit mixed; the presentations by Epic and Insomniac seem to be the best ones.  Definitely worth attending if you’re in that area, but I wouldn’t travel far for it.

Morphological Antialiasing in God of War III

Eric wrote a post back in July about a paper called Morphological Antialiasing which had been presented at HPG 2009 (source code for the paper is available here).  The paper described a post-processing algorithm which could smooth out edges as if by magic.  Although the screenshots were impressive, the technique seemed too expensive to be practical for games on current hardware; there were also reportedly bad artifacts when applied to moving images.  For these reasons I didn’t pay too much attention to the technique.  It was reported (including by us) that the game The Saboteur was using this technique on the PS3, but this turned out to be a false alarm.
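
To give a flavor of how a purely image-based pass like this operates, here is a deliberately simplified sketch. Real MLAA classifies edge shapes (L, Z, and U patterns) and computes coverage-based blend weights; this toy version replaces all of that with discontinuity detection and a naive 50/50 blend, so it shows the structure of the idea rather than the actual algorithm.

```python
import numpy as np

def mlaa_flavor(img, threshold=0.1):
    """Toy stand-in for morphological AA on an (h, w, 3) float image in [0, 1].

    Detect luminance discontinuities between neighboring pixels and blend
    across them. Real MLAA replaces the 50/50 blend with per-pixel weights
    derived from the reconstructed edge shape.
    """
    lum = img.mean(axis=2)                      # crude luminance proxy
    out = img.copy()
    # Horizontal edges (row-to-row jumps): blend vertically across them.
    ys, xs = np.nonzero(np.abs(np.diff(lum, axis=0)) > threshold)
    out[ys, xs] = 0.5 * (img[ys, xs] + img[ys + 1, xs])
    # Vertical edges (column-to-column jumps): blend horizontally.
    ys, xs = np.nonzero(np.abs(np.diff(lum, axis=1)) > threshold)
    out[ys, xs] = 0.5 * (img[ys, xs] + img[ys, xs + 1])
    return out
```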

However, God of War III is actually using Morphological Antialiasing.  I’ve looked closely at the game and the technique they use appears not to exhibit significant motion artifacts; it definitely looks better than the MSAA2X technique it replaced (which was used in the E3 2009 demo).  According to the game’s art director, the technique used “goes beyond” the original paper; this may mean that they improved it in ways that reduce the motion artifacts.

My initial impression that the technique is too expensive did not take into account the impressive horsepower of the PS3’s Cell chip.  After optimization, the technique runs in 20 milliseconds on a single SPU; running it on 5 SPUs in parallel enables it to complete execution in 4 milliseconds.  Most importantly, turning off MSAA saved them 5 milliseconds of GPU time, which on the PS3 is a significant gain (the GPU is most often the bottleneck on PS3 games).