Category Archives: Miscellaneous

HPG and EGSR CFPs

HPG is a great little conference squarely aimed at interactive rendering techniques, including areas such as hardware and ray tracing. It will be June 25-27 in Paris (France, not Texas), colocated with another excellent gathering of researchers, the Eurographics Symposium on Rendering. See the HPG call for participation and EGSR CFP for more information.

Entirely gratuitous image follows, a voxelized and 3D-printed you-know-what (from here):

Game developers: SIGGRAPH deadline in two weeks!

Full Disclosure Update: in the original post, I forgot to mention my affiliation with the SIGGRAPH 2012 committee (I’m the Games Chair).

I’ve given several presentations at SIGGRAPH, and have spoken to many other game developers who have done the same. We have all found it to be an amazing experience: fun, career-enhancing, educational, and somehow simultaneously ego-boosting and humbling.

While there are many other conferences (GDC being uppermost in many game developers’ minds), SIGGRAPH holds a special place for anyone whose work involves computer-generated visuals. For almost 40 years, SIGGRAPH has united the many disparate communities working in computer graphics, including academic research, CAD, fine arts, architecture, medical and scientific visualization, games, CG animation, and VFX. Each year the conference attracts the top technical and creative minds of the field for a week-long pressure cooker of learning, discussing, presenting, arguing, networking, and brainstorming about everything to do with computer graphics.

SIGGRAPH 2012 will take place in Los Angeles this August. There is a great opportunity for game developers to present at this year’s conference, but time is short since one of the most important deadlines is less than two weeks away.

Presenting at SIGGRAPH is a lot easier than most people think. While it is true that the quality bar is high, there are several programs seeking exactly the kind of practical, real-world advances and innovations that happen all the time in game development. Of these, the SIGGRAPH talk program is the most friendly to game developers; proposals for these 20-minute talks are easy to prepare, and the topics covered range from rendering and shading techniques through tool and workflow improvements to specific look development and production case studies. As a general rule of thumb, if it’s high-quality work and the kind of thing a graphics programmer or technical artist would do, chances are it would make a good SIGGRAPH talk proposal.

The general submission deadline for talks is in just under two weeks, on February 21. That isn’t a lot of time, but fortunately talk submissions only require preparing a one-page PDF abstract and filling out some web forms (additional materials can help if you have them – more details can be found on the talk submission page). Still, getting approval from management typically takes time, so you shouldn’t delay if you are interested. To get an idea of the level of detail expected in the abstract, and of the variety of possible talks, here are some film and game Talk abstracts from recent years: Making Faces – Eve Online’s New Portrait Rendering, MotorStorm Apocalypse: Creating Explosive and Dynamic Urban Off Road Racing, It’s Good to Be Alpha, Kami Geometry Instancer: putting the “smurfy” in Smurf Village, Practical Occlusion Culling in Killzone 3, and High Quality Previewing of Shading and Lighting for Killzone3.

If you are reading this, please consider submitting the coolest thing you did last year as a Talk; the small time investment will repay itself many times over.

Good luck with your submissions!

My response to the OSTP research access RFI

A few days ago, I urged (among other actions) submitting responses to the RFIs from the White House Office of Science and Technology Policy regarding access to research. I myself responded to the RFI regarding peer-reviewed scholarly publications (I didn’t feel qualified to respond to the other one regarding access to research data sets, since I don’t use those as much in my work). The reply I sent is after the break – please note that this is my (Naty’s) personal opinion, and may not reflect Eric and Tomas’ positions.

Predicting the Past

Inspired by Bing (a person, not a search engine) and by the acrobatics I saw tonight in Shanghai, time for a blog post.

So what’s up with graphics APIs? I’ve been working on a project for a fast 3D graphics system for Autodesk for about 4 years now; the base level (which hides the various flavors of DirectX and OpenGL) is used by Maya, Max, AutoCAD, Inventor, and other products. There are various higher-level optimizations we’ve added (and why Microsoft’s fxc effect compiler suddenly got a lot slower is a mystery), with some particularly nice work by one person here in the area of multithreading. Beyond these techniques, minimizing the raw number of calls to the API is the primary way to increase performance. Our rule of thumb is that you get about 1000-1500 calls a frame (CAD isn’t held to a 60 FPS rule, but we still need to be interactive). The usual tricks are to sort by state, and to shove as much geometry and processing as possible into a single draw call and so avoid the small batch problem. So, how silly is that? The best way to make your GPU run fast is to call it as little as possible? That’s an API with a problem.

This is old news; Tim Sweeney railed against API limitations three years ago (sadly, the article’s gone poof). I wrote about his ideas here and added my own two cents. So where are we since then? DirectX 11 has been out for a while, adding three more stages to the pipeline for efficient tessellation of higher-order surfaces. The pipeline’s feeling a bit unwieldy at this point, with a lot of (admittedly optional) stages. There are still some serious headaches for developers, like having to somehow manage to put lighting and material shading in the same pixel shader (one good argument for deferred lighting and similar techniques). Forget about optimization; the arcane API knowledge needed to get even a simple rendering on the screen is considerable.

I haven’t heard anything of a DirectX 12 in the works (except maybe this breathless posting, which I feel obligated to link to since I’m in China this month), nor can I imagine what they’d add of any significance. I expect there will be some minor Xbox 720 (or whatever it will be called)-related tweaks specific to that architecture, if and when it exists. With the various CPU+GPU-on-a-chip products coming out – AMD’s Fusion family, NVIDIA’s Tegra 2, and similar from other companies (I think I counted 5, all told) – some access costs between the two processors become much cheaper and so change the rules. However, the API still looks to be the bottleneck.

Marketwise, and this is based entirely upon my work in scapulimancy, I see things shifting to mobile. If that isn’t at least the 247th time you’ve heard that, you haven’t been wasting enough time on the internet. But, it has some implications: first, DirectX 12 becomes mostly irrelevant. The GPU pipeline is creaky and overburdened enough right now, PC games are an important niche but not the focus, and mobile (specifically, iPad and other tablets) is fine with the functionality defined thus far by existing APIs. OpenGL ES will continue to evolve, but I doubt we’ll see for a good long while any algorithmically (vs. data-slinging) new elements added to the API that the current OpenGL 4.x and DX11 APIs don’t offer.

Basically, API development feels stalled to me, and that’s how it should be: mobile’s more important, PCs are a (large but slowly evolving) niche, and the current API system feels warped from a programming standpoint, with peculiar constructs like feeding text strings to the API to specify GPU shader effects, and strange contortions performed to avoid calling the API in order to coax the GPU to run fast.

Is there a way out? I felt a glimmer while attending HPG 2011 this year. The paper “High-Performance Software Rasterization on GPUs” by Samuli Laine and Tero Karras was one of my (and many attendees’) favorites, talking about how to efficiently implement a basic rasterizer using CUDA (code’s open sourced). It’s not as fast as dedicated hardware (no surprise there), but it’s at least in the same ball-park, with hardware being anywhere from 1.5x to 8.1x faster for their test cases, median being 3.6x. What I find exciting is the idea that you could actually program the pipeline, vs. it being locked away. They discuss ideas for optimization such as loosening the “first in, first out” rule for triangles currently enforced by all APIs. With its “yet another language” dependency, I can’t say I hope GPGPU is the future (and certainly CUDA isn’t, since it cuts out non-NVIDIA hardware vendors, but from all reports it’s currently the best way to experiment with GPGPU). Still, it’s nice to see that the fixed-function bits of the GPU, while important, are not an insurmountable limit in considering more flexible and general interactive rasterization programming models. Or, ray tracing – always have to stick that in there.

So it’s “forward to the past”, looking at traditional algorithms like rasterization and ray tracing and how to gain efficiency (both in raw speed and in development time) on various modern architectures. That’s ultimately what it’s about for me, at least: spending lots of time fighting the API, gluing together strings to make shaders, and all the other craziness is a distraction and a time-waster. That said, there’s a cost/benefit calculation implicit in all of this. For example, using C# or Java is way more productive than C++, I’d say about 2x, mostly because you’re not tracking down memory problems like leaks and accesses of uninitialized or non-existent values. But there’s so much legacy C++ code around that it’s still the language of graphics, as previously discussed here. Which means I expect none of the API weirdness to change for a solid decade, at the minimum. Please do go ahead and prove me wrong – I’d be thrilled!

Oh, and acrobatics? Hover your cursor over the image. BTW, the ERA show in Shanghai is wonderful, unlike current APIs.

CFP: IEEE CG&A special issue on material appearance

Passing on the word:

IEEE CGA special issue

Modeling and Rendering Material Appearance

Final submissions due: 1 July 2011
Publication date: March/April 2012

Modeling and rendering the appearance of materials is important in many computer graphics applications. Understanding material appearance draws on methods from diverse fields, including the physics of light interaction with materials (including models of BRDFs, bidirectional reflectance distribution functions, and BSSRDFs, bidirectional scattering-surface reflectance distribution functions), human perception of materials, and efficient data structures and algorithms.

This special issue will cover all aspects of material appearance in graphics, ranging from theory to application. Possible topics include (but are not limited to)

  • first-principle models for BRDF and BSSRDF;
  • procedural models of materials;
  • modeling of mesoscale material features including bumps, ridges, and so on;
  • measurement of material appearance including BRDF, BSSRDF, and BTF (bidirectional texture functions);
  • numerical simulation of material appearance;
  • new instruments for measuring appearance;
  • material-appearance models from photo collections;
  • new data structures for representing material appearance;
  • efficient rendering of BTF and BSSRDF;
  • new interfaces for designing material appearance;
  • methods for printing hard copies of material appearance;
  • psychophysics of material appearance with application to computer modeling;
  • material-appearance applications in industry such as the design of paints and coatings; and
  • nonphotorealistic rendering of material appearance.

Questions?

Contact Holly Rushmeier (holly@acm.org) or Pierre Poulin (poulin@iro.umontreal.ca)

Submission Guidelines

Articles should be no more than 8,000 words, with each figure counting as 200 words. Cite only the 12 most relevant references, and consider providing technical background in sidebars for nonexpert readers. Color images are preferable and should be limited to 10. See the CG&A style and length guidelines at www.computer.org/cga/author.html.

Please submit your article using the online manuscript submission service at https://mc.manuscriptcentral.com/cs-ieee. When uploading your article, select the appropriate special-issue title under the category “Manuscript Type.” Also include complete contact information for all authors. If you have any questions about submitting your article, contact the peer review coordinator at cga-ma@computer.org.

SIGGRAPH Asia 2011 Call for Submissions

The call for submissions for SIGGRAPH Asia 2011 has recently gone live. This fourth iteration of the SIGGRAPH Asia conference will take place in Hong Kong between December 12th and 15th. In previous years, the sketches and course programs have been of similar quality (if reduced quantity) compared to their North American counterparts. The SIGGRAPH Asia Technical Papers have been really good, better in my opinion than the relatively abstruse SIGGRAPH Technical Papers. If you want to see for yourself, the incomparable Ke-Sen Huang has your back, with paper link pages for SIGGRAPH Asia 2008, 2009 and 2010. Ke-Sen deserves an outstanding service award from ACM, instead of the more negative attention he has received from them.

Here is the 2011 CFS text (a slightly more detailed version can be found here):

SIGGRAPH Asia 2011 sees the return of the Art Gallery and Emerging Technologies programs. Also calling for submissions are: Computer Animation Festival, Courses, Technical Papers, Technical Sketches & Posters.

Submit your research, theories, and innovations, and you might be the next to have the valuable opportunity to present your work to audience-packed halls at the SIGGRAPH Asia 2011 conference in Hong Kong this December.

For more information on SIGGRAPH Asia 2011, please visit www.siggraph.org/asia2011/.