Flocking (running a large number of independent agents with simple proximity-based rules and letting interesting behavior emerge) has been a popular graphics technique since the 1987 SIGGRAPH paper by Craig Reynolds. The idea is, of course, inspired by examples from the animal kingdom such as bird flocks and fish schools. Today I saw an internet clip of 300,000 (!) starlings flocking. With such a large number of entities, the flock looks like some kind of bizarre physical fluid or smoke simulation.
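The rules themselves are remarkably simple. Here is a minimal C++ sketch of the three classic boids rules (separation, alignment, cohesion); the constants and helper names are my own illustrative choices, not Reynolds’:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
Vec2 operator-(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
Vec2 operator*(Vec2 a, float s) { return {a.x * s, a.y * s}; }
float length(Vec2 a) { return std::sqrt(a.x * a.x + a.y * a.y); }

struct Boid { Vec2 pos, vel; };

// One simulation step: each boid reacts only to neighbors within 'radius'.
void step(std::vector<Boid>& boids, float radius, float dt) {
    std::vector<Vec2> accel(boids.size());  // value-initialized to zero
    for (std::size_t i = 0; i < boids.size(); ++i) {
        Vec2 center{0, 0}, avgVel{0, 0}, avoid{0, 0};
        int n = 0;
        for (std::size_t j = 0; j < boids.size(); ++j) {
            if (i == j) continue;
            Vec2 offset = boids[j].pos - boids[i].pos;
            float d = length(offset);
            if (d > radius) continue;
            center = center + boids[j].pos;   // cohesion: steer toward neighbors
            avgVel = avgVel + boids[j].vel;   // alignment: match neighbors' heading
            if (d < radius * 0.3f)            // separation: avoid crowding
                avoid = avoid - offset * (1.0f / (d + 1e-4f));
            ++n;
        }
        if (n > 0) {
            Vec2 cohesion  = center * (1.0f / n) - boids[i].pos;
            Vec2 alignment = avgVel * (1.0f / n) - boids[i].vel;
            accel[i] = cohesion * 0.01f + alignment * 0.05f + avoid * 0.1f;
        }
    }
    for (std::size_t i = 0; i < boids.size(); ++i) {
        boids[i].vel = boids[i].vel + accel[i] * dt;
        boids[i].pos = boids[i].pos + boids[i].vel * dt;
    }
}
```

Note that this naive version is O(n²) in the number of boids; anything approaching 300,000 agents needs a spatial data structure (a uniform grid, say) to find neighbors efficiently.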
Radeon HD 5800 Demos
AMD has posted executables and videos for two new demos for the Radeon HD 5800 series. Both demos require Windows 7 (I guess that means Vista support for DirectX 11 isn’t quite here yet).
One of the demos shows order-independent transparency; from the description, it sounds like an A-buffer-like approach, which is interesting. The other shows a high-quality depth-of-field effect.
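AMD hasn’t published implementation details for the demo as far as I know, but the A-buffer idea itself is simple: instead of blending transparent fragments in whatever order they happen to be rasterized, store all of a pixel’s fragments, then sort by depth and composite in a resolve pass. Here is a CPU-side C++ sketch of that resolve step (on DirectX 11 hardware the per-pixel lists would live in GPU buffers; this is just to show the logic):

```cpp
#include <algorithm>
#include <vector>

struct Fragment {
    float depth;    // view-space depth
    float rgba[4];  // straight (non-premultiplied) alpha
};

// One list of transparent fragments per pixel, filled during rasterization
// in arbitrary order (this is what makes the method order-independent).
using PixelList = std::vector<Fragment>;

// Resolve pass: sort the pixel's fragments front-to-back and composite
// with the "over" operator, then add whatever shows through from behind.
void resolve(PixelList& frags, const float background[3], float out[3]) {
    std::sort(frags.begin(), frags.end(),
              [](const Fragment& a, const Fragment& b) { return a.depth < b.depth; });
    float color[3] = {0, 0, 0};
    float transmittance = 1.0f;  // how much of what's behind still shows through
    for (const Fragment& f : frags) {
        float alpha = f.rgba[3];
        for (int c = 0; c < 3; ++c)
            color[c] += transmittance * alpha * f.rgba[c];
        transmittance *= 1.0f - alpha;
    }
    for (int c = 0; c < 3; ++c)
        out[c] = color[c] + transmittance * background[c];
}
```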
NVIDIA OptiX Clarification
Regarding this somewhat alarmist post, NVIDIA were kind enough to contact me and provide some clarification.
After Fermi ships, NVIDIA plan to extend OptiX support to at least GT200 GeForce cards, and possibly down to G80 as well. So eventually you will indeed be able to run OptiX on pretty much all consumer NVIDIA cards.
NVIDIA OptiX ray-tracing API available – kind of
We’ve written about the NVIDIA OptiX ray-tracing API (which used to be called NVIRT) once or twice before. Well, today it is finally available – for free. While it’s very nice of NVIDIA to make this available, there are a few caveats.
We already knew OptiX would only work on NVIDIA hardware (duh), but the system requirements reveal another unwelcome fact: it does not even run on GeForce cards, only Tesla and Quadro (which are significantly more expensive than GeForce despite being based on exactly the same chips). They say GeForce will be supported on their new Fermi architecture – I call shenanigans.
NVIDIA Jumps on the Cloud Rendering Bandwagon
In January, AMD and OToy announced Fusion Render Cloud, a centralized rendering server system which would perform rendering tasks for film and even games, compressing the resulting video and sending it over the internet. In March, OnLive announced a similar system, but for the entire game, not just rendering. Now NVIDIA has announced another cloud rendering system, called RealityServer, running on racks of Tesla GPUs (presumably using Fermi in future iterations). This utilizes the iray ray tracing system developed by mental images, who also make mental ray (mental images has been owned by NVIDIA since 2007).
The compression is going to be key, since it has to be incredibly fast, extremely low bit rate, and very high quality for this to work well. I’m a bit skeptical of cloud rendering at the moment, but maybe all these companies (and investors) know something I don’t…
Award-Winning Architectural Renderings
I don’t know much about architectural renderings; I guess I always thought of them as utilitarian. This page of award-winners proved me very wrong – there is true artistry on display here. The bottom of the page also has a real-time category; of the five nominees in that category, three (including the winner – Shockwave required) are available to view online.
SIGGRAPH 2009 Course Pages
The organizers of SIGGRAPH Courses often put up web pages dedicated to the course. These typically have the latest version of the course notes and the slides. I’ve found a bunch of SIGGRAPH 2009 course pages, and thought it would be convenient to have them all in one place:
- Advances in Real-Time Rendering in 3D Graphics and Games
- Beyond Programmable Shading
- Real-Time Global Illumination for Dynamic Scenes
- Efficient Substitutes for Subdivision Surfaces
- Scattering
- Advanced Illumination Techniques for GPU Volume Raycasting
- Next Billion Cameras
- Build Your Own 3D Scanner: Optical Triangulation for Beginners
- Computation and Cultural Heritage: Fundamentals and Applications
- Interactive Sound Rendering
SIGGRAPH courses are a consistently good source of information – if any of these courses are about a topic which interests you, you might want to take the time to read the course notes and slides.
Slides for “Advances in Real-Time Rendering in 3D Graphics and Games”
The slides (and some videos) for Natasha Tatarchuk’s excellent SIGGRAPH 2009 course are finally up. The course notes are not ready yet, but Natasha assures me they will be available soon.
NVIDIA Announces Fermi Architecture
Today at the GPU Technology Conference (the successor to last year’s NVISION), NVIDIA announced Fermi, their new GPU architecture (exactly one week after AMD shipped the first GPU from their new Radeon HD 5800 architecture). NVIDIA have published a Fermi white paper, and writeups are popping up on the web. Of these, the ones from Real World Technologies and AnandTech seem most informative.
With this announcement, NVIDIA is focusing firmly on the GPGPU market, rather than on graphics. No details of the graphics-specific parts of the chip (such as triangle rasterizers and texture units) were even mentioned. The chip looks like it will be significantly more expensive to manufacture than AMD’s chip, and at least some of that extra die area has been devoted to things which will not benefit most graphics applications (such as improved double-precision floating-point support and more general programming models). With full support for indirect branches, a unified address space, and fine-grained exception handling, Fermi is as general purpose as it gets. NVIDIA is even adding C++ support to CUDA (the first iterations of OpenCL and DirectCompute will likely not enable the most general programming models).
Compared to their previous architecture, NVIDIA has shuffled around the allocation of ALUs, thread scheduling units, and other resources. To make sense of the soup of marketing terms such as “warps”, “cores”, and “SMs”, I again recommend Kayvon Fatahalian’s SIGGRAPH 2009 presentation on GPU architecture.
Full List of SIGGRAPH Asia 2009 Papers
The full list of papers accepted to SIGGRAPH Asia 2009 (with abstracts) is finally up on the conference website. As usual, Ke-Sen Huang is ahead of the curve; his SIGGRAPH Asia 2009 papers page already has preprint links for 54 of the 70 accepted papers.
Three of the papers I mentioned in my first SIGGRAPH Asia 2009 post have since made preprints available: RenderAnts: Interactive Reyes Rendering on GPUs, Debugging GPU Stream Programs Through Automatic Dataflow Recording and Visualization, and Real-Time Parallel Hashing on the GPU.
The Real-Time Rendering paper session is, of course, the most likely to contain papers of interest to readers of this blog. The most interesting paper, Micro-Rendering for Scalable, Parallel Final Gathering, was already discussed in a previous blog post. Since then, I’ve noticed many similarities between the technique described in this paper and the point-based color bleeding technique Pixar implemented in RenderMan. This approach to GPU-accelerated global illumination looks very promising.

The other three papers in the session are also of interest. Depth-of-Field Rendering with Multiview Synthesis describes a depth-of-field method which occupies an interesting middle ground between the very high quality (and expensive) multiview methods used in film production and the much cheaper (but low-quality) post-processing methods commonly used in games; after some scaling down and optimizing, it may be appropriate for some real-time applications. Similarly to reprojection papers discussed previously, the Amortized Supersampling paper reprojects samples from previous frames to increase quality (I’ve sketched the basic reprojection step below). Here the goal is anti-aliasing procedural shaders, but the technique could be applied to other types of expensive shaders.

The remaining paper from the Real-Time Rendering session, All-Frequency Rendering With Dynamic, Spatially Varying Reflectance, does not yet have a preprint. The short abstract from the conference page does sound intriguing: “A technique for real-time rendering of dynamic, spatially varying BRDFs with all-frequency shadows from environmental and point lights”. Hopefully a preprint will become available soon.
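For readers unfamiliar with reprojection, here is a minimal C++ sketch of the core history-fetch-and-blend step. The buffer layout, blend weight, and names are my own illustrative choices, not taken from the Amortized Supersampling paper, which is considerably more sophisticated:

```cpp
#include <vector>

struct Color { float r = 0, g = 0, b = 0; };

struct FrameBuffers {
    int width = 0, height = 0;
    std::vector<Color> color;    // current frame's new samples
    std::vector<Color> history;  // result accumulated over previous frames
    std::vector<float> motionX;  // per-pixel motion vectors, in pixels
    std::vector<float> motionY;
};

// For each pixel, look up where its surface was last frame; if the
// reprojected position is on screen, blend the old accumulated value
// with this frame's sample, amortizing supersampling over time.
// (Nearest-neighbor history fetch for brevity; real implementations
// filter the fetch and reject stale or disoccluded samples.)
void reprojectAndBlend(FrameBuffers& fb, float historyWeight /* e.g. 0.9 */) {
    std::vector<Color> out(fb.color.size());
    for (int y = 0; y < fb.height; ++y) {
        for (int x = 0; x < fb.width; ++x) {
            int i = y * fb.width + x;
            // Where was this pixel's surface in the previous frame?
            int px = x - static_cast<int>(fb.motionX[i]);
            int py = y - static_cast<int>(fb.motionY[i]);
            Color cur = fb.color[i];
            if (px >= 0 && px < fb.width && py >= 0 && py < fb.height) {
                Color old = fb.history[py * fb.width + px];
                float w = historyWeight;
                out[i] = { old.r * w + cur.r * (1 - w),
                           old.g * w + cur.g * (1 - w),
                           old.b * w + cur.b * (1 - w) };
            } else {
                out[i] = cur;  // off-screen last frame: fall back to current
            }
        }
    }
    fb.history = out;
}
```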
I typically don’t pay very close attention to offline rendering papers, but one in particular looks interesting: Adaptive Wavelet Rendering takes a novel approach to Monte Carlo ray tracing by rendering into an image-space wavelet basis, instead of rendering into image pixels or samples. This enables them to significantly reduce the number of samples required in certain cases.
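I haven’t studied the paper in depth yet, but the appeal of a wavelet basis is easy to see: in smooth image regions, most fine-scale wavelet coefficients are near zero, so effort can be concentrated where detail actually exists. A one-dimensional Haar transform in C++, just to illustrate the basis (this is not the paper’s algorithm):

```cpp
#include <vector>

// In-place 1D Haar transform of a power-of-two-length signal.
// Afterwards, data[0] is the overall average and the remaining entries
// are detail coefficients at successively finer scales; in smooth
// regions the fine-scale details are near zero and can be ignored.
void haarTransform(std::vector<float>& data) {
    std::vector<float> tmp(data.size());
    for (std::size_t half = data.size() / 2; half >= 1; half /= 2) {
        for (std::size_t i = 0; i < half; ++i) {
            tmp[i]        = 0.5f * (data[2 * i] + data[2 * i + 1]);  // average
            tmp[half + i] = 0.5f * (data[2 * i] - data[2 * i + 1]);  // detail
        }
        for (std::size_t i = 0; i < 2 * half; ++i)
            data[i] = tmp[i];
    }
}
```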
The paper Continuity Mapping for Multi-Chart Textures attempts to solve a problem of interest (fixing filtering discontinuities at UV chart seams), but the solution is overly complex for most applications. While the authors claim to address MIP-mapping, their solution does not work well with trilinear filtering, since their data structures need to be accessed separately for each MIP-map level and the results blended. They also do not address issues relating to derivative computation. Since their technique requires lots of divergent branching, it is likely to run at low efficiency. This technique might make sense for some specialized applications, but I don’t expect to see it being used for game texture filtering.
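To see why trilinear filtering is the sticking point, recall what the texture unit normally does for free: it blends bilinear lookups from the two MIP levels bracketing the ideal level of detail. A C++ sketch (bilinearFetch is a stub standing in for a single hardware-filtered lookup; the names are mine):

```cpp
#include <cmath>

struct Color { float r, g, b, a; };

// Stub standing in for a hardware-filtered bilinear lookup at one MIP level.
Color bilinearFetch(int mipLevel, float u, float v) {
    (void)mipLevel; (void)u; (void)v;
    return {0.5f, 0.5f, 0.5f, 1.0f};
}

Color lerp(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
}

// Trilinear filtering: blend bilinear lookups from the two MIP levels
// bracketing the ideal level of detail (clamping to the coarsest level
// omitted for brevity). A seam-fixing scheme that keeps separate data
// structures per level must run its (branchy) lookup twice and blend
// the results itself, instead of getting this from the texture unit.
Color trilinearFetch(float u, float v, float lod) {
    int level0 = static_cast<int>(std::floor(lod));
    float frac = lod - static_cast<float>(level0);
    Color c0 = bilinearFetch(level0, u, v);
    Color c1 = bilinearFetch(level0 + 1, u, v);
    return lerp(c0, c1, frac);
}
```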
There are also some interesting papers on non-rendering topics such as animation and model acquisition. All in all, a very strong papers program this year.