Tracking the latest developments in interactive rendering techniques
Seven Things for 10/13/2011
Fairly new book: Practical Rendering and Computation with Direct3D 11, by Jason Zink, Matt Pettineo, and Jack Hoxley, A K Peters/CRC Press, July 2011 (more info). It’s aimed at people who already know DirectX 10 and want to learn just the new material. I found the first half fairly abstract; the second half was more useful, as it gives in-depth explanations of practical examples showing how the new functionality can be used.
Two nice little Moore’s Law-related articles appeared recently in The Economist. This one is about how the law looks to have legs for a good number of years to come, and presents a graph showing how various breakthroughs have kept the law going over the past decades; Moore himself thought the law might hold for ten years. This one discusses how computational energy efficiency is doubling every 18 months, which is great news for mobile devices.
I used to use MWSnap for screen captures, but it doesn’t work well with two monitors and it hangs at times. I finally found a replacement that does all the things I want, with a mostly-good UI: FastStone Capture. The downside is that it actually costs money ($19.95), but I’m happy to have purchased it.
Ray tracing vs. rasterization, part XIV: Gavan Woolery thinks RT is the future, DEADC0DE argues both will always have a place, and gives a deeper analysis of the strengths and weaknesses of each (though the PITA that transparency causes rasterization is not called out) – I mostly agree with his stance. Both posts have lots of followup comments.
This shows exactly how far behind we are in blogging about SIGGRAPH: find the Beyond Programmable Shading course notes here – a mere two months overdue.
Tantalizing SIGGRAPH Talk demo: KinectFusion from Microsoft Research and many others. Watch around 3:11 on for the great reconstruction, and the last minute for fun stuff. Newer demo here.
OnLive – you should check it out, it’ll take ten minutes. Sign up for a free account and visit the Arena, if nothing else: it’s like being in a sci-fi movie, with a bunch of games being played by others before your eyes that you can scroll through and click on to watch the player. I admit I was originally skeptical of the whole cloud-gaming idea, but having tried it, it’s surprisingly fast and the video quality is not bad. It’s not good enough to satisfy hardcore FPS players – I’ve seen my teenage boys pick out targets that cover just a couple of pixels, which would be invisible with OnLive – but otherwise it’s quite usable. The “no download, no GPU upgrade, just play immediately” aspect is brilliant and lends itself extremely well to game trials.
Thanks, Eric, for including my little article in your list! Regarding transparency and rasterization, bear in mind that I wasn’t writing about current GPUs versus GPU or CPU rasterization; the gap in their usage is so wide that I don’t consider that such an interesting topic today. I was trying to write about rasterization in general, and in that case transparency (without refraction) is pretty trivial: we have many ways of resolving it (BSP trees, tile rendering, per-pixel solutions, and so on; you know them better than I do). Refraction would be really challenging. We can of course cheat with cubemaps, but at a given point you have to generate smaller and smaller frusta (i.e., more cubes…) to increase accuracy, and then you end up losing coherency and wasting setup time in a rasterizer. And that is where the rasterizer-versus-raytracer compromise manifests itself again; it’s just another example of what I was trying to explain.
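The simplest of the transparency solutions the comment alludes to is just sorting and blending: draw transparent fragments back-to-front and composite each over the accumulated result. A minimal toy sketch of that idea (the scene data and function names here are hypothetical, not from any of the linked posts):

```python
# Toy sketch: non-refractive transparency in a rasterizer, resolved by
# sorting fragments back-to-front and applying the standard "over" blend.
# All names and data below are made up for illustration.

def over(dst, src_color, src_alpha):
    """Composite src over dst: result = a*src + (1-a)*dst, per channel."""
    return tuple(src_alpha * s + (1.0 - src_alpha) * d
                 for s, d in zip(src_color, dst))

def composite_transparent(fragments, background):
    """fragments: list of (depth, rgb, alpha) covering one pixel.
    Larger depth = farther away. Sort far-to-near, then blend each
    layer over the result accumulated so far."""
    result = background
    for depth, rgb, alpha in sorted(fragments, key=lambda f: f[0], reverse=True):
        result = over(result, rgb, alpha)
    return result

# One pixel covered by two half-transparent layers over a black background:
frags = [(2.0, (1.0, 0.0, 0.0), 0.5),   # far red layer
         (1.0, (0.0, 0.0, 1.0), 0.5)]   # near blue layer
print(composite_transparent(frags, (0.0, 0.0, 0.0)))  # (0.25, 0.0, 0.5)
```

This is exactly the step that breaks down when layers interpenetrate or when per-object sorting isn’t well defined, which is why the per-pixel (order-independent) techniques the comment mentions exist.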