Really, another Minecraft article?

Here at RTR HQ we like to consider ourselves trailing edge, covering all the stories that have already been slashdotted and boingboinged, not to mention Penny Arcaded. My last post included the simulated 6502 project. The madness/brilliance of this ALU simulator boggles my mind. Yes, Minecraft is awesome, and for the low low price of $13.30 it’s had me in its terrible grasp for the past week, e.g. this.

I wanted to run through a few graphical bits about it. First, the voxel display engine is surprisingly fast for something that runs in the browser. Minecraft uses the Lightweight Java Game Library to drive OpenGL. Max McGuire figures that the program tracks the visible faces, i.e. all those between air and non-air, and then brute-force displays all these faces (using backface culling) within a given distance. The file format keeps track of 16x16x128 (high) chunks, so only the nearby chunks need to be displayed. I don’t know if the program’s using frustum culling on the chunks (I’d hope so!). It looks like no occlusion culling is done currently. The lighting model is interesting and nicely done; we haven’t quite figured it out. The game’s author, “Notch” (Markus Persson), notes that it was one of the trickier elements to make work efficiently.
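To make the visible-face idea concrete, here’s a minimal Python sketch of that bookkeeping. The names (`visible_faces`, the sparse-dict world) are mine for illustration, not Minecraft’s actual data structures; the point is just that a face gets drawn only where a solid block borders air:

```python
# Hypothetical sketch: a face is displayed only where a solid voxel
# borders an air voxel. The world is a sparse dict; missing keys are air.

AIR = 0

def visible_faces(voxels):
    """voxels: dict mapping (x, y, z) -> block id; missing keys are air.
    Returns a list of (position, direction) pairs, one per face that
    sits between a solid block and air."""
    directions = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    faces = []
    for pos, block in voxels.items():
        if block == AIR:
            continue
        x, y, z = pos
        for dx, dy, dz in directions:
            if voxels.get((x + dx, y + dy, z + dz), AIR) == AIR:
                faces.append((pos, (dx, dy, dz)))
    return faces

# A single solid cube surrounded by air exposes all 6 faces:
print(len(visible_faces({(0, 0, 0): 1})))  # 6
# Two cubes sharing a face hide their 2 touching faces:
print(len(visible_faces({(0, 0, 0): 1, (1, 0, 0): 1})))  # 10
```

The nice property is that interior faces between two solid blocks never make it to the GPU at all, which is most of them in a world of stacked cubes.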

Me, I’ve been looking at voxelization programs out there, to see if there’s a good one for turning models into voxel building plans (it’s a sickness, seriously). Patrick Min’s binvox (paired with his viewvox viewer) looks promising, since Patrick’s a good programmer (e.g., his CalcuDoku app), the program’s been around for 6 years, and it’s open-source. Binvox uses the GPU to generate the voxel views, so it’s quite fast. It supports both parity counting and “carving”, and can also remove fully interior voxels after processing. Parity counting is for “watertight” models (closed and manifold, i.e. the polygon mesh correctly defines a solid object without gaps or self-intersections, etc.). Carving takes 6 views and records the closest occupied voxel from each direction. It won’t capture holes or crevices you can’t see from the 6 directions, but is otherwise good for polygonal models that are just surfaces, i.e., that don’t properly represent solids. See his page for references to all the techniques he uses. I found a bug in Patrick’s OBJ reader yesterday and he fixed it overnight (fast service!), so I’m game to give it another go tonight.
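To show what carving means in practice, here’s a toy Python version. This is purely illustrative (it’s not Patrick’s code, and binvox works from rendered views rather than rays): march straight rays inward from all 6 sides, emptying voxels until the first surface hit; whatever no ray reaches stays solid.

```python
# Toy "carving" voxelizer: from each of the 6 axis directions, walk rays
# inward and mark voxels empty until the first occupied surface voxel;
# anything never reached is kept as solid interior.

def carve(surface, n):
    """surface: set of (x, y, z) voxels on the model's surface, inside an
    n*n*n grid. Returns the set of voxels considered solid after carving."""
    carved = set()
    for axis in range(3):                 # x, y, z
        for u in range(n):
            for v in range(n):
                for step in (1, -1):      # from both ends of the axis
                    order = range(n) if step == 1 else range(n - 1, -1, -1)
                    for w in order:
                        p = [u, v]
                        p.insert(axis, w)
                        p = tuple(p)
                        if p in surface:
                            break         # hit the surface; stop carving
                        carved.add(p)
    all_voxels = {(x, y, z) for x in range(n)
                  for y in range(n) for z in range(n)}
    return all_voxels - carved

# A 3x3x3 shell: the hidden center voxel survives as solid interior.
shell = {(x, y, z) for x in range(3) for y in range(3)
         for z in range(3)} - {(1, 1, 1)}
print((1, 1, 1) in carve(shell, 3))  # True
```

You can see the limitation the post mentions right in the loop structure: a concavity that no axis-aligned ray can enter never gets carved, so it reads as solid.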

Peripherally-Related Links

Here are a bunch of links to things that are graphical, but definitely not about hard-core interactive rendering. Basically, it’s stuff I found of interest that has a visual and technical component and that I’m compelled by the laws of the internet to pass on. It’s a pile of candy, so I recommend reading just a bit of this post each day. Which of course you won’t do, but at least your teeth won’t rot and you won’t gain 3 pounds.

Quick Gaussian Filtering

There are two speed tricks with Gaussian filtering using the pixel shader. The first is that the Gaussian filter (along with the box filter) is separable: you can filter horizontally, then vertically (or vice versa, of course). So for a 9×9 filter kernel you then have 18 texture samples in 2 passes instead of 81 samples in a single pass. The second trick is that each of the samples you use can actually be in between two texels, e.g. if you need to sample texels 1 through 9, you could sample just once in between texels 1 and 2 and let the GPU linearly interpolate between the two, then between 3 and 4, etc., for a total of 5 samples. So instead of 18 samples you could get by with 10 samples. This is old news, dating back to at least ShaderX2 and GPU Gems, and we talk about it in our 3rd edition starting around page 469.
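If you want to convince yourself that separability is exact and not an approximation, here’s a little plain-Python check (clamp-to-edge addressing, made-up function names): two 1D passes reproduce the full 2D kernel to floating-point precision.

```python
# Numeric check of separability: filtering with the full 2D Gaussian kernel
# equals a horizontal 1D pass followed by a vertical 1D pass.

def convolve_1d(row, kernel):
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        s = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), len(row) - 1)  # clamp at the edges
            s += w * row[j]
        out.append(s)
    return out

def separable_blur(img, kernel):
    # horizontal pass, then vertical pass (filter transposed rows)
    h = [convolve_1d(row, kernel) for row in img]
    t = list(map(list, zip(*h)))
    v = [convolve_1d(col, kernel) for col in t]
    return list(map(list, zip(*v)))

def full_blur(img, kernel):
    # brute force: 2D kernel as the outer product of the 1D kernel
    r = len(kernel) // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for ky, wy in enumerate(kernel):
                for kx, wx in enumerate(kernel):
                    yy = min(max(y + ky - r, 0), h - 1)
                    xx = min(max(x + kx - r, 0), w - 1)
                    out[y][x] += wy * wx * img[yy][xx]
    return out
```

For a kernel of width k on an image of n pixels, that’s 2kn taps separably versus k²n brute force, which is where the 18-vs-81 count above comes from.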

Some bits I didn’t know were discussed in this article by Daniel Rákos, and also coded up by JeGX in a GLSL shader demo collection. First, I hadn’t thought of using the Pascal’s triangle numbers as the weights for the Gaussian (nice visualization here). To be honest, I’m not 100% sure that’s right; it seems like you want the area under the Gaussian’s curve and not discrete samples, but the numbers are in the ballpark. It’s also a heck of a lot easier than messing with the standard deviation; let’s face it, it’s a blur, and we chop off the ends of the (infinite) Gaussian somewhat arbitrarily. That said, if a filtering expert wants to set me straight, please do.
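The Pascal’s triangle weights are just binomial coefficients, and they have one genuinely convenient property: row n sums to 2^n, so normalization is trivial. A quick sketch (the function name is mine):

```python
# Pascal's-triangle (binomial) filter weights, normalized to sum to 1.
# Row n of the triangle sums to 2**n, so dividing by that normalizes exactly.

from math import comb

def binomial_weights(taps):
    n = taps - 1
    total = 2 ** n
    return [comb(n, k) / total for k in range(taps)]

print(binomial_weights(5))  # [0.0625, 0.25, 0.375, 0.25, 0.0625]
```

There is some theory behind why this is in the ballpark: repeatedly convolving a box filter with itself gives these binomial rows, and by the central limit theorem they converge on a true Gaussian shape as the row gets longer.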

The second tidbit: by using the linear interpolation trick, this shader was found to be 60% faster. Which sounds about right, if you assume that the taps are the main cost: the discrete version uses 9 taps, the interpolated version 5. Still, guessing and knowing are two different things, so I’m now glad to know this trick actually pays off for real, and by a significant amount.
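The algebra behind the interpolation trick is worth writing out once: a pair of weighted texel reads w0*t[i] + w1*t[i+1] equals one bilinear fetch of weight (w0+w1) placed at offset w1/(w0+w1) between the two texels. A Python sketch of the identity (names are mine; `lerp_sample` stands in for what the GPU’s linear filter gives you for free):

```python
# The bilinear-tap trick in 1D: two weighted texel reads collapse into a
# single linearly interpolated sample at the right fractional offset.

def lerp_sample(texels, x):
    """Linearly interpolated read at fractional coordinate x, standing in
    for the GPU's free bilinear filtering."""
    i = int(x)
    f = x - i
    return (1.0 - f) * texels[i] + f * texels[i + 1]

def combined_tap(texels, i, w0, w1):
    """One filtered fetch equal to w0*texels[i] + w1*texels[i+1]."""
    offset = i + w1 / (w0 + w1)
    return (w0 + w1) * lerp_sample(texels, offset)

texels = [3.0, 7.0, 1.0, 5.0]
w0, w1 = 0.375, 0.25
direct = w0 * texels[1] + w1 * texels[2]
print(abs(combined_tap(texels, 1, w0, w1) - direct) < 1e-12)  # True
```

So the 5-tap version really does compute the identical 9-weight sum; the speedup comes purely from halving (roughly) the number of texture fetches.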

The last interesting bit I learned was from a comment by heliosdev on Daniel’s article. He noted that computing the offset locations for the texture samples once in the vertex shader (well, 4 times, once for each corner) and passing these values to the pixel shader is a win. For him, it sped the process by 10%-15% on his GPU; another commenter, Panos, verified this result with his own benchmarks. Daniel is planning on benchmarking this version himself, and I’ll be interested to see what he finds. Daniel points out that it’s surprising that this trick gives any benefit at all. I was also under the impression that, because texture fetches take so long compared to floating-point operations, you could do a few “free” flops (as long as they weren’t dependent on the texture’s result) in between taps.

Long and short, I thought this was a good little trick, though one you want to benchmark to make sure it’s helping. Certainly, you don’t want to pass constants from the VS to the PS; that sort of thing gets optimized away by the compiler (discussed here, for example). But I can certainly imagine that computing numbers in the VS and passing them down could be more efficient; my main worry was that the cost of registering these constants as PS inputs might have some overhead. You usually want to minimize the number of registers used in a PS, so that more fragments can be put in flight.

Clearing the Queue (a little)

Well, let’s see how far I get tonight in clearing the backlog of 219 potential resources I’ve stored up. Here goes:

  • NShader – If you use MSVC and you write shaders, this one’s for you. It highlights shader text as you’d expect, highlighting function names correctly and generally making code more readable. Worthwhile; I’ve installed it, and it’s fine. That said, you can get 90% of the way there (and for sure 100% virus free) by simply using “Options | Text Editor | File Extension” and setting extension .fx (.fxh, etc.), choosing Microsoft Visual C++, then clicking Add. Do it now.
  • Speaking of shaders, I lost much of a day tracking down a bug in Cg: code like “max(0,someVar);” gave different results in Cg than in HLSL when someVar was a float. My advice: always use the floating-point version of numbers in shaders. So “max(0.0f,someVar);” fixed the problem. Reported to NVIDIA.
  • Morgan McGuire pointed out that John Carmack now has a Twitter account. It’s pretty interesting, in that he’s writing a ray tracer in OpenCL and relearning or rederiving various bits of knowledge that are not really written down anywhere. The guy’s unnervingly productive: “Goal for today: implement photon maps and contrast with my current megatexture radiosity gathering.” But what will he do after lunch?
  • Speaking of Carmack, you must see the Epic Citadel demo for the iPad. Demo video here. Stunning.
  • Speaking of Morgan, his Twitter feed mentioned a number of new resources: a new demo (with complete source) of ambient occlusion volumes at NVIDIA, a demo of sample distribution shadow maps (optimized z partitions for cascading maps) at Intel, and an introduction to DX 11 at Gamasutra. He also points out some entertaining visual bits, like the fascinating style of Devil’s Tuning Fork and a game map scale comparison chart (wow, WoW is small!). Morgan and others’ free multi-platform G3D Innovation Engine is now in release 8.0, and is supposed to be good for students, researchers, and indie game developers.
  • Speaking of Intel, they actually have three new DirectX 11 demos with source, as noted in this Geeks3D article.
  • Implementing some form of an A-buffer (multiple fragments stored in a pixel) is becoming more common. It’s an algorithm that can perform antialiasing and, more importantly, order-independent transparency. We already mentioned AMD’s efforts using DirectX 11; Cyril Crassin has taken their idea and improved cache coherency a bit, creating an OpenGL 4.0+ version of the linked-list approach, with source.
  • I love the idea of driving rendering computations by drawing some simple quad or whatever on the screen and having the pixel shader figure out where, at what depth, and with what normal the surface is actually located. The quad is like a bounding volume, the pixel shader essentially a ray caster. Naty sent on this link: the game Hustle Kings uses this technique to great effect for rendering its billiard balls – gorgeous. Video here. The perfect spheres look lovely. I have to wonder how many triangles a mesh would need to look visually identical at closest approach; the eye seems quite good at picking up slight irregularities in tessellated spheres when they are rotating.
  • There’s also a pretty displacement mapping demo in OpenGL 4.0 showing GPU tessellation at work.
  • I like seeing that 3D printers are becoming cheaper still ($1500), and this one looks like a fairly clean system (vs. cornstarch dust everywhere, etc.).
  • There’s a basic object/object intersection library now available for XNA, GeometricIntersection.
  • Blending Terrain Textures is a nice little article on just that. Lerping bad, thresholding good.
  • Valve discusses how it worked with vendors to optimize graphics performance on the Mac.
  • NASA provides a fair number of 3D models free to download – 87 models and counting.
  • HWiNFO32 is a free utility which provides tons of information about your system. The thing that appeals to me is that it appears to show GPU memory load over time (and I verified the load indeed increased when I ran a game). I hadn’t seen this before, and thought it was essentially impossible for Vista, so this makes me pretty happy. I did just notice that my desktop icons all turned to generic white document icons and are unusable, and this happened some time around when I tried this utility. Hmmm.
  • Colors in Cultures infographic – I like the concept, it could have been interesting, but it’s mostly just hard to read in this form. Lots of world maps with the countries colored would have perhaps revealed more. Anyway, there’s their data, for remixing.
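One bullet above mentions the A-buffer and order-independent transparency, which is worth a moment of code. Here’s a toy Python version of the idea (a sketch, not AMD’s or Crassin’s actual implementation): collect every transparent fragment that lands on a pixel, sort by depth, then composite back to front, so the submission order of the geometry no longer matters.

```python
# Toy A-buffer compositing for one pixel: gather all transparent fragments,
# sort by depth, then blend back to front with the "over" operator.
# Grayscale "colors" keep the sketch short; real code would use RGBA.

def composite(fragments, background):
    """fragments: list of (depth, color, alpha) that landed on one pixel.
    Returns the final pixel color after depth-ordered blending."""
    color = background
    for depth, frag_color, alpha in sorted(fragments, reverse=True):
        # farthest fragment first: blend each one over what's behind it
        color = alpha * frag_color + (1.0 - alpha) * color
    return color

# A white fragment at depth 0.5 over a black one at 0.8, over white:
frags = [(0.5, 1.0, 0.5), (0.8, 0.0, 0.5)]
print(composite(frags, 1.0))  # 0.75, regardless of input order
```

The GPU versions store these per-pixel lists as linked lists in a big buffer with atomic appends; the sort-and-blend step is the same idea as above, just run per pixel in a resolve pass.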
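Another bullet above describes the billiard-ball impostor trick: the quad just launches a per-pixel ray cast. Here’s the heart of what such a pixel shader computes, sketched in Python (my own naming, not the Hustle Kings code): intersect the eye ray with a perfect sphere and report the hit’s depth and normal, or a miss.

```python
# Per-pixel ray-sphere intersection, the core of a sphere impostor shader:
# solve the quadratic for the eye ray against the sphere, then derive the
# hit point's depth and surface normal.

from math import sqrt

def ray_sphere(origin, direction, center, radius):
    """origin, direction, center: 3-tuples. Returns (t, hit, normal) for
    the nearest intersection, or None if the ray misses."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # a pixel shader would discard here
    t = (-b - sqrt(disc)) / (2.0 * a)    # nearest of the two roots
    hit = tuple(origin[i] + t * direction[i] for i in range(3))
    normal = tuple((hit[i] - center[i]) / radius for i in range(3))
    return t, hit, normal

# A ray down +z toward a unit sphere 5 units away hits its front pole:
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))
# (4.0, (0.0, 0.0, 4.0), (0.0, 0.0, -1.0))
```

In the real shader the computed t also gets written to the depth output, so the analytic sphere interacts correctly with the rest of the z-buffered scene.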

Now I’m down to 196 potential resources – good! More later.

Some quick bits

We’ve updated our portal page a bit: added Black Rock Studio’s nice publications page, added some film and commercial research labs’ publication pages, and fixed some links. Also, with a bit of regret, I removed the links to the Google Group pages for comp.graphics.algorithms and comp.games.development.programming.algorithms (though the FAQ for c.g.algorithms is still handy). These groups (and I assume most groups in general) have turned into spam repositories. The first group in particular has a special meaning for me, as comp.graphics.* is where I met a lot of graphics programmers back in the 80’s and 90’s and learned a bunch of techniques.

Happily, there are packrats on the internet; Steve Hollasch has a nice collection of the best of comp.graphics and other internet graphics sources. Some web-rot there with the external links, but the comp.graphics postings are solid. A few are dated, but there’s much that is still relevant today. I noticed a page I hadn’t seen before, a SIGGRAPH ’92 satire – some funny bits there (e.g. The Freehand Generation of Fractal Curves using only a Lightpen and Caffeine).

Which reminds me of a classic page that everyone should at least skim: “WARNING: Beware of VIDEA“. It’s about a bogus conference, now long gone (though the so-called institute that ran it still held 23 conferences this year). Werner Purgathofer and colleagues submitted silly abstracts to this conference, and all were accepted without review. Check out the abstracts on his page. Werner’s overview page also gives links to a number of related publication scandals: nonexistent peer review, plagiarism, automatic paper writing, etc. If nothing else, check out SCIgen if you haven’t seen it before.

Rendering Equation in Wired

I’ve been on vacation this week. Kayaked this morning, biked this afternoon (I sound so studly, but it’s all been fairly easy stuff, though sweaty). Catching up on my Wired magazines while waiting for the shower, I ran into this surprising article. Who would have thought the Rendering Equation would get a little article in any popular magazine, ever? Sure, it’s mostly Wired establishing geek cred – the equation could really use a figure and a bit more explanation to appreciate it – but it’s still fun to see.

SIGGRAPH 2010 resource links

Naty and I (mostly Naty!) collected the links for most courses and a few talks given at SIGGRAPH 2010; see our page here. Enjoy! If you have links to any other courses and talks, please do send them on to me or post them as a comment.

Personally, I particularly liked the “Practical Morphological Anti-Aliasing on the GPU” talk. It’s good to see the technique take around 3.5 ms on an NVIDIA GTX 295, and the author’s site has a lot of information (including code).

New Site for “Advances in Real-Time Rendering” SIGGRAPH Course

The SIGGRAPH course “Advances in Real-Time Rendering for 3D Graphics and Games” has been held since 2006 with a consistently high level of quality. However, the hosting of the materials is scattered across a few different websites, and the older years suffer from broken links and other issues. We are happy to host the course’s new home on a subdomain of this site: http://advances.realtimerendering.com/. At the moment only the SIGGRAPH 2010 course materials are present, but previous years will go up shortly.

Live Real-Time Demos at SIGGRAPH

So one problem with SIGGRAPH is that you hear about the cool thing that you missed and didn’t even know about until it was too late. Here’s one that’s getting repeated: the Computer Animation Festival’s Live Real-Time Demos session. Hall B, 4:30-5:15 pm Tuesday and Wednesday; I just caught the tail-end of Monday’s show and it was worth seeing, so I’ll go back for the rest tomorrow.

What else shouldn’t you miss? Hmmm, in Emerging Technologies Sony’s 360-degree autostereoscopic display is cute, I’ve heard the 3D multitouch table is very worthwhile, and you must try out the Meta Cookie (have someone take your picture while you’re in the headgear; it’s something your grandchildren will want to see). I was also interested to see QuintPixel from Sharp, as it justified their earlier Quattron “four primary colors” display.

More later – Mental Images reception time.

Fleet-Footed Faster Forward

The Fast Forward event at SIGGRAPH is a set of very short presentations Sunday evening that runs through all the papers at SIGGRAPH. Lately SIGGRAPH has become a “big tent”, including a wide range of fields. This year there are, by my count, 133 SIGGRAPH papers, giving say 50 seconds to each presentation in the two-hour period. This is a pleasant-enough way to cull through all the papers and find which ones to see, and there is the occasional witty presentation, but to be honest, I’m a bit worn out on the format – too slow! In the past few years I find myself looking at my watch halfway through and thinking “egads, still another hour?” as my monocle pops from my eye with comic effect.

So I liked seeing that CGW is hosting a 3 minute 44 second video summary of some of the SIGGRAPH papers. Only 23 papers summarized, but I love that each gets just a sentence – you’re in, you’re out, and you have some sense if it’s a paper you need to see. I wish I had this for all the papers. Second in awesomeness would be a single web page that lists all the abstracts together, for a quick skim. I should write a Perl script that makes one from ACM’s SIGGRAPH 2010 TOC. Also at CGW’s site is a 2 minute 41 second (plus long credits) video summary of the Emerging Technologies area, purely visual – nice, it gives me a little taste, prepping my senses for what I will see there and want to learn more about.