SMOG Results

My wife just told me about the SMOG readability formula, which is evidently widely used. “SMOG” stands for Simple Measure of Gobbledygook. It counts the polysyllabic words (3 syllables or more) in a document. A readability grade level is then derived from the square root of the polysyllabic word count divided by the number of sentences (scaled to a 30-sentence sample); read more on Wikipedia.
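
For the curious, the formula in code form might look something like this (a sketch based on the Wikipedia description, not the online calculator’s actual source; the function name is mine):

#include <cmath>

// SMOG grade: scale the polysyllable count to a 30-sentence sample,
// take the square root, then apply the published linear fit.
double smogGrade( int polysyllables, int sentences )
{
   return 1.0430 * std::sqrt( polysyllables * ( 30.0 / sentences ) ) + 3.1291;
}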

I ran the calculator here on a few passages in our book (those without equations, which I thought would throw the calculator off): Deferred Shading, Fresnel Equation, Scene Graphs, and the final chapter. Scene Graphs was the simplest, at 12.56, and Fresnel Equation the hardest, at 14.1. On average the level was a bit above 13, meaning college freshman level. Pieces such as this one weigh in at 17.12. I took a piece of text on fractals from Hearn and Baker’s old Computer Graphics, C Version, 2nd Edition, and it came up as 14.47. So our book’s no Hop on Pop, but it’s at least not horrifically hard, and it seems in the ballpark for our target audience.

By the way, this post’s SMOG grade is 11.21.

SIGGRAPH 2009 Course Pages

The organizers of SIGGRAPH Courses often put up web pages dedicated to the course.  These typically have the latest version of the course notes and the slides.  I’ve found a bunch of SIGGRAPH 2009 course pages, and thought it would be convenient to have them all in one place:

SIGGRAPH courses are a consistently good source of information; if any of these courses covers a topic that interests you, you may want to take the time to read the course notes and slides.

Looping Through Polygon Edges

We mostly avoid coding issues in our book, as our focus is on algorithms, not syntax and compiler vagaries. Still, there’s a coding trick I want to pass on, as it’s handy. Graphics programmers appear to be divided into two groups when it comes to this method: those who think it’s intuitively obvious and learned it on their pappy’s knee, and those who have never seen it before and are glad to find out about it.

You want to loop through the edges of a polygon. The vertex data is stored in some array vertexData[count], an array of count elements of some sort of Vertex data structure. The headache is attaching the last and first vertices together to make the connecting edge. There are plenty of weak ways to walk through the edges and connect the last and first vertices:

  • Duplicate the first vertex at the end of the array; the final edge is then just another pair of adjacent points. This is perhaps the fastest to actually execute, but it’s generally a hideous solution, as it adds a redundant copy of a vertex to the array.
  • Form the last edge explicitly, outside the loop. This is poor for maintenance, as whatever other code is inside the loop must be duplicated so it’s called one more time.
  • Use an “if” statement to detect when you’re at the end of the loop; if so, connect the first and last vertices for the final edge. The “if” special case is needed for only a single iteration, which is wasteful, and we’d like to avoid branches in a tight loop.
  • Use modulo arithmetic on the counter for one of the vertices, so that it loops back to the start.

Modulo isn’t terrible, but is overkill and costs processing speed, as the modulo operation is truly needed for only the very last iteration:

for ( int v = 0; v < count; v++ ) {
   // access vertexData[v] and vertexData[(v+1)%count] for the edge
}

Here’s the solution I prefer:

for ( int v1 = count-1, v2 = 0; v2 < count; v1 = v2++ ) {
   // access vertexData[v1] and vertexData[v2] for the edge
}

The simple trick is that v1 starts at the end of the polygon, so the tough “bridge” case is dealt with immediately; v2 counts up through the vertices, with v1 following behind. You can similarly make a pointer-based version, updating the pV1 pointer by copying from pV2. If register space is at a premium, then modulo might be a better fit, but otherwise this loop strikes me as the cleanest solution.
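
For concreteness, the pointer-based variant might look like this (a sketch under the same assumptions as above; pV1 and pV2 follow the naming in the previous paragraph):

// Same edge-walking trick with pointers: pV1 starts at the last vertex
// and trails pV2 around the polygon. Assumes count >= 1.
Vertex *pV1 = vertexData + count - 1;
for ( Vertex *pV2 = vertexData; pV2 < vertexData + count; pV1 = pV2++ ) {
   // access *pV1 and *pV2 for the edge
}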

This copy approach can be extended to access any number of neighboring vertices per iteration. For example, if you want the two vertices vp and vn, previous and next to a given vertex v, it’s simply:

int vp, v, vn;
for ( vp = count-2, v = count-1, vn = 0; vn < count; vp = v, v = vn++ ) {
   // access vertexData[vp], [v], [vn] for the middle vertex v.
}
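
As one illustration of where this three-index form comes in handy (my example, not from any of the references), classifying each vertex of a 2D polygon as convex or reflex needs exactly the previous and next vertices. Assume here that Vertex has float x, y fields and the polygon winds counterclockwise:

// The z component of the cross product of the edge vectors (vp->v) and
// (v->vn) is positive at a convex corner of a counterclockwise polygon.
for ( int vp = count-2, v = count-1, vn = 0; vn < count; vp = v, v = vn++ ) {
   float cross = ( vertexData[v].x - vertexData[vp].x ) * ( vertexData[vn].y - vertexData[v].y )
               - ( vertexData[v].y - vertexData[vp].y ) * ( vertexData[vn].x - vertexData[v].x );
   bool isConvex = ( cross > 0.0f );
   // ... use isConvex as needed
}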

I’ve seen this type of trick in the Geometric Tools code, and Barrett formally presents it in jgt. I mention it here because I think it’s a technique every computer graphics person should know.

NVIDIA Announces Fermi Architecture

Today at the GPU Technology Conference (the successor to last year’s NVISION), NVIDIA announced Fermi, their new GPU architecture (exactly one week after AMD shipped the first GPU of their new Radeon HD 5800 architecture).  NVIDIA has published a Fermi white paper, and writeups are popping up on the web.  Of these, the ones from Real World Technologies and AnandTech seem the most informative.

With this announcement, NVIDIA is focusing firmly on the GPGPU market, rather than on graphics.  No details of the graphics-specific parts of the chip (such as triangle rasterizers and texture units) were even mentioned.  The chip looks like it will be significantly more expensive to manufacture than AMD’s chip, and at least some of that extra die area has been devoted to things which will not benefit most graphics applications (such as improved double-precision floating-point support and more general programming models).  With full support for indirect branches, a unified address space, and fine-grained exception handling, Fermi is as general purpose as it gets.  NVIDIA is even adding C++ support to CUDA (the first iterations of OpenCL and DirectCompute will likely not enable the most general programming models).

Compared to their previous architecture, NVIDIA has shuffled around the allocation of ALUs, thread scheduling units, and other resources.  To make sense of the soup of marketing terms such as “warps”, “cores”, and “SMs”,  I again recommend Kayvon Fatahalian’s SIGGRAPH 2009 presentation on GPU architecture.

Full List of SIGGRAPH Asia 2009 Papers

The full list of papers accepted to SIGGRAPH Asia 2009 (with abstracts) is finally up on the conference website.  As usual, Ke-Sen Huang is ahead of the curve; his SIGGRAPH Asia 2009 papers page already has preprint links for 54 of the 70 accepted papers.

Three of the papers I mentioned in my first SIGGRAPH Asia 2009 post have since made preprints available: RenderAnts: Interactive Reyes Rendering on GPUs, Debugging GPU Stream Programs Through Automatic Dataflow Recording and Visualization, and Real-Time Parallel Hashing on the GPU.

The Real-Time Rendering paper session is, of course, the most likely to contain papers of interest to readers of this blog.  The most interesting paper, Micro-Rendering for Scalable, Parallel Final Gathering, was already discussed in a previous blog post.  Since then, I’ve noticed many similarities between the technique described in this paper and the point-based color bleeding technique Pixar implemented in RenderMan.  This approach to GPU-accelerated global illumination looks very promising.

The other three papers in the session are also of interest.  Depth-of-Field Rendering with Multiview Synthesis describes a depth-of-field method that occupies an interesting middle ground between the very high-quality (and expensive) multiview methods used in film production and the much cheaper (but lower-quality) post-processing methods commonly used in games; after some scaling down and optimization, it may be appropriate for some real-time applications.  Like the reprojection papers discussed previously, the Amortized Supersampling paper reprojects samples from previous frames to increase quality.  Here the goal is anti-aliasing procedural shaders, but the technique could be applied to other types of expensive shaders.

The remaining paper from the Real-Time Rendering session, All-Frequency Rendering With Dynamic, Spatially Varying Reflectance, does not yet have a preprint.  The short abstract from the conference page does sound intriguing: “A technique for real-time rendering of dynamic, spatially varying BRDFs with all-frequency shadows from environmental and point lights”.  Hopefully a preprint will become available soon.

I typically don’t pay very close attention to offline rendering papers, but one in particular looks interesting: Adaptive Wavelet Rendering takes a novel approach to Monte Carlo ray tracing by rendering into an image-space wavelet basis, instead of rendering into image pixels or samples.  This enables a significant reduction in the number of samples required in certain cases.

The paper Continuity Mapping for Multi-Chart Textures attempts to solve a problem of interest (fixing filtering discontinuities at UV chart seams), but the solution is overly complex for most applications.  While the authors claim to address MIP-mapping, their solution does not work well with trilinear filtering, since their data structures need to be accessed separately for each MIP-map level and the results blended.  They also do not address issues relating to derivative computation.  Since their technique requires lots of divergent branching, it is likely to run at low efficiency.  This technique might make sense for some specialized applications, but I don’t expect to see it used for game texture filtering.

There are also some interesting papers on non-rendering topics such as animation and model acquisition.  All in all, a very strong papers program this year.

Blog Redesign

We’ve been wanting to add a sidebar to the blog for a while; it was easiest to do this by switching to a new WordPress theme (Tarski), and in the process we did a bit of graphic redesign.  The sidebar has various navigational niceties, including a search function, a tag cloud, archives, and links to the most recent posts and comments.  Hope you like the new look!

Site Updates

I just spent a good part of this week revamping a few pages on this website, namely:

  • The main resources page: I removed a few dead links with Xenu (a great free tool) and folded in resources from a year’s accumulation of 139 links. It barely shows; I don’t highlight the new links like I used to, since most of them have already been posted on the blog. I did spend way too much time updating the list of relevant books and related resources; remember to hit “refresh” in your browser.
  • The recommended books page: revamped with newer editions, some books added, and a few dropped (e.g., I’ve given up waiting for the new Foley & Van Dam, at least on this page). Naty hopes to redo this page at some point when he finds time.
  • The portal page: the main addition is an expansion of the obsessive-compulsive list of blogs I attempt to track.
  • The intersections page: unfortunately, some links had died and were removed. There are one or two minor additions; this area of algorithm exploration seems mostly “done”, despite some obscure blank spots remaining on the grid (most having to do with intersecting cones against other things).

All this was exhausting to do, and without tremendous visual effect, but I’m glad to check it off the list.

First DirectX 11 GPU Ships

Today, AMD shipped the Radeon HD 5870, the first GPU to support the DirectX 11 feature set.  Most of the resources have been doubled in comparison to AMD’s previous top GPU; there are now two triangle rasterization units, for example.  The Tech Report has a nice writeup.  To help make sense of the various counts of ALUs, “wavefronts”, cores, etc., I recommend reading the slides from Kayvon Fatahalian’s excellent presentation at SIGGRAPH this year.