7 things for December 23

Here come seven more, until I run out:

  • The game Saboteur on the PS3 appears to perform its antialiasing using MLAA. It’s great to know that some form of MLAA is both fast enough and of high enough quality to be usable in a commercial product. I found it interesting that it is used only in the PS3 version of the game.
  • The free glslDevil OpenGL shader debugger has recently been updated. Pretty cool (though perhaps not such welcome news for content providers): you don’t even need the source code to debug into the shaders.
  • There is a new site dedicated to OpenCL and CUDA programming: gpucomputing.net. It is focused on university research efforts, and has some heavy-hitters in the research community involved. Just begun, not a lot there yet, but you could always subscribe to the blog.
  • Here’s a short little article on texture atlasing; the basic remapping is sketched right after this list. If you want a bit more information, read Ivanov’s article on this topic on Gamasutra, published some years ago. For even more detail, NVIDIA’s white paper is helpful. My point: if you don’t know about texture atlasing, you should. Read one of these three and check it off your list.
  • Getting the X,Y coordinates of a pixel for a post-processing pass is typically done in one of two ways: via texture coordinates or via the VPOS semantic. This short article gives the details.
  • You’ll note the previous two entries came from the new blog/site gamerendering.com. There are plenty of other short articles here, most with code snippets or links to other sites. That said, I’ve asked the admin for a little more attribution of sources, e.g., the figure from our book here. Hopefully fixed by the time you see it…
  • This shader code is truly amazing (from easily my favorite graphics blog).
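
For the texture atlasing entry above, the core idea is simply that each packed texture’s UVs get remapped into the atlas by a scale and an offset. Here is a minimal C++ sketch of that remapping (the struct and names are mine, purely for illustration; real atlasing also has to deal with wrapping, mipmapping, and border padding, which the articles above cover):

    #include <cstdio>

    // Placement of one sub-texture inside an atlas, in normalized [0,1] atlas coordinates.
    struct AtlasEntry {
        float offsetU, offsetV;  // lower-left corner of the sub-texture within the atlas
        float scaleU,  scaleV;   // sub-texture size relative to the atlas size
    };

    // Remap a UV pair meant for the standalone texture into atlas coordinates.
    // The same scale-and-offset is what you would apply in the vertex or pixel shader.
    void remapToAtlas(const AtlasEntry& e, float u, float v, float& outU, float& outV)
    {
        outU = e.offsetU + u * e.scaleU;
        outV = e.offsetV + v * e.scaleV;
    }

    int main()
    {
        // Example: a 256x256 texture stored at (512,0) inside a 1024x1024 atlas.
        AtlasEntry e = { 512.0f / 1024.0f, 0.0f, 256.0f / 1024.0f, 256.0f / 1024.0f };
        float u, v;
        remapToAtlas(e, 0.5f, 0.5f, u, v);            // center of the sub-texture
        std::printf("atlas UV = (%g, %g)\n", u, v);   // prints (0.625, 0.125)
    }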

7 things for December 22

Some great bits have accumulated. Here they are:

  • I3D 2010 paper titles are up! Most “how would that work?!” type of title: “Stochastic Transparency”.
  • Eurographics 2010 paper titles are up! Most intriguing title: “Printed Patterns for Enhanced Shape Perception of Papercraft Models”.
  • An article in The Economist discusses how consumer technologies are being used by military forces. There are minor examples, like Xbox controllers being used to control robotic reconnaissance vehicles. I was interested to see BAE Systems (a company that isn’t NVIDIA) talk about how using GPUs can replace other computing equipment for simulation at 1/100th the price. Of course, Iraq knew this 9 years ago.
  • I wish I had noticed this page a week ago, in time for Xmas (where X equals, nevermind): Christer Ericson’s recommended book page. I know of many of the titles, but hadn’t heard of The New Turing Omnibus before – this sounds like the perfect holiday gift for any budding computer science nerd, and something I think I’d enjoy, too. Aha, hmmm, wait, Amazon has two-day shipping… done!
  • A problem with the z-buffer, when used with a perspective view, is that the z-depths do not correspond linearly to actual world distances along the camera’s view direction. This article and this one (oh, and this is related) give ways to get back to this linear space; a small sketch of the reconstruction follows this list. Why get the linear view-space depth? Two reasons immediately come to mind: proper computation of atmospheric effects, and edge detection from z-depth changes for non-photorealistic rendering.
  • Wolfgang Engel (along with comments by others) has a great summary of order-independent transparency algorithms to date. I wonder when the day will come that we can store some number of layers per pixel without any concern about memory costs and access methods. Transparency is what kills algorithms like deferred shading, because all the layers are not there at the time when shading is resolved. Larrabee could have handled that… ah, well, someday.
  • Morgan McGuire has a paper on Ambient Occlusion Volumes (motto: shadow volumes for ambient light). I’ll be interested to see how this compares with Volumetric Obscurance in I3D 2010 (not up yet for download).
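
For the linear-depth entry above, the reconstruction is just the perspective projection’s depth mapping run in reverse. A minimal C++ sketch, assuming an OpenGL-style projection with the default [0,1] depth range and near/far planes n and f (conventions differ between APIs, and the linked articles cover the variations; the shader version is the same arithmetic):

    #include <cstdio>

    // Convert a stored depth-buffer value d in [0,1] back to the linear view-space
    // distance along the view direction, for an OpenGL-style perspective projection.
    float linearViewDepth(float d, float n, float f)
    {
        float zNdc = 2.0f * d - 1.0f;                    // window [0,1] -> NDC [-1,1]
        return (2.0f * n * f) / (f + n - zNdc * (f - n));
    }

    int main()
    {
        float n = 0.1f, f = 100.0f;
        std::printf("%f %f %f\n",
                    linearViewDepth(0.0f, n, f),   // 0.1  (the near plane)
                    linearViewDepth(0.5f, n, f),   // ~0.2: half the z-buffer range sits very close to the eye
                    linearViewDepth(1.0f, n, f));  // 100  (the far plane)
    }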

Amazon Stock Market update: one nice thing about having an Amazon Associates account is that prices at various dates are visible. The random walk that is Amazon’s pricing structure becomes apparent for our book: December 1st: $71.20, December 11-14: $75.65, December 18-22: $61.68. Discounted for the holidays? If so, Amazon’s marketing is aiming at a much different family demographic than I’m used to. “Oh, daddy, Principia Mathematica? How did you know? I’ve been wanting it for ever so long!”

Shader variations and ifdefs

Morgan McGuire’s page is the only twitter feed I follow (though Marc Laidlaw’s Trog Act Manly But is darn tempting), as he simply offers up worthwhile links on computer graphics and on game design. Strangely, though, some ideas cannot be expressed in 140 characters. So, here’s our first guest post, from Morgan:

When experimenting with a new algorithm, I have a zillion variations I’m testing packed into one shader and a lot of #ifdefs and helper functions to switch between them.  Often you need the invoking C++ code to line up, and I’m always forgetting to switch the routines in both the shaders and C++ to keep them in sync…

I just realized that I can put my #defines in a header and include the exact same header into HLSL, Cg, GLSL, CUDA, and C++ code, since they have exactly the same syntax.  So I now have both C++ and GLSL files that say #include “myoptions.h” at the top.  Cool!

(Ok, my GLSL infrastructure adds #include to the base spec, but I assume everyone else’s does too).
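
A minimal sketch of what this looks like in practice (the file name and option names here are hypothetical, not Morgan’s actual code). The shared header holds nothing but preprocessor definitions, so the identical file is legal in C++, HLSL, Cg, GLSL, and CUDA:

    // myoptions.h -- included by both the C++ host code and the shaders
    #define USE_SHADOW_MAP   1
    #define AO_SAMPLE_COUNT  16

    // host.cpp
    #include "myoptions.h"

    void setupPasses()
    {
    #if USE_SHADOW_MAP
        // ... bind the shadow map here; the shader's own "#if USE_SHADOW_MAP" branch
        // is guaranteed to agree, since it reads the very same header ...
    #endif
        // AO_SAMPLE_COUNT is likewise visible to any CPU-side setup that needs it.
    }

The GLSL (or HLSL/Cg/CUDA) source then simply starts with the same #include “myoptions.h” line, given an infrastructure that supports #include as Morgan describes, and uses the same #if tests.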

US Gov Requests Feedback on Open Access – ACM Gets it Wrong (Again)

By Naty Hoffman

In 2008, legislation was passed requiring all NIH-funded researchers to submit their papers to an openly available repository within a year of publication.  Even this modest step towards full open access was immediately attacked by rent-seeking scientific publishers.

More recently the White House Office of Science and Technology Policy started to collect public feedback on expanding open access.  The first phase of this process ends on December 20th.

ACM’s official comment makes it clear that it is joining the rent-seekers.  This is perhaps not surprising, considering the recent ACM take-down of Ke-Sen Huang’s paper link pages (Bernard Rous, who signed the comment, is also the person who issued the take-down).  In the paper link case ACM did eventually see reason.  At the time, I naively believed this marked a fundamental change in ACM’s approach; I have been proven wrong.

ACM’s comment can be found towards the bottom of this link; I will quote the salient parts here for comment.

ACM: “We think it is imperative that deposits be made in institutional repositories vs. a centralized repository…”

A centralized repository is more valuable than a scattering of papers on authors’ institutional web pages.  ACM evidently agrees, given that it has gone to the trouble of setting up just such a repository (the Digital Library).  ACM’s only problem with a central, open access repository is that it would compete with its own (closed) one.  Since an open repository contributes far more value to the community than one locked behind a paywall, ACM appears to value its revenue streams over the good of the community it supposedly exists to serve.

ACM: “…essentially everything ACM publishes is freely available somewhere on the Web… In our community, as in others, voluntary posting is working.”

This is demonstrably false.  Almost every graphics conference has papers which are not openly available.  Many computing fields are even worse off.

Most infuriatingly, ACM presents a false balance between its own needs and the needs of the computing community:

ACM: “…there is a fundamental balance or compromise in how ACM and the community have approached this issue – a balance that serves both… We think it is imperative that any federally mandated open access policy maintain a similar balance… There is an approach to open access that allows the community immediate access to research results but also allows scholarly publishers like ACM to sustain their publishing programs. It is all about balance.”

What nonsense is this?  The ACM has no legitimate needs or interests other than those of its members!  How would U.S. voters react to a Senator claiming that a given piece of legislation (say, one reducing restrictions on campaign financing) “strikes a fundamental balance between the needs of the Senate and those of the United States of America”?  ACM has lost its way, profoundly and tragically.

As much as Mr. Rous would like to think otherwise, ACM’s publishing program is not an end in itself, but a means to an end.  ACM arguing that an open repository of papers would be harmful because it “undermines the unique value” of ACM’s closed repository is like the Salvation Army arguing that a food stamp program is harmful because it “undermines the unique value” of their soup kitchens.

If you are an ACM member, these statements were made in your name.  Regardless of membership, if you care at all about access to research publications please make your opinion known.  Read the OSTP blog post carefully, and post a polite, well-reasoned argument in the comments.  Note that first you need to register and log in – the DigitalKoans blog has the details:

Note: To post comments on the OSTP Blog, you must register and login. There are registration and login links on the sidebar of the blog home page at the bottom right (these links are not on individual blog postings).

Hurry!  The deadline for Phase I comments (which include the ACM comment) is December 20th, though you can make your opinion known in the other phases as well.

Amazon Needs Programmers, We Suspect

… at least judging from an email Phil Dutre received and passed on. Key excerpt follows:

Dear Amazon.com Customer,

As someone who has purchased or rated Real-Time Rendering by Tomas Moller, you might like to know that Online Interviews in Real Time will be released on December 1, 2009.  You can pre-order yours by following the link below.

With a title-finding algorithm of this quality, Amazon appears to be in need of more CS majors.

Don’t fret, by the way, I’ll be back to pointing out resources come the holidays; things are just a bit busy right now. In the meantime, you can contemplate Morgan McGuire’s gallery of real photos that appear to have rendering artifacts or look like computer graphics. It’s small right now – send him contributions!

A Digression on Marketing

In my previous post on Larrabee I talked about the marketing of an ancient HP workstation. I ended with, “If anyone wants to confirm or deny, great!”

Followup from a reader: the story misses three additional changes to the machine. The more expensive business machine had a bigger cabinet, had more flashing lights on the front, and required 220-volt power. Apparently the business market wouldn’t take the machine seriously unless it required special power, and buyers wanted something with flashing lights so it looked more like a computer. The lights were completely random; the engineers wanted to hook them up so you could at least use them to see what was going on with the machine. And, no, the engineers didn’t win.

By the way, this isn’t meant as a diss against HP: I have two HP computers at home and love them, they make quality products. I’m just pointing out that even HP (which used to be known as the company that would market sushi by calling it “raw dead fish”) finds that marketing that contravenes rational thought is sometimes necessary. The “blinkenlights” story is a common theme, because it’s true. I recall an article (which I wish I had saved) from the early 90’s in the Wall Street Journal where people running the Social Security program were duped into thinking a computer company’s offerings were ready by being shown empty boxes with blinking lights inserted. “See, the computer is computing right now”. It was quite the scandal – front page news – when this ruse was uncovered.

Bonus quiz question: In researching (if you can call it that) this story, I ran across this site, which had an excellent question, “what was the world’s first personal computer?” Answer here. I was way wrong with the Altair. The answer, a computer I hadn’t heard of before, even bears on interactive computer graphics history, as it was the first computer experience for a famous graphics pioneer.

More on Larrabee

I wrote earlier on Larrabee being delayed. A coworker pointed out this article from Jon Peddie Research, who know (and usually charge) more than I do. It makes a plausible case that cancelling this first version of Larrabee was the correct move by Intel, and that the experience gained is not wasted. JPR argues that the high-performance computing market is also high-margin, so needs fewer sales to be profitable. There are other gains from the project to date – anyway, a worthwhile read. I’ll be interested to see what’s next for Larrabee.

The magic of marketing and price differentials is fascinating to me. Books like The Undercover Economist have some entertaining tales of how prices are set. Here’s a marketing story I heard (elsewhere), and it might even be true: HP had two versions of the series 800 workstation in the late ’80s/early ’90s, the only difference being, literally, one bit on a ROM chip. If the bit was set, then HP-UX could not be run on the workstation. Amazingly, the price for this version of the workstation was higher, even though it was seemingly less capable. This version was marketed to hospital administration, which at the time didn’t use HP-UX (so didn’t care); the workstations that could run HP-UX were sold to engineers. HP could honestly say there was a difference between the two workstations, say that one was tailored to hospital admin and the other to engineers, and so justify the price differential. If anyone wants to confirm or deny, great!

HPG and EGSR 2010

Information on the 2010 iterations of the High Performance Graphics conference (HPG) and the Eurographics Symposium on Rendering (EGSR) is now available online.  The two conferences will be co-located in Saarbrucken, Germany in late June.  Fortunately (and unlike HPG’s co-location with SIGGRAPH this year) there is no overlap between the two – EGSR immediately follows HPG.  These are both excellent conferences with strong (albeit in HPG’s case, short) histories of high-quality real-time rendering work. For many of our European readers, the combination of the two conferences should prove irresistible.

Update: the HPG website and CFP are up.

Larrabee Chip Delayed/Cancelled

The news for the day is that the current hardware version of Larrabee, Intel’s new graphics processor for the consumer market, has been delayed (or cancelled, depending on what you mean by “cancelled”). Intel is not commenting on possible future Larrabee hardware, though the Larrabee project itself still exists. I don’t see an official press release (yet) from Intel. The few solid quotes I’ve seen (in CNET) are:

“Larrabee silicon and software development are behind where we hoped to be at this point in the project,” Intel spokesperson Nick Knupffer said Friday. “As a result, our first Larrabee product will not be launched as a standalone discrete graphics product,” he said.

along with this:

Intel would not give a projected date for the Larrabee software development platform and is only saying “next year.”

The Washington Post gives this semi-quote:

Intel now plans its first Larrabee product to be used as a software development platform for both graphic and high performance computing, Knupffer said.

See more from The Inquirer, CNET, ZDNet, Washington Post, and the Wall Street Journal. Many more versions via Google News.

In my opinion, Intel has a tough row to hoe: catching up in the field of high-performance graphics, when all it has had before is the low-end, ~$2-a-chip GMA series. This series probably has a larger market share in terms of units sold than NVIDIA and AMD GPUs combined (basically, any Intel computer without a GPU card has one), but I assume it makes pennies per unit, and by its nature it is limited in a number of ways. A market like high-performance computing, which makes the most sense for Larrabee (since it appears to have more flexibility than NVIDIA’s or AMD’s GPUs, e.g., it’s programmable in C++), is a tiny piece of the pie compared to “I just want DirectX to run as fast as possible”. The people I know on the Larrabee team are highly competent, so I don’t think the problem was there. I’d love to learn what hurdles were encountered in the areas of design, management, algorithms, resources, etc. Even the architectural choices made for Larrabee are not understood in all their particulars (though we have some good guesses), since it’s unreleased. Sadly, we’re unlikely to know most of the story; writing “The Soul of An Unreleased Machine” is not an inspiring tale, though perhaps a fascinating one.

Real-time Mandelbulb visualization with GigaVoxels

See this post on Cyril Crassin’s blog (I just saw it linked on Tim Farrar’s blog and had to mention it here since it is wicked awesome and I wouldn’t want anyone to miss it).

Cyril is the primary inventor of the GigaVoxels technique, which has been the subject of several recent publications.  The Mandelbulb is similar to the Mandelbrot set, but in 3D.  Cyril evaluates the Mandelbulb on the fly to fill the brick cache used by GigaVoxels.
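
For the curious, the Mandelbulb iteration itself is tiny. Here is a minimal C++ sketch of the common power-8 formulation as a simple inside/outside test (angle conventions vary between implementations, and this is not Cyril’s actual GPU brick-filling code, just the underlying math):

    #include <cmath>

    // Power-8 Mandelbulb membership test for the point c = (cx, cy, cz).
    // Iterate z -> z^8 + c, with the power applied in spherical coordinates.
    bool inMandelbulb(double cx, double cy, double cz,
                      int maxIter = 20, double bailout = 2.0)
    {
        const double power = 8.0;
        double x = 0.0, y = 0.0, z = 0.0;
        for (int i = 0; i < maxIter; ++i) {
            double r = std::sqrt(x*x + y*y + z*z);
            if (r > bailout)
                return false;                          // escaped: outside the set
            double theta = std::acos(r > 0.0 ? z / r : 1.0);
            double phi   = std::atan2(y, x);
            double rp    = std::pow(r, power);
            x = rp * std::sin(power * theta) * std::cos(power * phi) + cx;
            y = rp * std::sin(power * theta) * std::sin(power * phi) + cy;
            z = rp * std::cos(power * theta) + cz;
        }
        return true;                                   // never escaped: treat as inside
    }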

Mandelbulb + Gigavoxels = real-time Mandelbulb visualization = pure win.