Category Archives: Resources

SIGGRAPH 2012 early registration deadline is today

If you’re reading this after June 18th, oh well…

Registration page is here.

Me, I wouldn’t rate SIGGRAPH the premier interactive rendering research conference any more: I3D or HPG publish far more relevant results overall. SIGGRAPH still has a lot of other great stuff going on, and there are enough things of interest to me this year that I’m happy to be attending:

  • I guess the courses are the main draw for me right now, and some of these have become informal venues for interactive rendering R&D presentations (e.g. the Advances course).
  • SIGGRAPH Mobile could be interesting. Given the huge profit margins of mobile GPUs vs. PC GPUs, it’s where the market has moved. It feels a little “back to the future”, with mobile GPU speeds reset back about a decade compared to PC performance, but there’s some interesting research being done, e.g. this paper (not at SIGGRAPH but at HPG; I noticed it today on Morgan McGuire’s Twitter feed and thought it was fascinating).
  • I was thinking of arriving Sunday afternoon, but then noticed some interesting talks in the Game Worlds talks on Sunday, 2-3:30 pm.
  • Other talks will be of interest, I’ll need to wade through the list.
  • Emerging Technologies and the Exhibition Floor usually have something that grabs my attention (if nothing else, I can browse through new books), and I maybe should give Real Time Live a visit.
  • And, meeting people, of course – it’s inspiring and fun to hear what others are up to. Sometimes a little chance conversation will later have great value.

Why submit when you can blog?

I was cleaning up the RTR portal page today. Of all the links on this page, the ones I use most often are in the first three items. I used to have about 30 blogs listed. Trying them all today, 5 have disappeared forever (replaced by junk like this and this), and 10 more are essentially dead (no postings in more than a year). Understandable: blogs usually don’t live that long. One survey gives 126 days for the average lifetime of a typical blog. Another notes that even the top 100 blogs last an average of less than 3 years.

Seeing good blogs disappear forever is sad for me. If I’m desperate, I can try finding them using the Wayback Machine, but sometimes will find only bits and pieces, if that. This goes for websites, too. If I see some article I like, I try to save a copy locally. Even then, such pages are hard to find later – I’m not that organized. Other people are entirely out of luck, of course.

My takeaway: feel free to start a blog, sure. But if you have some useful programming technique you’ve explained, and you want people to know about it for some time to come, then also submit it to a journal. One blog I mentioned last post, Morten Mikkelsen’s, shows one way to mix the two: he shows new results and experiments on his blog, and submits solid ideas to a journal. I of course strongly suggest the (new, yet old) Journal of Computer Graphics Techniques (JCGT), the spiritual successor to the journal of graphics tools (as noted earlier, all the editors have left the old journal). Papers on concise, practical techniques and ideas are what it’s for, just the sorts of things I see on many graphics blogs. Now that the journal is able to quickly publish ideas, I dearly want to see more short, Graphics Gems-like papers. If and when you decide to quit blogging/get hit by an asteroid/have a kid, if prior to this you also submitted your work to a journal and had it accepted, you then have something permanent to show for it all, something that others can benefit from years later. It’s not that hard, honestly – just do it. JCGT prides itself on working with authors to help polish their work and bring out the best, but there are plenty of other venues, ranging from SIGGRAPH talks, Gamasutra articles, and GPU Pro submissions to full-blown ACM TOG papers.

Oh, I should also note that JCGT is fine with work that is not necessarily new, but fills a gap in the literature, explains an improved way of performing an algorithm, gives implementation advice, etc. Citing sources is important – don’t claim work that isn’t your own – but otherwise the goal is simple: present techniques useful for computer graphics programmers.

By the way, if you do run a website of any sort, here are my three top pet peeves, so please don’t do them:

  • Moving page locations and leaving no forwarding page at the old page’s location (I’m looking at you, NVIDIA and AMD) – you don’t care if someone directs traffic to your site?
  • Giving no contact email address or other feedback mechanism on your web pages – you don’t want to know when something’s broken?
  • Giving no “last updated on” date on your web pages – you don’t want people to know how fresh the info is?

Seven Things for June 7th

I’ll be gone this weekend, so my dream of catching up on resources by posting every day is slowed a bit. Here’s today’s seven:

  • The free Process Explorer has a lot more functionality than its name implies. One very cool feature is that it actually shows GPU usage. Run it, right-click a process that’s running and select Properties, then go to the GPU Graph tab to watch memory use and GPU load.
  • If you are seriously involved in implementing bump maps, parallax occlusion maps, etc., Morten Mikkelsen’s blog has a lot of chewy information, along with demos and source. He’s doing a lot of interesting work on autogenerating and blending mappings.
  • The game itself is no great shakes, but Google’s Cube has some lovely 3D rendering going on via JavaScript.
  • Another “3D in the browser” experiment (with WebGL) is sketchPatch. It’s not as simple as advertised, but I like the idea of an interpreted language you just type and see in the same window.
  • There are lots of reasons Unreal Engine 3 is the most popular commercial 3D engine for games. Here’s some nice eye candy from their tutorial on image reflection, which is also just plain educational.
  • Some cool results here using cone tracing for global illumination effects. Seeing these effects for dynamic objects at interactive rates is great stuff, especially since they’re having to update octrees on the fly.
  • I love the colored Japanese woodcuts of classic videogames that Jed Henry has been making:
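A footnote on the bump-mapping item above: the core trick underlying these mapping schemes is tilting the shading normal by the height field’s derivatives. Here’s a minimal Python sketch of my own (a toy sine-wave height field standing in for a bump texture; this is classic Blinn-style bump mapping, not Mikkelsen’s code):

```python
import math

def height(x, y):
    # Toy height field standing in for a bump-texture lookup.
    return 0.1 * math.sin(x * 6.0) * math.cos(y * 6.0)

def perturbed_normal(x, y, eps=1e-4):
    """Tilt a flat surface's normal (0, 0, 1) by the height field's
    partial derivatives, estimated here with central differences."""
    dhdx = (height(x + eps, y) - height(x - eps, y)) / (2.0 * eps)
    dhdy = (height(x, y + eps) - height(x, y - eps)) / (2.0 * eps)
    n = (-dhdx, -dhdy, 1.0)
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return tuple(c / length for c in n)
```

A GPU version does the same math per pixel in a shader, differencing neighboring texels (or using precomputed derivative maps) instead of calling a function.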

Seven Things for June 6th

It’s D-Day and it’s been awhile, so let’s get going. This is a LIFO of the 486 backlogged links I’ve collected for this blog:

  • GPUView looks like an interesting profiling tool from some students at Stanford (done as interns at Microsoft, which has a more official page), though I’ve heard it’s a bit of work to set up. If you’ve used it, how did you find it?
  • Open source code for a fast and scalable GLSL GPU implementation of Perlin noise, using functions rather than textures.
  • NV Path Rendering is not what you might think, it’s about rendering text and 2D paths with quite a bit of elaboration available (think SVG or other 2D vector descriptions). GTC presentation here.
  • The book “Physically Based Rendering” is now in eBook form, including PDF (so I assume no DRM?). Annoyingly, it costs considerably more than the physical book on Amazon, but that’s the publisher’s doing.
  • Proland looks intriguing: a procedural terrain generator that creates terrain based on the view. It appears fairly elaborate, and a quick way to get some plausible-looking terrain data.
  • Geekbench is a cross-platform benchmarking system; from what I’ve heard, mobile platforms set the clock back a fair number of years in terms of performance. Still, 3D is doable (it certainly was in 2002); here’s a starter list of 3D CAD apps for Android (many are on the iPad, too). I need to search out more; I’m interested in what’s out there.
  • Finally, in the category “this looks like a painting but is reality”, a photo taken in Namibia:
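A quick aside on the Perlin noise item above: the “functions, not textures” idea is to hash lattice coordinates into gradients arithmetically rather than reading a permutation texture. A toy, unoptimized CPU-side sketch of my own (the hash constants are arbitrary, not from the linked code):

```python
import math

def hash_gradient(ix, iy):
    # Cheap integer hash to a unit gradient; GPU versions compute this
    # arithmetically instead of fetching from a permutation texture.
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    angle = (h % 360) * math.pi / 180.0
    return math.cos(angle), math.sin(angle)

def fade(t):
    return t * t * t * (t * (t * 6 - 15) + 10)  # Perlin's quintic

def gradient_noise(x, y):
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    def dot_corner(cx, cy):
        gx, gy = hash_gradient(ix + cx, iy + cy)
        return gx * (fx - cx) + gy * (fy - cy)
    u, v = fade(fx), fade(fy)
    nx0 = dot_corner(0, 0) * (1 - u) + dot_corner(1, 0) * u
    nx1 = dot_corner(0, 1) * (1 - u) + dot_corner(1, 1) * u
    return nx0 * (1 - v) + nx1 * v
```

As with all gradient noise, the value is zero at every lattice point and bounded in between, which is what makes it so controllable for shading.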

Author-Izer, and what do publishers provide

There’s a new service provided by ACM’s Digital Library: Author-Izer. Short version: if you have published something with the ACM, and you have a preprint of the paper on your own or your company’s website, you can provide the ACM DL this link for your article and they’ll put it with the article reference. This is fairly sporting of the ACM. If you’re an author it’s worth this bit of effort to give your work wider dissemination. Linking also can provide the ACM with download statistics from your site and so give a better sense of the impact of your paper (or at least inflate your statistics compared to people not using Author-Izer).

As a reader without an ACM DL subscription, it’s still better to go to Ke-Sen Huang’s site or Google Scholar, where these external author sites have been collected without each author’s effort. For example, free preprints of 95% of SIGGRAPH 2011 papers are linked on Ke-Sen’s page. In a perfect world, the ACM would simply hire Ke-Sen for a few days and have him add all his external links of authors’ sites to their database. I’d personally toss in $20 towards that effort. I suspect there are 18 reasons given why this would not be OK – “we want individual authors to control their links” (but why not give a default link if the author has not provided one?), “we’re not comfortable having a third party provide this information” (so it’s better to have no information than potentially incorrect information leading you, at worst, to a dead link?), or the catch-all “that’s not how we do things” (clearly).

As Bernie Rous discusses, there’s a tension at the ACM between researchers, who want the widest dissemination of their work, and professional staff, who are concerned about the financial health of the organization. Author-Izer helps researchers, but there’s little direct benefit to the ACM’s bottom line. Unfortunately, currently the Author-Izer service seems to be virtually unknown. For example, the SIGGRAPH 2011 table of contents appears to have no Author-Izer links, though perhaps I’m missing them. I hope this post will help publicize this service a bit.

It’s nice that the ACM allows authors to self-archive, where they can provide preprints of their own work on their website or their institution’s. Most scholarly journals allow this archiving of preprints – more than 90%, according to one writer (and more than 60% allow self-archiving of the refereed final draft, which the ACM does not allow). For authors at academic institutions with such archives, great, easily done; for authors at games companies, film companies, the self-employed, etc., it’s catch-as-catch-can. If the author hosts his own work and is hit by a meteor, or just loses interest, his website eventually fades away and the article is then available only behind a paywall. One understandable reason for ACM’s “must be hosted by the author or his institution” clause is that it disallows lower-cost paywalls from competing. But why not just specify that? I can see a non-compete clause like “the author will not host his preprint behind a paywall” (a restriction the ACM doesn’t currently have), but otherwise who cares where the article is hosted, as long as it’s free to download?

This restriction feels like a business model founded on being a PITA: instead of uploading his article to some central free access site and never thinking about it again, each author needs to keep track of access and deal with job changes, server reorganizations and redirects, company bankruptcy or purchase, Author-Izer updates, and anything else that can make his website go off the radar. Pose this problem to a thousand authors and the free system will be inherently weak and ineffectual, making the pay version more desirable.

I believe that many people in the ACM have their hearts in the right place, there’s no conspiracy here. However, the tension of running a paywall service like the Digital Library gives a “one hand tied behind my back” feel to efforts at more open access. If there were no economic constraints, clearly the ACM DL would be free and there would be no real point to Author-Izer. Right now there still are these financial concerns, very real ones.

A journal publisher used to offer:

  • Physical journal printing and binding
  • Copy editing, illustrations, and layout
  • Peer review and professional editors
  • Archiving
  • Distribution to subscribers and institutions
  • Reputation

The physical artifact of the journal itself is becoming rarer, and authors now do copy editing, illustrations, and most to all of the layout. The technical editors and reviewers are all unpaid, so their contributions are separate from the publisher itself – many journals have abandoned their publishers, as the recent Elsevier boycott has highlighted. So what is left that publishers provide?

Another way to look at it: what if publishers suddenly disappeared? Different systems would supplant their services, for good or ill: Google, for instance, might provide archiving for free (they already do this for magazines like Popular Science). Distribution is as simple as “get on the mailing list.” Reputation is probably the one with most long-term value. I don’t think I’d instead want to have a reddit up/down vote system, given the various problems it has. CiteSeer and Google Scholar are pretty good at determining reputation by citation count. You can even check your own citation count for free. There are ways of determining a paper’s impact beyond simple citation counts, lots of people think about this.

I can imagine a few answers for why publishers matter – these disconnected solutions I mentioned are not necessarily the best answers. However, the burden of proof is on the publisher, both commercial and non-profit, to justify its continued existence. It will be interesting how the various open access initiatives play out and how they affect publishers.

English Translations of tri-Ace CEDEC 2011 Slides Available

Yoshiharu Gotanda, the CEO and CTO of tri-Ace, has given many excellent graphics presentations over the years, mostly focused on physically based real-time rendering. Gotanda-san’s presentations are hosted on the tri-Ace research webpage. Some of these were originally given in English; these include presentations at  GDC 2005, 2009 and 2012, as well as SIGGRAPH 2010 courses on shading (slides, course notes) and color (slides, course notes). However, many of Gotanda-san’s presentations were given in Japanese at the CEDEC conference, and the slides are also in Japanese.

Fortunately, the slides for two of these presentations (from CEDEC 2011) have been translated into English, in a collaboration between Gotanda-san and Marc Heng (Square Enix Japan). In addition, reviews of the English version were performed by Sébastien Lagarde (DONTNOD Entertainment) and myself. These presentations deal with the theory and implementation of physically based rendering used in the tri-Ace 2011 demo trailer, as well as forthcoming titles. Both presentations are highly recommended.

Hopefully in the future, other good CEDEC graphics presentations by Gotanda-san and others (e.g. Masaki Kawase) will be translated into English.

1200 Books You May Have Rented

If you’re a member of ACM, you have access to about 1200 books online through Safari and Books24x7. Safari’s catalog is here, Books24x7 is here. Just log in using your ACM ID & password. I can’t say the book selection is that exciting – some books are half a decade old or older (bad for books about APIs) – though there are a few that might be of interest. If nothing else, there are some guides to popular packages and languages that might help you out.

Worth a reminder: if you’re an ACM SIGGRAPH member, you also get access to essentially all computer graphics papers in the ACM’s Digital Library. I took a peek today to see if the I3D 2012 papers were up yet (the conference runs this weekend) – no joy there, though 2011’s are available. At least, I’m pretty sure 2012’s are not up. Personally, I find their search tool kind of poor if you want to look through proceedings, but it’s otherwise serviceable for individual articles. As usual, Ke-Sen Huang’s page is the place to go to get most of the latest articles (no membership needed).

Oh, just to reel off some other free books that might be of interest: Autodesk’s “Imagine Design Create” book, a free PDF. More for designers but full of pretty pictures and there’s stuff on game graphic design, along with films and much else. If you’d rather have the coffee-table version, get it on Amazon.

Me, I just finished the crowd-sourced sci-fi “Machine of Death” compilation, which has nothing to do with computer graphics but was an interesting bedtime read. Free as an ebook or audiobook, or again in physical form from Amazon.

The Graphics Codex app is now available

The Graphics Codex is a little $3.99 Apple app developed by Morgan McGuire, a noted researcher and practitioner in graphics, especially interactive graphics. He’s written numerous research papers and a number of books on videogame development, consults for NVIDIA, teaches at Williams College, and has worked on games such as Titan Quest (recently named #65 in PC Gamer’s top 100 games of all time). From talking with him, the Graphics Codex is basically his reference notebook. It holds the compact nuggets of knowledge he wants to have instantly available at his fingertips (literally, since it’s an iPad/iPhone app; it also runs on iPods running iOS 5.x).

The Graphics Codex has been available for a few weeks, but this new version, 1.2, has faster scrolling and display, among other features. Morgan felt this was an important improvement, so I held off blogging about the app until it was out. Upgrades are free and simple, as with most apps. Morgan says he’s working on version 1.3, which will focus on iOS 5.1 support, color theory, and diagrams useful for explaining computer graphics topics.

So, what is it? Well, let’s start with pictures:

We all have our own favorite pieces of information we like to see included. This codex fits me pretty well: I see the reflection and refraction formulas, and various matrix types described (perspective, rotation, scale, translation, skew, determinants, etc.). I see handy things like the formatting for printf, and the LaTeX and HTML codes for Greek letters and for math symbols. I see pseudocode for object/object intersection and distance formulas, as well as various common sorting algorithms. I see raster and 3D file formats (nicely linked to the original documents on the web, when available). That’s just for starters.
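To give a flavor of the kind of entries involved: the reflection and refraction formulas, for instance, boil down to a few lines of vector math. A quick Python sketch of my own (these are the standard formulas, not the app’s code):

```python
import math

def reflect(d, n):
    """Reflect incident direction d about unit normal n: r = d - 2(d.n)n."""
    dn = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dn * ni for di, ni in zip(d, n))

def refract(d, n, eta):
    """Snell's law refraction; d and n are unit vectors, eta = n1/n2.
    Returns None on total internal reflection."""
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    return tuple(eta * di + (eta * cos_i - math.sqrt(k)) * ni
                 for di, ni in zip(d, n))
```

It’s exactly this sort of “I basically know it, just give me the exact signs” material that a reference like the Codex is for.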

There are a lot of topics included, and you can see the whole list before purchasing. In the app topics are listed alphabetically, index-style, but that’s fine, as the normal way to access this work is to search the index. Some topics I don’t know a thing about, which is great – knowing what you don’t know is important. Seeing some of these concepts inspires me to learn about them (elsewhere – like I say, this app is a reference, not a textbook). Some topics I may rarely or never look at, such as the examples shown in the images above, but I like knowing they’re there. Given that the author is a professor and consultant, I understand why they exist: these are teaching aids, information you can easily pull up and show a student, client, or other developer to help explain a concept or algorithm.

That said, there are some minor gaps. Things that came to mind for me to test but that I didn’t find: compositing (“over”, etc.), sampling and filtering (e.g. sinc and Gaussian curves), dithering (but when did I last use that?), and regular expressions (which admittedly sometimes have variations between computer languages). There’s other stuff I wouldn’t mind having: all HTML letter codes, a decimal/hexadecimal table, etc., but these are trivial to find & bookmark from sites on the web. Some domain-specific things like the various architectural projections (e.g., the various axonometric projections) would be nice, but that’s very specific to me and Wikipedia mostly fills the gap. Someday I imagine there could be a framework for such codex apps to allow you to add your own index entries, similar to how you can make your own reference work on Wikipedia (update: Morgan notes that this feature exists for his app, it’s called “email him and ask for a topic to be added”). The difference is that this app gives you the core, relevant ideas and algorithms of computer graphics in a usable form, with a consistent style, and cross-referenced to only directly relevant articles. A single author and editor, focused on a single area, adds considerable value.

This is not the first time someone has collected such reference entries. The most direct “competitor” that comes to mind is Phil Dutre’s great Global Illumination Compendium. This is a free PDF, go get it. Its focus is indeed global illumination, and it’s quite an extensive reference in this area. I would say there’s about a 25% overlap with the Graphics Codex. Another resource that comes to mind is Steve Hollasch’s collection of USENET articles, free on the web. This collection is a bit ancient, but math and physical formulas don’t change quickly. It’s a pretty shotgun-scattered set of articles, more like Graphics Gemlets, but an interesting place to wander through and search for information.

Back to the Graphics Codex. Each index entry is nicely formatted and readable, and every page (except the Bibliography, which I have reported as a bug) can be made larger or smaller. This larger/smaller functionality works well, reformatting the entry to be fully visible side-to-side, vs. typical PDF zoom, where the page can become wider than the display.

All the entries are aimed at reference use: hard information that you basically understand but where you want the precise formula or code. This is information you could eventually find on your bookshelf or on the web, but instead it is quickly available by simply searching the index. You can’t actually search the entries themselves, and the bibliography doesn’t have back links, i.e., “which Codex entries reference this article?” These are minor niggles: entries use cross-references to other entries, and most entries have a reference to related books or papers, sometimes with links directly to the reference, if online. Reference back-links are more useful in a textbook; for this reference, they’d probably mostly be clutter.

Summary

Negatives:

  • Can’t copy and paste, unlike a computer-viewable version. (There might be an app for that…?)
  • Doesn’t have everything I personally might need.
  • Entries themselves are not searchable.

Positives:

  • Searchable index makes finding things a snap.
  • Nicely formatted, color illustrations and pseudocode snippets.
  • Cross-referencing and original source references, with links.
  • Weighs much less than the related pile of books.
  • Has many to most things I like to have handy.

All in all, worth $3.99 to me.

The above are my own impressions, before reading the email Morgan sent me about the app. Here’s what Morgan said:

What I’m doing with the app is converting all of my course notes and the professional notes that I take with me consulting into easily searchable topics. This way I always have the reference material with me, without having to carry all of my graphics books between my home, office, lab, and remote sites. I usually cite not only the paper and book that material comes from, but the exact page, so that I can quickly find more information when I am in the same place as my books. DirectX, OpenGL, Unity, Mitsuba, G3D, and JOGL entry points link to their official documentation on the web. Unlike a PDF or Apple IBook, The Graphics Codex does all typesetting live so it reflows for the orientation and size of your mobile device, and zooming in recomputes the text rather than scrolling it off the side of the page.

I’m prioritizing topics that people e-mail me about and vote for on the website…and anything that I look up in my regular work immediately goes into the next version. Version 1.2 not only adds a bunch of new topics from convolution to quaternions but an all-new UI and the ability to show the types and units of subexpressions.

So, if you get the app and see something missing, email him and go vote!

A few new books

I’ve updated our books page a bit, adding the new books I know of at this point, adding links to authors’ sites and Google Books samples, etc. Please let me know what we’re missing.

A book I know nothing about, but from updating the books page I think I’ll get, is the OpenGL 4.0 Shading Language Cookbook. A reviewer on Gamasutra gives it strong praise, as do all the Amazon customer reviews.

One I’ve left off for now is Programming GPUs, which I expect is focused on computing with the GPU (no rendering), judging from the author’s background as a quant (his bio’s cute). I also left off a heckuva lot of books on using the Unity engine, to keep the list focused on direct programming vs. using higher-level SDKs.

Along the way I noticed a nice little blog called Video Game Math, by Fletcher Dunn and Ian Parberry, who recently released a second edition of their 3D Math Primer for Graphics and Game Development. Which is pretty good, by the way. My mini-review/endorsement: “With solid theory and references, along with practical advice borne from decades of experience, all presented in an informal and demystifying style, Dunn & Parberry provide an accessible and useful approach to the key mathematical operations needed in 3D computer graphics.” There’s an extensive Google Books sample of much of the first few chapters.

In the “old but awesome and free” category this time is Light And Color – A Golden Guide. Check it out before there’s some takedown notice sent out. Yes, it’s small, it’s colorful, and some bits are dated, but there are some pretty good analogies and explanations in there. No kidding. Lots more Golden Guides here (including, incredibly, this one).

I did find that there’s a new edition of “Real Time Rendering” out, which was a surprise. The subtitle is the best: “Aalib, Aces of ANSI Art”. It’s even sold by Barnes & Noble and Books-A-Million. Happily, I couldn’t find it on Amazon, so maybe they’re scaling back on carrying these so-called books. This particular book is a paperback, and more expensive than the real thing (I like to think ours is real – it’s the dash between “Real” and “Time” that keeps it real for me). Or I should say it’s more expensive unless you buy ours from these “double your intelligence or no money back” sellers. I believe this phenomenon comes from computers tracking competitors’ prices and each one jacking up prices in response.
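That repricing feedback loop is easy to simulate. Here’s a toy Python sketch of my own (markup numbers invented, loosely inspired by the famous runaway-priced Amazon biology book) of two automated sellers each pricing off the other:

```python
def reprice(price_a, price_b, markup_a=1.27, markup_b=0.998, rounds=10):
    """Two automated sellers repeatedly price off each other. When the
    product of the two markups exceeds 1, prices grow geometrically."""
    history = [(price_a, price_b)]
    for _ in range(rounds):
        price_a = markup_a * price_b   # A prices well above B
        price_b = markup_b * price_a   # B prices just under A
        history.append((price_a, price_b))
    return history

# Each round multiplies both prices by 1.27 * 0.998, about 1.267, so a
# $30 book climbs past $300 within ten rounds.
history = reprice(30.0, 30.0)
```

Neither seller ever has to act irrationally; the runaway price is purely an artifact of both pricing rules being relative.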

In case you missed my posts on Betascript Publishing, go here – short version is that they use a computer program to find related articles on Wikipedia, put on a cover (usually the most creative part of the process), and sell it. I’d be interested to know which book is better, their computer-generated one or my own Wikipedia-derived followup, GGGG:RTRtR (Game GPU Graphics Gems: Real-Time Rendering the Redux), reviewed by me here. I really should read my own book some day, there look to be some interesting Wikipedia articles in there.

Finally, I like the concept of book autopsies:

GPU Pro^3 is available for order

Like the title says, GPU Pro^3, the next installment of the GPU Pro series, is now available for order. The publication date is realsoonnow (January 17th). The extended table of contents is a great way to get a sense of what it contains.

The GPU Pro series is essentially a continuation of the ShaderX series, just with a different publisher. I was given a look at the draft of this latest volume, and it appears in line with the others: some eminently practical and battle-tested approaches mixed with some pie-in-the-sky out-of-the-box done-with-the-metaphors ideas – having a mix keeps things lively. The article covering CryENGINE 3 is a fine combination of both, with solid algorithms alongside “this doesn’t always work but looks great when it does” concepts. Some of the material (including a fair bit of the CryENGINE 3 article) can be gleaned from presentations online from GDC and SIGGRAPH, but here it’s all polished and put in one place. Other articles are entirely fresh and new. Priced reasonably for a full-color book, it’s a volume that most graphics developers will find of interest.