WebGL Resources

by Patrick Cozzi, a guest blogger

(I was corresponding with Patrick and found he knew way too much about WebGL, so asked him to write something down. – Eric)

Although I am a long-time C++ and OpenGL developer, I’ve been developing full-time in JavaScript and WebGL for the past year and a half on an open-source 3D engine, Cesium, for virtual globes and maps. Here are some of my favorite WebGL resources.

Reading

SIGGRAPH

  • The WebGL BOF, organized by Ken Russell, will have a series of five-minute lightning talks with a focus on demos, including a Cesium demo I’m giving. Last year the room was packed – people standing, sitting on the floor, and crowding around the door. Let’s hope the room is a lot bigger this year.
  • Graphics Programming for the Web is a timely new course by Pushkar Joshi, Mikaël Bourges-Sévenier, Ken Russell, and Zhenyao Mo covering WebGL and other relevant HTML5 techniques. It sounds like it will be pretty broad, which is great for C++ developers like me who recently started to pretend to be web developers.
  • Although not WebGL-specific, I’ll be at the Rest 3D BOF organized by Rémi Arnaud. I’ll even miss part of Beyond Programmable Shading for it. Rest 3D is defining a REST API for accessing 3D content over HTTP. If it gets widespread adoption from content providers, WebGL apps using the API will have access to a ton of content, which is a big win for everyone.

Need to convince management/leads to consider WebGL?

  • WebGL is cross-platform and doesn’t require an install, plugin, or admin rights. IE doesn’t support WebGL, but there are several options. We’ve found Chrome Frame to be the best because it installs without requiring admin rights, and it also brings Chrome’s fast JavaScript engine.
  • WebGL browser support is increasing. Check out WebGL Stats by Florian Bösch. It currently reports that 65.6% of desktop browsers across all OSes support WebGL (more stats for Firefox here). A minimal feature-detection sketch appears after this list.
  • JavaScript doesn’t suck that much, really. JavaScript: The Good Parts by Douglas Crockford and his other JavaScript writings are great reads. There are downsides too, of course; for example, I have a much harder time reasoning about performance in JavaScript than I do in C++. Fortunately, the built-in Chrome profiler is painless to use.
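
Since support varies by browser, the first thing a WebGL app typically does is feature-detect. A minimal sketch (the helper name is mine; “experimental-webgl” is the prefixed alias many current browsers still use):

    function getWebGLContext(canvas) {
        // Try the standard context name first, then the common prefixed alias.
        var gl = canvas.getContext("webgl") ||
                 canvas.getContext("experimental-webgl");
        return gl || null;
    }

    var gl = getWebGLContext(document.createElement("canvas"));
    if (!gl) {
        // No WebGL here: fall back to a 2D renderer, or suggest Chrome Frame
        // to IE users as described above.
    }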

Tools

  • The WebGL Inspector by Ben Vanik allows us to step through WebGL calls or just draw calls, and view textures, buffers, shaders, and the current state – think gDEBugger for WebGL. I like to use it as a sanity check to make sure we are not making too many draw calls or loading too many textures.
  • Our WebGL Report uses a pipeline diagram to display the system’s WebGL capabilities, such as maximum texture size and number of texture image units. A sketch of the underlying queries follows this list.
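
The numbers in such a report come from standard getParameter queries; a hedged sketch, assuming gl is a live WebGL context:

    // Query a few implementation limits (all standard WebGL 1.0 parameters).
    var maxTextureSize = gl.getParameter(gl.MAX_TEXTURE_SIZE);
    var maxCubeMapSize = gl.getParameter(gl.MAX_CUBE_MAP_TEXTURE_SIZE);
    var maxFragmentTex = gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS);
    var maxVertexTex   = gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS);
    console.log("Max 2D texture: " + maxTextureSize +
                ", cube map: " + maxCubeMapSize +
                ", fragment texture units: " + maxFragmentTex +
                ", vertex texture units: " + maxVertexTex);

Note that MAX_VERTEX_TEXTURE_IMAGE_UNITS can legally be zero, so vertex texture fetch is itself a capability worth reporting.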

Finally, the WebGL wiki has a ton of great resources including a list of frameworks and more.

OpenGL Insights and more

In my previous blog post I mentioned the newly-released book OpenGL Insights. It’s worth a second mention, for a few reasons:

  • The editors and some contributors will be signing their book at SIGGRAPH at the CRC booth (#929) on Tuesday at 2 pm. This is a great chance to meet a bunch of OpenGL experts and chat.
  • Five free chapters, which can be found here. In particular, “Performance Tuning for Tile-Based Architectures” is of use to anyone doing 3D on mobile devices. Most of these devices are tile-based, so they have a number of significant differences from normal PC GPUs (one small example after this list). Reading this chapter and the (previously-mentioned) slideset Bringing AAA Graphics to Mobile Platforms (PDF version) should give you a good sense of the pitfalls and opportunities of mobile tile-based architectures.
  • The book’s website has an OpenGL pipeline map page (direct link to PDF here). Knowing what happens when can clarify some mysteries and solve some bugs.
  • The website also has a tips page, pointing out some of the subtleties of the API and the shading language.
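
One concrete example of the tile-based difference, sketched in WebGL terms (my example, not code from the chapter): a tile-based GPU renders into small on-chip tiles, so telling the driver up front that old framebuffer contents are disposable saves it a round trip to memory.

    // At the top of each frame, clear every attachment you use. This tells a
    // tile-based driver it need not reload last frame's tile contents.
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT | gl.STENCIL_BUFFER_BIT);

    // Conversely, avoid mid-frame reads of the render target, e.g.
    // gl.readPixels(...), which force the GPU to flush and resolve every
    // pending tile before it can answer.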

While I’m at it, here are some other worthwhile OpenGL resource links I’ve been collecting:

  • ApiTrace: a simple set of wrapper DLLs that capture graphics API calls (also works for DirectX). You can replay and examine just about everything – think “PIX for OpenGL”, only better. For example, you can edit a shader in a captured run and immediately see the effect. Also, it’s open-source and as of this writing is actively being developed.
  • ANGLE: software to translate OpenGL ES 2.0 calls to DirectX 9 calls. This package is what both Chrome and Firefox use to run WebGL programs on Windows. Open source, of course. Actually, just assume everything here is open-source unless I say different (which I won’t).
  • Microsoft Internet Explorer won’t support WebGL, so someone else did, via a plug-in.
    • Edit: Patrick Cozzi (one of the editors of OpenGL Insights) notes that there are several options for WebGL on IE. “Currently, I think the best option is to use Chrome Frame. It painlessly installs without admin rights, and also brings Chrome’s fast JavaScript engine to IE. We use it on http://cesium.agi.com and I actually demo it on IE (including installing Chrome Frame) by request quite frequently.”
  • Equalizer: a framework for coarse-parallel OpenGL rendering (think multi-display and multi-machine).
  • Oolong and dEngine: OpenGL ES rendering engines for the iPhone and iPad. Good for learning how things work. Oolong is 90% C++, dEngine is pure C. Each has its own features, e.g. dEngine supports bump mapping and shadow mapping.
  • A bit dated (OpenGL ES 1.1), but might be of interest: a readable rundown of the ancient Wolf3D engine and how it was ported to the iPhone.
  • gl2mark: benchmarking software for OpenGL ES 2.0.
  • Matrix libraries: GLM is a full-blown matrix library based on OpenGL naming conventions; libmatrix is a template library for vector and matrix transformations for OpenGL; VMSL is a tiny library providing modern OpenGL with the modelview/projection matrix functionality of OpenGL 1.0. (A tiny sketch of what such libraries do appears after this list.)
  • G3D: well, it’s more a user of OpenGL, but worth a mention. It’s a pretty nice C++ rendering engine that includes deferred shading, as well as ray tracing. I use it a lot for OBJ file display.
  • OpenGL works with cairo, a 2D vector-based drawing engine. Funky.
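
Since modern GL dropped the built-in matrix stack, the one thing every such library must supply is a projection/modelview builder. A minimal sketch in JavaScript (my code, not any of these libraries'; a gluPerspective work-alike in the column-major layout that gl.uniformMatrix4fv expects):

    function perspective(fovY, aspect, near, far) {
        var f  = 1.0 / Math.tan(fovY / 2.0);
        var nf = 1.0 / (near - far);
        // Column-major 4x4, the same convention as fixed-function OpenGL.
        return new Float32Array([
            f / aspect, 0, 0,                     0,
            0,          f, 0,                     0,
            0,          0, (far + near) * nf,    -1,
            0,          0, 2.0 * far * near * nf, 0
        ]);
    }

The real libraries add the rest: lookAt, rotations and translations, quaternions, and so on.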

Seven things for July 25

Here we go:

  • Beautiful demo of various effects, the realtime hybrid raytracing demo RIGID GEMS. Do note there are controls. The foreground blurs for the depth-of-field are a little unconvincing, but the rest is lovely! (thanks to Steve Worley for the tip)
  • Books to check out at SIGGRAPH, or now (I’m sure there are more – let me know): OpenGL Insights and Shadow Algorithms Data Miner. Five chapters of OpenGL Insights are free to read here. Quite a few graphics books have been published since last SIGGRAPH; we have them listed here.
  • Scalable Ambient Obscurance looks worthwhile, and there’s even a demo and source.
  • I can’t say I grok it all yet, but Bringing AAA Graphics to Mobile Platforms (from GDC) has a lot of chewy information on what’s fast and slow on typical mobile hardware, as well as how it works. PDF version on the Unreal Engine site.
  • A somewhat older (a whole year or so old) article on changing resolution on the fly to maintain frame rate; a quick sketch of the idea follows this list. (Thanks, Mauricio)
  • 3D printing opens up a wide range of legal issues; It Will Be Awesome if They Don’t Screw it Up gives an overview of some of these. There are a number of areas where the law hasn’t had to concern itself yet.
  • Echo chamber: stuff you should probably know about already, but just in case. 1) Ouya, a monster money-raiser Kickstarter project for an open console. Tim Lottes comments; my take is “Android games on a console? Weak.” but I’d love to see them succeed. 2) Source Filmmaker, a free film making system from Valve. People are getting busy with it.
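
The resolution-scaling idea translates directly to WebGL, since a canvas’s backing store can differ from its CSS size. A hedged sketch (my code, not the article’s; the thresholds are arbitrary):

    var scale = 1.0;
    var lastTime = Date.now();

    function adjustResolution(gl, canvas) {
        var now = Date.now();
        var frameMs = now - lastTime;
        lastTime = now;

        // Too slow: drop the internal resolution. Headroom: raise it back.
        if (frameMs > 20 && scale > 0.5) { scale -= 0.05; }
        else if (frameMs < 14 && scale < 1.0) { scale += 0.05; }

        // CSS keeps the on-screen size fixed; only the backing store changes.
        canvas.width  = Math.round(canvas.clientWidth  * scale);
        canvas.height = Math.round(canvas.clientHeight * scale);
        gl.viewport(0, 0, canvas.width, canvas.height);
    }

In practice you’d smooth the frame-time estimate, add hysteresis, and only touch canvas.width when the scale actually changes, since resizing clears the drawing buffer.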

To conclude, a photo that looks like a rendering bug; read about it here. If you like these sorts of things, see more at the “2 True 2B Good” collection.

SIGGRAPH 2012

My schedule for SIGGRAPH so far (sans social gatherings), using this technology where you can put everything on an incredibly light-weight portable screen with extremely long battery life (though the erase feature sucks if you use the high-contrast “ink” display mode):

[Photo: my handwritten SIGGRAPH 2012 schedule]

I’ve tried various apps over the years, and this is what works for me. The back has plenty of room for quick notes on things to follow up on after SIGGRAPH, if I write small enough.

So what am I missing?

Oh, and yes, Emil Persson’s talk is going to happen twice (not his fault, and I consider this A Very Good Thing), as, apparently, is the Processing 2.0 talk. Ah, wait: I just heard back from Andres, and the second Processing talk (on Tuesday) is cancelled.

Edits: added Fast Forward (thanks, Hanspeter). Also, I entirely forgot to look at the Exhibitor Talks, which have a few things of interest.

Oh, and here’s a neat Google Calendar thingy for SIGGRAPH 2012 that Dan Wexler pointed me at: http://skitten.org/2012/07/siggraph-2012-google-calendars/

Seven Things for June 22nd

Here goes:

  • The Journal of Computer Graphics Techniques has published its first accepted article: “Importance Sampling of Reflection from Hair Fibers”. Free for download, of course.
  • Mauricio Vives pointed out that the thirteenth article in a series going back to 2010 is now up: Fluid Simulation for Video Games. Not my particular interest, but my gosh this collection is impressive; it’s practically book-length at this point, and includes code snippets and a demo with source (you’ll need to install TBB to compile).
  • A new book has been announced, coming out in October: The CUDA Handbook. The author, Nicholas Wilt, was a software architect at NVIDIA who worked on CUDA from its inception.
  • John Owens pointed out an interesting way to search Google Scholar, by publications with the word graphics. This gives an interesting weighting of influence – no real surprises, and note that some conferences are not included (I highly doubt that EGSR, I3D, and HPG don’t make the cut).
  • Sebastien Lagarde gives an in-depth analysis showing how the Phong and Blinn specular highlight models are related by a factor of 4 in the power; the gist of the math is sketched after this list. This is an old result from Fisher & Woo in Graphics Gems IV, but it’s nice to see an independent verification and analysis.
  • Thinking of Graphics Gems, I wanted to mention this old piece of news now: Jim Arvo (editor of Graphics Gems II) passed away back on October 19th of last year. He did seminal work in ray tracing, Monte Carlo sampling, light transport, and many other areas, and was also just a great guy. See his homepage while it’s still there.
  • There are mutant women who live amongst us with a fourth type of cone cell. There’s more information on Wikipedia.
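
For the record, here is the factor-of-4 argument in compressed form (my sketch of the standard small-angle derivation; the exact halving of the angle assumes V, L, and N are coplanar). The half vector H sits halfway between L and V, so the angle between N and H is half the angle between the reflection vector R and V:

    \theta_{NH} = \tfrac{1}{2}\,\theta_{RV}

    (\mathbf{N}\cdot\mathbf{H})^{4n} = \cos^{4n}\!\left(\tfrac{\theta}{2}\right)
        \approx \left(1 - \tfrac{\theta^{2}}{8}\right)^{4n}
        \approx e^{-n\theta^{2}/2}
        \approx \cos^{n}\theta = (\mathbf{R}\cdot\mathbf{V})^{n}

So for tight highlights, a Blinn exponent of 4n produces roughly the same lobe as a Phong exponent of n, which is the factor of 4.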

SIGGRAPH 2012 early registration deadline is today

If you’re reading this after June 18th, oh well…

Registration page is here.

Me, I wouldn’t rate SIGGRAPH the premier interactive rendering research conference any more: I3D and HPG publish far more relevant results overall. SIGGRAPH still has a lot of other great stuff going on, and there are enough things of interest to me this year that I’m happy to be attending:

  • I guess the courses are the main draw for me right now, and some of these have become informal venues for interactive rendering R&D presentations (e.g. the Advances course).
  • SIGGRAPH Mobile could be interesting. Given the huge profit margins of GPUs for mobile vs. PCs, it’s where the market has moved. It feels a little “back to the future”, with GPU speeds set back about a decade relative to PC performance, but there’s some interesting research being done, e.g. this paper (not at SIGGRAPH but at HPG; I noticed it today on Morgan McGuire’s Twitter feed and thought it was fascinating).
  • I was thinking of arriving Sunday afternoon, but then noticed some interesting talks in the Game Worlds talks on Sunday, 2-3:30 pm.
  • Other talks will be of interest; I’ll need to wade through the list.
  • Emerging Technologies and the Exhibition Floor usually have something that grabs my attention (if nothing else, I can browse through new books), and maybe I should give Real-Time Live a visit.
  • And, meeting people, of course – it’s inspiring and fun to hear what others are up to. Sometimes a little chance conversation will later have great value.

Why submit when you can blog?

I was cleaning up the RTR portal page today. Of all the links on this page, I most often use those in the first three items. I used to have about 30 blogs listed. Trying them all today, I found 5 have disappeared forever (replaced by junk like this and this) and 10 more are essentially dead (no postings in more than a year). Understandable: blogs usually don’t live that long. One survey gives 126 days as the average lifetime of a typical blog. Another notes that even the top 100 blogs last an average of less than 3 years.

Seeing good blogs disappear forever is sad for me. If I’m desperate, I can try finding them using the Wayback Machine, but sometimes I find only bits and pieces, if that. This goes for websites, too. If I see some article I like, I try to save a copy locally. Even then, such pages are hard to find later – I’m not that organized. Other people are entirely out of luck, of course.

My takeaway: feel free to start a blog, sure. But if you have some useful programming technique you’ve explained, and you want people to know about it for some time to come, then also submit it to a journal. One blog I mentioned last post, Morten Mikkelsen’s, shows one way to mix the two: he shows new results and experiments on his blog, and submits solid ideas to a journal. I of course strongly suggest the (new, yet old) Journal of Computer Graphics Techniques (JCGT), the spiritual successor to the journal of graphics tools (as noted earlier, all the editors have left the old journal). Papers on concise, practical techniques and ideas are what it’s for – just the sorts of things I see on many graphics blogs. Now that the journal is able to publish ideas quickly, I dearly want to see more short, Graphics Gems-like papers. If and when you decide to quit blogging, get hit by an asteroid, or have a kid, and you previously submitted your work to a journal and had it accepted, you then have something permanent to show for it all, something that others can benefit from years later. It’s not that hard, honestly – just do it. JCGT prides itself on working with authors to help polish their work and bring out the best, but there are plenty of other venues, ranging from SIGGRAPH talks, Gamasutra articles, and GPU Pro submissions to full-blown ACM TOG papers.

Oh, I should also note that JCGT is fine with work that is not necessarily new, but fills a gap in the literature, explains an improved way of performing an algorithm, gives implementation advice, etc. Citing sources is important – don’t claim work that isn’t your own – but otherwise the goal is simple: present techniques useful for computer graphics programmers.

By the way, if you do run a website of any sort, here are my top three pet peeves, so please don’t do them:

  • Moving page locations and leaving no forwarding page at the old page’s location (I’m looking at you, NVIDIA and AMD) – you don’t care if someone directs traffic to your site?
  • Giving no contact email address or other feedback mechanism on your web pages – you don’t want to know when something’s broken?
  • Giving no “last updated on” date on your web pages – you don’t want people to know how fresh the info is?

Seven Things for June 7th

I’ll be gone this weekend, so my dream of catching up on resources by posting every day is slowed a bit. Here’s today’s seven:

  • The free Process Explorer has a lot more functionality than its name implies. One very cool feature is that it actually shows GPU usage. Run it, right-click a process that’s running and select Properties, then go to the GPU Graph tab to watch memory use and GPU load.
  • If you are seriously involved in implementing bump maps, parallax occlusion maps, etc., Morten Mikkelsen’s blog has a lot of chewy information, along with demos and source. He’s doing a lot of interesting work on autogenerating and blending mappings.
  • The game itself is no great shakes, but Google’s Cube has some lovely 3D rendering going on via JavaScript.
  • Another “3D in the browser” experiment (with WebGL) is sketchPatch. It’s not as simple as advertised, but I like the idea of an interpreted language you just type and see in the same window.
  • There are lots of reasons Unreal Engine 3 is the most popular commercial 3D engine for games. Here’s some nice eye candy from their tutorial on image reflection, which is also just plain educational.
  • Some cool results here using cone tracing for global illumination effects. Seeing these effects for dynamic objects at interactive rates is great stuff, especially since they’re having to update octrees on the fly.
  • I love the colored Japanese woodcuts of classic videogames that Jed Henry has been making.

Seven Things for June 6th

It’s D-Day and it’s been a while, so let’s get going. This is a LIFO of the 486 backlogged links I’ve collected for this blog:

  • GPUView looks like an interesting profiling tool from some students at Stanford (done as interns at Microsoft, which has a more official page), though I’ve heard it’s a bit of work to set up. If you’ve used it, how did you find it?
  • Open source code for a fast and scalable GLSL GPU implementation of Perlin noise, using functions, not textures.
  • NV Path Rendering is not what you might think, it’s about rendering text and 2D paths with quite a bit of elaboration available (think SVG or other 2D vector descriptions). GTC presentation here.
  • The book “Physically Based Rendering” is now in eBook form, including PDF (so I assume no DRM?). Annoyingly, it costs considerably more than the physical book on Amazon, but that’s the publisher’s doing.
  • Proland looked intriguing: a procedural terrain generator that creates terrain based on the view. It appears fairly elaborate, and is a quick way to get some plausible-looking terrain data.
  • Geekbench is a cross-platform benchmarking system; from what I’ve heard, mobile platforms kind of set the clock back a fair number of years in terms of performance. Still, 3D is doable (it certainly was in 2002); here’s a starter list of 3D CAD apps for Android (many are on the iPad, too). I need to search out more; I’m interested in what’s out there.
  • Finally, in the category “this looks like a painting but is reality”, a photo taken in Namibia.

Author-Izer, and what do publishers provide

There’s a new service provided by ACM’s Digital Library: Author-Izer. Short version: if you have published something with the ACM, and you have a preprint of the paper on your own or your company’s website, you can provide the ACM DL this link for your article and they’ll put it with the article reference. This is fairly sporting of the ACM. If you’re an author it’s worth this bit of effort to give your work wider dissemination. Linking also can provide the ACM with download statistics from your site and so give a better sense of the impact of your paper (or at least inflate your statistics compared to people not using Author-Izer).

As a reader without an ACM DL subscription, it’s still better to go to Ke-Sen Huang’s site or Google Scholar, where these external author sites have been collected without each author’s effort. For example, free preprints of 95% of SIGGRAPH 2011 papers are linked on Ke-Sen’s page. In a perfect world, the ACM would simply hire Ke-Sen for a few days and have him add all his external links of authors’ sites to their database. I’d personally toss in $20 towards that effort. I suspect there are 18 reasons given why this would not be OK – “we want individual authors to control their links” (but why not give a default link if the author has not provided one?), “we’re not comfortable having a third party provide this information” (so it’s better to have no information than potentially incorrect information leading you, at worst, to a dead link?), or the catch-all “that’s not how we do things” (clearly).

As Bernie Rous discusses, there’s a tension at the ACM between researchers, who want the widest dissemination of their work, and professional staff, who are concerned about the financial health of the organization. Author-Izer helps researchers, but there’s little direct benefit to the ACM’s bottom line. Unfortunately, currently the Author-Izer service seems to be virtually unknown. For example, the SIGGRAPH 2011 table of contents appears to have no Author-Izer links, though perhaps I’m missing them. I hope this post will help publicize this service a bit.

It’s nice that the ACM allows authors to self-archive, where they can provide preprints of their own work on their website or their institution’s. Most scholarly journals allow this archiving of preprints – more than 90%, according to one writer (and more than 60% allow self-archiving of the refereed final draft, which the ACM does not allow). For authors at academic institutions with such archives, great, easily done; for authors at games companies, film companies, the self-employed, etc., it’s catch-as-catch-can. If the author hosts his own work and is hit by a meteor, or just loses interest, his website eventually fades away and the article is then available only behind a paywall. One understandable reason for ACM’s “must be hosted by the author or his institution” clause is that it keeps lower-cost paywalls from competing. But why not just specify that? I can see a non-compete clause like “the author will not host his preprint behind a paywall” (a restriction the ACM doesn’t currently have), but otherwise who cares where the article is hosted, as long as it’s free to download?

This restriction feels like a business model founded on being a PITA: instead of uploading his article to some central free access site and never thinking about it again, each author needs to keep track of access and deal with job changes, server reorganizations and redirects, company bankruptcy or purchase, Author-Izer updates, and anything else that can make his website go off the radar. Pose this problem to a thousand authors and the free system will be inherently weak and ineffectual, making the pay version more desirable.

I believe that many people in the ACM have their hearts in the right place; there’s no conspiracy here. However, the tension of running a paywall service like the Digital Library gives a “one hand tied behind my back” feel to efforts at more open access. If there were no economic constraints, clearly the ACM DL would be free and there would be no real point to Author-Izer. Right now there still are these financial concerns, very real ones.

A journal publisher used to offer:

  • Physical journal printing and binding
  • Copy editing, illustrations, and layout
  • Peer review and professional editors
  • Archiving
  • Distribution to subscribers and institutions
  • Reputation

The physical artifact of the journal itself is becoming rarer, and authors now do copy editing, illustrations, and most to all of the layout. The technical editors and reviewers are all unpaid, so their contributions are separate from the publisher itself – many journals have abandoned their publishers, as the recent Elsevier boycott has highlighted. So what is left that publishers provide?

Another way to look at it: what if publishers suddenly disappeared? Different systems would supplant their services, for good or ill: Google, for instance, might provide archiving for free (they already do this for magazines like Popular Science). Distribution is as simple as “get on the mailing list.” Reputation is probably the one with the most long-term value. I don’t think I’d want a reddit-style up/down vote system instead, given the various problems it has. CiteSeer and Google Scholar are pretty good at determining reputation by citation count. You can even check your own citation count for free. There are ways of determining a paper’s impact beyond simple citation counts; lots of people think about this.

I can imagine a few answers for why publishers matter – the disconnected solutions I mentioned are not necessarily the best. However, the burden of proof is on the publisher, both commercial and non-profit, to justify its continued existence. It will be interesting to see how the various open-access initiatives play out and how they affect publishers.