Category Archives: Resources

You May Want to Own Your Own Images

Now that the SIGGRAPH 2010 paper deadline is over, I thought it worth mentioning ways in which you can retain full use of your own images, should you be fortunate enough to have your work accepted for publication. This isn’t meant as an “ACM’s copyright policy is bad” article; rather, it presents some possible workarounds while waiting for the policy to be improved. Think of these ideas as code patches.

A number of graphics people were talking about the ACM’s copyright policy. James O’Brien wrote:

I also am bothered by the fact that ACM claims to own images used in a publication. For example, if I render an image and use it to illustrate a paper, ACM now claims to own the copyright on the image and I am limited in what I can use that image for in the future. I’d like included images and other non-text content to be treated similarly to how 3rd party images are currently treated, so that the authors retain copyright to the images and only grant ACM unlimited permission to use them.

Larry Gritz replied:

James, why are you more bothered by “I painted the image, now they claim ownership” than “I wrote the words, now they claim ownership”? Aren’t they essentially the same situation?

James responded:

Not really, at least not to me. The images often represent a huge amount of work to demonstrate some algorithm. The words I wrote in an afternoon and I can always write some more words that say roughly the same thing if I had to. The images also have uses beyond the paper. For example, if “Time” magazine writes an article about me, they will want to run the images, or if a textbook author decides to talk about my algorithms s/he may want the images to illustrate the book. I also don’t see the argument for why ACM would benefit by owning the images. It’s a case where it costs the author something but gains ACM nothing, so why not change the policy to maximize everyone’s benefit?

In further discussions, we identified a few different ways to retain the use of your own images. Mine was first mentioned in the Ray Tracing News in 2005:

My advice (worth exactly nothing in court) to anyone publishing nowadays is to make two versions of any image to be published, one from a fairly different view, etc. In this way you can reuse and grant rights to the second, unpublished image as you wish. That said, there’s an area of law where you compare one photo with another and if they match by 80% (by some eyeballing metric), then they’re considered the same photo for purposes of copyright. Usually this is meant to protect one photographer’s composition from being reused by another. What it means for 3D computer graphics, where it’s easy to change the view, etc., remains to be seen. Still, ACM’s rights to your work are less clear for a new, different image. This sort of thing is small potatoes, but taking action so that you have images and videos you fully own removes the hassle factor of granting permission to others wanting to use your work.

James O’Brien said the following:

I’ve bumped into this copyright issue with images a few times. The first was when a book author wanted to use an image of mine in her text. I said yes, but she was subsequently told by ACM that she needed ACM’s permission, and she had to pay a fee and include a notice crediting ACM rather than me.

If you are willing to be persistent, you can keep ownership of your copyright for your whole paper and just grant ACM unlimited permission. I did this in 2005 and if you download “Animating Gases with Hybrid Meshes,” SIGGRAPH 2005, from the DL you will see the copyright notice says “copyright held by author”. That was inserted by them instead of the regular notice after several days of discussion on the phone. It was very unclear what the motivation was for the ACM to insist on owning the images.

If the images are owned by a 3rd party, ACM can only ask you to get permission. After 2005, I did a few papers where I included a note that the images were all copyrighted by UC Berkeley and used with permission. It’s not clear if that sort of note means anything.

The latest version of the ACM copyright form I’ve seen requires you to fill out an addendum listing 3rd-party-owned components and you have to get a separate permission form for them. My paper in SCA this summer required this form (images owned by Lucas Arts). It was a hassle to get Lucas to sign off on the permissions. But that’s not ACM’s fault… in fact Stephen Spencer was very flexible.

An anonymous person wrote:

Another option would be for people concerned about this to set up an organization, call it Digital Images LLC, that you assign the copyright to as soon as you generate the image. (That will likely require the permission of your university or employer, since the image is arguably a work-made-for-hire under the copyright law and therefore owned by the employer.)

Digital Images LLC then licenses its copyright in the images so that you can use them in papers, books, or other works. As far as ACM is concerned, it’s just as if you had used a figure from another source with permission. The ACM policy makes that clear:

The author’s copyright transfer applies only to the work as a whole, and not to any embedded objects owned by third parties. An author who embeds an object, such as an art image that is copyrighted by a third party, must obtain that party’s permission to include the object, with the understanding that the entire work may be distributed as a unit in any medium.

So, there are at least three ways to retain full rights to your own images. Mine is “make another”, James’ is “request an exception”, and the third is “create an LLC”. If you have another way, have experience with any of these, or just plain have an opinion, please comment.

7 things for January 22

There’s been some great stuff lately:

  • Gustavo Oliveira has an article in Gamasutra about writing an efficient cross-platform SIMD vector library and the tradeoffs involved. The last page was of particular interest, as I had wondered how effective the Intel C++ Compiler (ICC) was vs. Microsoft’s. He also provides downloadable source code and in-depth statistics.
  • NVIDIA has given some information about Fermi, their next GPU. Warning: their page will automatically start some audio – annoying. You could just skip to the white paper. One big deal about Fermi is its support for doubles, which means it can be used for more science & engineering number-crunching. The Tech Report has a good overview article of other interesting features, and also presents benchmarking results.
  • Tests of OpenCL, the platform-independent parallel programming standard, have started to appear for AMD and NVIDIA GPUs.
  • Speaking of NVIDIA, their PhysX engine is getting some attention. The first video clip in this article gives a sense of the sorts of effects it can add. Pretty stuff, but the funny thing about PhysX is that it must accelerate only computations that do not actually affect gameplay (i.e., it should not move objects in the scene differently than they would move on non-PhysX machines). This limits its use to particle systems and other eye candy. Not a diss—heck, most game graphics are about eye candy—but something to keep in mind.
  • Naty pointed out an article about how increasing the number of megapixels in a camera is just salesmanship and gains no actual benefit. The author later gives more explanation of his argument, which is that diffraction puts a physical limit on the useful size of a pixel for a given camera size; a rough worked example follows this list.
  • Sony Pictures Imageworks has released a draft describing their Open Shading Language (OSL). While it is aimed at high-end rendering for films, it’s interesting to see what is built in (e.g., deferred ray tracing) and what they consider important. Read the introduction for more information, or the draft itself.
  • My favorite infographic of the week: Avatar vs. Modern Warfare 2. Ignore the weird chartjunk concentric circles and focus on the numbers. The most amazing stat to me is the $200M advertising budget for MW2.
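
As a rough illustration of the diffraction argument above (my own back-of-the-envelope numbers, not figures from the article): the blur spot that diffraction produces through a circular aperture, the Airy disk, has a diameter of roughly

    d ≈ 2.44 · λ · N

where λ is the wavelength of light and N is the f-number. For green light (λ ≈ 0.55 µm) at f/4, that gives d ≈ 2.44 · 0.55 · 4 ≈ 5.4 µm. Pixels much smaller than that spot mostly sample the same blur, so piling on more of them stops adding resolvable detail.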

… and that’s seven; more later.

Sony Pictures Imageworks open source projects

In my HPG 2009 report, I mentioned that Sony Pictures Imageworks was releasing several of their projects as open source, most notably a shading language, OSL, tailored to ray-tracing. For a long time, there was no actual information available on OSL, but now (tipped off by a recent ompf post) I see that some has appeared.

OSL is hosted on Google Code, the main page is here, and an introductory document can be found here. The language has several features that seem well-designed for ray-tracing; someone with more knowledge in this area will have to weigh in on its usefulness.

I3D 2010 Registration Open

I3D 2010 is located just north of Washington, DC this year, during the weekend of February 19-21 (Friday through Sunday). It will be at the Bethesda Hyatt Regency, which is conveniently located right on the Metro Red Line.

The early registration deadline for I3D itself is January 20th; hotel registration at the conference discount rate of $115 is available until January 19th.

Ke-Sen Huang has added a few paper links since we last mentioned his page, though the majority are still not available from authors’ pages. Somewhat surprising, given that December 14 was the camera-ready deadline, but perhaps some people are still returning to their universities & colleges and haven’t gotten around to putting theirs up. That said, conferences like I3D are only partly about the papers and posters themselves. They also offer a unique and wonderful opportunity to meet and talk with leading and up-and-coming researchers and practitioners. It’s a fantastic feeling to be in an area for a few days where just about everyone there is working on ideas that are of interest to you. Anyone you meet knows something you don’t, and vice versa, and most people talk freely about what works and what doesn’t. Energizing and useful. Plus, they’re just fun people to be around, at least for this nerd.

7 things for January 4th

First day of work, so here are a few from coworkers and others:

  • Naty passed on this blog post about RGBD, a compact way of storing HDR environment map colors.
  • Gamasutra has an excerpt from Game Engine Architecture, a book we’ve mentioned before. Added bonus info on the author, Jason Gregory: he was a lead programmer on Uncharted 2 (which my older son loves, as do many others).
  • Manny Ko mentioned the free program Mendeley, which he swears by for organizing his PDF collection of graphics papers. I’ll look into it once I’ve reloaded everything after my Windows 7 upgrade.
  • Physics in graphics? Here’s one person’s extensive collection of abstracts through 2005.
  • From Nicholas Wilt: it’s interesting to hear how one brokerage firm is now using GPUs to run complex simulations for bond prices. That GPU Gems chapter on options pricing was prescient.
  • Speaking of brokers and lots of GPUs, there’s this article. I’m a little skeptical of a GPU cloud for graphics (vs. running OpenCL), since graphics cards are not quite interchangeable parts at this point. Also, CPUs don’t normally need driver updates; GPUs do. As for OTOY, I’m super-skeptical, I have to admit, though I’d love to see them pull it off. Anyway, it’s fun to think about situations where network bandwidth > graphics compute power and cloud cost < local cost.
  • One more from the demoscene: Farbrausch’s The Cube – interesting effects, with what look like procedural clips and procedural surfaces using interior mapping. At least, that’s my guess. I wish they would spend a little time explaining what they did, though maybe that would ruin the magic.

7 things for December 25

A schedule for Christmas:

7 things for December 24

Here are 7 for the day:

7 things for December 23

Here come seven more, until I run out:

  • The game Saboteur on the PS3 appears to be performing antialiasing by using MLAA. It is great to know that some form of MLAA is both fast enough and of high enough quality to be usable in a commercial product. I found it interesting that it is used only in the PS3 version of the game.
  • The free glslDevil OpenGL shader debugger has recently been updated. Pretty cool (though perhaps not such happy news for content providers): you don’t even need the source code to debug into the shaders.
  • There is a new site dedicated to OpenCL and CUDA programming: gpucomputing.net. It is focused on university research efforts, and has some heavy-hitters in the research community involved. Just begun, not a lot there yet, but you could always subscribe to the blog.
  • Here’s a short little article on texture atlassing. If you want a bit more information, read Ivanov’s article on this topic on Gamasutra, published some years ago. For even more detail, NVIDIA’s white paper is helpful. My point: if you don’t know about texture atlassing, you should. Read one of these three and check it off your list; a minimal sketch of the core idea appears after this list.
  • Getting the X,Y coordinates of a pixel for a post-processing pass is typically done one of two ways: texture coordinates or using VPOS. This short article gives the details.
  • You’ll note the previous two entries came from the new blog/site gamerendering.com. There are plenty of other short articles here, most with code snippets or links to other sites. That said, I’ve asked the admin for a little more attribution of sources, e.g., the figure from our book here. Hopefully fixed by the time you see it…
  • This shader code is truly amazing (from easily my favorite graphics blog).
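
For the texture atlassing item above, here is a minimal sketch of the core operation: remapping a model’s original [0,1] UVs into the sub-rectangle its texture occupies in the atlas. This is my own illustrative code, not taken from any of the linked articles:

    # Minimal texture-atlas sketch (illustrative only): each packed texture
    # occupies a sub-rectangle of the atlas, and the model's UVs are remapped
    # into that sub-rectangle.
    def remap_uv_to_atlas(u, v, tile_x, tile_y, tile_w, tile_h, atlas_w, atlas_h):
        # (tile_x, tile_y) is the tile's corner in pixels, (tile_w, tile_h) its
        # size in pixels, and (atlas_w, atlas_h) the full atlas size in pixels.
        atlas_u = (tile_x + u * tile_w) / atlas_w
        atlas_v = (tile_y + v * tile_h) / atlas_h
        return atlas_u, atlas_v

    # Example: a 256x256 texture placed at pixel (512, 0) of a 1024x1024 atlas.
    print(remap_uv_to_atlas(0.5, 0.5, 512, 0, 256, 256, 1024, 1024))  # (0.625, 0.125)

The usual complications this sketch ignores are adding padding between tiles, so that bilinear filtering and mipmapping don’t bleed neighboring textures together, and the loss of wrap/repeat texture addressing across tile borders.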

7 things for December 22

Some great bits have accumulated. Here they are:

  • I3D 2010 paper titles are up! Most “how would that work?!” type of title: “Stochastic Transparency”.
  • Eurographics 2010 paper titles are up! Most intriguing title: “Printed Patterns for Enhanced Shape Perception of Papercraft Models”.
  • An article in The Economist discusses how consumer technologies are being used by military forces. There are minor examples, like Xbox controllers being used to control robotic reconnaissance vehicles. I was interested to see BAE Systems (a company that isn’t NVIDIA) talk about how GPUs can replace other computing equipment for simulation at 1/100th the price. Of course, Iraq knew this 9 years ago.
  • I wish I had noticed this page a week ago, in time for Xmas (where X equals, nevermind): Christer Ericson’s recommended book page. I know of many of the titles, but hadn’t heard of The New Turing Omnibus before – this sounds like the perfect holiday gift for any budding computer science nerd, and something I think I’d enjoy, too. Aha, hmmm, wait, Amazon has two-day shipping… done!
  • A problem with the z-buffer, when used with a perspective view, is that the z-depths do not linearly correspond to actual world distances along the camera’s view direction. This article and this one (oh, and this is related) give ways to get back to this linear space; a small sketch of the usual conversion appears after this list. Why get the linear view-space depth? Two reasons immediately come to mind: proper computation of atmospheric effects, and edge detection due to z-depth changes for non-photorealistic rendering.
  • Wolfgang Engel (along with comments by others) has a great summary of order-independent transparency algorithms to date. I wonder when the day will come that we can store some number of layers per pixel without any concern about memory costs and access methods. Transparency is what kills algorithms like deferred shading, because not all the layers are available when shading is resolved. Larrabee could have handled that… ah, well, someday.
  • Morgan McGuire has a paper on Ambient Occlusion Volumes (motto: shadow volumes for ambient light). I’ll be interested to see how this compares with Volumetric Obscurance in I3D 2010 (not up yet for download).
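
For the linear-depth item above, here is a minimal sketch of the usual conversion, assuming a standard OpenGL-style perspective projection with the stored depth value in [0,1] (illustrative code, not taken from the linked articles):

    def linearize_depth(z_buf, near, far):
        # Convert a [0,1] depth-buffer value from a standard OpenGL-style
        # perspective projection back to linear view-space distance along
        # the view direction.
        z_ndc = 2.0 * z_buf - 1.0  # back to NDC range [-1, 1]
        return (2.0 * near * far) / (far + near - z_ndc * (far - near))

    # Sanity check: the near and far planes map back to their own distances.
    print(linearize_depth(0.0, 0.1, 100.0))  # 0.1
    print(linearize_depth(1.0, 0.1, 100.0))  # 100.0

Different conventions (e.g., Direct3D’s [0,1] clip range or a reversed z-buffer) change the constants, so the exact formula depends on the projection matrix in use.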

Amazon Stock Market update: one nice thing about having an Amazon Associates account is that prices at various dates are visible. The random walk that is Amazon’s pricing structure becomes apparent for our book: December 1st: $71.20, December 11-14: $75.65, December 18-22: $61.68. Discounted for the holidays? If so, Amazon’s marketing is aiming at a much different family demographic than I’m used to. “Oh, daddy, Principia Mathematica? How did you know? I’ve been wanting it for ever so long!”

HPG and EGSR 2010

Information on the 2010 iterations of the High Performance Graphics conference (HPG) and the Eurographics Symposium on Rendering (EGSR) is now available online.  The two conferences will be co-located in Saarbrucken, Germany in late June.  Fortunately (and unlike HPG’s co-location with SIGGRAPH this year) there is no overlap between the two – EGSR immediately follows HPG.  These are both excellent conferences with strong (albeit in HPG’s case, short) histories of high-quality real-time rendering work. For many of our European readers, the combination of the two conferences should prove irresistible.

Update: the HPG website and CFP are up.