OK, so I like the publisher A.K. Peters, for obvious reasons. They’re also kind/smart enough to send me review copies of upcoming graphics-related books. I’ve received two recently, with one of particular interest:
… and free to veterans and unemployed professionals
Mauricio Vives pointed out that the Autodesk program I mentioned yesterday, where students and educators can get Autodesk products and training for free, also applies to veterans and “displaced professionals.” See this page for the logic. The fine print on the registration page is:
An Autodesk Assistance Program participant is either a veteran or unemployed individual who has (a) previously worked in the architecture, engineering, design or manufacturing industries, has completed the online registration for the Autodesk Assistance Program, and upon request by Autodesk is able to provide proof of eligibility for that program.
This is a nice thing.
All Autodesk software free to students and educators, and betas for everyone
I think I need to pop my head out of my gopher-hole more often and see what my company’s doing. It turns out Autodesk software – including Maya, 3DS Max, Mudbox, AutoCAD, and everything else – is now free to students and educators. Just register and you’re good to go. Wow, this is a big change from the old system, and is definitely great to see.
There are also a number of betas from Autodesk free to anyone: one is 123D, a modeler aimed at helping out the Maker crowd and 3D printing. I’ve installed this but haven’t played with it yet.
Another project is Photofly 2.0, where you upload a number of images and it makes a 3D model from the data (i.e., photogrammetry). This is similar to My3dScanner. I tried these out on a set of photos of a bunch of bananas, some taken with a flash and some without – a hard test case, and I definitely didn’t follow the guidelines. My3dScanner threw up its hands, Photosynth’s point cloud was incomprehensible, and Photofly gave it a sporting chance, getting a cloud and making a mesh – no magic bullet yet, but fun to try. I’m now even tempted to RTFM, as the results were better than I expected.
Photosynth (examine set of photos here):
Photofly’s cubist rendering – it did output an interesting Wavefront OBJ model:
Seven Things for July 26th, 2011
- First, if you’re going to HPG 2011, I’ll save you five minutes of searching for where it is: it’s at the Goldcorp Centre for the Arts, Google map here. Note also that things don’t start until 1:30 on Friday.
- SIGGRAPH parties? I know nothing, except that the official SIGGRAPH reception is 9 to 11 PM Monday at the convention center, and the ACM SIGGRAPH Chapters Party is 8:30 PM to 2 AM on, oh, Monday again. Odd scheduling.
- Timothy Lottes cannot be stopped: FXAA 3.11 is out (with improvements for thin lines), and 3.12 will soon appear. Note that the shader has a signature change, so your calling shader code will have to change, too.
- At the Motorola developer site there’s a quick summary of various image compression types used for mobile phones and PCs.
- Sebastien Hillaire implemented the God Rays effect from GPU Gems 3, showing results and problems. Code and an executable are available for download.
- I’ve been enjoying some worthwhile articles on patents and copyrights lately, both new and old. Worth a mention: Myhrvold madness; a comic (a bit old but useful) on copyright – a good overview; The Public Domain, a free book by a law professor who helped establish Creative Commons; the July 2011 CACM (behind the paywall, though) had a nice article on why the U.S. dropped “opt-in” copyright back in 1989 (blame Europe). Best idea gleaned, from The Public Domain: the length of copyright is meant to motivate people to create works for payment, so a retroactive increase in the length of copyright (e.g., to protect Mickey Mouse) makes no sense – it creates no motivation for works already created.
- Polygon Pictures’ office corridor would be a bad place to be if you worked way too many hours. Otherwise, nice!
“OpenGL Insights” CFP Reminder
The call for participation for the “OpenGL Insights” book ends in a month. If you have a good tutorial or technique about OpenGL that you’d like to publish, please send on a proposal to them for consideration.
FXAA Rules, OK?
So there are those people out there that punch other people’s punchlines. Someone’s three quarters of the way through telling a joke, and a listener says, “oh, right, this one ends ‘to get to the other side'”. You don’t want to be that guy, but that’s a little bit how I feel writing about FXAA, given that there’s a whole course at SIGGRAPH next month about these sorts of antialiasing techniques. I blame Morgan McGuire’s Twitter feed, as he (and 17 others) retweeted Timothy Lottes’ posting that he had released shader code for FXAA. I’d seen FXAA mentioned before; NVIDIA put it in their DirectX 11 SDK, which, frankly, is sadly misleading – the implication is that it works only on GTX 200-level hardware and above, when in fact it works on DirectX 9 shader model 3.0 hardware, GLSL 1.20, Xbox 360, and PlayStation 3, to name a few, and is optimized in various ways for newer GPUs. Anyway, seeing this shader code available, I was interested to try it out. Morgan mentioning that he liked it got me a lot more interested. A few hours later…
So what the heck am I blathering about? To start, there are a number of these ??AA methods that are based on post-processing a color (and sometimes, also normal and depth) buffer. MLAA, morphological antialiasing, was the first used for 3D images, back in 2009. The basic idea is “find edges and smooth them”. The devil’s in the details, which is what the SIGGRAPH course will delve into (and I’ll certainly attend): how wide an area do you search to try to find a straight edge? how do you deal with curves and corners? how do you avoid oversmoothing thin edges, blurring them twice? how does it look frame to frame? and, most important if you want to use it interactively, how do you do this efficiently?
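To make the “find edges and smooth them” idea concrete, here’s a toy Python sketch over a grayscale image. This is purely illustrative and nothing like a shipping MLAA or FXAA implementation – real methods search for edge *spans* and compute coverage-based blend weights, and they run as shaders – but it shows the basic shape of a post-process pass: detect a luma discontinuity, then blend across it.

```python
# Toy sketch of "find edges and smooth them": detect horizontal edges by
# luma discontinuity between vertically adjacent pixels, then blend both
# sides toward the edge. Illustrative only; real MLAA/FXAA do far more.

def smooth_horizontal_edges(img, threshold=0.25):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # copy so reads use original values
    for y in range(h - 1):
        for x in range(w):
            # Edge between row y and y+1 if luma jumps sharply.
            if abs(img[y][x] - img[y + 1][x]) > threshold:
                avg = 0.5 * (img[y][x] + img[y + 1][x])
                # Blend each side halfway toward the edge average.
                out[y][x] = 0.5 * (img[y][x] + avg)
                out[y + 1][x] = 0.5 * (img[y + 1][x] + avg)
    return out

# A hard black/white step edge gets softened:
step = [[0.0, 0.0], [1.0, 1.0]]
print(smooth_horizontal_edges(step))  # prints [[0.25, 0.25], [0.75, 0.75]]
```

The hard questions in the course are exactly what this sketch punts on: how far to search along the edge, and how to weight the blend so you don’t blur everything uniformly.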
I’ve wanted an MLAA-like solution for two years, since before HPG 2009, when I noticed the MLAA paper on Ke-Sen’s pages and talked to Alexander Reshetov about it (who was very helpful and forthcoming). I even got a junior programmer to attempt to implement it in a shader, but the implementation was quite slow (due to a very wide search area) and ultimately flawed, and we didn’t have time to get back to it. Last year at SIGGRAPH there was a talk by a group in France, led by Venceslas Biri and Adrien Herubel, about implementing MLAA on the GPU, and they released source code. I spent a bit of time with their code, but it was developed on Linux and I had some problems getting it to work properly on Windows. My “I’ll just take a few hours and see where I get” time was gone, and still no easy solution. There were some other interesting bits out there, like the article in GPU Pro 2, Practical Morphological Anti-Aliasing, with even a GitHub project, but there were different versions for DX9 and 10 (and not OpenGL), lots of files involved, and I didn’t want to get entangled. Even Humus had a code sample, but I was still a bit shy about committing more time. (Also, his code needs geometric information, and I wanted to antialias NPR edges formed by dilation, i.e., image processing, which have no underlying geometry.)
Then the FXAA shader code was released: well-commented, with clear integration instructions, just needs a color buffer, and all in one shader file. FXAA is not the solution to all of life’s problems (or is it?), but for me, it’s wonderful. It took me all of an hour to fold into our system as a shader (and then another three debugging why the heck it wasn’t registering properly – our shader system turns out to be very particular about path names). The code runs on just about everything and has extensive comments. There are control knobs for the fiddlers out there, but I haven’t messed with these – it looks great out of the box to me.
So, after all that breathless buildup, here’s the punchline:
On the left is your typical jaggy image, on the right is FXAA. Sure, it’s not perfect – nearly-vertical lines can look considerably better with a wider edge search area (as seen in MLAA), dropouts could be picked up by supersampling or MSAA, thin lines can have problems – but this shader gives a huge improvement with no extra samples, and just one pretty-quick pass (plus – full disclosure – a preprocess of computing the luminance/luma (grayscale) and shoving it in the alpha channel). Less than 1 millisecond cost per frame on a GTX 480? Works on sRGB and linear? Code’s in the public domain? Sign me up!
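That luma-in-alpha preprocess is simple to picture: FXAA wants a scalar “luma” per pixel available where it samples color, so a prior pass converts RGB to grayscale and stores it in the alpha channel. A minimal sketch, assuming Rec. 601 luma weights (a common choice; check the shader’s comments for the exact formula your FXAA version expects):

```python
# Sketch of the luma-in-alpha preprocess: compute a grayscale value from
# RGB and pack it into the alpha channel, so the FXAA pass can fetch color
# and luma in one texture read. Rec. 601 weights assumed for illustration.

def pack_luma_in_alpha(rgba):
    """rgba: (r, g, b, a) tuple, components in [0, 1]; alpha is overwritten."""
    r, g, b, _ = rgba
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    return (r, g, b, luma)

print(pack_luma_in_alpha((0.0, 1.0, 0.0, 0.0)))  # pure green -> luma 0.587
```

In a real renderer this is of course a fullscreen shader pass (or folded into the end of the tone-mapping pass), not a per-pixel Python loop.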
See lots more examples on Timothy Lottes’ page. Read his whitepaper for algorithm details and his newer posts for tweaks and improvements. An easy-to-use demo of an earlier version of his shader can be downloaded here – just hit the space bar to toggle FXAA on and off. Enjoy!
… and HPG 2011 papers are up
Last week EGSR 2011 papers started to appear at Ke-Sen’s site. Now the HPG 2011 papers are up. How does he do it? Search, search, search (or people let him know). The EGSR schedule is up here. The HPG Paper Chairs sent on the list of accepted paper titles and authors to Ke-Sen.
How to make money with your GPU
You’ve probably heard about bitcoins by now, the currency of cryptoanarchist libertarian computer geeks or something. It turns out that GPUs are particularly good at mining bitcoins, compared to CPUs: check out this chart – the key factor is Mhash/sec (though Mhash/Joule is also an entertaining concept). The most interesting page (for me) at the site is their explanation of why a GPU is (so much) faster than a CPU for this task. Not a shocker for anyone reading this blog; we all know that GPGPU can rip through certain tasks at amazing speeds. What’s more interesting to me is how and why one IHV’s GPUs are considerably faster than the other’s. I won’t spoil the surprise here, see the page to learn more.
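Why the GPU wins here is easy to see in code: mining is just double SHA-256 over a block header with a varying nonce, run billions of times, with each attempt independent of every other – embarrassingly parallel integer work, which is exactly what thousands of GPU ALUs are built for. A toy CPU sketch of that inner loop (made-up header bytes and a toy difficulty, not the real Bitcoin block format):

```python
# Toy sketch of the bitcoin mining inner loop: hash (header + nonce) with
# double SHA-256 until the result falls below a target. Header bytes and
# difficulty here are made up for illustration; real mining uses the actual
# 80-byte block header and a network-set target.

import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_bits: int = 16, max_nonce: int = 1 << 32):
    target = 1 << (256 - difficulty_bits)  # smaller target = harder
    for nonce in range(max_nonce):
        h = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(h, "big") < target:
            return nonce, h.hex()
    return None

nonce, digest = mine(b"example block header")
print(nonce, digest)
```

Every iteration is the same fixed-function arithmetic on independent data, so Mhash/sec scales almost linearly with ALU count – hence the lopsided GPU-vs-CPU (and IHV-vs-IHV) numbers in that chart.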
EGSR 2011 papers becoming visible
Ke-Sen has begun his magic: he’s started collecting EGSR 2011 papers. I expect to see an HPG 2011 page starting soon, once their final-draft deadline passes in three days.
Loosening of ACM’s copyright policy
We’ve talked about this before: ACM’s copyright policy stated that they, not you, control the copyright of any images you publish in their journals, proceedings, or other publications. For example, if your hometown newspaper wanted to publish a “local boy makes good” story and wished to include samples of your work, it needed to ask the ACM for permission (and pay the ACM $28 per image). Not a huge problem, but it’s a bureaucratic roadblock for a reasonable request. Researchers are usually surprised to hear they have lost this right.
While it was possible to be assertive and push to retain copyright to your images (or even article) and just grant ACM unlimited permission – certainly firms such as Pixar and Disney have done so with their content – the default was to give the ACM this copyright control.
James O’Brien brought it to our attention that this policy has been revised, and I asked Stephen Spencer (SIGGRAPH’s Director of Publications) for details. His explanation follows.
ACM has recently changed its copyright policy to include the option, under certain circumstances, of retaining copyright on embedded content in material published by ACM. Embedded content can now fall into one of three categories: copyright of the content is transferred to ACM as part of the rest of the paper (the default), the content is “third-party” material (not created by the author(s)), or the content is considered an “artistic image.”
The revised copyright form includes this definition of “artistic images”:
“An exception to copyright transfer is allowed for images or figures in your paper which have ‘independent artistic value.’ You or your employer may retain copyright to the artistic images or figures which you created for some purpose other than to illustrate a point in this paper and which you wish to exploit in other contexts.”
The ACM Copyright Policy page also documents this change in policy.
ACM’s electronic copyright system is being updated to implement this change; authors who wish to declare one or more pieces of embedded content in their papers as “artistic images” should contact Stephen Spencer (at <spencer@cs.washington.edu>) to receive a PDF version of the revised copyright form.
The copyright form includes instructions for declaring embedded content as “artistic images,” both in your paper and on the copyright form.
----
Note that this change is “going forward”; if you have already given ACM the copyright, you cannot get it back. Understandable, as otherwise there could be a flood of requests for recategorization.
I’m happy to see this change; it’s a good step in the right direction.