
One week to go… Submit!

The Ray Tracing Gems early proposals deadline is June 21, a week away (the final deadline is October 15th). Submit a one-page proposal by June 21 and there’s an extra incentive offered by NVIDIA: a Titan V graphics card for each of the top five proposals (I finally looked up its price – if you don’t want the card, trade it in for a nice used car). Anyway, the call for proposals for the book is here.

While some of the initial impetus for making such a book is the new DXR/VKRT APIs, we want the book to be broader than just this area, e.g., ray tracing methods using various hardware platforms and software, summaries of the state of the art, best practices, etc. In the spirit of Graphics Gems, GPU Gems, and the Journal of Computer Graphics Techniques, I see our book as a way to inform readers about implementation details and other elements that normally don’t make it into papers. For example, if you have a technique whose write-up was too short, or too technically involved, to publish as a journal article, now is your chance. Mathematics journals publish short results all the time – computer graphics journals, not so much.

I would also like to see summaries of various facets of the field of ray tracing. For example, I think of Larry Gritz’s article “The Importance of Being Linear” from GPU Gems 3 as a great example of this type of article. It is about gamma correction – not a new topic by any stretch – but its wonderful and thoughtful exposition reached many readers and did a great service for our field. I still point it out to this day, especially since it is open access (a goal for Ray Tracing Gems, too).

You can submit more than one proposal – the more the better, and short proposals are fine (encouraged, in fact). That said, no “Efficient Radiosity for Daylight Simulation in Closed Environments” papers, please; that’s been done (if that paper doesn’t ring a bell, you owe it to yourself to read the classic WARNING: Beware of VIDEA! page). In return, we promise fair reviewing and not to roll the die.

Update: a proposal is just a summary of some idea for a paper, one page or less, and can be written in any format you like: Word, PDF, plain text, etc. Proposals are not required, either by June 21 or after. They’re useful to us, though, as a way to see what’s coming, let each prospective contributor know whether a topic is a good fit, and possibly connect like-minded writers together. Also, a proposal that “wins” on June 21 does not mean the paper itself will automatically be accepted – each article submitted will be judged on its merits. The main thing is the paper itself, due October 15th. Send proposals to raytracinggems@nvidia.com – we look forward to what you all contribute!

Monument to the Anonymous Peer Reviewer

“Real-Time Rendering, 4th Edition” available in August 2018

As announced today at the Game Developers Conference by CRC Press / Taylor & Francis Group (booth 2104, South Hall – I’m told there’s a discount code to be had), we’re indeed finally putting out a new edition of Real-Time Rendering. It should be out by SIGGRAPH if all goes well. Tomas, Naty, and I have been working on this edition since August 2016. We realized that, given how much has changed in area lighting, global illumination, and volume rendering, we could use help, so we asked Angelo Pesce, Michał Iwanicki, and Sébastien Hillaire to join us, which they all kindly and eagerly did. Their contributions both considerably improved the book and got it done.

If you want me to just shut up and tell you where to pre-order, go here. You’ll note the lack of a cover, and the lack of the three new authors. Those’ll get fixed once there’s a more official launch, and official pricing. I suspect the price won’t go down (which is a hint, and you can cancel later if I’m wrong; which reminds me, you should also book a room now for SIGGRAPH if you have the slightest chance of going, since you can also cancel up until July 22 without penalty).

One reason for no cover is that we’re still evaluating them. At the GDC booth you’ll see this artwork used:

fish cover candidate

This is a lovely, colorful model by Elinor Quittner. You can see the interactive model here; definitely check out the Model Inspector feature on that page by pressing the “I” key (or clicking the “layers”-looking icon in the lower right) once the model’s loaded. I love this Sketchfab feature, which lets you examine the various elements. All that said, we’re still examining a number of other cover possibilities. Me, I’m happy we get to show off this potential design here now.

Back to the book itself. Let’s look at page count:

  • First edition, published 1999, 482 pages
  • Second edition, published 2002, 864 pages
  • Third edition, published 2008, 1045 pages
  • Fourth edition, to be published 2018, 1269? pages (1356?, including online)

This new edition is probably the worst-kept secret, in that anyone searching “Real-Time Rendering, 4th edition” on Amazon would have found the entry months ago, and CRC put it on their site some time before March 11. Also, doing a quick count just now, not including the editorial staff, 178 people helped us out in some way: reviewing sections or chapters, providing images, or clarifying concepts. The kind and generous support we’ve received is one of the reasons I love this field. There’s competition between companies, between research teams, and all the rest; it’s part of the landscape. But, underlying this “red in tooth and claw” veneer of competition, most everyone we asked genuinely wanted to share their knowledge and labor to help others understand how things work. I hope it’s the same in other fields, but I know it’s true for this one.

The progression of 3 years between 1st and 2nd, 6 between 2nd and 3rd, and 10 between 3rd and 4th is a reflection not so much of the length of time it takes to make each new edition (which has indeed steadily increased), but rather of how long it takes us to forget all the stress and pain involved in making one. As a data point, our Google Doc of new references since the last edition is around 170 pages long, and does not include references we could easily dismiss, nor those we ran into later during closer reading and writing. Each page has about 20 references on it (some duplicated among chapters), about 3200 in all. In the fourth edition we added “only” 1151 new references, and deleted 508 older ones, for a final total of 2059 references (this does not include references on collision detection – more on that in a minute).

We could have added all 3200 and more, but instead focused on work that sees use in applications, or that is newest and presents a good overview of the state of the art in its area. The field has simply become far too large for us to cover every piece of research, and doing so would have been a disservice to most readers. On the other end of the spectrum, we have continued to avoid API-specific information and code, as there are plenty of books, repositories, and articles describing these – this website points to many of them (and will be updated in the coming months). We aim to be a guide to algorithms for practitioners.

To conclude, here’s the list of chapters:

1 Introduction
2 The Graphics Rendering Pipeline
3 The Graphics Processing Unit
4 Transforms
5 Shading Basics
6 Texturing
7 Shadows
8 Light and Color
9 Physically Based Shading
10 Local Illumination
11 Global Illumination
12 Image-Space Effects
13 Beyond Polygons
14 Volumetric and Translucency Rendering
15 Non-Photorealistic Rendering
16 Polygonal Techniques
17 Curves and Curved Surfaces
18 Pipeline Optimization
19 Acceleration Algorithms
20 Efficient Shading
21 Virtual and Augmented Reality
22 Intersection Test Methods
23 Graphics Hardware
24 The Future

If you have a great memory, you’ll notice that the “Collision Detection” chapter from the 3rd edition is missing. We have a fully updated chapter on this subject for the 4th edition. However, the page count was such that we decided to distribute it, along with the two math-related appendices in the 3rd edition, as online chapters free to download. (Collision detection is not strictly a part of real-time rendering, but it is an area we find fascinating and where a fair bit of change has occurred – about 40% of the chapter is new material.) We’ll be formatting all of these resources into PDF files nearer to release.

Because I have an addiction to text manipulation and analysis programs (more on that in a future blog post), I did some measurements of how much the fourth edition differs from the third. The highly precise but who-knows-how-accurate number I computed was 59.81% new material by lines changed. By further weighting by character count, I get a value of 68.99% new. These are probably high – if you change a word in a sentence, or even just join two lines into one, the whole line is considered new – but the takeaway is that a lot has changed in the past decade. We’ve learned a huge amount from writing the book, and by SIGGRAPH we look forward to sharing it with you all.
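For the curious, here’s a minimal sketch of this kind of measure (not my actual script, and the file names are hypothetical): a line of the new edition counts as “new” if it never appears verbatim in the old edition, which is exactly why a one-word edit inflates the numbers.

```javascript
// Sketch only: count a line of the new edition as "new" if it never
// appears verbatim in the old edition. A one-word edit marks the whole
// line as changed, which biases both measures high.
const fs = require('fs');

function newFraction(oldPath, newPath) {
  const oldLines = new Set(fs.readFileSync(oldPath, 'utf8').split('\n'));
  const newLines = fs.readFileSync(newPath, 'utf8').split('\n');
  let changedLines = 0, changedChars = 0, totalChars = 0;
  for (const line of newLines) {
    totalChars += line.length;
    if (!oldLines.has(line)) {
      changedLines++;
      changedChars += line.length; // character-count weighting
    }
  }
  return {
    byLines: changedLines / newLines.length, // cf. the 59.81% figure
    byChars: changedChars / totalChars       // cf. the 68.99% figure
  };
}

// Hypothetical plain-text dumps of the two editions:
console.log(newFraction('rtr3.txt', 'rtr4.txt'));
```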

Book “WebGL Insights” now free

The book WebGL Insights is now free to download as a PDF. Go get it.

Many of the articles are, of course, WebGL-centric, but some articles in the Rendering Section have general interest, especially for mobile developers. WebGL is “trailing edge,” in that it’s tied to OpenGL ES 2.0, which is what most mobile devices run. So techniques in that section will run in mobile apps in general. WebGL 2 (not covered in this book) is ES 3.0, basically, and currently has 22% phone support and 8% tablet support – tablets don’t get refreshed as rapidly as phones.
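Given those numbers, if you want WebGL 2 where it exists you have to detect it and fall back; here’s a minimal sketch of the usual check:

```javascript
// Try the ES 3.0-level context first, then fall back to the ES 2.0-level
// context that nearly all devices support.
const canvas = document.createElement('canvas');
let gl = canvas.getContext('webgl2');
let version = 2;
if (!gl) {
  gl = canvas.getContext('webgl');
  version = 1;
}
if (gl) {
  console.log('Using WebGL ' + version);
} else {
  console.log('WebGL is not supported in this browser');
}
```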


GPU Pro 5 is out

Really, the title says it all: the book GPU Pro 5 is shipping. Sadly, there’s no “Look Inside” for the book on Amazon; I hope they at least put the Table of Contents there. You can find a rough Table of Contents on the CRC site; rough in that you can’t see the number of pages for each article. A few articles are quite lengthy: Physically Based Area Lights is 34 pages long, Hi-Z Screen-Space Cone-Traced Reflections an incredible 44. The rest are in the 10-20 page range.

You can get a taste of the book at the GPU Pro blog, which has previews of a large number of the articles. At $70 this is not a casual purchase, but if you’re a practitioner and just one article saves you 2 hours, the book’s more than paid for itself.

Me, I was amused to see the following, a model from Morgan McGuire’s high-quality model repository – hey, that’s from our world! (And you thought I was done with Minecraft references here.)

VoxeliaMC

Good points, some bad points

The recently and sadly departed Game Developer magazine had a great post-mortem article format: “5 things that went right/went wrong” with some videogame, written by its creators. I thought I’d try one myself for the MOOC “Interactive 3D Graphics” that I helped develop. I promise my next posts will not be about MOOCs, really. The payoff, not to be missed, is the demo at the end – click the picture below if you want to skip the words part and get dessert now.

Good Points

Three.js: This layer on top of WebGL meant I could initially hide details critical to WebGL but overwhelming for beginners, such as shader programming. The massive number of additional resources and libraries available was a huge help: there’s a keyframing library, a collision detection library, a post-processing library, on and on. Documentation: often lacking; stability: sketchy – interfaces change from release to release; usefulness: incredible – it saved me tons of time, and the course wouldn’t have gone a third as far as it did if I had used just vanilla WebGL.
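To give a sense of what gets hidden, here’s roughly the smallest three.js scene of the sort the course opens with – a lit cube, no GLSL in sight. (A sketch only; as noted above, exact constructor names have shifted between three.js releases.)

```javascript
// A lit cube in three.js: the light and material compile to shaders
// under the hood, but the student never writes a line of GLSL.
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
    45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 2, 5);
camera.lookAt(scene.position);

const cube = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshLambertMaterial({ color: 0x22aa66 }));
scene.add(cube);

const light = new THREE.PointLight(0xffffff, 1);
light.position.set(3, 4, 5);
scene.add(light);

renderer.render(scene, camera);
```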

Web Stuff: I didn’t have to handle any of the web programming, and I’m still astounded at how much was possible, thanks to Gundega Dekena (the assistant instructor) and the rest of the Udacity web programmers. Being able to show a video, then let a student try out a demo, then ask him or her a question, then provide a programming exercise, all in a near-seamless flow, is stunning to me. Going into this course we didn’t know this system was going to work at all; a year later WebGL is now more stable and accepted, e.g., Internet Explorer is now finally going to support it. The bits that seem peripheral to the course matter a lot: Udacity’s forum is nicely integrated, with students’ postings about particular lessons directly linked from those pages. It’s lovely having a website that lets students download all videos (YouTube is slow or banned in various places), scripts, and code used in the course.

Course Format: Video has some advantages over text. The simple ability to point at things in a figure while talking through them is a huge benefit. Letting the student try out some graphics algorithm and get a sense of what it does is fantastic. Once he or she has some intuition as to what’s going on, we can then dig into details. I wanted to get stuff students could sensibly control (triangles, materials) on the screen early on. Most graphics books and courses instead open with dreary transforms and matrices. I was able to put off these “eat your green beans” lessons until nearly halfway through the course, as three.js gave enough support that the small bits of code relating to lights and cameras could be ignored for a time. Before transforms, students learned a bit about materials, a topic I think is more immediately engaging.

Reviewers and Contributors: I had lots of help from Autodesk co-workers, of course. Outside of that, every person I asked “can I show your cool demo in a lesson?” said yes – I love the graphics community. Most critical of all, I had great reviewers who caught a bunch of problems and contributed some excellent ideas and revisions. Particular kudos to Gundega Dekena, Mauricio Vives, Patrick Cozzi, and at the end, Branislav Ulicny (AlteredQualia). I owe them each like a house or something.

Creative Control: I’m happy with how most of the lessons came out. I overreached with a few lessons (“Frames” comes to mind), and a few lines I delivered in some videos make me groan when I hear them. However, the content of many of the recordings is the best I’ve ever explained some of these topics, a definite improvement on Real-Time Rendering. That book is good, but is not meant as an introductory text. I think of this course as the prequel to that volume, sort of like the Star Wars prequels, only good. The scripts for all the lessons add up to about 850 full-sized sheets of paper, about 145,000 words. It’s a book, and I’m happy with it overall.

Some Bad Points

Automatic Grading: A huge boon on one level, since grading individual projects would have been a never-ending treadmill for us humans. Quick stats: the course has well over 30,000 enrollments, with about 1500 people active in any given week, 71% outside the U.S. But, it meant that some of the fun of computer graphics – making cool projects such as Rube Goldberg devices or little games or you name it – couldn’t really be part of the core course. We made up for this to some extent by creating contests for students. Some entries from the first contest are quite nice. Some from the second are just plain cool. But, the contests are over now, with no new ones on the horizon. My consolation is that anyone who is self-motivated enough to work their way through this course is probably going to go off and do interesting things anyway, not just say, “Computer graphics, check, now I know that – on to basket weaving” (though I guess that’s fine, too).

Difficulty in Debugging: The cool thing about JavaScript is that you can debug simple programs in the browser, e.g., in Chrome just hit F12. The bad news is that this debugger doesn’t work well with the in-browser code development system Udacity made. The workarounds are to run JSHint on any code in the browser, which catches simple typos, and to provide the course code on GitHub; developing the code locally on your machine means you can use the debugger. Still, a fully in-browser solution with debugging available would have been better.
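As a sketch of that first workaround (assuming jshint.js is loaded on the page, which exposes a global JSHINT function; editorSource is a stand-in for however the page pulls text from its editor):

```javascript
// Lint student code before evaluating it, so simple typos surface as
// readable messages instead of silent failures.
function lintBeforeRun(editorSource) {
  const clean = JSHINT(editorSource);
  if (!clean) {
    for (const err of JSHINT.errors) {
      if (err) { // JSHint pads this array with a null on hard failures
        console.warn('line ' + err.line + ': ' + err.reason);
      }
    }
  }
  return clean;
}
```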

Videos: Some people, like Salman Khan, can give a lecture and draw at the same time, in a single take. That’s not my skill set, and thankfully the video editors did a lot to clean up my recordings and fix mistakes as they were found. However, a few bugs still slipped through or were difficult to correct without me re-recording the lesson. We point these out in the Instructor Notes, but re-recording is a lot of time and effort on all our parts, and involves cross-country travel for me. Text or code is easy to fix and rearrange; videos are not. I expect this limitation is something our kids will someday laugh or scratch their heads about. As far as the format itself goes, it seems like a pain to me to watch a video and later scrub through it to find some code bit needed in an upcoming exercise. I think it’s important to have the PDF scripts of the videos available to students, though I suspect most students don’t use them or even know about them. I believe students cope by having two browser windows open side by side, one with the paused video, one with the exercise they’re working on.

Out of Time: Towards the end of the course some of the lessons become (relatively) long lectures and are less interactive; I’m looking at you, Unit 8. This happened mostly because I was running out of time – it was quicker for me to just talk than to think up interesting questions or program up worthwhile exercises. Also, the nature of the material was more general, less feature-oriented, which made for more traditional lectures that were tougher to simply quiz about. Still, having a deadline focused my efforts (even if I did miss it by a month or so), and it’s good there was one; otherwise I’d endlessly fiddle with improving bits of the course. I think my presentation style improved overall as the lessons went on; the flip side is that the earlier lessons are rougher in some ways, which may have put students off. Looking back on the first unit, I see a bunch of things I’d love to redo. I’d make more in-browser demos, for starters – at the beginning I didn’t realize that was even possible.

Hollow Halls: MOOCs can be divided into two types by how they’re offered. One approach is self-paced, such as this MOOC. The other has a limited duration, often mirroring a real-world class’s progression. The self-paced approach has a bunch of obvious advantages for students: no waiting to start, take it at your own speed, skip over lessons you don’t care about, etc. The advantages of a launched course are community and a deadline. On the forum you’re all at the same lesson, so study groups form and discussions take place. Community and a fixed pace can help motivate students to stick it through until the end (though they can also lose other students entirely, who then can never finish). The other downside of self-pacing is that, for the instructor(s), the course is always on – there’s no break! I’m pretty responsible and like answering forum posts, but it’s about a half hour out of my day, every day, and the time piles up if I’m on vacation for a week. Looking this morning, there are nine forum posts to check out… gotta go!

But it all works out, I’m a little freaked out. For some reason that song went through my head a lot while recording, and gave a title to this post.

Below is one of the contest entries for the course. Click on the image to run the demo; more about the project on the Udacity forums. You may need to refresh to get things in sync. A more reliable solution is to pick another song, which almost always causes syncing to occur. See other winners here, and the chess game is also one I enjoyed.

Musical Turk


“Interactive 3D Graphics” is finally complete!

Short version: the Interactive 3D Graphics course is now entirely out; the last five units have been added: Lights, Cameras, Texturing, Shader Programming, Animation. Massive (22K people registered so far), worldwide (around 128 countries, > 70% of students from outside the U.S.). Uses three.js atop WebGL. Start at any time, work at your own pace, only basic programming skills needed. Free.

That’s the elevator talk, Twitterized (well, maybe 3 tweets’ worth). I won’t blab on and on about it, just a few things.

First, it’s so cool to be able to show a student a video, then give a quiz, then let them interact with a demo, then have them write some code for an exercise, all in the browser. Udacity rocketh, both the web programmers and video editors.

Second, I’m very happy about how a whole bunch of lessons turned out. The tough part in all this is trying to not lose your audience. I think I push a bit hard at times, but some of my explanations I like a lot. Mipmapping, antialiasing, gamma correction – a number of the later lectures in particular felt quite good to me, and I thought things hung together well. Shhh, don’t tell me otherwise. Really, it’s not pride so much; I’m just happy to have figured out good ways to explain some things simply.

Third, I wrote a book, basically: it’s about 850 full-sized pages and about 145,000 words. It’s free to download, along with the videos and code. I think of this course as the precursor to Real-Time Rendering, sort of like “Star Wars: Episode 1”, except it’s good. I should really say “we wrote a book”: Gundega Dekena, Patrick Cozzi, Mauricio Vives, and near the end Branislav Ulicny (AlteredQualia) offered a huge amount of help in reviewing, catching various mistakes and suggesting numerous improvements. Many others kindly helped with video clips, interviews, permission to show demos, on and on it goes. Thanks all of you!

Fourth, I love that the demos from the course are online for anyone to point at and click on. Some of these demos are not absolutely fascinating, but each (once you know what you’re looking at) is handy in its own way for explaining some graphics phenomenon. The code’s all downloadable, so others can use them as a basis to make better ones. I’ve wanted this sort of thing for 16 years – it took a while to arrive, but now it’s finally here.

Fifth, working with students from around the world is wonderful! I love helping people on the forums with just a bit of effort on my end. Also, I just noticed a study group starting up. I’ve also enjoyed seeing contest entries, e.g., here are the drinking bird entries – click a pic to see it in WebGL:


What’s making a MOOC yourself like? See John Owens’ excellent article – my experience is pretty much the same.

A close-up in the recording studio, my little world for a few weeks:

Just Cause 2 makes 3

A demo of the game Just Cause 2 is available on Steam today. What’s interesting is that this is the third DirectX 10-only game to be released. There have been any number of DirectX 10-enhanced games, but until a few months ago there was just one DirectX 10-only game release, Stormrise, a mediocre game released in March 2009. Shattered Horizon then came out in November from Futuremark, who are known more for their graphics benchmarks. Just Cause 2 is a sequel, and is distributed by a well-known publisher. Humus describes the logic of going DirectX 10-only.

I’m looking forward to seeing how DirectX 11’s DirectCompute gets used in commercial applications. Perhaps the day there’s a DirectX 11-only game of any significance is the day we need to start writing a fourth edition. Let’s see: DirectX 10 was released November 2006 with Vista, so it took about three and a quarter years for an anticipated game to be released that was DirectX 10-only (and even now it’s considered dangerous by many to do so). DirectX 11 was released in October 2009, so if the same rule holds, we’ll need to start writing in February 2013. Pre-order today!

Even now, 13% of Steam gamers have only SM 2.0. Games like World of Warcraft and Left 4 Dead 2 don’t require more, for example. So what’s the magic percentage where the AAA games decide to set the minimum level to the next shader model? I don’t recall it being much of a deal between shader model 2.0 and 3.0 games; there was a little hype, but I think this was because going from SM 2.0 to 3.0 involved just a card upgrade, vs. the OS upgrade DirectX 10 required. Which is funny, in that an OS upgrade is usually cheaper than a new GPU, but I think it’s also because the OS is more critical, like a heart transplant vs. a cornea transplant.

Poking around, I found the interesting graphs below. I’m sure games have been left off, and some are miscategorized, e.g., Cryostasis is the only one under SM 4.0, and it doesn’t require DirectX 10. But let’s assume this data is semi-reasonable; I’m guessing the games are categorized more by a “recommended configuration” than a minimum. So Shader Model 1.x game releases (and remember, 1.x was pretty darn limited) peaked in 2006, and 2.0 peaked in 2007 but outnumbered 3.0 until 2009. SM 3.0 hasn’t peaked yet, I’d say (ignore the 2010 and 2011 graph values at this point, of course). Remember that SM 2.0 hardware came out around 2002, so it peaked 5-6 years later and was still strong 7 years later (and perhaps longer, we’ll see). SM 3.0 came out in 2004, and seems likely to continue to be strong through 2010 and into 2011. 4.0 came out in 2006, so I’d go with it peaking in 2011-2012, from just staring at these charts. This entirely ignores the swirl of other data – Vista and Windows 7, Xbox trends, GPU trends, blah-di-blah – but it’ll be interesting to see if this prediction is about right. (Click on a graph for the list of games for that shader model.)

Shader Model 1.x

Shader Model 2.0

Shader Model 3.0

7 things for December 22

Some great bits have accumulated. Here they are:

  • I3D 2010 paper titles are up! Most “how would that work?!” type of title: “Stochastic Transparency”.
  • Eurographics 2010 paper titles are up! Most intriguing title: “Printed Patterns for Enhanced Shape Perception of Papercraft Models”.
  • An article in The Economist discusses how consumer technologies are being used by military forces. There are minor examples, like Xbox controllers being used to control robotic reconnaissance vehicles. I was interested to see BAE Systems (a company that isn’t NVIDIA) talk about how using GPUs can replace other computing equipment for simulation at 1/100th the price. Of course, Iraq knew this 9 years ago.
  • I wish I had noticed this page a week ago, in time for Xmas (where X equals, nevermind): Christer Ericson’s recommended book page. I know of many of the titles, but hadn’t heard of The New Turing Omnibus before – this sounds like the perfect holiday gift for any budding computer science nerd, and something I think I’d enjoy, too. Aha, hmmm, wait, Amazon has two-day shipping… done!
  • A problem with the z-buffer, when used with a perspective view, is that the z-depths do not linearly correspond to actual world distances along the camera’s view direction. This article and this one (oh, and this is related) give ways to get back to this linear space; see the sketch after this list. Why get the linear view-space depth? Two reasons immediately come to mind: proper computation of atmospheric effects, and edge detection due to z-depth changes for non-photorealistic rendering.
  • Wolfgang Engel (along with comments by others) has a great summary of order-independent transparency algorithms to date. I wonder when the day will come that we can store some number of layers per pixel without any concern about memory costs and access methods. Transparency is what kills algorithms like deferred shading, because not all the layers are present when shading is resolved. Larrabee could have handled that… ah, well, someday.
  • Morgan McGuire has a paper on Ambient Occlusion Volumes (motto: shadow volumes for ambient light). I’ll be interested to see how this compares with Volumetric Obscurance in I3D 2010 (not up yet for download).
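On that z-buffer item: for a standard OpenGL-style perspective projection, recovering linear view-space depth from a [0,1] depth-buffer value takes just a few lines. A sketch of the usual formula those articles derive (near and far are the camera’s clip distances):

```javascript
// Recover linear view-space depth from a [0,1] depth-buffer value d,
// assuming a standard OpenGL-style perspective projection.
function linearViewDepth(d, near, far) {
  const zNdc = 2.0 * d - 1.0; // [0,1] window depth -> [-1,1] NDC
  return (2.0 * near * far) / (far + near - zNdc * (far - near));
}

// With near=0.1 and far=100, a depth-buffer value of 0.5 is nowhere
// near the middle of the scene; it is only ~0.2 units from the camera:
console.log(linearViewDepth(0.5, 0.1, 100.0)); // ~0.1998
```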

Amazon Stock Market update: one nice thing about having an Amazon Associates account is that prices at various dates are visible. The random walk that is Amazon’s pricing structure becomes apparent for our book: December 1st: $71.20, December 11-14: $75.65, December 18-22: $61.68. Discounted for the holidays? If so, Amazon’s marketing is aiming at a much different family demographic than I’m used to. “Oh, daddy, Principia Mathematica? How did you know? I’ve been wanting it for ever so long!”

Amazon Needs Programmers, We Suspect

… at least judging from an email Phil Dutre received and passed on. Key excerpt follows:

Dear Amazon.com Customer,

As someone who has purchased or rated Real-Time Rendering by Tomas Moller, you might like to know that Online Interviews in Real Time will be released on December 1, 2009.  You can pre-order yours by following the link below.

With a title-finding algorithm of this quality, Amazon appears to be in need of more CS majors.

Don’t fret, by the way, I’ll be back to pointing out resources come the holidays; things are just a bit busy right now. In the meantime, you can contemplate Morgan McGuire’s gallery of real photos that appear to have rendering artifacts or look like computer graphics. It’s small right now – send him contributions!