Author Archives: Eric

“Ray Tracing Gems” Book Call for Participation

Given the recent DXR announcements, Tomas Akenine-Möller and I are co-editing a book called Ray Tracing Gems, to come out at GDC 2019. See the Call for Participation, which pretty much says it all. The book is in the spirit of the Graphics Gems series and journals such as JCGT. Articles certainly do not have to be about DXR itself, as the focus is on techniques that can be applied to interactive ray tracing. The key date is October 15th, 2018, when submissions are due.

To self-criticize a tiny bit, the first sentence of the CFP:

Real-time ray tracing – the holy grail of graphics, considered unattainable for decades – is now possible for video games.

would probably be more factual as “Real-time ray tracing for video games – … – is now possible.” But the book is not meant to be focused on just video game techniques (though video games are certainly likely to be the major user). I can see ray tracing becoming a standard part of all sorts of graphics programs, e.g., much faster previewing for Blender, Maya, and the rest.

As far as “considered unattainable for decades” goes, interactive ray tracing was attained long ago, just not for (non-trivial) video games or other interactive applications. My first encounter with an interactive ray tracer was AT&T’s Pixel Machine back in 1987. I had put out the Standard Procedural Databases on Usenet the week before SIGGRAPH, and was amazed to see that they had grabbed them and were rendering some in just a few seconds. But the real excitement was a little postage-stamp-sized rendering (well, maybe six stamps), where you could interactively use a mouse to control a shiny sphere’s position atop a Mandrill plane texture.

The demoscene has had real-time ray tracers since 1995, including my favorite, a 252-byte program (well, 256, but the last four bytes are a signature, “BA2E”) from 2001 called Tube by 3SC/Baze. Enemy Territory: Quake Wars was rendered using ray tracing on a 20-machine system by Daniel Pohl at Intel a decade ago. OptiX for NVIDIA GPUs has been around a long time. Shadertoy programs usually perform ray marching. Imagination Technologies developed ray tracing support for mobile some years back. There are tons more examples, but this time it feels different – DXR looks to be here to stay, with lots of momentum.

Ray tracing is, in my opinion, more easily adopted by computer-aided design and modeling programs, as users are willing to put up with slower frame rates and are able to wait a few seconds every now and then for a better result. Systems such as KeyShot have for some years used only ray tracing, performing progressive rendering to update the screen on mouse up. Modelers such as Fusion 360 allow easy switching to progressive ray tracing locally, or, for finished results, can render at higher speeds on the cloud. I think DXR will turn these few seconds into a handful of milliseconds, and near-interactive into real-time.
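
To make the progressive rendering idea concrete, here is a minimal Python sketch of the usual accumulation scheme (my own toy illustration, not KeyShot’s or Fusion 360’s actual code): keep a running average of one-sample frames, restart it whenever the camera moves, and the image refines for as long as the mouse stays up.

```python
import numpy as np

# Toy illustration of "progressive rendering on mouse up": keep a running
# average of 1-sample frames, and throw it away whenever the camera moves.
# Every name here is a placeholder for this sketch, not any product's API.
rng = np.random.default_rng(0)
W = H = 64
true_image = np.add.outer(np.arange(H), np.arange(W)) / (W + H)  # stand-in scene

def render_one_sample():
    # Pretend 1-sample-per-pixel ray traced frame: ground truth plus noise.
    return true_image + rng.normal(0.0, 0.3, size=(H, W))

accum = np.zeros((H, W))
n_frames = 0

def camera_moved():
    # User drags the mouse: restart the accumulation.
    global accum, n_frames
    accum = np.zeros((H, W))
    n_frames = 0

def idle_frame():
    # Mouse is up: refine the average and return the image to display.
    global accum, n_frames
    accum = accum + render_one_sample()
    n_frames += 1
    return accum / n_frames

camera_moved()
for _ in range(256):            # a couple of seconds of idling
    shown = idle_frame()
print("error std dev after 256 frames:", np.std(shown - true_image))  # about 0.3/sqrt(256)
```

The reason DXR matters to this loop is simply that each of those idle frames gets its batch of rays back much sooner.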

In a sense, this history misses the point: for interactive rendering we use whatever gives us the best quality in an allotted amount of time. We usually don’t, and probably shouldn’t, trace rays everywhere, just for the purity of it. Rasterization works rapidly because of coherence exploited by the GPU. Ray tracing via DXR is a new piece of functionality, one that looks general enough and with support enough that it has the potential to improve quality, simplify engine design, and reduce the time spent by artists in creating and revising content (often the largest expense in a video game).

Long and short, DXR is the start of an exciting new chapter in interactive rendering, and we look forward to your submissions!

Ray Tracing at GDC (and beyond)

One reason I love interactive graphics is that every now and then something happens in the field – programmable shaders, powerful mobile devices, DX12/Vulkan/Metal, VR, AR, and now this – that changes what’s possible and how we think about interactive rendering. New algorithms arise to exploit new and different functionality. It’s a fun world!

Microsoft added ray tracing support to its DirectX API. And this time it’s not an April Fool’s Day spoof, like the one a decade ago. Called DirectX Raytracing, DXR for short, it adds the ability to cast rays as shader invocations. There are already a bunch of articles and blog posts.

Here are the resources I’ve noticed so far (updated as I see new ones – let me know):

It will be interesting to see if there’s any spike of interest for ray tracing on Google’s analytics. While I doubt having DXR functionality will change everything – it still has to be performant compared to other specialized techniques – it’s great seeing another tool in the toolbox, especially one so general. Even if no ray tracing is done in an interactive renderer that is in development, it will now be much easier to get a ground-truth image for comparison when testing other techniques, since shader evaluations and all the rest now fit within a ray tracing framework. Ray and path tracing, done long enough (or smart enough), give the correct answer, unlike screen-based techniques.
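
As a sketch of that ground-truth workflow in Python (both “renderers” below are made-up stand-ins; only the comparison machinery is the point): converge an unbiased reference by brute force, then score a fast technique against it.

```python
import numpy as np

# Sketch of the ground-truth workflow: converge an unbiased reference by brute
# force, then grade a fast technique against it. Both "renderers" below are
# made-up stand-ins; only the comparison machinery is the point.
rng = np.random.default_rng(1)
true_image = np.linspace(0.0, 1.0, 256).reshape(16, 16)     # pretend scene radiance

def path_trace(spp):
    # Unbiased but noisy: the mean is right, noise shrinks as 1/sqrt(spp).
    noise = rng.normal(0.0, 0.5, size=(spp,) + true_image.shape)
    return true_image + noise.mean(axis=0)

def fast_approximation():
    # Biased but cheap: imagine a screen-space trick that slightly darkens everything.
    return np.clip(true_image - 0.02, 0.0, 1.0)

reference = path_trace(spp=10_000)                           # "done long enough"
candidate = fast_approximation()
rmse = np.sqrt(np.mean((candidate - reference) ** 2))
print(f"RMSE of fast technique vs. path traced reference: {rmse:.4f}")
```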

Doing these fast enough is the challenge, and denoisers and other filtering techniques (just as done today with rasterized-buffer-based algorithms) will see a lot of use in the coming months and years. I’m going to go out on a limb here, but I’m guessing GPUs will also get faster. Now if we can just get people to stop upping the resolution of screens and stop adding more content to scenes, it’ll all work out.
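
And here is the crudest possible stand-in for a denoiser, just to show why filtering recovers so much image from so few rays. Real game denoisers are edge-aware, temporally accumulated, or neural; none of that is attempted in this toy Python sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# The crudest possible stand-in for a denoiser: blur a noisy low-sample-count
# estimate. Real game denoisers are edge-aware, temporally accumulated, or
# neural; none of that is attempted here.
rng = np.random.default_rng(2)
clean = np.add.outer(np.linspace(0, 1, 128), np.linspace(0, 1, 128)) / 2
noisy = clean + rng.normal(0.0, 0.25, clean.shape)           # pretend 1 spp render

denoised = gaussian_filter(noisy, sigma=2.0)

for name, img in [("noisy", noisy), ("denoised", denoised)]:
    print(name, "RMSE vs. clean:", np.sqrt(np.mean((img - clean) ** 2)))
```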

Even within the Remedy talk, we see ray tracing blending with other techniques more appropriate for diffuse global illumination effects. Ambient occlusion is of course a hack, but a lovely one, and ray tracing can stand in for screen-space methods and so avoid some of their artifacts. I think getting away from screen-space techniques is potentially a big win, as game artists and engineers won’t have to hack models or lighting to work around major artifacts seen in some situations, thus saving time and money.
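
Since ambient occlusion keeps coming up, here is a hedged Python sketch of the ray traced flavor of it at a single shading point: fire random rays over the hemisphere around the normal and return the fraction that escape. The one-plane “scene” and every name in it are my own toy setup, not any engine’s AO, and the usual cosine weighting is skipped for brevity.

```python
import numpy as np

# Toy ray traced ambient occlusion at one shading point: cast random rays over
# the hemisphere around the normal and return the fraction that escape. The
# one-plane "scene" and every name here are a made-up illustration, not any
# engine's AO, and the usual cosine weighting is skipped for brevity.
rng = np.random.default_rng(3)

def sample_hemisphere(normal):
    # Uniform direction on the sphere, flipped into the normal's hemisphere.
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)
    return d if np.dot(d, normal) > 0.0 else -d

def occluded(origin, direction, ceiling_y=1.0, max_dist=5.0):
    # Ray versus a horizontal ceiling plane at y = ceiling_y, within max_dist.
    if direction[1] <= 1e-6:
        return False
    t = (ceiling_y - origin[1]) / direction[1]
    return 0.0 < t < max_dist

def ambient_occlusion(point, normal, n_rays=256):
    hits = sum(occluded(point, sample_hemisphere(normal)) for _ in range(n_rays))
    return 1.0 - hits / n_rays            # 1 = fully open, 0 = fully occluded

p = np.array([0.0, 0.0, 0.0])             # a point on the floor
n = np.array([0.0, 1.0, 0.0])             # facing up toward the ceiling
print("AO under a ceiling:", ambient_occlusion(p, n))  # about 0.2: only grazing rays escape
```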

I’m also interested to see if this functionality gets used in other applications, as there are plenty of areas – all sorts of audio design applications, various other types of engineering analyses – that could benefit from faster turnaround on computations.

Enjoy exploring! I look forward to what we all find.

Some of the eye-candy videos:

Deep Learning: From Basics to Practice

Andrew Glassner wrote another book, Deep Learning: From Basics to Practice. It’s two volumes; find it on Amazon here and here. It is meant as a full introduction to the topic: 1650 pages of text (with an additional 90-page glossary at the end). It uses about 1000 figures to build up mental models of how the various algorithms and processes work, and explains how to use the popular Keras neural net API with Python. There’s a free sample chapter, on backpropagation, at his site. I’ve read about a quarter of the book and look forward to getting to “the meat” – Glassner lays the groundwork with chapters on probability, test data and analysis, information theory, and other relevant topics before plunging into deep learning itself. He aims to be accessible to math-averse readers, but does not dumb down the material. While the writing style is informal and approachable, it sometimes takes a bit of work to absorb, which is as it should be.
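
For anyone who has not met Keras yet, the flavor of what the book builds toward looks something like this minimal sketch (my own toy example on made-up data, not code from the book):

```python
import numpy as np
from tensorflow import keras

# Minimal Keras flavor: a tiny fully connected classifier on made-up data.
# My own toy sketch, not an example from Glassner's book.
rng = np.random.default_rng(0)
x_train = rng.normal(size=(1000, 20)).astype("float32")
y_train = (x_train.sum(axis=1) > 0).astype("int32")    # label: is the feature sum positive?

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)

print("training accuracy:", model.evaluate(x_train, y_train, verbose=0)[1])
```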

Full disclosure: I’m friends with Andrew and helped review a portion of the book. I’ve received no pay, and bought the books for my own education, as they look to be useful. I’m impressed by his dedication in writing such a tome, 20 months of labor, working through a large number of academic papers (each chapter ends with a set of references, along with URLs). From past works, I feel confident that what I’m going to read is factually correct and written in a clear fashion.

If you already know about the topic and are lecturing on the subject, he’s made all the figures free to download and use under Fair Use, along with his Python/Jupyter notebooks for all examples. Here’s a figure from the style transfer section of Chapter 28.

[Figure: Style Transfer]

My only regret is that there’s no back cover (e-books don’t need them) for relevant quotes from famous people. I even suggested a few:

  • “With artificial intelligence we are summoning the demon.” – Elon Musk (source)
  • “I think the development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking (source)
  • “Artificial intelligence is the future, not only for Russia but for all of mankind… Whoever becomes the leader in this sphere will become the ruler of the world.” – Vladimir Putin (source)

Wouldn’t you want to read a book explaining the methods that will bring about the downfall of our civilization? Of course, they mean general intelligence, not the specialized tasks deep learning is aimed at. Books such as Incognito show how little we know of our own internal workings, how consciousness is just a small part of what the brain’s about. It’s hard to imagine we’re going to suddenly crack the problem of creating general intelligence any time soon, let alone create a runaway paperclip maximizer.

This existential threat feels way overblown, something that makes for great movies, sort of like how elevators go into free fall in Hollywood but never in real life (the problem was essentially solved a century ago). I saw Steven Pinker give a talk last night (his new book seems cheery; nice review here), and he noted that nuclear war and climate change catastrophes are much more real and important than fictitious runaway AIs. (Fun fact: Pinker was once an assembly language programmer.) His opinion piece is a great read, pointing out the dangers of apocalyptic thought. But I digress…

So, whether you’re waiting for the end of the world or for the Singularity (or both), Glassner’s book looks to be a good one to read in the meantime to get a grounding in this old-yet-new field and learn how to use the deep learning systems available (for free!). Oh, and the two volumes are ridiculously cheap, and I find I can even read them on my cell phone.

GPU Zen Two CFP, and LAA

The article collection GPU Zen was a ridiculously good deal at $10 for the electronic version of the book. A call for participation for GPU Zen 2 is now out. First important date: March 30th for submitting proposals (i.e., not the first draft, which is due August 3rd).

Just because I wanted a title with a series of three-letter bits, I wrote out the “Two.” I recently read a tidbit about an old book passage with the longest known (at least to the person analyzing) string of three-letter words in a row, found by combing through a huge pile of Project Gutenberg texts or similar. I can’t find the article now; I thought it was at the Futility Closet site, but maybe not. Which is my roundabout way of saying that site is sometimes entertaining; it has an odd historical-oddities-and-mathematical-recreations bent to it.

To continue to ramble, in memory of the first anniversary of his death (and LAA), I’ll end with this quote from the wonderful Raymond Smullyan: “I understand that a computer has been invented that is so remarkably intelligent that if you put it into communication with either a computer or a human, it can’t tell the difference!”

HPG 2018; oh, and a hyphen for “Physically-Based” (don’t!)

I mostly wanted to pass on the word that High-Performance Graphics 2018 has its call for participation up. The due date for papers is April 12th. HPG 2018 is co-located with SIGGRAPH 2018 in Vancouver in August.

Also, let’s talk about hyphens. See Rule 1: “Generally, hyphenate two or more words when they come before a noun they modify and act as a single idea. This is called a compound adjective.”

Update: John Owens wrote and said “Go read Rule 3,” which is: “An often overlooked rule for hyphens: The adverb very and adverbs ending in ly are not hyphenated.”

So, he’s right! The hyphen is indeed NOT needed; my mistake! I didn’t do all the work of reading through all eleven rules and noting that “physically” is indeed an adverb.

Here’s the rest of my incorrect post, for the record. I guess I’m in good company – about a quarter of authors get this wrong, judging from the list of publications below.

The phrase “High-Performance Graphics” is good to go; “Real-Time Rendering” is also fine. Writing “Physically Based Rendering,” as seen on Wikipedia and elsewhere, is not quite right [I’m wrong]. The world doesn’t end if the hyphen’s not there, especially in a title of just the phrase itself. Adding the hyphen just helps the reader know what to expect: Is the word “based” going to be a noun or part of a compound adjective? If you read the rest of Rule 1, note that you don’t normally add the hyphen if the compound adjective comes after the noun. So:

“Physically-based [that’s wrong] rendering is better than rendering that is spiritually based.”

is correct in that “spiritually based” should not be hyphenated. Google came up with no direct hits for “spiritually-based rendering” that I could find – it’s an untapped field.

Not a big deal by any stretch, but we definitely noticed that “no hyphen” was the norm for a lot of authors of this particular phrase [and rightfully so], to the point where, when the hyphen actually exists, as in a presentation by Burley, the course description leaves it out.

In a not particularly scientific sample, here are some titles found without the hyphen:

  • SIGGRAPH Physically Based Shading in Theory and Practice course
  • Graceful Degradation of Collision Handling in Physically Based Animation
  • Physically Based Area Lights
  • Antialiasing Physically Based Shading with LEADR Mapping
  • Distance Fields for Rapid Collision Detection in Physically Based Modeling
  • Beyond a Simple Physically Based Blinn-Phong Model in Real-Time
  • SIGGRAPH Real-time Rendering of Physically Based Optical Effect in Theory and Practice course
  • Physically Based Lens Flare
  • Implementation Notes: Physically Based Lens Flares
  • Physically Based Sky, Atmosphere and Cloud Rendering in Frostbite
  • Approximate Models for Physically Based Rendering
  • Physically Based Hair Shading in Unreal
  • Revisiting Physically Based Shading at Imageworks
  • Moving Frostbite to Physically Based Rendering
  • An Inexpensive BRDF Model for Physically based Rendering
  • Physically Based Lighting Calculations for Computer Graphics
  • Physically Based Deferred Shading on Mobile
  • SIGGRAPH Practical Physically Based Shading in Film and Game Production course
  • SIGGRAPH Physically Based Modeling course
  • Physically Based Shading at DreamWorks Animation

Titles found with:

  • Physically-Based Shading at Disney
  • Physically-based and Unified Volumetric Rendering in Frostbite
  • Fast, Flexible, Physically-Based Volumetric Light Scattering
  • Physically-Based Real-Time Lens Flare Rendering
  • Physically-based lighting in Call of Duty: Black Ops
  • Theory and Algorithms for Efficient Physically-Based Illumination
  • Faster Photorealism in Wonderland: Physically-Based Shading and Lighting at Sony Pictures Imageworks
  • Physically-Based Glare Effects for Digital Images

I suspect some authors just picked what earlier authors did. The hyphen’s better, go with it [no, don’t].

Now, don’t get me started on capitalization… Well, it’s easy: the word after the hyphen should be capitalized. There’s an online tool for testing titles, in fact, if you have any doubts – I use Chicago style.
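
If you want to check that rule mechanically, a toy Python sketch covering just the hyphen point (nowhere near full Chicago style, which also lowercases short prepositions and the like) might look like:

```python
# Toy check of the capitalization point above: capitalize the word after a
# hyphen too. Only that one rule; nowhere near a full Chicago-style title-caser.
def capitalize_hyphenated(word):
    return "-".join(part[:1].upper() + part[1:] for part in word.split("-"))

def title_case(title):
    return " ".join(capitalize_hyphenated(w) for w in title.split(" "))

print(title_case("high-performance graphics"))   # High-Performance Graphics
print(title_case("physically based rendering"))  # Physically Based Rendering
```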

But I digress. Submit to HPG 2018.

Links for the holidays

In my self-inflicted weekly reports for Autodesk I always included a “link for the week,” some graphics-related or -unrelated tidbit I found of interest. Did you pick up on the “d” in “included”? Out of the blue I was laid off from Autodesk three weeks ago (along with ~1149 others, 13% of the workforce), and it’s fine, no worries.

But, it meant that I had collected a bunch of links I was never going to use. So, here’s the curated dump, something to click on during the holidays. Not a sterling collection of the best of the internet, just things that caught my eye. Enjoy! Or not!

Seven Things for October 25, 2017

Seven links for today:

  • Prof. Min Chen has assembled a page of all the STAR (State of the Art Report), review, and survey papers in Computer Graphics Forum. Such articles are great for getting up to speed on a topic.
  • Jendrik Illner has been writing a weekly roundup of recent blog posts and other online resources for computer graphics. Some good stuff in there, articles I missed, and I’m happy to see someone filtering through and summing up what’s out there. I hope he doesn’t burn out anytime soon.
  • ACM TOG is now encouraging submitting code with articles, so as to be able to reproduce results and build off previous work. I’m happy to see it.
  • There is now a Monument to an Anonymous Peer Reviewer at Moscow’s Higher School of Economics (more pics here, and a Kickstarter page). I liked: “Researchers from across the world will visit to touch the ‘Accept’ side in the hope that the gods of peer review will smile down upon them.”
  • Some ARKit apps in development look like wonderful magic. Which is often how demos look, vs. reality, but let me have my dreams for now.
  • One more AI post: the jobs of people who name colors are not yet at risk. Though I do like the computer’s new color name “Snowbonk” and some of the others. Certainly “Stanky Bean” is descriptive, no worse than puce.
  • I should have reposted months ago, but many others already have. Just in case you missed it, Stephen Hill’s SIGGRAPH 2017 link collection is wonderfully useful, as usual.

Seven Things for October 24, 2017

Machine learning, and especially deep learning, is all the rage, so here are some (vaguely) graphics-related tie-ins:

Book “WebGL Insights” now free

The book WebGL Insights is now free to download as a PDF. Go get it.

Many of the articles are, of course, WebGL-centric, but some articles in the Rendering section have general interest, especially for mobile developers. WebGL is “trailing edge,” in that it’s tied to OpenGL ES 2.0, which is what most mobile devices run. So techniques in that section will run in mobile apps in general. WebGL 2 (not covered in this book) is basically ES 3.0, and currently has 22% phone support and 8% tablet support – tablets don’t get refreshed as rapidly as phones.

IKEA Reality

I ran across this article from 2014, which is a worthwhile read about IKEA’s transition from real-world product photography to virtual (computer-generated) imagery. It had an interesting quote:

…the real turning point for us was when, in 2009, they called us and said, “You have to stop using CG. I’ve got 200 product images and they’re just terrible. You guys need to practise more.” So we looked at all the images they said weren’t good enough and the two or three they said were great, and the ones they didn’t like were photography and the good ones were all CG! Now, we only talk about a good or a bad image – not what technique created it.