Ray Tracing Gems 2 updated deadlines

I’ll quote the tweet by Adam Marrs (and please do RT):

Due to the unprecedented worldwide events since announcing Ray Tracing Gems 2, we have decided to adjust submission dates for the book.

Author deadlines have been pushed out by five months. RTG2 will now publish at SIGGRAPH in August 2021, in Los Angeles.

More info here.

To save you a click, here are the key dates:

  • Monday February 1st, 2021: first draft articles due
  • Monday March 22nd, 2021: notification of conditionally and fully accepted articles
  • Monday April 5th, 2021: final revised articles due

And if you’re wondering, SIGGRAPH 2021 starts August 1st, 2021.

The key thing in the CFP: “Articles will be primarily judged on practical utility. Though longer articles with novel results are welcome, short practical articles with battle-tested techniques are preferred and highly encouraged.”

It’s nice to see this focus on making the book more about “gems,” concise “here’s how to do this” articles. There are lots of little topics out there covered in older (sometimes much older) books and blogs; it would be nice to not have to read five different ones to learn best practices. So, please do go propose an article. Me, I’m fine if you want to mine The Ray Tracing News, Steve’s Computer Graphics Index, etc.


The Center of the Pixel is (0.5,0.5)

With ray tracing being done from the eye much more now, this is a lesson to be relearned: code’s better and life’s easier if the center of the pixel is the fraction (0.5, 0.5). If you are sure you’re doing this right, great; move on, nothing to see here. Enjoy this instead.

Mapping the pixel center to (0.5,0.5) is something first explained (at least first for me) in Paul Heckbert’s lovely little article “What Are the Coordinates of a Pixel?”, Graphics Gems, pp. 246-248, 1990.

That article is hard to find nowadays, so here’s the gist. Say you have a screen width and height of 1000. Let’s just talk about the X axis. It might be tempting to say 0.0 is the center of the leftmost pixel in a row, 1.0 is the center of the pixel next to it, etc. You can even then use rounding, where floating-point coordinates of 73.6 and 74.4 both go to the center 74.0.

However, think again. Using this mapping gives -0.5 as the left edge, 999.5 as the right. This is unpleasant to work with. Worse yet, if various operators such as abs() or mod() get used on the pixel coordinate values, this mapping can lead to subtle errors along the edges.

Easier is the range 0.0 to 1000.0, meaning the center of each pixel is at the fraction 0.5. For example, integer pixel 43 then has the sensible range of 43.0 to 43.99999 for subpixel values within it. Here’s Paul’s visualization:

OpenGL has always considered the fraction (0.5,0.5) the pixel center. DirectX didn’t, at first, but eventually got with the program with DirectX 10.

The proper conversion from integer to float pixel coordinates is to add 0.5; to go from float back to integer, use floor().
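In code, a minimal sketch of both conversions (ix and x are my own stand-in names for the integer and floating-point coordinates):

    float x = float(ix) + 0.5;    // integer to float: the center of pixel ix
    int ixb = int(floor(x));      // float to integer: the pixel containing x

Note floor(), not truncation: casting straight to int rounds toward zero, which gives the wrong pixel for negative coordinates.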

This is old news. Everyone does it this way, right? I bring it up because I’m starting to see in some ray tracing samples (pseudo)code like this for generating the direction for a perspective camera:

    float3 ray_origin = camera->eye;
    float2 d = 2.0 *
        ( float2(idx.x, idx.y) /
            float2(width, height) ) - 1.0;
    float3 ray_direction =
        d.x*camera->U + d.y*camera->V + camera->W;

The vector idx is the integer location of the pixel, width and height the screen resolution. The vector d is computed and used to generate a world-space vector by multiplying it by two vectors, U and V. The W vector, the camera’s direction in world space, is added in. U and V represent the positive X and Y axes of a view plane at the distance of W from the eye. It all looks nice and symmetric in the code above, and it mostly is.

The vector is supposed to represent a pair of values from -1.0 to 1.0 in Normalized Device Coordinates (NDC) for points on the screen. However, the code fails. Continuing our example, integer pixel location (0,0) goes to (-1.0,-1.0). That sounds good, right? But our highest integer pixel location is (999,999), which converts to (0.998,0.998). The 0.002 discrepancy is because this bad mapping shifts the whole view over by half a pixel: these pixel centers should each be 0.001 away from their nearest edge.

The second statement should instead be:

    float2 d = 2.0 *
        ( ( float2(idx.x, idx.y) + float2(0.5,0.5) ) / 
            float2(width, height) ) - 1.0;

This then gives the proper NDC range for the centers of pixels, -0.999 to 0.999. If we instead run the floating-point corner values (0.0,0.0) and (1000.0,1000.0) through the same transform (we don’t add the 0.5, since we’re already in floating point), we get the full NDC range, -1.0 to 1.0, edge to edge, showing the code is correct.
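To spell out the arithmetic for our 1000-wide example (a throwaway check of my own, in the same style as the snippets above):

    // centers of the first and last pixels:
    2.0 * ( (  0.0 + 0.5) / 1000.0 ) - 1.0    // = -0.999
    2.0 * ( (999.0 + 0.5) / 1000.0 ) - 1.0    // =  0.999
    // the screen edges, already floating point, so no 0.5 added:
    2.0 * (    0.0 / 1000.0 ) - 1.0           // = -1.0
    2.0 * ( 1000.0 / 1000.0 ) - 1.0           // =  1.0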

If the 0.5 annoys you and you miss symmetry, this formulation is elegant when generating random values inside a pixel, i.e., for when you’re antialiasing by shooting more rays at random through each pixel:

    float2 d = 2.0 *
        ( ( float2(idx.x, idx.y) + 
                float2( rand(seed), rand(seed) ) ) /
            float2(width, height) ) - 1.0;

You simply add a random number in the range [0.0,1.0) to each integer pixel coordinate. On average this offset is 0.5, the center of the pixel.
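Putting the pieces together, here’s a sketch of the whole ray generation step, in the same pseudocode style as above (the function name and the jitter parameter are my own, not from any particular sample):

    // jitter is float2(0.5,0.5) for the pixel center, or two random
    // values in [0.0,1.0) per ray when antialiasing.
    void generate_camera_ray(int2 idx, float2 jitter,
                             float3& ray_origin, float3& ray_direction)
    {
        float2 d = 2.0 *
            ( ( float2(idx.x, idx.y) + jitter ) /
                float2(width, height) ) - 1.0;
        ray_origin = camera->eye;
        ray_direction = d.x*camera->U + d.y*camera->V + camera->W;
        // normalize ray_direction here if your API wants unit length
    }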

Long and short: beware. Get that half pixel right. In my experience, these half-pixel errors would occasionally crop up in various places (cameras, texture sampling, etc.) over the years when I worked on rasterizer-related code at Autodesk. They caused nothing but pain on down the line. They’ll appear again in ray tracers if we’re not careful.

Seven Things for April 17, 2020

Seven things, none of which have to do with actually playing videogames, unlike yesterday’s listing:

  • Mesh shaders are A Big Deal, as they help generalize the rendering pipeline. If you don’t yet know about them, Shawn Hargreaves gives a nice introduction. Too long? At least listen to and watch the first minute of it to know what they’re about, or six minutes for the full introduction. For more more more, see Martin Fuller’s more advanced talk on the subject.
  • I3D 2020 may be postponed, but its research papers are not. Ke-Sen Huang has done his usual wonderful work in listing and linking these.
  • I mentioned in a previous seven things that the GDC 2020 content for graphics technical talks was underwhelming at that point. Happily, this has changed, e.g., with talks on Minecraft RTX, World of Tanks, Wolfenstein: Youngblood, Witcher 3, and much else – see the programming track.
  • The Immersive Math interactive book is now on version 1.1. Me, I finally sat still long enough to read the Eigenvectors and Eigenvalues chapter (“This chapter has a value in itself”) and am a better person for it.
  • Turner Whitted wrote a retrospective, “Origins of Global Illumination.” Paywalled, annoyingly, something I’ve written the Editor-in-Chief about – you can, too. Embrace being that cranky person writing letters to the editor.
  • I talk about ray tracing effect eye candy a bit in this fifth talk in the series, along with the dangers of snow globes. I can neither confirm nor deny the veracity of the comment, “This whole series was created just so Eric Haines would have a decent reason to show off his cool glass sphere burn marks.” BTW, I’ll be doing a 40 minute webinar based on these talks come May 12th.
  • John Horton Conway is gone, as we likely all know. The xkcd tribute was lovely, SMBC too. In reading about it, one resource I hadn’t known about was LifeWiki, with beautiful things such as this Turing machine.

Seven Things for April 16, 2020

Here are seven things, with a focus on videogames and related things this time around:

  • Minecraft RTX is now out in beta, along with a tech talk about it. There’s also a FAQ and known issues list. There are custom worlds that show off effects, but yes, you can convert your Java worlds to Bedrock format. I tried it on our old world and made one on/off video and five separate location videos, 1, 2, 3, 4, 5. Fun! Free! If you have an RTX card and a child, you’ll be guaranteed to not be able to use your computer for a month. Oh, and two pro tips: “;” toggles RTX on/off, and if you have a great GPU, go to Advanced Video settings and crank the Ray Tracing Render Distance up (you’ll need to do this each time you play).
  • No RTX or home schooling? Try Minecraft Hour of Code instead, for students in grades 2 and up.
  • There’s now a minigame in Borderlands 3 where you solve little DNA alignment puzzles for in-game bonuses. The loot earned is absurdly good at higher levels. Gearbox finally explained, with particularly poorly chosen dark-gray-on-black link text colors, what (the heck) the game does for science. It seems players are generating training sets for deep learning algorithms, though I can’t say I truly grok it.
  • Beat Saber with a staff is hypnotic. You can also use your skills outside to maintain social distancing.
  • A few Grand Theft Auto V players now shoot bullets to make art. Artists have to be careful to not scare the NPCs while drawing with their guns, as any nearby injuries or deaths can affect the memory pool and so might erase the image being produced.
  • Unreal Engine’s StageCraft tech was used to develop The Mandalorian. I’m amazed that a semicircular wall of LED displays could give realistic backgrounds at high enough resolution, range, and quality in real time. It has only 28 million pixels for a 270 degree display, according to the article – sounds like a lot, but note a single 4K display is 3840 * 2160 = 8.3 million pixels.
  • Stuck inside and want to make your housing situation an infernal hellscape, or at least more of one? Doomba’s the solution. It takes your Roomba’s movement information and turns it into a level of classic Doom.

Made it this far? Bonus link, since it’s the day after U.S. taxes were due, but now deferred until July 15th: Fortnite virtual currency is not taxable.

Seven Things for March 31, 2020

Seven things, mainly because I want to bring attention to the first and last items:

Previous post followup: I noticed today I got a work unit for coronavirus on Folding@Home. The psychology here is pretty odd, “oh, foo, my work unit is just helping cure cancer – better luck next time.” Oh, and join our team! We’re in 2326th place, but with your help we’ll get to 2325th in no time. In first place is CureCoin, crypto-currency meets protein folding, aka, “what does anything even mean any more?” We’re living in a William Gibson novel.


It Can’t Hurt: Folding@Home

Remember Folding@Home? I admit it fell off my radar. You, yes, you, have an overpowerful computer and barely use most of its resources most of the time. Folding@Home applies these resources to medical research. They’re now including exploration into COVID-19 as part of their system (update here).

Download it. Good installation instructions here (though join our team, not them). FAQ here. I invite you to join my “I created it 10 minutes ago” team: RealTimeResearching, #239487 (update: and thanks to the people who’ve joined – it’s comforting to me knowing you’re doing this, too. 35+ strong and counting.)

More Folding@Home COVID-19 related information here. Folding@Home is now more powerful than the top seven supercomputers in the world, combined.

Update: word has spread like, well, a virus. They’re currently getting so much home support that they don’t have enough work units to go around! Keep your client running. Want updates? Follow Greg Bowman.


Attending (or wanting to attend) I3D 2020?

Registration will go up in a few weeks for I3D 2020, May 5-7, at the ILM Theater, The Presidio, San Francisco. Before then, here are a few things you should know and take advantage of.

COVID-19 update: At this point I3D 2020 is proceeding on schedule for May 5-7. We continue to watch the situation and expect to make a final determination in early April. The advance program, registration, and more details will be available soon. Before then we recommend booking refundable elements, such as your hotel room.

First, are you a student? For the first time, we have a travel grant program in place, due to strong support from sponsors this year. The main thing you need is a PDF of a letter from your advisor. If your department cannot fully support the cost of your attendance, you should apply for it. Deadline is March 1. (Update: deadline’s passed.)

Next, the posters deadline is March 13th.

Third, and important to all, reserve your lodging now. Really. The reason is that lodging might get tight; I noticed that IBM’s Think conference, at Moscone Center, is the same week. In the past, over 30,000 people attended that event. We are currently looking into a hotel discount, but that seems unlikely for the Marina District. If we do get one, you can always switch.

Here’s a search for lodging, using Kayak, which gave the most options of the searchers I tried. You’ll want to zoom into the Presidio area (shame on Kayak for not recording my current map view), shown below, if you want to be able to walk to the conference. I capped this search at $308/day – change that value and the dates at the top of that search page as you wish. Pay attention to cancellation details. If you are sharing lodging with a few people, move that cost slider up, as there are some property rentals for larger groups.

Other area searches: AirBNB and TripAdvisor. Me, I’m at the Days Inn – Lombard. It’s a bed, which is all I need, it’s on NVIDIA’s approved lodging list, and it’s a 15 minute walk to the Yoda Fountain. You want something a little nicer and much closer? This Travelodge is a 4 minute walk from the fountain.

If you want to take a look down Lombard and scope it out, starting at the gate to the Presidio at Lyon Street, click here. If you’d like to see too many photos of the I3D 2020 venue (spoiler alert!), you can see my fact-finding album, with a mix of beautiful and boring.

The number of submissions for I3D this year continues to be high:

  • Total reviewed submissions: 60
  • Conference paper acceptance: 17
  • PACM journal acceptance: 9
  • Acceptance rate: 43%

The schedule will also be up soon. In the meantime, our keynote speakers will be great, with one more speaker from Unreal to be announced.

Hope to see you there!

Seven things for February 16, 2020

Finally, a long weekend, and little else to do. So, seven things:

  • Good explanation of some older AA schemes and of the new VRSS method for VR systems. It assumes the user is looking at the center of the display, which I’m guessing is like 95% of the time. It’d be interesting to know the real statistics – someone want to do this research, or (more likely) point out previous studies to me?
  • Have a relatively new iPhone or iPad? Apple’s nice little site of AR models (view it on Safari) is well done – click one and it’s there.
  • Wrapping your head around interactive ray tracing? I’m enjoying Will Usher’s latest blog entries. His “miss shader shadow test” method (“RFO”) gave my own stress-test sample program a little boost. Also see his publications.
  • Painful: Venezuelans play Runescape and other videogames to earn money and turn them into Bitcoins after their currency becomes nearly worthless.
  • This tweet on wave programming resources by Kostas Anagnostou reminds me again how vast my ignorance is (at least I had already read the Drobot COD slideset, for RTR4).
  • ArtStation runs 3D modeling contests. The sheer number of entries and contests themselves gives a glimpse of how many people are doing such work.
  • Adversarial T-shirt designs from a bunch of researchers at Northeastern, MIT, and IBM (paper here, more designs shown here). They kindly shared their latest images, so I made holiday presents for my family.

About that glass ball…

Here’s a classic image you’re probably familiar with, which is having its 40th anniversary:

I recently joined Reddit’s Raytracing feed, and noticed the image pop up again here, in Reddit’s Vintage CGI feed. I’d been playing with a real-time demo of this scene in OptiX 7, as it’s a sample program, optixWhitted. Examining that demo brought home something that had never dawned on me: the glass ball is actually mostly hollow, not solid glass!

Here’s optixWhitted with an inner radius of 0.96 (vs. 1.0 for the outer radius) vs. a solid glass ball:


Quite different!

I wrote Turner Whitted, as I had a theory:
Why did you make the glass sphere in “An improved illumination model for shaded display” hollow?
My theory is “it looked better” – you can see a bit of refraction, but there’s not so much that it’s confusing to the viewer.


Turns out, that wasn’t really it. Turner replied:

Obviously a solid sphere would be too heavy. 🙂

Concentric spheres offered a more interesting ray tree with internal reflection and also served as a testbed for using the outer sphere as a bounding volume for the inner one. I didn’t really get far with bounding volumes until teaming up with Steve Rubin for SIGGRAPH ’80. As you point out, concentric spheres also look better.
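As an aside, the bounding volume trick Turner mentions falls out naturally from the geometry: the inner sphere lies entirely inside the outer one, so any ray that misses the outer sphere cannot hit the inner one. A rough sketch of the idea (my own pseudocode, with a hypothetical intersect_sphere() helper, not code from his renderer):

    // Concentric shell: outer radius 1.0, inner radius 0.96, same center.
    float t_outer, t_inner;
    if (intersect_sphere(ray, center, 1.0, t_outer)) {
        // The outer sphere doubles as a bounding volume: only rays
        // that hit it need to be tested against the inner sphere.
        if (intersect_sphere(ray, center, 0.96, t_inner)) {
            // the ray passes through the shell into the hollow interior
        }
    }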

I also wasn’t sure when exactly his paper was officially published: it was presented at SIGGRAPH ’79, but the ACM Digital Library shows just an abstract for the proceedings. The full version is stored as a Communications of the ACM paper from 1980. Turner comments:

As for publication, the entire paper and not just the abstract was distributed at SIGGRAPH ’79, but only the abstract was included in the proceedings. In those days the papers committee chose a couple of papers each year to forward to CACM. Those papers were distributed to SIGGRAPH attendees in a supplement. After 1980 they stopped doing that and published full papers in the conference proceedings and picked 3 or 4 to re-publish in TOG.

It’s a bit hard for me to remember how slow and friction-filled it was back then, when you pretty much had to use the mail to get or give any information and had to go to the university library to look up and photocopy articles (if you were lucky enough to find them on the shelves). And we walked to school through the snow uphill both ways.

To conclude, here’s a physical homage to the paper: various transparent balls I have lying around, placed on its first page. The one on the left is a glass shell, though quite wobbly in its thickness.

And if you just can’t get enough, here’s one with a plastic shell instead, which is more uniform but where the shell gets thicker toward its bottom (in the upper right part of it in this view).

Seven Things for December 21, 2019

I’ve been collecting too many links, with all of them begging me to share them. Here’s the triage:

  • High Performance Graphics 2020’s call for participation is up. Key due date is April 16th (but you need to register the paper 3 days earlier than that). Retweet here.
  • A high-res scan of a Nefertiti sculpture is now available. The story of it being freed up after a three-year effort is a good read, with a knee-jerk “we own it” attitude by the museum. I hadn’t heard of “the gift shop defense” before, and them carving a CC license in the base is pretty ironic, given how much they were defending not releasing it. He’s now pushing on the Musée Rodin for The Thinker. The Nefertiti file is in OBJ format with textures – it’s pretty nice, 12.8M tris. Download from the page using the link in the upper right, despite what the text below says; you can also view and download it on Sketchfab. (thanks to Adam Marrs for the link)
  • Fascinating inverse rendering applications with Mitsuba2’s differentiable renderer – see the video from 2:38 to 4:50 in particular. Code’s not available yet.
  • Trying to explain ray tracing to your relatives during the holidays? Of course you are. Here’s my contribution to the cause (or with Mandarin subtitles), in good part based on Pete Shirley’s Intro to RT talk (see links at the bottom of the page). Tweeted here.
  • Thanks to a number of people, much of the ancient globillum mailing list archive is now available for download. If you know of a good email text file viewer, please let me know.
  • I’m in the wrong field. People research the rubber pencil illusion, about the only magic trick I learned as a kid. Research into the RPI is here and here, which conclude that the pencil’s appearance of rubberiness is affected by different factors. This so needs a good three.js demo. I also want to see a character in some expansion of the game Control explain how this phenomenon is actually a paranormal event.
  • Someday the photographer for your wedding will ask “crowd-source point cloud or laser scan?” – article; model on Sketchfab.