Seven Things for January 1, 2024

Time to look both forward and back!

  1. It’s Public Domain Day, when various old works become legal to share and draw upon for new creative endeavors. The original Mickey Mouse, Lady Chatterley’s Lover, Escher’s Tower of Babel, and much else are now free, at least in the US. (Sadly, Canada’s gone the other direction, along with New Zealand and Japan.) Reuse has already begun.
  2. Speaking of copying, “3D prints” of paintings, where a robot uses brushes to reproduce a work, are now a commercial venture.
  3. Speaking of free works, happily, the authors have put the new 4th edition of Physically Based Rendering, published in March 2023, free on the web. Our list of all free graphics books (that we know of) is here.
  4. Speaking of books, Jendrik Illner started a page describing books and resources for game engine development. His name should be familiar; he’s the person who compiles the wonderful Graphics Programming weekly posts. I admit to hearing about the PBR 4th edition being up for free from his latest issue, #320 (well, it’s been free since November 1st, but I forgot to mark my calendar). This issue is not openly online as of today, being sent first to Patreon subscribers. Totally worth a dollar a month for me (actually, I pay $5, because he deserves it).
  5. ChatGPT was, of course, hot in 2023, but isn’t quite ready to replace graphics programmers. Pretty funny, and now I want someone to add a control called Photon Confabulation to Arnold (or to every renderer). Make it so, please.
  6. The other good news is that our future AI overlords can be defeated by somersaults, hiding in cardboard boxes, or dressing up as a fir tree.
  7. What’s the new graphics thing in 2023? NeRFs are so 2020. This year the cool kids started using 3D Gaussian splatting to represent and render models. Lots and lots of papers and open source implementations came out (and will come out) after the initial paper presentation at SIGGRAPH 2023. Aras has a good primer on the basic ideas of this stuff, at least on the rendering end. If you just want to look at the pretty, this (not open source) viewer page is nicely done. Me, I like both NeRFs and gsplats – non-polygonal representation is fun stuff. I think part of the appeal of Gaussian splatting is that it’s mostly old school. Using spherical harmonics to store direction-dependent colors is an old idea. Splatting is a relatively old rendering technique that can work well with rasterization (no ray casting needed). Forming a set of splats does not invoke neural anything – there’s no AI magic to decode (though, as Aras notes, they form the set of splats “using gradient descent and ‘differentiable rendering’ and all the other things that are way over my head”). I do like that someone created a conspiracy post – that’s how you know you’ve made it.
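To make the “old school” spherical harmonics part concrete, here’s a small Python sketch of how a splat’s view-dependent color can be looked up from degree-0 and degree-1 SH coefficients. The two constants are the standard real SH normalization factors; the coefficient ordering follows a common open-source convention, but treat the exact layout as illustrative, not as any particular implementation’s format.

```python
import numpy as np

# Normalization constants for the first two real SH bands.
SH_C0 = 0.28209479177387814  # band 0 (constant term), 1 / (2 * sqrt(pi))
SH_C1 = 0.4886025119029199   # band 1 (linear terms), sqrt(3) / (2 * sqrt(pi))

def splat_color(sh_coeffs, view_dir):
    """Evaluate a splat's view-dependent RGB color from degree-1 SH.

    sh_coeffs: (4, 3) array, one RGB coefficient per SH basis function,
               in an assumed [DC, y, z, x] ordering.
    view_dir:  direction from the camera toward the splat center.
    """
    x, y, z = view_dir / np.linalg.norm(view_dir)
    color = SH_C0 * sh_coeffs[0]
    color += SH_C1 * (-y * sh_coeffs[1] + z * sh_coeffs[2] - x * sh_coeffs[3])
    # Implementations typically add 0.5 and clamp to get a displayable color.
    return np.clip(color + 0.5, 0.0, 1.0)

# Example: a splat that shifts from reddish to bluish as the view direction flips.
sh = np.zeros((4, 3))
sh[0] = [0.8, 0.2, 0.2]   # DC (average) color, reddish
sh[3] = [-0.5, 0.0, 0.5]  # linear variation along the x axis
print(splat_color(sh, np.array([1.0, 0.0, 0.0])))   # viewed from +x
print(splat_color(sh, np.array([-1.0, 0.0, 0.0])))  # viewed from -x
```

The appeal is that this is just a dot product per band, cheap enough to do per splat per frame in a rasterizer, with no network evaluation anywhere in sight.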

OBJ lives on

Yesterday I accidentally drag-and-dropped an OBJ file onto Visual Studio 2022 (I was aiming for G3D’s viewer, which I love for OBJs, despite one annoying bug – hoping it’s fixed someday). To my surprise, VS2022 opened and displayed it! It even looked fairly reasonable. Googling a bit, I found it seems to also support FBX and DAE (who uses DAE nowadays?). Sadly, this feature doesn’t work for glTF or USD files (yet…).

The controls are a little odd, e.g., the mouse wheel moves the model up and down a bit. But control-mousewheel does what you’d expect, moving in and out. Shift-mousewheel moves right and left. Control-left-mouse pans, alt-left-mouse rotates. There are lots of other UI buttons around the viewport to explore. Fun!

VS2022, you surprised me. G3D’s prettier view follows.
G3D, with a customized background. However, there’s a bug where the R of RGBA is interpreted as the A.

USD and glTF Resources Page

Here’s my new page about USD and glTF resources: https://www.realtimerendering.com/usd_gltf.html – if you see anything missing or wrong, let me know. I’m no expert and am still learning, which is partly why I put this page up: to get feedback.

This past year I made two minor contributions to the USD Working Group Assets repo. Being involved in the ASWF’s activities and Slack channel has been a great help in learning about USD (and also a bit about glTF along the way). Fun thing: if you have an iPhone, click here – I like that Apple’s AR Quick Look is simply built in. Here’s one Minecrafty test model on my office floor.

Lights on, no one quite home

I gave a talk about ray tracing yesterday. For my entertainment, before the talk I asked ChatGPT what ray tracing is. Its full answer was pretty good overall. However, it stumbled at the start – I’ll let you figure it out (hint: Plato). There are some other small oddities (e.g., “follow similar paths”) and “I wouldn’t have put it exactly like that” bits. Here’s the question and its answer.

What is ray tracing?

ChatGPT:
Ray tracing is a rendering technique used in computer graphics and computer-generated imagery (CGI) to simulate the way light interacts with objects in a virtual environment. It is a method for generating highly realistic images by tracing the path of individual rays of light as they interact with the objects in a scene. Ray tracing can simulate complex optical effects, such as reflections, refractions, shadows, and global illumination, with a high degree of accuracy and realism.

Here’s a simplified explanation of how ray tracing works:

1. Ray Generation: In a ray tracing system, a virtual camera is placed within a 3D scene. Rays of light are generated from the camera’s viewpoint, with one ray typically cast for each pixel on the screen.

2. Ray Intersection: Each ray is traced into the scene, and its path is followed as it interacts with objects. When a ray encounters an object, the software calculates how it intersects with that object’s geometry.

3. Shading and Lighting: After determining the intersection point, the renderer calculates how the object at that point interacts with light. This involves considering factors like material properties (e.g., reflectivity, transparency), surface normals, and lighting conditions (e.g., direct and indirect lighting).

4. Secondary Rays: In addition to primary rays cast from the camera, secondary rays may be generated to simulate effects like reflections, refractions, and shadows. These rays follow similar paths as primary rays but may be traced in different directions based on the materials and lighting in the scene.

5. Recursive Ray Tracing: To simulate complex lighting interactions, ray tracing often involves recursion. For example, when a ray hits a reflective surface, a new ray is generated to simulate the reflection, and this new ray undergoes the same process of intersection and shading. This recursion continues until a termination condition is met.

6. Image Formation: As the rays interact with objects and lights in the scene, they carry information about the color and intensity of light at each pixel. This information is used to generate the final image that is displayed on the screen.

Ray tracing is known for its ability to produce photorealistic images, making it a popular choice in fields like computer gaming, visual effects in film and television, architectural visualization, and more. However, it is computationally intensive and can require significant processing power, so real-time ray tracing in complex scenes is still a demanding task that often requires powerful hardware, such as high-end graphics cards. Advances in hardware and software continue to make ray tracing more accessible and practical for a wider range of applications.
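For fun, here’s what those six steps look like as a program. This is a toy sketch of my own in Python, not any particular renderer’s approach: a hard-coded two-sphere scene, one point light, simple diffuse shading with a shadow ray, and a recursive reflection ray with a fixed depth limit as the termination condition.

```python
import numpy as np
from PIL import Image

def normalize(v):
    return v / np.linalg.norm(v)

def hit_sphere(origin, direction, center, radius):
    """Step 2: return the distance t to the nearest hit, or None."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)  # direction is unit length, so a == 1
    disc = b * b - 4.0 * (np.dot(oc, oc) - radius * radius)
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None   # small epsilon avoids self-intersections

# Toy scene: (center, radius, color, reflectivity) plus one point light.
SPHERES = [
    (np.array([0.0, 0.0, -3.0]), 1.0, np.array([0.9, 0.2, 0.2]), 0.4),
    (np.array([0.0, -101.0, -3.0]), 100.0, np.array([0.4, 0.4, 0.4]), 0.1),
]
LIGHT_POS = np.array([5.0, 5.0, 0.0])

def trace(origin, direction, depth=0):
    """Steps 2-5: intersect, shade, cast shadow and reflection rays, recurse."""
    best = None
    for center, radius, color, refl in SPHERES:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, color, refl)
    if best is None:
        return np.array([0.6, 0.7, 0.9])  # background "sky" color
    t, center, color, refl = best
    point = origin + t * direction
    normal = normalize(point - center)
    # Shadow ray: is anything between the hit point and the light?
    light_dist = np.linalg.norm(LIGHT_POS - point)
    to_light = (LIGHT_POS - point) / light_dist
    shadowed = False
    for c, r, _, _ in SPHERES:
        ts = hit_sphere(point, to_light, c, r)
        if ts is not None and ts < light_dist:
            shadowed = True
            break
    diffuse = 0.0 if shadowed else max(np.dot(normal, to_light), 0.0)
    shade = color * (0.1 + 0.9 * diffuse)  # crude ambient + diffuse
    # Recursive reflection ray; depth limit is the termination condition.
    if depth < 3 and refl > 0.0:
        refl_dir = direction - 2.0 * np.dot(direction, normal) * normal
        shade = (1.0 - refl) * shade + refl * trace(point, refl_dir, depth + 1)
    return shade

# Steps 1 and 6: one primary ray per pixel from a camera at the origin,
# accumulated into the final image.
W, H = 160, 120
img = np.zeros((H, W, 3))
for j in range(H):
    for i in range(W):
        u = ((i + 0.5) / W * 2.0 - 1.0) * W / H  # aspect-corrected
        v = 1.0 - (j + 0.5) / H * 2.0
        d = normalize(np.array([u, v, -1.5]))
        img[j, i] = np.clip(trace(np.zeros(3), d), 0.0, 1.0)
Image.fromarray((img * 255).astype(np.uint8)).save("toy_trace.png")
```

It’s unoptimized and physically crude, but it hits every one of the six steps above, which is more than you can say for most thousand-word explanations.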

HPG 2023 and EGSR 2023 Paper Links Available

The indefatigable Ke-Sen Huang has collected links for papers at both HPG 2023 and EGSR 2023:
HPG at https://www.realtimerendering.com/kesen/hpg2023Papers.htm – includes direct links to recordings
EGSR at https://www.realtimerendering.com/kesen/egsr2023Papers.htm

I’ve heard both were well-attended this year (and I’m sad I missed them – the timing didn’t work out for me), with about 150 attendees at EGSR, something of a record.

Elena Garces and I are co-chairing papers for EGSR 2024, so keep up the great work and interest, all! I would expect the papers deadline to be around April, similar to this year’s Call for Papers deadlines.

Coolerer Color QR Codes

It turns out that, at the bottom right of the project page I pointed to in my previous post, there’s a link to a Windows implementation of their system. I hadn’t noticed that; thanks to James Hung-Kuo Chu, first author of the work, for the tip-off. He has a nice example of a colored QR code on his web page.

The Windows program works! Set a URL or text to search, upload a picture, ask for a QR code.

Here’s an example. Blur your eyes to help figure out what well-known computer graphics image it represents. I find the QR code needs to be fairly small in your camera’s view for it to be detected as one.

A smaller one, of me. Again, blur your eyes. When changing the size, it’s likely best (?) to “pixel resize” (sample and hold, i.e., nearest-neighbor), not resample with a smoothing filter.
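For instance, in Python with Pillow, that sample-and-hold scaling is just the nearest-neighbor filter (the filenames here are hypothetical):

```python
from PIL import Image

# Scale a QR code up without smoothing: nearest-neighbor ("sample and hold")
# keeps each module a crisp block of identical pixels, where the default
# filters would blur module edges and can hurt scannability.
img = Image.open("qr_code.png")  # hypothetical input file
big = img.resize((img.width * 4, img.height * 4), resample=Image.NEAREST)
big.save("qr_code_4x.png")
```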

Addendum: James Hung-Kuo Chu, the first author on the paper, noted that using the “Apply contrast enhancement” option can give a better result. Here’s one with that:

Addendum 5/28/2023: I just noticed this article, pointing to another QR image generator. The color one’s more fun, IMO, but I wanted to scribble down where this other tool is.
And addendum 6/7/2023: here’s a cool QR maker guided by Stable Diffusion (thanks to Andrew Glassner for pointing it out). The actual creator tool is here.
Also, 12/14/2023: one research topic is making a 2D barcode that tries to be more aesthetically pleasing than a QR code. Me, I think that ship has sailed – adopting a new barcode format seems unlikely – but it’s an interesting topic and a view of “what might have been.”

Cooler QR Codes

In talking with Niloy Mitra, I noticed he had a cool QR code on his page. I asked him about it, and he pointed me to this decade-old work he coauthored. Searching a bit more, I found a cute web browser version for making your own. It’s a bit flaky, in that a lot of the images I tossed in looked really bad halftoned. Maybe if I had a clue what the options were, it’d come out better. But with one image it worked pretty well! I just put in the URL I want the code to lead to.

Addendum: but wait, there’s more! Another generator, from the author, also permits color output. See my next post.