Let’s focus on colors. We’re in, we’re out, it’s quick, unlike yesterday’s post:
Jos Stam told me about the color named Isabelline. The story’s probably apocryphal, but fun. This one’s now in my neurons along with chartreuse and puce. As well as the trademarked pink (I found the full rundown here).
Is it fuschia or fuchsia? I finally forced myself to learn the right one a few years back.
Harvard’s art museum has a collection of pigments. It’s off limits to most people, in part because some contain dangerous chemicals. Someone just tipped me off that there are video tours, however. Here are three: short, medium, longer.
Wikipedia has a crazy long list of colors, divided into parts. Here’s G-M. I like the duh text there, “Colors are an important part of visual arts, fashion, interior design, and many other fields and disciplines.” Proof that Wikipedia is meant for aliens who have never visited Earth before.
If you just can’t get enough about colors, consider getting the book The Secret Lives of Color. Some you’ll know, some’s obscure knowledge (when did red, yellow, and blue become the primaries?), some’s more obscure knowledge (the long list of colors, each with a story, making up the bulk of the book). It’s relatively cheap and, well, colorful – nicely produced.
Need more colors? AI’s got your back. So so many great names here, I can’t even pick out just one. Just a few shown below.
Time to catch up with too many links collected this past year. Today’s theme: interactive web pages. Plus, bonus nostalgia!
Karl Sims’ Reaction Diffusion Tool. Play with the sliders, or just hit the Example button (up to 20 times) for some fantastic presets. If you’re old enough or nerdy enough, you’ll know Karl Sims from his SIGGRAPH papers in the early ’90s, such as this one. You’ll also then likely know of the two reaction-diffusion SIGGRAPH papers from 1991 (1, 2), based on Alan Turing’s paper in 1952. Now together, in one lovely web app.
Recursive Game of Life. Use the mouse scroll to zoom in and out, left-drag to pan. In high school I used to hand-evolve Life patterns on graph paper, then later print out generations on green and white paper using the school’s administrative computer. This infinite recursive Life evolver would have fried my brain back then. Look closely and admire how cells get filled using glider guns, and how message passing between cells is done with glider streams. (Follow-up: secrets revealed here. Thanks to Eran Guendelman for the tipoff.)
Omio.io works. The recursive game of Life is the first one here, followed by many others. I’ve looked at only about half of these so far, as it’s like a box of delicious chocolates. You want to savor but one at a time, though maybe nibbling just one more would be OK… And each is a little puzzle: what’s this about? how do I interact? A few are just links to projects – consider these strawberry creams or coconut clusters from an interactivity standpoint, but they’re still interesting. There are earlier works (as GIFs) from the creator, who I now am most definitely following on Twitter (and thanks to Jacco Bikker for the tip-off).
The Origami Simulator is misnamed, in that it simulates all sorts of papercraft patterns. Poke around under the Examples menu to see what I mean. While unlikely to help you make any of these, it’s fun to look at and drag the Fold Percent slider. For me, instant nostalgia: I attended the OrigaMIT convention last month, mostly admiring the amazing creations. Some pics, plus a NeRF I made of one display, and another NeRF.
I mentioned Townscaper in the browser last year in a blog post around this time of year. It’s a lovely way to build a picturesque town, but what about destroying one? Behold Toy City. It takes a bit to load and initialize; you’re ready when the red ball (i.e., you) appears. Then WASD, arrow keys, and space bar your way to cute toppling. This was made with “Spline,” described as “A friendly 3d multiplayer design tool that runs in the browser.” I haven’t explored this app further yet…
I asked Andrew Glassner if he had the original paper issues of the Ray Tracing News available. He replied, “About an hour ago I entered the vault, filled with nitrogen to prevent decay, put on the fiber-free white gloves, and was allowed to view the original manuscripts.” In other words, he found them in some box (I suspect I have them in some different box, too, somewhere…). He kindly scanned all four and they’re now available as PDFs, hosted here.
Andrew started this informal journal for us ray tracing researchers immediately after SIGGRAPH 1987, where he had organized the first “ray-tracing roundtable.” It was no mean feat to gather us together; check the email list at the end of the first issue. Tip: I’m no longer at hpfcla!hpfcrs!eye!erich@hplabs.HP.COM. Delivery was like the Pony Express back then.
And, the issues have filler cartoons, made by Andrew – these follow. Hey, I enjoyed them. Ray tracing is not a rich vein of comedy gold; comics on the subject are scarce (I know of this, this, and this one, at most – xkcd and SMBC, step up your game. Well, SMBC at least had this, and xkcd this).
We finally, a mere 30-odd years later, have tracing tablets (if you view some Shadertoys on an iPad).
I just finished the book Euler’s Gem. Chapter 7 starts off with this astounding statement:
On November 14, 1750, the newspaper headlines should have read “Mathematician discovers edge of polyhedron!” On that day Euler wrote from Berlin to his friend Christian Goldbach in St. Petersburg. In a phrase seemingly devoid of interesting mathematics, Euler described “the junctures where two faces come together along their sides, which, for lack of an accepted term, I call ‘edges.'”
The book uses as a focus Euler’s polyhedron formula, V-E+F = 2. I agree with the author that this thing should be taught in grade schools, it’s so simple and beautiful and visual. I also agree that it’s amazing the ancient Greeks or anyone before Euler didn’t figure this out (well, maybe Descartes did – read the book, p. 84, or see here).
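If you want to see the formula in action, here’s a quick sanity check on the five Platonic solids (the counts are the standard well-known ones, not from the book):

```python
# Euler's polyhedron formula, V - E + F = 2, checked against the
# (vertex, edge, face) counts of the five Platonic solids.
solids = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron": (12, 30, 20),
}
for name, (v, e, f) in solids.items():
    assert v - e + f == 2, name
print("V - E + F == 2 for all five Platonic solids")
```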
He continues some pages later:
Amazingly, until he gave them a name, no one had explicitly referred to the edges of a polyhedron. Euler, writing in Latin, used the word acies to mean edge. In “everyday Latin” acies is used for the sharp edge of a weapon, a beam of light, or an army lined up for battle. Giving a name to this obvious feature may seem to be a trivial point, but it is not. It was a crucial recognition that the 1-dimensional edge of a polyhedron is an essential concept.
Even though Euler came up with the formula (he was not able to prove it – that came later), the next mind-blowing thing was reading that he didn’t call vertices vertices, but rather:
Euler referred to a vertex of a polyhedron as an angulus solidus, or solid angle.
In 1794 – 44 years after edges – the mathematician Legendre renamed them:
We often use the word angle, in common discourse, to designate the point situated at its vertex; this expression is faulty. It would be more clear and more exact to denote by a particular name, as that of vertices, the points situated at the vertices of the angles of a polygon, or of a polyhedron.
Me, I found this passage a little confusing and circular, “the points at the vertices of the angles of a polygon.” Sounds like “vertices” existed as a term before then? Anyway, the word wasn’t applied as a name for these points until then. If someone has access to an Oxford English Dictionary, speak up!
Addendum: Erik Demaine kindly sent on the OED’s “vertex” entry. It appears “vertex” (Latin for “whirl,” related to “vortex”) was first used for geometry back in 1570 by J. Dee in H. Billingsley’s translation of Euclid’s Elements Geom. “From the vertex, to the Circumference of the base of the Cone.” From this and the other three entries through 1672, “vertex” seems to get used as meaning the tip of a pyramid. (This is further backed up by this entry in Etymonline). In 1715 the term is then used in “Two half Parabolas’s [sic] whose Vertex’s are C c.” Not sure what that means – parabolas have vertices? Maybe he means the foci? (Update: David Richeson, author of Euler’s Gem, and Ari Blenkhorn both wrote and noted the “vertex of a parabola” is the point where the parabola intersects its axis of symmetry. David also was a good sport about my comments later in this post, noting his mother didn’t finish it. Ari says in class she illustrates how you get each of the conic sections from a cone by slicing up ice-cream cones, dipping the cut edges in chocolate syrup, and using them to print the shapes. Me, I learned a new term, “latus rectum” – literally, “right side.”)
It’s not until 1840 that D. Lardner says “These lines are called the side of the angle, and the point C where the sides unite, is called its vertex.” So, I think I buy Euler’s Gem’s explanation: Euler called the corners of a polyhedron “solid angles” and Legendre renamed them to a term already used for points in other contexts, “vertices.” OK, I think we’ve beat that to death…
So, that’s it: “edges” will be 272 years old as of next Monday (let’s have a party), and “vertices” as we know them are only 228 years old.
By the way, I thought the book Euler’s Gem was pretty good. Lots of math history and some nice proofs along the way. The proofs sometimes (for me) need a bit of pencil and paper to fully understand, which I appreciate – they’re not utterly dumbed down. However, I found I lost my mojo around chapter 17 of 23. The author tries to quickly bring the reader up to the present day about modern topology. More and more terms and concepts are introduced and quickly became word salad for me. But I hope I go back to these last chapters someday, with notebook and pencil in hand – they look rewarding. Or if there’s another topology book that’s readable by non-mathematicians, let me know. I’ve already read The Shape of Space, though the first edition, decades ago, so maybe I should (re-)read the newest edition.
On the strength of the author’s writing I bought his new book, Tales of Impossibility, which I plan to start soon. I found out about Euler’s Gem through a book by another author, called Shape. Also pretty good, more a collection of articles that in some way relate to geometry. His earlier book, a NY Times bestseller, is also a fairly nice collection of math-related articles. I’d give them each 4 out of 5 stars – a few uneven bits, but definitely worth my while. They’re no Humble Pi, which is nothing deep but I just love; all these books have something to offer.
Oh, and while I’m here, if you did read and like Humble Pi, or even if you didn’t, my summer walking-around-town podcast of choice was A Podcast of Unnecessary Detail, where Matt Parker is a third of the team. Silly stuff, maybe educational. I hope they make more soon.
Bonus test: if you feel like you’re on top of Euler’s polyhedral formula, go check out question one (from a 2003 lecture, “Subtle Tools”), and you might enjoy the rest of the test, too.
Oh, and the 272nd birthday of the term “edge” was celebrated with this virtual cake.
“Done puttering.” Ha, I’m a liar. Here’s a follow up to the first article, a follow-up which just about no one wants. Short version: you can compute such groups of colors other ways. They all start to look a bit the same after a while. Plus, important information on what color is named “lime.”
So, I received some feedback from some readers. (Thanks, all!)
Peter-Pike Sloan gave my technique the proper name: Farthest First Traversal. Great! “Low discrepancy sequences” didn’t really feel right, as I associate that technique more with quasirandom sampling. He writes: “I think it is generally called farthest point sampling, it is common for clustering, but best with small K (or sub-sampling in some fashion).”
Alan Wolfe said, “You are nearly doing Mitchell’s best candidate for blue noise points :). For MBC, instead of looking through all triplets, you generate N*k of them randomly & keep the one with the best score. N is the number of points you have already. k is a constant (I use 1 for bn points).” – He nailed it, that’s in fact the inspiration for the method I used. But I of course just look through all the triplets, since the time to test them all is reasonable and I just need to do so once. Or more than once; read on.
Matt Pharr says he uses a low discrepancy 3D Halton sequence of points in a cube:
I should have thought of trying those, it makes sense! My naive algorithm’s a bit different and doesn’t have the nice feature that adjacent colors are noticeably different, if that’s important. If I had had this sequence in hand, I would never have delved. But then I would never have learned about the supposed popularity of lime.
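Matt’s idea can be sketched in a few lines – this is a minimal Halton implementation of my own, not his actual code, using bases 2, 3, and 5 for the three channels:

```python
def halton(i, base):
    """Radical inverse of integer i in the given base, giving a
    low-discrepancy value in [0, 1)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton_color(i):
    # One prime base per channel; scale [0,1) up to 0-255.
    return tuple(int(halton(i, b) * 256) for b in (2, 3, 5))

palette = [halton_color(i) for i in range(1, 9)]  # first 8 colors
```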
Bart Wronski points out that you could use low-discrepancy normalized spherical surface coordinates:
Since they’re on a sphere, you get only those colors at a “constant” distance from the center of the color cube. These, similarly, have the nice “neighbors differ” feature. He used this sequence, noting there’s an improved R2 sequence (this page is worth a visit, for the animations alone!), which he suspects won’t make much difference.
Veedrac wrote: “Here’s a quicker version if you don’t want to wait all day.” He implemented the whole shebang in the last 24 hours! It’s in Python using numpy, includes skipping dark colors and grays, plus a control to adjust for blues looking dark. So, if you want to experiment with Python code, go get his. It takes 129 seconds to generate a sequence of 256 colors. Maybe there’s something to this Python stuff after all. I also like that he does a clever output trick: he writes swatch colors to SVG, instead of laboriously filling in an image, like my program does. Here’s his pattern, starting with gray (the only gray), with these constraints:
Towaki Takikawa also made a compact python/numpy version of my original color-cube tester, one that also properly converts from sRGB instead of my old-school gamma==2.2. It runs on my machine in 19 seconds, vs. my original running overnight. The results are about the same as mine, just differing towards the end of the sequence. This cheers me up – I don’t have to feel too guilty about my quick gamma hack. I’ve put his code here for download.
John Kaniarz wrote: “When I was reading your post on color sequences it reminded me of an on the fly solution I read years ago. I hunted it down only to discover that it only solved the problem in one dimension and the post has been updated to recommend a technique similar to yours. However, it’s still a neat trick you may be interested in. The algorithm is nextHue = (hue + 1/phi) % 1.0; (for hue in the range 0 to 1). It never repeats the same color twice and slowly fills in the space fairly evenly. Perhaps if instead of hue it looped over a 3-D space filling curve (Morton perhaps?), it could generate increasingly large palettes. Aras has a good post on gradients that use the Oklab perceptual color space that may also be useful to your original solution.”
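John’s one-dimensional trick is tiny in practice. Here’s my sketch of it, using Python’s colorsys to turn each hue into RGB (full saturation and value assumed):

```python
import colorsys

PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def golden_hues(n, start=0.0):
    """Step the hue by 1/phi (mod 1) each time: hues never repeat
    and fill [0, 1) fairly evenly as n grows."""
    hue, out = start, []
    for _ in range(n):
        rgb = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
        out.append(tuple(round(c * 255) for c in rgb))
        hue = (hue + 1 / PHI) % 1.0
    return out

colors = golden_hues(8)  # first color is hue 0.0, i.e., pure red
```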
Looking at that StackOverflow post John notes, the second answer down has some nice tidbits in it. The link in that post to “Paint Inspired Color Compositing” is dead, but you can find that paper here, though I disagree that this paper is relevant to the question. But, there’s a cool tool that post points at: I Want Hue. It’s got a slick interface, with all sorts of things you can vary (including optimized for color blindness) and lots of output formats. However, it doesn’t give an optimized sequence, just an optimized palette for a fixed number of colors. And, to be honest, I’m not loving the palettes it produces, I’m not sure why. Which speaks to how this whole area is a fun puzzle: tastes definitely vary, so there’s no one right answer.
Josef Spjut noted this related article, which has a number of alternate manual approaches to choosing colors, discussing reasons for picking and avoiding colors and some ways to pick a quasirandom order.
Nicolas Bonneel wrote: “You can generate LDS sequences with arbitrary constraints on projection with our sampler :P” and pointed to their SIGGRAPH 2022 paper. Cool, and correct, except for the “you” part ;). I’m joking, but I don’t plan to make a third post here to complete the trilogy. If anyone wants to further experiment, comment, or read more comments, please do! Just respond to my original twitter post.
Pontus Andersson pointed out this colour-science Python library for converting to a more perceptually uniform colorspace. He notes that CAM16-UCS is one of the most recent but that the original perceptually uniform colorspace, CIELAB, though less accurate, is an easier option to implement. There are several other options in between those two as well, where increased accuracy often requires more advanced models. Once in a perceptually uniform colorspace, you can estimate the perceived distance between colors by computing the Euclidean distances between them.
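For reference, the textbook sRGB → linear RGB → XYZ → CIELAB pipeline (D65 white point) fits in a few lines. This is my own sketch of the standard formulas, not Pontus’s library:

```python
def srgb_to_lab(r, g, b):
    """Convert 0-255 sRGB to CIELAB (D65 white point)."""
    def lin(c):  # undo the sRGB transfer curve
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear RGB to CIE XYZ (sRGB primaries, D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):  # CIELAB nonlinearity
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_dist(c1, c2):
    """Euclidean distance in Lab -- a decent proxy for small differences."""
    l1, l2 = srgb_to_lab(*c1), srgb_to_lab(*c2)
    return sum((a - b) ** 2 for a, b in zip(l1, l2)) ** 0.5
```

As Pontus notes, Euclidean distance in such a space estimates perceived difference; as you’ll see below, it gets shakier for large distances.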
Andrew Glassner asked the same, “why not run in a perceptual color space like Lab?” Andrew Helmer did, too, noting the Oklab colorspace. Three, maybe four people said to try a perceptual color space? I of course then had to try it out.
Tomas Akenine-Möller pointed me at this code for converting from sRGB to CIELab. It’s now yet another option in my (now updated) perl program. Here’s using 100 divisions (i.e., 0.00, 0.01, 0.02…, 1.00 – 101 levels on each color axis) of the color cube, since this doesn’t take all night to run – just an hour or two – and I truly want to be done messing with this stuff. Here’s CIELab starting with white as the first color, then gray as the first:
Get the data files here. Notice the second color in both is blue, not black. If you’re paying attention, you’ll now exclaim, “What?!” Yes, blue (0,0,255) is farther away from white (255,255,255) than black (0,0,52) is from white, according to CIELab metrics. And, if you read that last sentence carefully, you’ll note that I listed the black as (0,0,52), not (0,0,0). That’s what the CIELab metric said is farthest from the colors that precede it, vs. full black (0,0,0).
I thought I had screwed up their CIELab conversion code, but I think this is how it truly is. I asked, Tomas replied, “Euclidean distance is ‘correct’ only for smaller distances.” He also pointed out that, in CIELab, green (0,255,0) and blue (0,0,255) are the most distant colors from one another! So, it’s all a bit suspect to use CIELab at this scale. I should also note there are other CIELab conversion code bits out there, like this site’s. It was pretty similar to the XYZ->CIELab code Tomas points at (not sure why there are differences), so, wheels within wheels? Here’s my stop; I’m getting off the tilt-a-whirl at this point.
Here are the original RGB distance “white” and “gray” sequences, for comparison (data files here):
Interesting that the RGB sets look brighter overall than the CIELab results. Might be a bug, but I don’t think so. Bart Wronski’s tweet and Aras’s post, “Gradients in linear space are not better,” mentioned earlier, may apply. Must… resist… urge to simply interpolate in sRGB. Well, actually, that’s how I started out, in the original post, and convinced myself that linear should be better. There are other oddities, like how the black swatches in the CIELab are actually (0,52,0) not (0,0,0). Why? Well…
At this point I go, “any of these look fine to me, as I would like my life back now.” Honestly, it’s been educational, and CIELab seems perhaps a bit better, but after a certain number of colors I just want “these are different enough, not exactly the same.” I was pretty happy with what I posted yesterday, so am sticking with those for now.
Mark Kilgard had an interesting idea of using the CSS Color Module Level 4 names and making a sequence using just them. That way, you could use the “official” color name when talking about it. This of course lured me into spending too much time trying this out. The program’s almost instantaneous to run, since there are only 139 different colors to choose from, vs. 16.7 million. Here’s the ordered name list computed using RGB and CIELab distances:
Ignore the lower right corner – there are 139 colors, which doesn’t divide nicely (it’s prime). Clearly there are a lot of beiges in the CSS list, and in both solutions these get shoved to the bottom of the list, though CIELab feels like it shoves these further down – look at the bottom two rows on the right. Code is here.
The two closest colors on the whole list are, in both cases, chartreuse (127, 255, 0) and lawngreen (124, 252, 0) – quite similar! RGB chose chartreuse last; CIELab chose lawngreen last. I guess picking one over the other depends if you prefer liqueurs or mowing.
Looking at these color names, I noticed one new color was added going from version 3 to 4: Rebecca Purple, which has a sad origin story.
Since you made it this far, here’s some bonus trivia on color names. In the CSS names, there is a “red,” “green,” and “blue.” Red is as you might guess: (255,0,0). Blue is, too: (0,0,255). Green is, well, (0,128,0). What name is used for (0,255,0)? “Lime.”
In their defense, they say these names are pretty bad. Here’s their whole bit, with other fun facts:
My response: “Lime?! Who the heck has been using ‘lime’ for (0,255,0) for decades?” I suspect the spec writers had too much lime (and rum) in the coconut when they named these things. Follow up: Michael Chock responds, “Paul Heckbert.”
I have been working on a project where there are a bunch of objects next to each other and I want different colors for each, so that I can tell where one ends and another starts. In the past I’ve simply hacked this sort of palette:
for (my $i=0; $i<8; $i++){
my $r = $i % 2;
my $g = (int($i/2)) % 2;
my $b = (int($i/4)) % 2;
print "Color $i is ($r, $g, $b)\n";
}
varying the red, green, and blue channels between their min and max values. (Yes, I’m using Perl; I imprinted on it before Python existed. It’s easy enough to understand.)
The 8 colors produced:
Color 0 is (0, 0, 0)
Color 1 is (1, 0, 0)
Color 2 is (0, 1, 0)
Color 3 is (1, 1, 0)
Color 4 is (0, 0, 1)
Color 5 is (1, 0, 1)
Color 6 is (0, 1, 1)
Color 7 is (1, 1, 1)
which gives:
Good enough, when all I needed was up to 8 colors. But, I was finding I needed 30 or more different colors to help differentiate the set of objects. The four-color map theorem says we just need four distinct colors, but figuring out that coloring is often not easy, and doesn’t animate. Say you’re debugging particles displayed as squares. Giving each a unique color helps solve the problem of two of them blending together and looking like one.
To make more colors, I first tried something like this, cycling each channel between 0, 0.5, and 1:
my $n=3; # number of subdivisions along each color axis
for (my $i=0; $i<$n*$n*$n; $i++){
$r[$i] = ($i % $n)/($n-1);
$g[$i] = ((int($i/$n)) % $n)/($n-1);
$b[$i] = ((int($i/($n*$n))) % $n)/($n-1);
print "Color $i is ($r[$i], $g[$i], $b[$i])\n";
}
Which looks like:
These are OK, I guess, but you can see the blues are left out until the later colors. The colors also start out pretty dark, building up and becoming mostly light at the end of the set.
And it gets worse the more you subdivide. Say I use $n = 5. We’re then just walking through variants where the red channel walks up by +0.25. Here are the first 10, to show what I mean:
Color 0 is (0, 0, 0)
Color 1 is (0.25, 0, 0)
Color 2 is (0.5, 0, 0)
Color 3 is (0.75, 0, 0)
Color 4 is (1, 0, 0)
Color 5 is (0, 0.25, 0)
Color 6 is (0.25, 0.25, 0)
Color 7 is (0.5, 0.25, 0)
Color 8 is (0.75, 0.25, 0)
Color 9 is (1, 0.25, 0)
The result for 125 colors:
These might be OK if I was picking out a random color, and that would actually be the easiest way: just shuffle the order. After calculating a set of colors and putting them in arrays, go through each color and swap it with some other random location in the array (here the colors are now in arrays $r[], $g[], $b[]):
for (my $i=($n*$n*$n)-1; $i>=1; $i--){
my $idx = int(rand($i+1)); # pick random index from remaining colors, [0,$i]
my @tc = ($r[$i],$g[$i],$b[$i]); # save color so we can swap to its location
$r[$i] = $r[$idx]; $g[$i] = $g[$idx]; $b[$i] = $b[$idx]; # swap
$r[$idx] = $tc[0]; $g[$idx] = $tc[1]; $b[$idx] = $tc[2];
}
Some colors don’t look all that different, and the palette tends to be dark. This can be improved with simple gamma correction:
The bigger problem is that these are just random colors over a fixed range, 125 colors in this case. Sometimes I’m displaying 4 objects, sometimes 15, sometimes 33. With this sequence, the first four colors have two oranges that are not noticeably different – much worse than my original 8-color palette. This was just (bad) luck, but doing another random roll of the dice isn’t the solution. Any random swizzle will almost always give colors that are close to each other in some sets of the first N colors, missing out on colors that would have been more distinctive.
I’d like them to all look as different as possible, as the number grows, and I’d like to have one table. This goal reminded me of low-discrepancy sequences, commonly used for progressively sampling a pixel for ray tracing, for example. Nice run-through of that topic here by Alan Wolfe.
The idea is simple: start with a color. To add a second color, look at every possible pixel RGB triplet and see how far it is from that first color. Whichever is the farthest is the color you use. For your third color, look at every possible pixel triplet and find which color has the largest “closest distance” to one of the first two. Lather, rinse, repeat, choosing the next color as that which maximizes the distance to its nearest neighbor.
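The loop itself is short. Here’s a sketch in Python on a coarsened color cube (216 candidates instead of 16.7 million) so it finishes instantly; it uses plain RGB distances rather than the gamma-corrected ones my Perl uses:

```python
import itertools

def farthest_first_colors(n, levels=6, start=(255, 255, 255)):
    """Greedy farthest-first traversal over a coarsened RGB cube.
    'levels' subdivides each axis (6 -> 216 candidates); the full
    version would test all 256^3 triplets instead."""
    step = 255 / (levels - 1)
    candidates = [tuple(round(i * step) for i in ijk)
                  for ijk in itertools.product(range(levels), repeat=3)]
    chosen = [start]

    def d2(a, b):  # squared RGB distance
        return sum((x - y) ** 2 for x, y in zip(a, b))

    while len(chosen) < n:
        # Pick the candidate whose *nearest* chosen color is farthest away.
        best = max(candidates, key=lambda c: min(d2(c, p) for p in chosen))
        chosen.append(best)
    return chosen

seq = farthest_first_colors(8)  # starts white; black is second, as expected
```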
Long and short, it works pretty well! Here are 100 colors, in order:
I started with white. No surprise, the farthest color from white is black. For the next color, the program happened to pick a blue, (0,128,255), which is (0,186,255) after gamma correction. At first, I thought this third color was a bug. But thinking about it, it makes sense: any midpoint of the six edges of the color cube that don’t touch the white or black corner (these form a hexagon) is equally far from both corner colors (the other RGB cube corners are not).
The other colors distribute themselves nicely enough after that. At a certain point, some colors start to look a bit the same, but I at least know they’re all different, as best as can be, given the constraints.
In Perl it took an overnight run on a CPU to get this sequence, as I test all 16.7 million (256^3) triplets against all the previous colors found for the largest of the closest approach distances computed. But, who cares. Computers are fast. Once I have the sequence, I’m done. Here’s the sequence in a text file, if of interest.
This is a sequence, meant to optimize all groups of the first N colors for any given N. If you know you’ll always need, say, 27 colors, the colors on a 3x3x3 subdivided color cube (in sRGB) are going to be better, because you’re globally optimizing for exactly 27 colors. Here I did not want to find some optimal set of colors for every number N from 1 to 100, but just wanted a single table I could store and reasonably use for a group of any size.
What’s surprising is that none of the other color cube corner colors – red (255,0,0), cyan (0,255,255), etc. – appear in this sequence. If you start with another color than white, you get a different sequence. Starting with a different RGB cube corner results in some rotation or flip of the color sequence above, e.g., start with black and your next color is white, then the rest are (or can be; depends on tie breaks) the same. Start with red, cyan is next, and then some swapping and flipping of the RGB values in the original sequence. But, start with “middle gray” and the next eight colors are the corners of the color cube, followed by a different sequence. Here are the first twenty:
I tried some other ideas, such as limiting the colors searched to those that aren’t too dark. If I want to, for example, display black wireframes around my objects, black and other dark colors are worth avoiding:
This uses a rule of “if red + green + blue < 0.2, don’t use the color.” Gets rid of black, though that dark blue is still pretty low contrast, so maybe I should kick that number up. But dark greens and reds are not so bad, so maybe balance by the perceptual brightness of each channel… Endless fiddling is possible.
I also tried “avoid grays, they’re boring” by having a similar rule that if a test color’s three differences among the three RGB channel values were all less than 0.15, don’t use that color. I started with the green corner of the color cube, to avoid white. Here’s that rule:
Still some pretty gray-looking swatches in there – maybe increase the difference value? One downside is that these types of rules remove colors from the selection set, forcing the remaining colors to be closer to one another.
I could have made this process much faster by simply choosing from fewer colors, e.g., looking at only the color levels (0.0, 0.25, 0.5, 0.75, 1.0), which would give me 125 colors to choose from, instead of 16.7 million. But it’s fun to run the program overnight and have that warm and fuzzy feeling that I’m finding the very best colors, each most distant from the previous colors in the sequence.
I should probably consider better perceptual measures of “different colors”; there’s a lot of work in this area. And 100 colors is arbitrary – above this number, I just repeat. I could probably get away with a smaller array (useful if I was including this in a shader program), as the 100-color list has some entries that look pretty similar. Alternatively, a longer table is fine for most applications, it does not take a lot of space. Computing the full 16.7 million entry table might take quite a while.
There’s lots of other things to tinker with… But, good enough – done puttering! Here’s my perl program. If you make or know of a better, perceptually based, low-discrepancy color sequence, great, I’d be happy to use it.
Addendum: Can’t get enough? See what other people say and more things I try here, in a follow-up blog post.
Since I’m confined to my hotel room today (see the end of the album for why; you won’t be shocked), I organized and annotated some of my #SIGGRAPH2022 photos: https://bit.ly/sig2022pics – after selecting a photo, click on the (i) in the upper right to see descriptions and links.
I’ve started looking over the schedule for SIGGRAPH 2022, for online and in person attendance. Some things I’ve found (the course links below may be the most valuable):
If you’re traveling to Canada (from the US or any country), you must use ArriveCAN before you travel and must be fully vaccinated. You have to fill out ArriveCAN up to 72 hours (but not longer) before your arrival. The app needs to show you a V, I or A after you’ve filled it out. Otherwise you did something wrong and they won’t let you into the country (unless you are Canadian). SIGGRAPH notes this and more info here.
Probably the most important thing on any of the public SIGGRAPH schedule pages is the “spinning person with a black background” vs. the “person sledding or sit-skiing on a blue background” icons – see image at bottom. These are in person vs. virtual. So, for example, all Birds of a Feather sessions are virtual, even those actually during the week of in-person SIGGRAPH days (which I think is a mistake, but let’s go no farther down that path…).
The in-person course descriptions on the SIGGRAPH site don’t include schedules, just speaker lists, at best. Here are two courses’ schedules and other information, online elsewhere:
There are not a lot of in-person courses. Like, six (some with a few sessions), not including the roundtables (whatever those are – they look to each be 15 minutes long). However, if you go to the full program, select courses, and scroll all the way to the bottom, then click on “Courses” under “On Demand,” you’ll suddenly see 14 more virtual courses revealed.
There’s no SIGGRAPH app for the iPhone, etc., this year. I was told “it will be a mobile version of the platform. And it will be available July 25 when the virtual platform launches. Once you log into the virtual platform you will get instructions on how to access the mobile version of the platform.” True. And, so far, the online “for attendees” scheduler is much better than the public site, e.g., you can actually see where Appy Hour will be located (West Building, Exhibit Hall A, near registration).
Worried about ventilation and COVID? This account plans to post live-tweet measures of CO2 (which correlates with COVID risk) at SIGGRAPH (and DigiPro).
My guess is attendance is down (duh), so the odds of running into people I know are higher, though I might not recognize them with masks on (so I recommend we all wear sashimono).
For off-the-beaten-path things to do in Vancouver, see Atlas Obscura. Gassy Jack sounds like a character from Borderlands. Also, consider this art exhibit. And, there’s a free Vancouver Murals app – it’s mural festival week, e.g., at 437 Hornby a mural is in progress.
I’ll update this post as I learn more tidbits – feel free to write me at erich@acm.org (yes, there, I did it, I put my email address).
I expect most of us have a passing knowledge of physical units for lights. We have some sense of what lumens and candelas are about, we’ve maybe heard of nits with regard to our monitors, and maybe have a vague sense of what lux is about. This was, at least, my level of understanding for the past, let’s see, 38 years I’ve been in the field of computer graphics. My usual attitude with lights was (and still is, most of the time), “make them brighter” if the scene is too dim. That’s all most of us normally need, to be honest.
These past months I’ve been learning a fair bit about this area, as proper specification of lights is critical if you want to, for example, move a fully modeled scene from one application to another, or are merging real-world data with synthetic. APIs and programs with “0.7” or “90% brightness” or other relative units don’t hack it, as they are not anchored to any physical meaning. So, here’s my summary of the four main light units, with others mentioned along the way. My focus is on the practical, real-world use of these units. Some of this knowledge was hard won, for me. Lux, in particular, is a term where I have been misled by many pages on the internet that attempt to define it. My thanks to Luca Fascione and Anders Langlands in particular for correcting me along the way. I may still have a bug or two in this post (though am trying hard not to), so tell me if I do and I’ll fix it: erich@acm.org.
You’ll see similar tables in many other places, including page 272 of our own book. I like theirs better: more columns. Radiometry is concerned with any electromagnetic radiation – radio, microwaves, x-rays, etc.; photometry factors in how our eyes respond to light, described by the luminous efficiency function (well, functions: there’s the photopic function, for brightly lit conditions, and the scotopic, for dim). I’m focusing on photometric units.
Radiant energy: like it says, energy, some total amount of radiation, basically. Luminous energy: same, modified by the eye’s response. I say “forget them” as far as graphics goes – I’ve never seen energy (vs. power) get used for light in the field of computer graphics. All the units that follow are in terms of “per second,” and those are the ones you’ll see used in describing lights in the real world and computer graphics. Begone, Talbots!
Luminous flux: measured in lumens, this is what you’ll see on the box for most light bulbs you buy nowadays. It’s the power of the light. Think of it as the number of photons emitted per second, again modulated by the luminous efficiency function. Incandescent bulbs are (were) sold as “60 Watts” or similar. This rating refers to the incoming amount of power, not what the bulb itself produces. With LED bulbs being 6x or more efficient at converting power to light than incandescents, you’ll see “60W equivalent – 800 lumens” on packaging, since such LEDs actually draw around 9 Watts. This ratio of lumens to Watts is the luminous efficacy, not to be confused with luminous efficiency, noted earlier. Welcome to the first of many instances where the English language mostly runs out of words for describing light, resulting in a lot of similar-sounding terms (and I won’t even begin to discuss “exitant radiance” vs. “radiant exitance” – see PBR). That’s about all you need to know about basic light bulbs.
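If you want to see the efficacy arithmetic spelled out, here’s a trivial sketch using only the packaging figures quoted above (nothing more authoritative than that):

```python
# Luminous efficacy = lumens of visible light out per watt of electrical
# power in. Figures are the "60 W incandescent" and "60 W equivalent,
# 800 lumen, ~9 W" LED examples from the text.

def luminous_efficacy(lumens, watts):
    """Lumens produced per watt of electrical power drawn."""
    return lumens / watts

incandescent = luminous_efficacy(800, 60)  # classic 60 W bulb
led = luminous_efficacy(800, 9)            # "60 W equivalent" LED

print(f"incandescent: {incandescent:.1f} lm/W")  # ~13.3
print(f"LED:          {led:.1f} lm/W")           # ~88.9
print(f"LED is about {led / incandescent:.1f}x as efficacious")  # ~6.7x
```

That last ratio is where the “6x or more” claim comes from.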
If you’re curious about how manufacturers measure lumens in bulbs they produce, go down the integrating sphere rabbit hole. Invented around 1900, you put a bulb in one part and a detector at another. To eliminate any directionality from the light source, its photons usually do 10-25 scatters inside the sphere before being detected. This geometric series of scatters converges to a simple formula and gives a resulting lumens value. These devices can also be used to measure reflectivity of materials. Buy one now or build your own (no, don’t do either of those).
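As for why that geometric series of scatters “converges to a simple formula,” here’s a toy model (my own sketch, not a calibration formula) assuming a uniform wall reflectance rho:

```python
# Toy model of the integrating-sphere bounce series: each scatter off the
# sphere wall retains a fraction rho of the flux, so the total wall flux is
# rho + rho^2 + rho^3 + ..., which converges to rho / (1 - rho).

rho = 0.95  # assumed wall reflectance; real coatings are in this ballpark

# Brute-force partial sum of the bounce series...
partial = sum(rho**k for k in range(1, 1000))
# ...versus the closed form it converges to:
closed = rho / (1.0 - rho)

print(f"summed 999 bounces: {partial:.4f}")
print(f"closed form:        {closed:.4f}")  # 19.0
```

Note that for rho = 0.95 the average photon scatters about 19 times before detection, consistent with the 10-25 scatters mentioned above.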
Luminous intensity: measured in candelas (abbreviated as cd). You might know a bulb gives 800 lumens, but that’s the total power. Even a bulb has a base where it screws into the socket, so no light is going that direction. What you’d more likely want to know is how many photons per second (again, modulated by the luminous efficiency function for our eyes) are going in a particular direction. It’s a more precise measure of how much light a surface is receiving, dividing up the lumens over the “actually emitting” part of the bulb.
From our table, you can see candelas are lumens divided by steradians, sr, a measure of solid angle. For a perfect point light, emitting equally in all directions, we get the luminous intensity by dividing the luminous flux by the solid angle of a sphere, which is 4*pi steradians, so 800 lumens/12.57 sr = 63.6 candelas. However, a real bulb has a base that blocks emission. For example, this bulb says it emits over 150 degrees (out of 180). Using this calculator, putting in 300 degrees (150 * 2), the effective intensity of the bulb is 68.2 cd, a bit higher than our 63.6 “isotropic emitter” estimate.
Spotlights, flashlights, laser pointers, and other directed light sources are most sensibly described using candelas, sometimes as “maximum beam candlepower” (MBCP). Imagine we have a flashlight that is described as providing 100 lumens over a 20 degree wide beam. By the calculator, this would give 1047 cd – pretty bright. Oddly, most consumer flashlights and similar are marketed by lumens, not candelas. I expect this is because we’re just getting used to lumens on our lightbulb packaging and have no idea what candelas are. But the beam angle matters: if a 100 lumen flashlight instead has a 10 degree uniform beam, the intensity goes up to 4182 cd. Here’s, in fact, a flashlight along these lines, one listed as 100 lumens and 4200 cd, so I’m guessing its beam angle is indeed 10 degrees. Note you’ll also see absolute lies out there, such as this million lumen flashlight. For comparison, the brightest DIY flashlight I know of is this amusing 1.4 million lumen monster.
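You don’t need the online calculator, by the way; the solid angle of a cone with full apex angle theta is 2*pi*(1 - cos(theta/2)), and that one formula reproduces all the numbers above:

```python
import math

# Candelas from lumens and beam angle, using the cone solid-angle formula
# omega = 2*pi*(1 - cos(theta/2)). Reproduces the calculator results
# quoted in the text.

def cone_solid_angle(beam_deg):
    """Solid angle (steradians) of a cone with full apex angle beam_deg."""
    return 2.0 * math.pi * (1.0 - math.cos(math.radians(beam_deg / 2.0)))

def candelas(lumens, beam_deg):
    return lumens / cone_solid_angle(beam_deg)

print(f"{candelas(800, 360):7.1f} cd")  # isotropic bulb: 800 / 4*pi = 63.7
print(f"{candelas(800, 300):7.1f} cd")  # bulb emitting over 300 degrees: 68.2
print(f"{candelas(100, 20):7.1f} cd")   # flashlight, 20 degree beam: ~1047
print(f"{candelas(100, 10):7.1f} cd")   # same lumens, 10 degree beam: ~4182
```

A 360 degree “beam” is the full sphere, which recovers the isotropic point-light case from earlier.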
A more elaborate and accurate way to describe a light’s emission is to provide an IES profile (another IES collection here). A profile is a simple text file describing candelas emitted in a latitude-longitude type of mapping. Find more format information here, here, and here, for starters. Or just skip to here (thanks to BellaRender for the tipoff).
A candela, by the way, is indeed related to a candle. It has a fancy physics definition nowadays, but used to be things like “one candlepower is the light produced by a pure spermaceti candle weighing one sixth of a pound and burning at a rate of 120 grains per hour.”
Illuminance: measured in lux. Here’s where the internet is a morass of poorly written, confused, or just plain misleading information. This unit is the main reason I’m writing this piece. Illuminance depends purely on three things: the detector’s position, its orientation, and its shape (but not size). I’ve seen way too many pages saying things like illuminance being “about how much light an area receives” – no, the area is irrelevant. There are other mentions relating the unit to lux and candelas, with a light (somehow) shining on (only) a square meter a meter away – technically correct, but useless for understanding.
I’m not sure if this will help or hurt: Imagine you have a flat little piece of whatever that detects visible light, sitting on some surface. It merrily records some number of photons per second, weighted by the luminous efficiency function, as usual. Divide this weighted value by the number of square meters your detector covers and you have the illuminance (in some form – you’d actually need some constant to convert to lux). You’re dividing by the detector’s size, so all that’s left is its location and which direction it’s pointing. In photography, this is an “incident light meter,” a separate device you put in your scene and point in a direction, vs. a “reflected light meter,” which your in-camera light meter is.
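If code helps, here’s a toy check that the detector’s size cancels out. The light’s intensity and the distance are made-up values; the point is only that flux/area comes out the same for any small detector:

```python
# Toy check that illuminance doesn't depend on detector size: for a small
# flat detector facing a point light, the flux it catches scales with its
# area, so flux divided by area is the same for any size.

intensity_cd = 1000.0  # assumed point light intensity, candelas
distance_m = 2.0       # assumed detector distance

def illuminance(area_m2):
    # Flux through the detector: intensity times the solid angle it subtends
    # (small-detector approximation: omega ~= area / distance^2).
    flux_lm = intensity_cd * (area_m2 / distance_m**2)
    return flux_lm / area_m2  # lux

for area in (0.0001, 0.01, 1.0):
    print(f"area {area:>7} m^2 -> {illuminance(area):.1f} lux")
# Every line prints the same 250.0 lux: intensity / distance^2.
```

Only position and orientation survive the division; the area is gone.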
Wikipedia has a particularly good, detailed page on light meters, which are used to detect lux in a scene. Highlights: a hemispherical receptor shape is more useful in photography than a flat detector – your subjects are likely curved surfaces pointing in a bunch of directions, not flat and pointing in a single direction. This hemisphere shape leads to a cardioid falloff with angle (instead of a cosine, for a flat detector). There’s a “constant C” that varies for incident meters, a matter of taste in exposure value. Me, for yuks I bought a cheap illuminance meter that gives lux values, though it has a dubious receptor shape (hemisphere recessed inside a black bowl – what?). Still, point it at the sun and it gives a reasonable value. Update: there are apps for measuring lux with your phone – no idea how good these are.
While lumens and candelas are directly associated with the light and nothing else, lux is associated with the scene being lit. The incoming photons are from wherever. Various environments have different typical lux ranges; here’s a pretty reasonable typical table on Wikipedia. Note the range of illuminance is incredible: 0.3 lux for the full moon to 100,000 lux for direct sunlight. Beware, however, of the internet, as this similar table, also on Wikipedia, says full daylight is only 10,000 lux – a factor of 10 difference! In this case I think this second table is just plain wrong. (Update: I fixed this table on Wikipedia, adding “Sunlight” to it, and directly referenced this original source.)
But, also recall that the direction the incident meter is pointed matters. Straight up will give a different reading than pointing it directly at the sun, for example (unless the sun’s directly overhead, of course). For designing an interior space, you’ll see terms like “horizontal illuminance”, e.g., for a desk or other work surface and “vertical illuminance,” for what illumination a wall or similar receives.
Part of the variance I see in these tables I believe depends on what you’re measuring. For example, this table, yet another on Wikipedia, lists a full moon on a clear night as 0.25 lux (well, 25 centilux) and moonlight as 1 lux. I assume the latter includes reflections off the surroundings, including clouds, i.e., direct vs. global illumination. Like the moon, lights are sometimes rated by lux. A filmmaker, photographer, or set designer may not care so much about a light in terms of lumens or candelas, but rather cares how much light is reaching the subject. Light panels, for example, are described in terms such as “the Lume Cube 2.0 puts out 750 lux at 1m.” Lux is also handy for the sun and moon, where no other unit makes a heck of a lot of sense (or has a lot more zeros), and where the distance from the source is essentially constant and so can be ignored. Conversion for local lights is easy: divide candelas by the square of the distance and you get lux for the light (assuming the receptor is facing the light). For area lights, the five-times rule is useful.
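That candelas-to-lux conversion is one line of code. A sketch using the Lume Cube figure (750 lux at 1 m implies roughly 750 cd along the beam direction):

```python
# Inverse-square conversion: lux = candelas / distance^2, for a receptor
# facing the light. The 750 cd figure is inferred from the "750 lux at 1 m"
# panel rating quoted in the text.

def lux_at(candelas, distance_m):
    return candelas / distance_m**2

panel_cd = 750.0
for d in (1.0, 2.0, 3.0):
    print(f"{d} m: {lux_at(panel_cd, d):6.1f} lux")
# 1.0 m:  750.0 lux
# 2.0 m:  187.5 lux
# 3.0 m:   83.3 lux
```

The five-times rule mentioned above is about when this point-light approximation becomes reasonable for an area light: at five times the panel’s largest dimension or more, the inverse-square formula is close enough.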
To confuse things a bit, you can also describe an area light’s output in terms of lumens per square meter. Note that this same SI unit description – lumens per meters-squared (lm/m^2) – describes lux, but isn’t called lux when used for emission. In this case, the area is emitting a certain amount of visible light, again divided by area. When emitted, this is called luminous exitance or luminous emittance. That said, this sort of area emission is often better described by our last physical unit…
Luminance: measured in nits (well, candelas per square meter is the SI name – “nits” are not an official part of the SI system, but this unit name is commonly used; there are many obscure units for luminance that I’ve rarely seen employed). Luminance is a measure of light along a given direction. When you take a picture, you’re capturing luminance (well, after converting to grayscale). Your camera, of course, has its exposure adjusted so that we can see something reasonable, but it’s taking in luminance at each pixel and remapping this value in some way. When you let your camera use its through-the-lens reflected light meter to figure out how much to expose a shot, you’re depending on some average, weighted, or spot luminance reading it detects. As an aside, measuring luminance is not always a great way to shoot photos. This article explains – and gives practical examples – why using an incident light meter to capture the illuminance instead can be better.
“Nits” is from the Latin nitere, to shine. This unit name is often used to characterize the brightness of flat screens (though I won’t stop you if you use it for any surface, e.g., a reflector). For example, monitors, laptops, and mobile devices are typically 200-300 nits, though a 13-inch MacBook can max out at 548 nits. Televisions can be as bright as 1000 nits or more. The sun is about 1.6 billion nits (which I see quoted a lot, but I’m not sure where this number comes from). The filament (tiny area) of a clear incandescent bulb is 7 million nits (video where I saw this number forgotten – update: the book Vision says on p. 54 that it’s a million mL, which is about 3 million nits). This unit makes sense as a measure for area light sources. That said, be careful that you don’t assume angle doesn’t matter. As this page shows, at a 70-degree angle from perpendicular, “lightness” on real displays is between 50% and 75% of the maximum nit value. “Nits” defines the peak luminance when facing the emitter straight on.
Luminance from a surface is constant along a direction, no matter the distance. Think about looking at a blue screen of death (BSOD, to fans) in a darkened room – really, assume the screen is all blue. You walk closer to the monitor. While you yourself are more illuminated by it (since you’re nearer the light source), any location on the screen itself is not brighter. The blue stays constant; there’s just more of it in your field of view as you approach it.
The radiometric equivalent of “luminance” is “radiance,” a term you should likely know, as this is what a physically based renderer typically uses for computation under the hood (or spectral radiance, but I’m trying to keep away from spectra and color in this already overlong post). It’s a key quantity for us, since it is independent of distance. When we shoot a ray from the eye, our goal is to compute the luminance for that ray, for display. If we’re rendering a BSOD monitor, being closer just means its emission covers more pixels of the image we form, not that any pixel on its screen or our image is brighter. This of course works for any object viewed (ignoring atmospheric effects): you look at the walls of the Green Monster (hey, I live next to Boston), it doesn’t matter whether you’re behind home plate or in the cheap seats, it’s the same amount of green along any given direction.
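A back-of-the-envelope sketch of that distance invariance, with made-up numbers: double your distance to a small emitting patch and the flux reaching your pupil drops 4x, but the solid angle the patch subtends also drops 4x, so flux per unit solid angle (what a pixel “sees”) is unchanged.

```python
# Toy demonstration that luminance is distance-invariant. All quantities
# are made-up values; only the ratio matters.

patch_area = 0.01  # m^2, the bit of screen we're looking at
pupil_area = 1e-5  # m^2, the eye's aperture
exitance = 100.0   # lm/m^2 emitted by the patch (assumed)

def per_pixel_signal(distance_m):
    # Flux reaching the pupil falls off as 1/d^2...
    flux_at_eye = exitance * patch_area * pupil_area / distance_m**2
    # ...but so does the solid angle the patch subtends.
    solid_angle_of_patch = patch_area / distance_m**2
    return flux_at_eye / solid_angle_of_patch  # proportional to luminance

print(per_pixel_signal(1.0))   # same value...
print(per_pixel_signal(10.0))  # ...ten times farther away
```

The two 1/d^2 factors cancel, which is exactly why a renderer can carry radiance along a ray without tracking how far the ray has traveled.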
[Addendum: Not formal enough? I like the explanation here, which points to where to get more detail, so I’ll quote it: “radiance is defined in terms of the photons flowing through a given patch of surface, with directions within a given cone, and then taking a limit as the surface patch and cone sizes go to zero.”]
Last unit – done! And, my plea: please try to avoid using the word “intensity” in your UI and documentation if you don’t mean luminous intensity. I’m likely fighting the tide here (I tried, once; it was too late), but “intensity” has a real-world physical unit meaning, “luminous intensity” and candelas. I’d use “brightness” or perhaps “multiplier” if you are using a non-physical light and just want to adjust its effect. Think of the children!
Your homework assignment: if you were building a physically based rendering system, what units would you use for light sources? Point, spotlight, directional (at infinity) light, environment (dome or image-based lighting, aka IBL), and area lights of various sorts (subdivide into flat and other, if you prefer). Some are pretty obvious, at least to me, some are tricky, some might allow multiple representations, holding some value constant. For example, think what happens if you change the shape or size of an area light. Did it do what you expected? Anyway, a thing for a future blog post.
More info: A reasonable free book, Light Measurement Handbook, on the subject of light and physical units from 1998 is found here and here. Note it is in the camp of “correct but misleading” with what “lux” means, e.g., page 32 implies lux depends on sensor area – it doesn’t.
Sebastien Lagarde’s discussion of lights in Frostbite is worthwhile. It delves deeply into representations of various light source types for their game engine, including various formulae for evaluating area lights.
For a serious book on the subject, see Introduction to Radiometry and Photometry. The first 52 pages of this book are online at Google Books (click the link). It even has a chapter on ray tracing, which you can get a glimpse of here.
Update: check the tweet replies for even more physical-light-unit goodness.
One more late addition, since the PDF is now available: the PhysLight system, used by Wētā and others. See the “/docs” area for the PDF. I spent a fair bit of time verifying that the equations there work for naive users like me. Magic: you really can go from a lux reading, a camera with given exposure settings, and a standard 18% gray reference and get out just about what the pixel value is. For some reason, my results matched more closely inside than outside – my guess is the lux meter’s weird sensor shape. Maybe if I get a (self-assigned) budget higher than $40 for equipment I might get better results…
Incredibly detailed series of 10 articles by the inimitable Jacco Bikker about efficient ray tracing. Don’t be fooled by the title “How to build a BVH,” it also lays out data structures for textured triangles and much else for making a modern ray tracer, finishing up with using OpenCL (!) to ray trace on the GPU. Tip: for the code, build and run each project separately in VS 2019, or you’ll get “cannot open program database” compilation errors. Also, WASD+FR (which, oddly, the program receives and responds to even while I’m typing in this window) and numpad for rotation.
Three.js path tracer. Sure, I prefer dedicated hardware to accelerate ray tracing, but this site presents a wide-ranging effort. The sheer number of different browser programs offered, all just a click away, is great. I’ve checked out only a handful so far, such as the quadric shape explorer (pro tip: WASD+QZ for moving around). Bonus: the README page includes rare photos of Arthur Appel, the first person to publish anything about ray tracing, back in 1968.
You can render a sphere nicely without ray tracing, but it’s certainly more work and with quite a few challenges to overcome. Ben Golus has an extremely in-depth article about techniques for doing so in Unity.
Origami Simulator in WebGL for the browser, from MIT. Won’t teach you how to fold – all folds happen at once – but it’s fun to look through the patterns.
The Reverse Phi Illusion is so impressive. There is movement between frames, but it repeats. Play with the controls to see what’s going on frame-wise.