Seven Things for April 3, 2016

The things:

Reuse of photos of public domain paintings, and of wire-frame models

It’s rainy out, and I’m trying to avoid coding for Mineways and collecting code for JGT, so time to blog a little.

Some years ago I read the book The Public Domain about copyright and learned an interesting tidbit: photos of public domain paintings or photographs are not covered by copyright in the U.S.; they’re free to reuse.

Here’s the relevant bit from Wikipedia:

Reproductions of public domain works

The requirement of originality was also invoked in the 1999 United States District Court case Bridgeman Art Library v. Corel Corp. In the case, Bridgeman Art Library questioned the Corel Corporation’s rights to redistribute their high quality reproductions of old paintings that had already fallen into the public domain due to age, claiming that it infringed on their copyrights. The court ruled that exact or “slavish” reproductions of two-dimensional works such as paintings and photographs that were already in the public domain could not be considered original enough for protection under U.S. law, “a photograph which is no more than a copy of a work of another as exact as science and technology permits lacks originality. That is not to say that such a feat is trivial, simply not original”.[30]

Another court case related to threshold of originality was the 2008 case Meshwerks v. Toyota Motor Sales U.S.A. In this case, the court ruled that wire-frame computer models of Toyota vehicles were not entitled to additional copyright protection since the purpose of the models was to faithfully represent the original objects without any creative additions.[31]

The wire-frame case is obviously relevant to computer graphics. There’s a rundown of other countries’ laws on Wikimedia Commons’ site.

Private collections are within their rights to limit access as they wish, as misguided as I think it is to sell public domain works to the public. I do have a problem with any public institution claiming copyright protection for photos of such works, since there’s no legal basis for it.

The Public Domain is free to download and worth a read. To be honest, after a bit I skimmed chapter 2, but I particularly enjoyed chapter 7, a case study in which the U.S.’s more permissive rules on what is in the public domain (“sweat of the brow” works are not copyrightable in the U.S.) are contrasted with Europe’s more restrictive laws.

Oh, and if you like to read about copyright (you weirdo), you might enjoy The Idealist: Aaron Swartz and the Rise of Free Culture on the Internet. The second half is worthwhile, though quite sad, and a story I suspect many of us know to some extent. The first half is about the evolution of copyright laws in the United States, which went from being a haven for piracy of foreign (primarily English) works to an ardent defender of extending copyright almost into perpetuity (despite there being no incentive benefit in extending copyright retroactively, since the term in force when a work was created was evidently incentive enough for its author to create it; sorry, I feel a rant coming on…). Ahhh, imagine this alternate universe. <– That’s a great link, by the way, well worth a click.

Seven Things for April 2, 2015

I haven’t made one of these link posts in a while. This one’s recent news; the ones to come will have more fun stuff.

sadly, human computers mostly got “calculate this boring number” assignments and very rarely got “if i was james counterstrike and i fired this rpg at this nightorc tell me how many gibs would come out”: one of history’s true missed opportunities

WebGL Links Page

I got tired of re-finding various useful WebGL and three.js links, so I made a page:

http://www.realtimerendering.com/webgl.html

What cool things am I missing?

I’ve made it a page of links I am likely to want to check out in the future. It’s a bit hard to draw the line. For example, I didn’t bother adding fun demos such as this and this, but I did add the page where I browse new demos. I don’t list development systems such as Goo Create for non-programmers, which is built on this open-source WebGL engine and has some interesting features. Nice things all, but I personally am unlikely to come back to them (or if I do, they’re now in this blog post).

One more variable…

Michael Cohen was looking at John Hable’s useful test image:

He noticed an odd thing. Looking at the image on his monitor (“an oldish Dell”) from across the room, it looked fine; the 187 area matched the side areas:

[image: viewed from across the room; the 187 square matches the side bars]

(yes, ignore the moires and all the rest – honestly, the 187 matches the side bars.)

However, sitting at his computer, the 128 square then matched the side bars:

[image: viewed up close; the 128 square matches the side bars]

Pretty surprising! We finally figured it out: it’s the viewing angle. Going off-axis resulted in a considerably different gray appearance. Moving your head left and right didn’t have much effect on the grays, but up and down had a dramatic effect.

Even knowing this, I can’t say I fully understand it. I get the idea that off-axis viewing can affect the brightness, but I would have thought this change would be consistent: grays would be dimmed down by the same factor as whites. That last image shows this isn’t the case: the grays may indeed be dimmed down, but the alternating white lines are clearly getting dimmed down more than twice as much, such that they then match the 128 gray level. It’s like there’s a different gamma level when moving off-axis. Anyone know the answer?
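Here’s a back-of-the-envelope sketch of that “different gamma” idea. It’s purely my own speculation and assumes the panel’s off-axis response is still a simple power law, which a real panel may well not obey:

```python
import math

# If a solid encoded gray v visually matches alternating 0/255 lines (which
# average to half of white), the effective display gamma satisfies
# (v/255)^gamma = 0.5.
def effective_gamma(matching_gray):
    return math.log(0.5) / math.log(matching_gray / 255.0)

print(effective_gamma(187))  # ~2.2  (on-axis: the expected sRGB-ish gamma)
print(effective_gamma(128))  # ~1.0  (off-axis: the response collapses toward linear)
```

If that guess is anywhere near right, the off-axis image isn’t just dimmer; the effective gamma drops to roughly 1, which would explain why the grays shift so dramatically.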

Addendum: and if you view the first image on the iPhone, you get this sort of thing, depending on your zoom level. Here’s a typical screen shot – I’ve trimmed off the right so that the blog will show it pixel for pixel (on desktop computers – for mobile devices you’re on your own). This darkening is from bad filtering; see the end of this article.

[image: the test image as displayed on an iPhone]

Follow-up: one person commented that it’s probably a TN panel. Indeed, there’s a video showing the tremendous shift that occurs. The blue turning to brown is particularly impressive. I haven’t yet found a good explanation of what mechanism causes this shift to occur (Wikipedia notes other monitors having a gamma shift, so maybe it is something about gamma varying for some reason). There’s a nice color shift test here, along with other interesting tests.

Even better: check out this amazing video from Microsoft Research. (thanks to Marcel Lancelle for pointing it out)

Sci-Hub

Here’s a fascinating article on Sci-Hub, the “Pirate Bay” of scientific research papers. Really, go read it.
 
My sympathy lies with Alexandra Elbakyan. The key point to me is that researchers already informally download papers or ask other researchers for preprints; Sci-Hub just makes that process less time-consuming. In physical terms, the very minor value added by the final copy vs. the author’s draft is the main thing being “stolen”. Given that researchers make no royalties off the papers, there’s no loss to them there. The main thing journals sell is prestige.
 
That said, I don’t want journals to die on the vine; they deserve some money (though certain publishers seem way too profitable). I don’t see a good solution that satisfies these constraints:
  • Research papers should be free to anyone to access, especially since the authors do not earn royalties and want their papers to be read.
  • Publishers deserve to eat. Update: by which I mean, whoever is hosting and maintaining the journal deserves some reasonable amount of money. I don’t subscribe to the “people making buggy whips should have their jobs maintained and the automobile should be outlawed” school of thought.
In a sense, we already have a solution: author pre-prints are sometimes available on their websites. Google Scholar does a fairly good job finding these. But gathering these pre-prints on a single site is considered illegal; pre-prints themselves are probably illegal, since the publisher usually owns the final article, but publishers rarely go after their unpaid writers, the researchers, to make them take their own work off their own sites. So the pre-print solution is not very good: coverage is spotty at best, and the number of authors’ sites decreases over time as they move or die. A more permanent repository is needed.
One solution is a one-time fee paid to the journal to coordinate peer review (which is usually done by an unpaid researcher serving as editor anyway) and publish the article (layout is done by the researchers themselves, which is the norm in my field). If these fees were, say, $200 for a 12-page paper, great (well, not great, but at least understandable). For the publishers that allow this form of payment, it’s more like $2000 on up.
 
Another solution is to no longer use a paid publisher. Online journals such as the Journal of Computer Graphics Techniques involve no paid publisher at all; a university provides permanent storage and distribution of the contents. In this model there are literally no costs to the researchers or readers; the university or other institution is simply pursuing its mission of disseminating information. There are plenty of other things for publishers to publish and market and distribute, so they’ll still eat.
 
Back when journals and article reprints were on paper, and when layout and distribution were handled by the publisher, $25-per-paper costs made sense. The internet and websites aren’t free, but they’re nearly so. So why the high fees? Because they can.

A PNG Puzzle

Last post was too long, covering too much terrain. Here’s a puzzle instead which whittles it all down.

What values do you store in an sRGB PNG to display a perceptually half-gray color, with an alpha of 0.5?

If you’re an absolute expert on PNG and perception and alpha, that’s all the information you need. Just in case, to make sure you don’t break any rules, here are the key bits:

  1. A perceptually half-gray color on the screen is (187,187,187), not (128,128,128). See the image below to prove this to yourself, which is from John Hable’s lovely article.
  2. Your PNG is saving values in sRGB space. No extremely-rare gamma = 1.0 PNG for you.
  3. Alpha is coverage. The PNG spec notes, “The gamma value has no effect on alpha samples, which are always a linear fraction of full opacity.”
  4. PNG alphas are unassociated, they do not premultiply the color. To display your sRGB PNG color composited against black, you must multiply it by your unassociated alpha value.

So, what do you store in your PNG image to get a half-gray color displayed, with an alpha of 0.5? A few hints, and then the answer, follow the image below.

Horizontal fully-black and fully-white lines combine to a half-gray, represented by 187. That’s sRGB in action:

Hint #1: a half-gray color with an alpha of 1.0 (fully opaque) is stored in a PNG by (187,187,187,255).

Hint #2: if a PNG could store a premultiplied color, the answer would be (187,187,187,128).

Hint #3: to turn a premultiplied color into an unassociated color, divide the color by the (fractional) alpha.

And just to have something between you and the answer, here’s this, from I wish I knew where.

[image]

The answer is (255,255,255,128), provided by Mike Chock (aka friedlinguini), who commented on my post – see the comments below. My answer was definitely wrong, so I’ll explain why this answer works.

The PNG spec notes, “This computation should be performed with intensity samples (not gamma-encoded samples)”. So, to display an sRGB-encoded PNG, you must do the following:

  1. Convert the sRGB color to linear space. For (255,255,255,128) this gives (1.0,1.0,1.0).
  2. Now multiply in the alpha, to get a linear premultiplied value. Times (128/255) -> 0.5 gives (0.5,0.5,0.5).
  3. Convert this value back to sRGB space and display it. This gives (187,187,187) as the color to display.
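Here’s that sequence as a minimal Python sketch; the helpers are just the standard sRGB transfer equations, and the function names are mine:

```python
def srgb_to_linear(c):   # c in [0, 1], standard sRGB decode
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):   # standard sRGB encode
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def display_over_black(rgba8):
    """Display an unassociated sRGB PNG pixel over black, per the spec."""
    alpha = rgba8[3] / 255.0
    out = []
    for c in rgba8[:3]:
        lin = srgb_to_linear(c / 255.0)                 # 1. sRGB -> linear
        lin *= alpha                                    # 2. multiply in the alpha
        out.append(round(linear_to_srgb(lin) * 255.0))  # 3. linear -> sRGB
    return tuple(out)

print(display_over_black((255, 255, 255, 128)))
# -> (188, 188, 188): the ~187 perceptual half-gray (187 vs. 188 is just rounding)
```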

Me, I thought that PNGs with sRGB values and alphas were displayed by simply multiplying the sRGB by the stored alpha. Wrong! At least, by the spec. How could I think such a crazy thing? Because every viewer and every browser I tested showed this to be how such a PNG was displayed.

So, I’m very happy to find PNG is not broken; it’s simply that no one implements it correctly. If you do know some software that does display this image properly (your browser does not), let me know – it’ll be my example of how things should work.

Update: as usual, Jim Blinn predates my realizations by about 18 years. His article “A Ghost in a Snowstorm” (collected in the book Notation, Notation, Notation; most of this article can be found here) talks about the right way (linearization) and the errors caused by the various wrong ways of encoding alpha and sRGB. Thanks to Sean Barrett for pointing it out.

My conclusion remains the same: if you want fun puzzles and you’re near a big city, check out The Puzzled Pint, a great free social puzzle event each month.

For the record, here’s my original wrong answer:

The answer is (373,373,373,128). To display this RGBA correctly, you multiply by the alpha (and divide by 255, since the value 128 represents 0.5) to get (187,187,187).

And that’s the fatal flaw of sRGB PNGs in a nutshell: you can’t store 373 in 8 bits in a PNG. 16 bits doesn’t help: PNGs store their values as fractions in the range [0.0, 1.0].

No linearization or filtering or order of operations or any such thing involved, just a simple question. Unfortunately, PNG fails.

Wrong answers include:

  • (187,187,187,128) – this would work if PNG had a premultiplied mode. It does not, so this color would be multiplied by 0.5 and displayed as (94,94,94). That said, this is a fine way to store the data if you have a closed system and no one else will ever use your PNGs.
  • (187,187,187,255) – this will display correctly, but doesn’t keep the alpha around.
  • (255,255,255,128) – this gives you a display value of (128,128,128) for the color, which Hable’s image shows is not a perceptual half-gray. If you used the PNG gamma chunk and set gamma to 1.0, this would work. Almost no one uses this gamma setting (it causes banding unless you use 16 bits) and it’s rarely supported by most tools.
  • (255,255,255,187) – you break the PNG spec by sRGB correcting the alpha. This will actually display correctly, (187,187,187). If you composite this image over some other image with an alpha, this wrong alpha fails.
  • (255,255,255,187) again – you decide to “remember” the alpha is sRGB corrected and will uncorrect it before using it as an alpha elsewhere. If you want to break the spec, better to go with storing a premultiplied color, the first wrong answer. This fix is confusing.
  • (255,255,255,128) again – you store the correct alpha, but require that you first convert the stored color from sRGB to linear before applying the alpha, then convert the color back to sRGB to display it. This will work, but it defies radiance and alpha theory, it’s convoluted, expensive, super-confusing, not how anyone implements PNG display, and not how the spec reads, as I understand it. Better to just store a premultiplied color.

I wish my conclusion was wrong, but I don’t see any solution short of adding a new chunk to the PNG spec. My preference is adding a chunk that notes the values are stored as premultiplied.

In the meantime, if you want solvable puzzles and you’re near a big city, check out The Puzzled Pint, a great free social puzzle event each month.

Addendum

Zap Andersson debated this puzzle with me on Facebook, and many thanks to him. He prefers the solution (255,255,255,128), applying the alpha “later.” To clarify, here’s how PNGs are normally interpreted (and I think this follows the spec, though I’d be happy to be proven wrong, as then PNG would still work, even if no viewer or browser I know currently implements it correctly):

To display a PNG RGBA in sRGB: you multiply the RGB color by the alpha (expressed as a fraction).

The “later” solution to display a PNG RGBA in sRGB: you convert the sRGB number stored to a linear value, you then apply the alpha, and then you convert this linear value back to sRGB for display.

I like this, as convoluted as it is, in that it makes PNG work (I really don’t want to see PNG fail). The problem with this solution is that I don’t think anyone does it this way; browsers certainly don’t.

The other interesting thing Zap points out is this page, which points to this even more relevant page. My takeaway is that I shouldn’t talk about 187-gray as the perceptually average gray; 128-gray really does look perceptually more acceptable (which is part of why gamma encoding is justified: human perception is non-linear along with the monitor’s response – I forgot). This doesn’t actually change anything above; the “half-covered pixel” example should still get a display level of 187. This is confirmed by alternating full-black and full-white lines averaging out to 187, for example.

PNG + sRGB + cutout/decal AA = problematic

[TL;DR? Go try the puzzle instead.]

A few questions about my blog entry on GPUs preferring premultiplication came from various people, including myself. Let’s nail them down one by one, then add these bits up to explain why PNG is not very good at storing antialiased cutout and decal images (images which have an alpha component) that were generated using physically-based rendering. It turns out it’s not PNG’s fault; it’s the implementations used by PNG viewers. I provide two downloadable PNG images to test your own viewer or renderer to determine whether sRGB and compositing are working properly.

If you’re already convinced that you should do filtering (and most every other computation) in linear space, skip the first section. If you already know that you should think of linear values for a pixel as intrinsically premultiplied, since they represent radiance for the pixel, skip two sections. If you know that viewers and browsers don’t blend PNGs with alphas properly, skip to the conclusions at the very end and see if you agree. Me, I’m still learning, so I can imagine I made a goof along the way (update: and indeed I did!), though I’ve tried very hard not to do so. I’m honestly surprised how many viewers and browsers (perhaps all?) don’t perform display, filtering, and compositing correctly for this image type.

Don’t Filter in sRGB

This should be one of those things everyone knows by now, but just in case…

So you have three texels and two colors you’ve stored in a PNG, red and green:

[image: red and green end texels, with the center texel to be determined]

Interpolating between these two colors equally, what’s the color (that you store in the PNG) of the center texel? The answer is not (128, 128, 0), the average of the two texels on the ends. You can sort-of tell by just looking at the result:

[image: the center texel filled with the sRGB average, (128, 128, 0)]

The right answer is:

[image: the center texel filled with the linear-space interpolation result]

You shouldn’t interpolate or otherwise filter when in sRGB (essentially, gamma-corrected) space; that’s why it looks bad. You shouldn’t do this because sRGB is non-linear – linear operations such as addition and multiplication don’t work properly. Update: see this link – the bus license plate is a good example.

Instead you want to convert from sRGB to linear space, interpolate in linear space, and then convert back to sRGB (equations here). It’s also what you want to do to get good mipmaps, or anything else where you’re using multiple samples to get a new value. My favorite article on this is Larry Gritz’s from GPU Gems 3. There’s also a nice recent article about this workflow on the Renderman Community site, showing how to convert textures to linear space, do lighting there, then convert back for display. If these articles don’t convince you that linearization is necessary, I’m not sure what would.
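Here’s the red/green example as a small Python sketch, purely for illustration (the helpers are the standard sRGB transfer functions):

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

red, green = (255, 0, 0), (0, 255, 0)

# Wrong: average the stored sRGB values directly.
naive = tuple(round((a + b) / 2) for a, b in zip(red, green))
print(naive)    # (128, 128, 0) -- the dark, muddy center texel

# Right: decode to linear, average there, re-encode to sRGB.
correct = tuple(
    round(linear_to_srgb((srgb_to_linear(a / 255) + srgb_to_linear(b / 255)) / 2) * 255)
    for a, b in zip(red, green)
)
print(correct)  # (188, 188, 0) -- noticeably brighter, the proper midpoint
```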

Here’s another example, sRGB interpolation vs. the correct linear interpolation over a band of about 4 texels in width:

[images: rgb_bad (sRGB interpolation) and rgb_good (correct linear interpolation)]

The sRGB interpolation gives a black band; the correct linear interpolation gives a smooth transition (personally I see a more yellowish transition, which makes sense since it’s over a few pixels, but the general brightness is the thing to notice most here; if you back up a bit the yellow goes away, but the black band in the first image is still there. On a phone you may have to zoom in).

Premultiply before converting to sRGB

Say you’re computing the coverage of a triangle you’re rendering, in linear space. It covers half the area of some pixel, alpha = 0.5. You compute the color of the triangle covering half this pixel, and the color is (1.0, 0.0, 0.0). I’m going to use floating point triplets here for colors in linear space; sRGB maps these values to displayable values we store in, say, a PNG image file.

Normally you take your color, clamp or otherwise map each of the RGB values to [0.0, 1.0] (possibly using tone mapping), and then convert to sRGB for display and storage. The question is: do you first premultiply your color by alpha, then convert to sRGB, or vice versa?

It’s clear you don’t apply the sRGB conversion to the alpha coverage itself. Coverage is coverage; it remains the same in any color space. What coverage represents is how much of a surface is visible in a pixel. If you think about it, our half-covered pixel with a (1.0, 0.0, 0.0) surface color on the triangle should emit the same amount of radiance as a fully-covered pixel that has a surface color of (0.5, 0.0, 0.0). The only way to get these to be equivalent is to multiply by the alpha first, then convert the resulting color to sRGB. As Larry Gritz succinctly put it, “radiance is associated,” that is, the area of the emitter in the pixel matters. The radiance is computed by including the area coverage term in the computations.

So, the order is linear space -> premultiply the result to get the radiance -> convert this radiance to sRGB. Take our triangle’s color of (1.0, 0.0, 0.0) and alpha of 0.5, we get an RGBA result of (0.5, 0.0, 0.0, 0.5), our radiance values with an associated alpha.

To display this antialiased result on the screen we convert to sRGB space (or gamma space, if you’re a bit sloppy about it). Of course, our screen itself doesn’t store an alpha, we can’t see through the screen, so we normally think of such a result as being composited against a black background. Using sRGB conversion, we get (0.7366, 0.0, 0.0). Multiply by 255 for an 8-bit display and the displayed value is then (187,0,0).
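As a minimal sketch of that order of operations (Python; the helper is the standard sRGB encode):

```python
def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

color, alpha = (1.0, 0.0, 0.0), 0.5        # linear-space triangle color, coverage

# Premultiply in linear space first: this is the pixel's radiance.
premult = tuple(c * alpha for c in color)  # (0.5, 0.0, 0.0)

# Then convert that radiance to sRGB for display over a black background.
display = tuple(round(linear_to_srgb(c) * 255) for c in premult)
print(display)  # (188, 0, 0) -- the ~187 red value above (the difference is rounding)
```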

PNG cannot store all clamped linear values…

I would be a terrible mystery writer, as my chapters would all have titles giving away what happens in the chapter. However, since I’m getting paid by the word (ha, joke), I’m going to walk through each step carefully and slowly, building the suspense (or boring you half to death).

Here’s the strange bit: there are seemingly valid RGBA combinations you can’t properly store in a PNG when fractional alphas are involved.

Update: the following logic is wrong, but it’s what would be needed for your browser to work correctly. Skip to the next “Update:” if you want to skip past this erroneous, but still interesting, information.

To store this sRGB value in a PNG we need to “unassociate” or “un-premultiply” the RGBA value. In other words:

Unassociated RGB = Associated RGB / alpha

We then multiply the resulting RGBA floating point values by 255 to get values we can store in a PNG.

Just to be clear, alpha itself is unchanged for unassociated and associated colors, it’s just the RGBs that can differ. If alpha is 1.0, the unassociated RGB value is identical to the associated one. If alpha is 0.0, we don’t divide; we assume the RGB is (0.0, 0.0, 0.0), since the result has no area, and so, no radiance. It’s only the fractional alphas where the unassociated and associated values differ.

Take our RGBA value of (0.5, 0.0, 0.0, 0.5) from above.

We converted the color to sRGB, the four values were then (0.7353, 0.0, 0.0, 0.5).

Now convert by unmultiplying (a.k.a. dividing) the RGB value by the alpha value, to get the unassociated values that PNG so craves. That is, divide by the alpha of 0.5; in other words, multiply by 2.0. We get (1.4707, 0.0, 0.0, 0.5).

Multiply all four values by 255 to get 8-bit values that we can store. Just to show we haven’t quantized to PNG’s 8-bit format yet, let’s leave these as precise floating point values: (375.0, 0.0, 0.0, 127.5). Rounding, that gives us (375, 0, 0, 128).

If we could store premultiplied (associated) values, we could simply store (0.7353, 0.0, 0.0, 0.5) times 255, which is (187, 0, 0, 128), knowing that when we someday converted back to linear space the values would go back to about (0.5, 0.0, 0.0, 0.5).

To sum up:

(0.5, 0.0, 0.0, 0.5) the premultiplied result in linear space
(0.7353, 0.0, 0.0, 0.5) converted to sRGB
(1.4707, 0.0, 0.0, 0.5) RGB divided by the alpha of 0.5 to unassociate the alpha
(375, 0, 0, 128) multiplied by 255 and rounded

And that’s the punchline: this value cannot be stored in a PNG properly, since the maximum value in a PNG is 255 and PNG is always unassociated. The best we could do is store (255, 0, 0, 128). But if we then convert this back from sRGB to linear space, we don’t get anything near the original (0.5, 0.0, 0.0, 0.5) result:

(255, 0, 0, 128) stored in PNG
(128, 0, 0, 128) associating (multiplying by) the alpha/255
(0.216, 0.0, 0.0, 0.5) converting from sRGB to linear space

The answer should be (0.5, 0.0, 0.0, 0.5), but the clamping has dimmed the color value down massively. So instead of being able to store a linearized color value of 0.5 when alpha is 0.5, the best we can do is store one that is 0.216. Another way to say this is that our triangle can be no brighter than twice this value, (0.432, 0.0, 0.0), before premultiplication, instead of (1.0, 0.0, 0.0) – quite a drop on the linear side of things.

I don’t know about you, but I found this surprising, that PNG is actually incapable of storing antialiased cutout images computed by a normal renderer working in linearized space.

The complaint often leveled at storing 8-bit premultiplied colors and alphas is that you lose precision: a gray level of 255 and one of 128 will both be represented by a 1 if the alpha itself is 1 (out of 255). The flip side is that, for RGBA values that are perfectly valid when premultiplied and converted to sRGB, unassociated storage as used in a normal PNG cannot properly save them. PNG sadly does not have a premultiplied mode for storage, so it is stuck; if it had such a mode it could properly store (187, 0, 0, 128) and so properly display (187, 0, 0) on the screen.
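A quick sketch of that precision loss, with my own round-to-nearest quantization as the assumption:

```python
def store_premultiplied_8bit(gray, alpha8):
    """Quantize an 8-bit premultiplied value: stored = round(gray * alpha / 255)."""
    return round(gray * alpha8 / 255)

# With an alpha of 1 (out of 255), very different grays collapse to the same value:
print(store_premultiplied_8bit(255, 1))  # 1
print(store_premultiplied_8bit(128, 1))  # 1 -- the two grays are indistinguishable
```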

If you don’t believe this result, that there’s some misstep, solve this puzzle instead.

Update: in fact, there is a problem! It turns out that PNG says that you need to unmultiply before converting to sRGB. This goes against theory, in that you normally take a premultiplied result and convert that to sRGB for display (composited against a black background). But it turns out that the proper sequence for PNG conversion is to un-premultiply and then convert to sRGB. So the right answer is to store (255, 0, 0, 128). You convert this to linear space, (1.0, 0.0, 0.0), multiply by the alpha to get (0.5, 0.0, 0.0), convert back to sRGB space, (187,0,0), and display the result. It’s just that simple. Which is why premultiplication is nicer: none of these conversions is necessary; you’d just ignore the alpha and display the RGB stored, if PNG could store premultiplied values.

See the puzzle for more information, and my thanks to friedlinguini for finding the right passage in the spec. I’m happy to see PNG itself is not broken! Based on this new information, let’s see how viewers and browsers view such PNGs with alphas.

Let’s let our viewers at home decide…

Do image manipulation programs, viewers, and browsers implement PNG with alpha correctly? Let’s go grayscale and find out… (hint: the answer’s a pretty resounding “no” – if you find a package that does it right, let me know).

One question is whether PNGs are sRGB by default, or linear by default; that is, if the gamma or sRGB chunks are missing, what’s expected? I poked around through specs, but don’t see a definitive answer, and frankly in my experience 99.98% of all PNGs I see without tags are in sRGB – they’re meant for display.

But, let’s test. Here are two sample images in PNG:

[images: the two test PNGs, sampler_raw and sampler_with_gamma_srgb_chunks]

They (probably) look identical on your display: two grayish squares on the left, a dark gray square upper right, and a white square lower right. I checked: it won’t work on the iPhone 6 or Samsung Galaxy S3, as you can’t display this image at its native resolution. These devices perform cheap and incorrect filtering on the image (they filter in sRGB space; more on that below).

Both images have the same data:

[image: the test image with each square’s values labeled]

The upper left square in each has alternating lines of full white and full black. Blur your eyes and you get a half-gray. The sRGB nature of this gray is shown by how the bottom left matches the top left (on sRGB monitors) when you blur your eyes, a basic gamma test. This shows that both PNGs are treated as storing non-linear sRGB values, as the 187 gray value is the sRGB equivalent of half-gray in linear space, as we’ve seen. There is a gamma chunk in PNG, but it’s rarely used.

The only difference between the two images is that the one on the left does not have gamma or sRGB PNG chunks (it was generated using LodePNG), while the one on the right has both (it was generated by reading the one on the left into paint.net and then writing it out; you can review the chunks using pngcheck in verbose mode). They display identically, so the browser is clearly assuming that if these two chunks are missing, the PNG should be interpreted by default as storing sRGB values. This is indeed the norm: PNGs are usually used for lossless display of images, so the color values naturally are sRGB values that are directly copied to the display. However, this means that the “you could set the gamma to 1.0” option in PNG is extremely unlikely to be honored by most tools. Also, even if it were honored, storing 8-bit values in linear space can give a banded look when converted to sRGB. PNG does support 16-bit storage, which would solve any banding from using a gamma of 1.0.
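As a rough illustration of that banding, here’s where the darkest 8-bit linear codes would land on an sRGB display (the encode function is the standard one; the example itself is mine):

```python
def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Store 8-bit values linearly (gamma = 1.0) and see where they land in sRGB:
for code in range(4):
    print(code, round(linear_to_srgb(code / 255) * 255))
# 0 -> 0, 1 -> 13, 2 -> 22, 3 -> 28: big jumps between displayable shadow levels,
# which shows up as banding; 16-bit storage closes these gaps.
```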

Display this image in, say, IrfanView, which composites against a black background for display, and you get this:

[image: the test image as displayed in IrfanView, composited against black]

Note that the lower right corner is a 128-gray.

If you want to see the test image composited in your browser against a black, white, and gray background in turn, see this page.

Most (all?) browsers and viewers are a bit broken

Now we know PNGs are treated as if they’re in sRGB space by default. However, it turns out most browsers and viewers do not properly interpret or blend PNG colors when alphas are present, or even when they’re not! Here’s the proof.

The two squares on the right each have an alpha of 0.5. The upper square is black, the lower is white. Browsers composite these images against their background color. If the background color is white (as it is on this page), then the upper right square should composite to be half-black, half-white. With a value of (0,0,0,128), it’s saying that the surface is covered with a black color that is half-transparent, so that the white background should contribute only half its emission. If the math is done properly – sRGB to linear, perform blending, then linear to sRGB – then the resulting color should be around (187,187,187) and so match the results on the left. It clearly doesn’t; the browser is simply blending the two colors directly in sRGB space, without any linearization, giving a darker gray than should be displayed.
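Here’s a minimal sketch of the correct math for that upper-right square over a white page, next to what browsers actually do (Python; helper names are mine):

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def composite_over(fg_rgba8, bg_rgb8):
    """'Over' for an unassociated sRGB pixel, blended in linear space."""
    a = fg_rgba8[3] / 255.0
    out = []
    for fg, bg in zip(fg_rgba8[:3], bg_rgb8):
        blended = srgb_to_linear(fg / 255.0) * a + srgb_to_linear(bg / 255.0) * (1.0 - a)
        out.append(round(linear_to_srgb(blended) * 255.0))
    return tuple(out)

print(composite_over((0, 0, 0, 128), (255, 255, 255)))  # (187, 187, 187) -- correct
print(round(0 * 128 / 255 + 255 * (1 - 128 / 255)))     # 127 -- the darker gray browsers show
```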

If instead you display these images composited against black, as happens in the popular IrfanView viewer, you get a darker gray for the lower-right square, when again you should get a 187-level gray, as shown above. So, IrfanView (and other viewers I tested) also do not perform linearization when blending.

You can tell that blending is also done improperly even when no alphas are present, by using the “resize” function. Resize the test image to 50% of its original size, i.e., make it 128×128. Use the best filter available (e.g., Lanczos).

Here’s the result for XnView, for example (I had problems getting IrfanView to properly save the alpha channel):

[image: the test image resized to 50% in XnView]

It’s wrong; it’s not blending in linear space. You can tell because the alternating lines in the upper left are now a 128-level gray instead of the proper 187. The gray in the upper left is significantly darker than a properly scaled-down version of the original image would be. If you have an image manipulation program that gives the right answer, let me know. Imagine this is the next level up in a mip-map pyramid and you can see why the norm in interactive 3D graphics is to perform linearization before filtering, and why there’s GPU support for it. Pity we can’t get the 2D guys to adopt the correct algorithms.
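The 50% downscale boils down to averaging a white row and a black row, so here is that one computation both ways, as a sketch (standard sRGB helpers again):

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

white, black = 255, 0

# What most 2D software does: average the stored sRGB values.
print(round((white + black) / 2))                                    # 128 -- too dark

# What it should do: decode to linear, average, re-encode.
avg = (srgb_to_linear(white / 255) + srgb_to_linear(black / 255)) / 2
print(round(linear_to_srgb(avg) * 255))                              # 188 -- the ~187 gray
```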

Here’s the original image, again, but made smaller (128×128) by your browser by adjusting the HTML image display width and height:

[image: sampler_with_gamma_srgb_chunks, displayed at 128×128 by the browser]

I’m betting dollars to donuts you see the wrong result, similar to XnView’s (and every other free image manipulation package I tried). The image is shrunk to half size and so the alternating lines of white and black are incorrectly blurred to a 128-gray.

By the way, the reason the original image alternates lines of white and black, instead of using a white and black checkerboard, is to avoid any level response problem the display might have. This used to be a problem with CRTs, I don’t know if it is with LCDs, but let’s leave it out of the equation.

Right-click on the two test images and save them if you want to experiment; attach as a surface texture to see if you are performing compositing correctly. If neither of the squares on the right looks very close to the matching grays on the left, the software is not performing alpha blending properly. It should premultiply (every viewer and browser does this correctly for PNG conversion), linearize each value, blend with the linearized background value, then convert back to sRGB for display. Instead, most software simply blends in sRGB space, which is wrong.

If the two squares on the left don’t more-or-less match (blur your eyes), then you’re on an ancient Mac, NeXT, SGI, or something else that’s non-sRGB. More likely, you’re on a smartphone or other device that is not showing the test image at one pixel per texel. Its faulty filtering makes the alternating black and white lines average to a gray level of 128 at the limit, when it should be 187.

I suspect the reasons most viewers and all browsers I tried are broken in this way are expediency (all that conversion per pixel is expensive, and fractional alphas in PNGs are rare) and lack of understanding, plus possibly legacy users expecting old behaviors. I certainly didn’t fully understand how to interpret PNG data when I started this post, and have had to revise it!

Now I see why OpenEXR, a floating-point format with alpha that saves premultiplied colors, is preferred by film companies and other industries where proper compositing is critical. It’s simple to display, and premultiplication makes display and compositing much less costly.

Conclusions

  1. Perform interpolation, blending, mipmapping, or other filtering in linear space, not sRGB.
  2. In this linear space, if your computations produce a fractional alpha, make sure the color is premultiplied by this alpha somewhere along the line before converting to sRGB. Update: unless you’re converting to PNG, in which case you want to unmultiply your RGBA before converting to their quasi-sRGB space.
  3. Update: wrong. If you have fractional alphas and you want to store these along with the colors, for later use when compositing, you may get values too high to store in your PNG after unassociating the alpha from the color. Cutouts without partial alphas, or with dim colors, may be storable.
  4. Don’t expect PNG alphas to be used properly for viewing on most viewers or on web browsers. This is not PNG’s fault per se, it’s the browser/viewer’s for not using linearization when compositing.
  5. Test and find out. The PNG test image can help you see what an application does with the data.


Why “tap”?

Kavita Bala asked, “What is the etymology of ‘tap’ in texture filtering?”

This is a term we use in graphics for taking a sample from a texture map. I didn’t know where it came from, and recall being a bit mystified as to what it even meant when I first encountered it, finally puzzling it out from the context. Searching around now, the earliest reference I could find in 3D graphics literature was in this article, so I asked Dave Luebke, who coauthored that paper.

Dave replied:

I think it’s actually very old and references the idea of putting a probe, as in an oscilloscope, to tap a signal (like tapping a pipe, meaning to take water out of it at a particular location, or tapping a maple tree for sap to make syrup from).

Dave asked two other experts.

Lance Williams replied:

It’s traditional filter terminology. For example:

“Filter Coefficients – the set of constants, also called tap weights, used to multiply against delayed signal sample values within a digital filter structure.”

“A direct form discrete-time FIR filter of order N. The top part is an N-stage delay line with N + 1 taps.”

“For FIR filters, there is no denominator in the transfer function and the filter order is merely the number of taps used in the filter structure.”

John Montrym replied:

Follow this trail:

https://en.wikipedia.org/wiki/Finite_impulse_response see phrase “tapped delay line” which takes you to:

https://en.wikipedia.org/wiki/Digital_delay_line

“tap” in texture filtering uses the terminology of old-time signal processing. It wouldn’t surprise me if the notion of tapping a delay line takes you back to the 1930’s or 1940’s, though I don’t have a specific reference for you.

Radar was one of the early drivers for the development of signal processing theory & practice.

And your “tapping a water pipe” analogy is a pretty good one.
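To tie the terminology back to code, here’s a minimal sketch (my own, not from any of the replies above) of a direct-form FIR filter, where each “tap” reads one delayed sample and scales it by its tap weight:

```python
def fir_filter(samples, taps):
    """Direct-form FIR: each output is a weighted sum of the most recent inputs."""
    delay_line = [0.0] * len(taps)           # the "tapped delay line"
    out = []
    for x in samples:
        delay_line = [x] + delay_line[:-1]   # shift the new sample in
        # each tap reads one delayed sample and multiplies it by its weight
        out.append(sum(w * s for w, s in zip(taps, delay_line)))
    return out

# A 4-tap box filter (equal weights) applied to an impulse:
print(fir_filter([1, 0, 0, 0, 0, 0], [0.25, 0.25, 0.25, 0.25]))
# -> [0.25, 0.25, 0.25, 0.25, 0.0, 0.0]
```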

If you know more, pass it on.