This last post on CIC 2011 covers a special session that took place the day after the conference. Although not strictly part of the conference (it required a separate registration fee), it covered closely related topics.
The special session “Revisiting Color Spaces” was jointly organized by the Inter-Society Color Council (ISCC), the Society for Imaging Science and Technology (IS&T), and the Society for Information Display (SID) to mark the 15th anniversary of the publication of the sRGB standard. It included a series of separate talks, all related to color spaces:
sRGB – Work in Progress
This presentation was given by Ricardo Motta, a Distinguished Engineer at NVIDIA. Mr. Motta developed the first colorimetrically calibrated CRT display for his Master’s thesis at RIT, helped develop much of HP’s color imaging tech as their first color scientist, and was one of the original authors of the sRGB spec. Now he has responsibility for NVIDIA’s mobile imaging technology and roadmap.
The presentation started with some history on the development of sRGB. It actually began with an attempt by HP and Adobe in 1989 to get the industry to standardize on CIELAB as a device-independent color space. Their first attempts at achieving industry consensus didn’t go well: Gary Starkweather at Apple insisted that full spectral representations (highly impractical at the time) were the right direction, and initial agreement by Microsoft to standardize on CIELAB was scotched when Nathan Myhrvold insisted on 32-bit XYZ (also infeasible) instead. After these setbacks, the people at HP and Adobe who were working on this began to realize that RGB can actually work quite well as a device-independent color space. They wrote drivers for the Mac first, and ported them when Windows got color capability. PC monitors and televisions at the time all used the same CRT designs (the PC market was as yet too small to justify custom designs), so Adobe characterized the typical CRT in their RGB drivers – first as an internal HP standard (“HP RGB”), and later in collaboration with Kodak and Microsoft as part of the FlashPix standard (“NIF RGB”). In 1996, HP presented NIF RGB to Microsoft as a proposed standard, culminating in the sRGB standard proposal exactly 15 years ago.
Why does sRGB work? RGB tristimulus values by themselves are not enough to describe color appearance. The effect of viewing conditions and white balance on the appearance of self-luminous displays is not fully understood. Colorimetry mostly focuses on surface colors, not self-luminous aperture colors. Also, the limited gamut of displays makes low-CCT white balances impractical.
By standardizing the assumed viewing conditions and equipment (a display with near 2.2 gamma, Rec.709 primaries, and a D65 white point at 80 nits (cd/m2); 200 lux D50 ambient; 1% flare), the RGB data fully implies appearance with little processing needed. Also, daylight-balanced displays tend to remain constant in appearance over a wide range of viewing conditions (D65 is consistently perceived as neutral in the absence of other adapting illumination), so the results are robust in practice.
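For reference, the “near 2.2 gamma” is actually a piecewise curve – a short linear toe followed by a 1/2.4 power. Below is a minimal sketch of the encode/decode functions using the standard IEC 61966-2-1 constants:

```python
import numpy as np

def srgb_encode(linear):
    """Linear light (0..1) to sRGB-encoded values (0..1): a linear toe, then a
    1/2.4 power; the overall curve approximates a 2.2 gamma."""
    linear = np.asarray(linear, dtype=float)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

def srgb_decode(encoded):
    """Inverse: sRGB-encoded values back to linear light."""
    encoded = np.asarray(encoded, dtype=float)
    return np.where(encoded <= 0.04045,
                    encoded / 12.92,
                    np.power((encoded + 0.055) / 1.055, 2.4))
```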
In 1996 the strength of sRGB was that these viewing conditions and equipment were common and widely used. 15 years later, this strength has become a limiting factor in some scenarios.
If self-luminous display colors are not very close to the correct scene surface colors, there is a perceptual “snap” as the image suddenly appears as a glowing rectangle instead of a 3D scene (this is similar to the “uncanny valley” problem). Current standard display primaries fail to match large classes of surface colors due to their limited gamut; newer developments (AMOLED, LED backlights) enable a much wider color gamut.
In addition, displays have been getting much brighter – every decade, LED brightness has consistently increased by at least 20X. The newest LCD tablets achieve 500 nits with over a 1000:1 contrast ratio (CR), enough to match reflected colors in most conditions. Daylight equivalence requires 6,400 nits; by the end of this decade, portable displays should be able to show actual surface colors under all lighting situations.
The sRGB approach is no longer valid in the mobile space – with highly variable viewing conditions and displays that can directly match reflective colors, we need to move from a “tristimulus + viewing conditions” encoding to an “object properties” encoding (still tristimulus-based).
OSA-UCS System: Color-signal Processing from Psychophysical to Psychometric Color
This presentation was given by Prof. Claudio Oleari from the Department of Physics at the University of Parma.
Psychophysical color specification is based on color matching under arbitrary viewing conditions. Psychometric color specification is based on quantifying perceived color differences and realizing uniform scales of perceived colors under controlled conditions (comparison of color samples on a uniform achromatic background under a chosen illuminant). Under these conditions, the only appearance phenomena are instantaneous color constancy and lightness contrast.
Between 1947 and 1974 the Optical Society of America (OSA) had a committee working on a uniform psychometric color scale; their goal was a lattice of colors in a Euclidean color space where equal distances between points correspond to equal visual differences. However, they eventually concluded that this is not possible – the human color system does not work this way. The resulting system (OSA-UCS) had only approximately uniform color scales, and many scientists considered this a failure. However, OSA-UCS has a very strong property not shared by any other color space – it is spanned by a net of perceived geodesic lines. These are scales of colors which define the shortest perceptual path between colors, ordered so that the difference between each pair of adjacent colors equals one just noticeable difference (jnd).
Prof. Oleari has published an algorithm linking the cone activations (psychophysical color) to the OSA-UCS coordinates (“Color Opponencies in the System of the Uniform Color Scales of the Optical Society of America”, 2004).
Another of Prof. Oleari’s papers (“Euclidean Color-Difference Formula for Small-Medium Color Differences in Log Compressed OSA-UCS Space”, 2009) defines a Euclidean color-difference formula based on a logarithmically compressed version of OSA-UCS (like other such formulas, it is only applicable to small color differences since a globally uniform space does not exist). This formula has only two parameters, but performs as well as the CIEDE2000 formula, which has many more. Generalizing the formula to arbitrary illuminants and observers provides a matrix which is useful for color conversion of digital camera images between illuminants. Prof. Oleari claims that this matrix provides results that are clearly better for this purpose than other chromatic adaptation transforms (“Electronic Image Color Conversion between Different Illuminants by Perfect Color-Constancy Actuation in a Color-Vision Model Based on the OSA-UCS System”, 2010).
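The structure of such a formula is simple: log-compress the OSA-UCS lightness and chroma, keep the hue angle, and take a plain Euclidean distance in the compressed coordinates. The sketch below shows only that structure; the compression constants are placeholders for illustration, not the fitted values from Prof. Oleari’s papers.

```python
import numpy as np

def log_compress(x, a, b):
    # Scaled logarithm: roughly linear near zero, compressive for large values.
    return (1.0 / b) * np.log(1.0 + b * (x / a))

def delta_E_log_osa(color1, color2, a_L=1.0, b_L=0.1, a_C=1.0, b_C=0.1):
    """Schematic Euclidean difference in a log-compressed OSA-UCS-like space.
    Each color is (L, g, j): OSA-UCS lightness plus the two chromatic
    coordinates.  The a/b constants are illustrative placeholders only."""
    def compress(color):
        L, g, j = color
        C = np.hypot(g, j)                # chroma
        h = np.arctan2(j, g)              # hue angle, left unchanged
        L_E = log_compress(L, a_L, b_L)   # compressed lightness
        C_E = log_compress(C, a_C, b_C)   # compressed chroma
        return np.array([L_E, C_E * np.cos(h), C_E * np.sin(h)])
    return float(np.linalg.norm(compress(color1) - compress(color2)))
```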
Design and Optimization of the ProPhoto RGB Color Encodings
This presentation was given by Dr. Geoff Wolfe, Senior Research Manager at Canon Information Systems Research Australia. However, the work it describes was done while he was at Kodak Research Laboratories, in collaboration with Kevin Spaulding and Edward Giorgianni.
ProPhoto RGB was created at a time (late 1990s – early 2000s) when the photographic world was in massive upheaval. In 2000 film sales were around 1 billion rolls/year; this decreased to 20 million by 2010, with an ongoing 20% volume reduction year on year. Digital cameras were just starting to become decent: in 1998 most consumer cameras had sensors under 1 megapixel, by 1999 most had 2 megapixel sensors, and resolution continued to increase rapidly. Another interesting trend was digital processing for film; in 1990 the PhotoCD system scanned film to 24 megapixel images which were processed digitally and then printed out to analog film. ProPhoto RGB was intended to be used in a system which took this one step further: optically scanning negatives and then processing as well as printing digitally (today of course imaging is digital from start to finish).
During the mid to late 90s there was an increasing awareness that images could exist in different “image states”, characterized by different viewing environments, dynamic ranges, colorimetric aims and intended uses. The simplest example is to classify images as either scene-referred (unrendered) or output-referred (rendered picture or other reproduction). On one hand, scenes are very different from pictures – scenes have 14 stops or so of dynamic range vs. 6-8 stops in a picture, and pictures are viewed in an adaptive viewing environment with a certain white point, luminance, flare, and surround – all of which affect color appearance. On the other hand, a scene and its picture are obviously closely related: the picture should convey the scene appearance. Memory and preference also play a part: people often assess an image against their memory of the scene appearance, which tends to be different (for example, more saturated) than the original scene. Even if they have never seen the original, people tend to prefer slightly oversaturated images.
There are several issues regarding the rendering of the scene into the display image. The first is the dynamic range problem – which 6-8 stops from the scene’s 14 should we keep? The adaptive viewing environment also poses some issues. An “accurate” reproduction of the scene colorimetry looks flat and dull compared to a “pleasing” rendition with adjustments to account for the viewing environment’s effect on perception.
ProPhoto RGB was designed as a related family of encodings allowing both original scenes and rendered pictures to be encoded. The encodings should facilitate rendering from scenes to picture with: common primaries for both scene and picture encoding, suitable working spaces for digital image processing, direct and simple relationships to CIE colorimetry and the ICC profile color space (PCS), and fast and simple transformations to commonly used output color encodings such as sRGB or Adobe RGB.
Since the desire was to have the same primaries for both scene and picture image states, choosing the right primaries was critical. The primaries needed to enable a gamut wide enough to cover all real world surface colors, and all output devices. On the other hand, making the gamut too wide could cause quantization errors (given a fixed bit depth and encoding curve, quantization gets worse with increasing gamut size). The primaries needed to yield the desired white point (D50) when present in equal amounts, and avoid objectionable hue distortions under tonescale operations (more on that below). However, the primaries did not need to be physically realizable; they could be outside the spectral locus.
Regarding tonescale operations: a common image processing operation is to put each channel through a nonlinear curve, for example an S-shaped contrast enhancement curve. Such operations are fast, convenient and generally well-behaved; they also are guaranteed to not go out of the color space’s gamut. However, in the general case, tonescale operations are not hue-preserving, and can result in noticeable hue shifts in natural “highlight to shadow” gradients. These hue shifts are particularly objectionable in skin tones, especially if they shift towards green.
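To make this concrete, here is a small illustrative sketch (the logistic S-curve, its strength, and the sample color are arbitrary choices, not Kodak’s data): applying the same curve independently to R, G and B changes the hue angle of a skin-tone-like color, not just its contrast.

```python
import colorsys
import numpy as np

def s_curve(x, strength=4.0):
    """Simple per-channel S-shaped contrast curve on [0, 1] (illustrative only)."""
    return 1.0 / (1.0 + np.exp(-strength * (x - 0.5)))  # logistic contrast boost

def normalized_s_curve(x, strength=4.0):
    """Rescale the logistic so 0 maps to 0 and 1 maps to 1 (stays in gamut)."""
    lo, hi = s_curve(0.0, strength), s_curve(1.0, strength)
    return (s_curve(x, strength) - lo) / (hi - lo)

# A skin-tone-like color: the per-channel curve shifts its hue as well as
# boosting contrast, which is the kind of shift the ProPhoto design tried to tame.
rgb = np.array([0.80, 0.55, 0.45])
curved = normalized_s_curve(rgb)

h_before = colorsys.rgb_to_hsv(*rgb)[0] * 360.0
h_after = colorsys.rgb_to_hsv(*curved)[0] * 360.0
print(f"hue before: {h_before:.1f} deg, after: {h_after:.1f} deg")
```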
All these constraints were fed into Matlab and an optimization process was performed to find the final primaries. The hue rotations could not be eliminated, but they were reduced overall and minimized for especially sensitive areas such as skin tones. The final set of ProPhoto primaries was much better in this regard than the sRGB/Rec.709 or Adobe RGB (1998) primaries. Two of the resulting primaries were imaginary (outside the spectral locus), with the third (red) right on the spectral locus.
Besides the primaries and D50 white point, a nonlinear encoding (1/1.8 power with a linear toe segment) was added to create the ROMM (Reference Output Medium Metric) RGB color space, intended for output-referred data. A corresponding RGB space for scene-referred data was also defined: RIMM (Reference Input Medium Metric). RIMM has the same primaries as ROMM but a different encoding (the same as Rec.709 but scaled to handle scene values up to 2.0, where 1.0 represents a perfect white diffuse reflector in the scene). An extended dynamic range version of RIMM (ERIMM) was defined as well. ERIMM has a logarithmic encoding curve with a linear toe segment, and can handle scene values up to 316.2 (relative to a white diffuse reflector at 1.0). All spaces can be encoded at 8, 12 or 16 bits per channel, but for ERIMM at least 12 bits are recommended.
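A sketch of the ROMM nonlinearity as described (a 1/1.8 power with a linear toe); the toe threshold below simply follows from requiring the two segments to meet continuously, so check the published spec before relying on the exact constant:

```python
import numpy as np

E_T = 16.0 ** (1.8 / (1.0 - 1.8))   # ~0.001953, where 16*x meets x**(1/1.8)

def romm_encode(x):
    """Linear ROMM RGB (0..1) to nonlinearly encoded ROMM/ProPhoto values (0..1)."""
    x = np.asarray(x, dtype=float)
    return np.where(x < E_T, 16.0 * x, np.power(x, 1.0 / 1.8))

def romm_decode(v):
    """Inverse encoding back to linear values."""
    v = np.asarray(v, dtype=float)
    return np.where(v < 16.0 * E_T, v / 16.0, np.power(v, 1.8))
```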
The original intended usage for this family of color spaces was as follows. First, the negative is scanned and a representation of the original scene values is created in RIMM or ERIMM space. This is known as “unbuilding” the film response – a complex process that needs to account for capture system flare, the distribution of exposure in the different color layers, crosstalk between layers and the film response curve. Digital rendering then puts the image through a tone scale into the ROMM output space, and finally it is turned into a printed picture or displayed image.
Digital cameras tend to have much simpler unbuilding processes – it is straightforward to get scene-linear values from the camera RAW sensor values. For this reason, Dr. Wolfe thinks that camera RAW can be an effective replacement for scene-referred encodings such as RIMM/ERIMM, which he claims are now effectively redundant. On the other hand, he found that ROMM / ProPhoto RGB is still used by many photography professionals (and advanced amateurs) for its ability to capture highly saturated objects (such as iridescent bird feathers) and ease of tweaking in Photoshop.
During the Q&A period, several people in the audience challenged Dr. Wolfe’s statement that scene-referred encodings are no longer needed. The Academy of Motion Pictures Arts and Sciences (AMPAS) uses a scene-referred encoding in their Image Interchange Format (IIF) because their images come from a variety of sources, including different film stocks as well as various digital cameras. Even for still cameras, a scene-referred type of encoding is needed at least as the internal reference space (e.g. ICC PCS) even if the consumer never sees it.
Adobe RGB: Happy Accidents
This presentation was given by Chris Cox, a Senior Computer Scientist at Adobe Systems who has been working on Photoshop since 1996. It covered the history of the “Adobe RGB (1998)” color space.
In 1997-1998, Adobe was looking into creating ICC profiles that their customers could use with Photoshop’s new color management features. Not many applications had ICC color management at this point, so operating systems didn’t ship with them yet.
Thomas Knoll (the original creator – with his brother John – of Photoshop) was looking for relevant standards and ideas to build ICC profiles around; one of the specifications he found documentation for was the SMPTE 240M standard, which was the precursor to Rec.709. SMPTE 240M looked interesting – its gamut was wider than sRGB’s but not huge, and tagging existing content with it didn’t result in horrid colors. The official standards weren’t available online, and Adobe couldn’t wait to have a paper copy mailed since Photoshop 5 was about to ship, so they got the information from a somewhat official-looking website.
Adobe got highly positive feedback from their customers about the “SMPTE 240M” profile. Users loved the wide gamut and found that color adjustments looked really good in that space and that conversions to and from CMYK worked really well. A lot of books, tutorials and seminars recommended using this profile.
A while after Photoshop 5 shipped, people familiar with the SMPTE 240M spec contacted Adobe and told them that they had got it wrong. It turns out that the website they used copied the values from an appendix to the spec which contained idealized primaries, not the actual standard ones. The real SMPTE 240M is a lot closer to sRGB (which Photoshop users didn’t like as a working space). Even worse, Thomas Knoll had made a typo copying the red primary chromaticity values, so the primaries Photoshop 5 shipped with weren’t even the correct ones from the appendix.
What to do? The profile was wrong in at least two different ways, but the customers REALLY liked it! Adobe tried to improve on the profile in various ways, and built test code to evaluate CMYK conversion quality (which was something the customers especially liked about the “SMPTE 240M” profile) in the new “fixed” profiles.
But no matter what they tried – correcting the red primary, changing the white point from D65 to the theoretically more prepress-friendly D50, widening the primaries, moving the green to cover more gamut, etc. – every change made CMYK conversion worse than the “incorrect” profile.
In the end, Adobe decided to keep the profile but change the name. They picked “Adobe RGB” so they wouldn’t have to do a trademark search or get legal approval. The date was added to the profile name since they were sure they would be bringing out a better version soon, and the “Adobe RGB (1998)” profile was shipped in a Photoshop 5 dot release. Adobe kept experimenting, but was never able to improve on the profile. After a while they stopped trying.
Some time later, Kodak visited Adobe to talk about ProPhoto RGB and how it was designed to minimize hue shifts under nonlinear tonescale operations (see the previous talk). Adobe realized they had lucked into a color space that just happened to have good behavior in that regard, explaining the good CMYK conversions (which typically suffer from the same issue). Kodak had assumed that Adobe designed their color space like that on purpose.
Recent Work on Archival Color Spaces
This session was presented by Dr. Robert Buckley, formerly a Distinguished Engineer at Xerox, now a scientist at the University of Rochester.
It describes work done in collaboration between the CIE Technical Committee TC8-09 (of which Dr. Buckley is chair) and the Still Image Working Group of the Federal Agencies Digitization Initiative (FADGI).
TC8-09 did a recent study where they sent a set of test pieces to participating institutions to digitize with their usual procedures. The test pieces included four original color prints and three standard targets: X-Rite Digital ColorChecker SG, Image Engineering Universal Test Target (UTT) and the Library of Congress Digital Image Conformance Evaluation (DICE) Object Target. Special sleeves were made for the prints with holes to identify specific regions of interest (ROIs) for measurement. The technical committee members measured CIELAB values for the print ROIs and the standard target patches for later comparison with the results produced by the participating institutions.
Each institution used their usual scanning equipment and procedures; some used digital cameras, others used scanners; they used various profiles (manufacturer or custom) and some post-processed the resulting images.
The best agreement between the institutions’ captures and the measured values was in the cases where digital cameras were used with custom profiles. In general the agreement was better for the targets than for the originals, which isn’t surprising since calibration uses similar targets. The committee concluded that better results would be obtained if the capture devices were calibrated to targets containing colors more representative of the content being captured (which is not the case for the standard targets).
Besides evaluating the various capture protocols, TC8-09 also wanted to establish which color space is best to use for image archiving. The gamut should of course include all the colors in the archived documents, but it should not be larger than necessary to avoid quantization artifacts. Specifically, if 8 bits per channel are used (which is common) then the gamut shouldn’t be much wider than sRGB. In practice, most of the material (with a few exceptions, such as a color plate in a book on gems) fit easily in the sRGB gamut.
Modern Display Technologies: Is sRGB Still Relevant?
This session was presented by Tom Lianza, “Corporate Free Electron” at X-Rite and Chair of the International Color Consortium (ICC).
One of sRGB’s main strengths is the fact that the primary chromaticities are the same as Rec.709 (and the two tone reproduction curves, while not identical, do have similarities). These similarities have led to the easy mixing of motion and still images in many different environments. The Rec.709 primaries were based on CRT primaries – at the time it was not clear whether they could be realized in flat-panel displays, but the standard pushed the manufacturers to make sure they did.
One of the goals of any color space is to reproduce the Pointer gamut of real-world surface colors. Unfortunately, there are cyans in this gamut that will be a problem for pretty much any physically realizable RGB system.
An output referred color space will always require some specification of ambient conditions. This is needed for effective perceptual encoding.
A missing element in many color spaces is a hard definition of black (Adobe RGB is one of the few that does have an encoding specification of black). The lack of this definition leads to inter-operability issues, and to non-uniform rendering in practice. ICC is now moving black point compensation into ISO to be considered as a standard, which would allow more vendors to use it (Adobe currently has an algorithm which its products use).
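As a rough illustration of what black point compensation does (this is the commonly described linear-scaling approach, not necessarily the exact algorithm being standardized): the XYZ values are rescaled so the white point stays fixed while the source black point maps onto the destination black point.

```python
import numpy as np

def black_point_compensation(xyz, src_black, dst_black, white):
    """Illustrative linear black point compensation in XYZ: keeps the white
    point fixed and maps the source black point to the destination black point."""
    xyz, src_black, dst_black, white = map(np.asarray, (xyz, src_black, dst_black, white))
    scale = (white - dst_black) / (white - src_black)   # per-channel scale
    offset = white * (1.0 - scale)                      # keeps the white point fixed
    return scale * xyz + offset

# Example with D50-relative XYZ and made-up black points for the two devices.
d50 = np.array([0.9642, 1.0000, 0.8249])
print(black_point_compensation([0.20, 0.20, 0.18],
                               src_black=[0.010, 0.010, 0.008],
                               dst_black=[0.002, 0.002, 0.002],
                               white=d50))
```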
All commonly used display technologies (including the iPhone screen, which has a really small gamut) encompass the Bartleson memory colors (“Memory Colors of Familiar Objects”, 1960). This explains why people find them all acceptable, although they vary greatly in gamut size and none of them cover the Pointer gamut completely.
Viewing conditions for sRGB are well defined but the assumptions of low-luminance displays viewed in low ambient lighting do not reflect how people view images today.
Cameras are not (and should not be) colorimeters. They do not use sRGB as a precise encoding curve (most cameras reproduce images with a relative gamma of 1.2-1.3 vs. the sRGB encoding curve, to account for low viewing luminance). Instead, cameras are designed to produce good images when viewed on an sRGB display – having a common target guides the different manufacturers to similar solutions. As an example, Mr. Lianza showed a scene with highly out-of-gamut colors, photographed with automatic white balancing on cameras from different vendors. There is no standard for handling out-of-gamut colors, but nevertheless all the cameras produced very similar images. This is because the critical visual evaluations of these cameras’ algorithms were all done on the same (sRGB) displays.
Browsers have various issues with color management. ICC has a test page which can be used to see if a browser handles ICC version 4 profiles properly. Chrome does not have color management and shows the entire page poorly. Firefox shows the ICC version 2 profile test correctly, but not the ICC version 4 test. Safari has good color management and shows all images well, but not when printing.
Conclusions: sRGB is robust and can be used to reproduce a wide range of real-world and memory colors. The existence of the specification coupled with physically realizable displays makes the application of the spec quite uniform in the industries that use it. The lack of a black point specification and the low-luminance assumption have caused manufacturers to apply compensation to images, which may not work well at the higher luminances encountered in mobile environments. It may be possible to tweak the spec for higher-luminance situations, but any wholesale changes would have a very bad effect on the marketplace due to the huge amount of legacy content. The challenge to sRGB in the 21st century comes from disruptive display technologies and the implementations that allow for simultaneous display of sRGB and wide-gamut images on the same media at high luminance and high ambient conditions.
Question from the audience: most mobile products don’t have color management, and this is a core issue now. Answer: ICC is splitting into three groups. ICC version 4 is staying stable to address current applications, the “ICC Labs” open-source project is intended for advanced applications, and there will be a separate project to establish a solution for the web and mobile (there is a current discussion regarding adding a new working group for mobile hardware).
Device-Independent Imaging System for High-Fidelity Colors
This session was presented by Dr. Akiko Yoshida from SHARP. It describes the same system that SHARP presented at SIGGRAPH 2011 (there was a talk about the system, and the system itself was shown in Emerging Technologies).
The system comprises a wide-gamut camera (which colorimetrically captures the entire human visual range of colors) and a 5-primary display with a gamut that includes 99% of Pointer’s real-world surface colors.
The camera they developed has sensor sensitivities that satisfy the Luther-Ives condition: the sensitivity curves are a linear combination of cone fundamentals (or equivalently, of the appropriate color-matching functions). This is the first digital camera to satisfy this condition. It is fully colorimetric, measuring the Macbeth ColorChecker chart with an accuracy of about 0.27 ΔE.
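A minimal sketch of how one might check the Luther-Ives condition for a set of measured camera sensitivities (the `cmfs` and `camera` arrays are assumed inputs here, e.g. spectral measurements sampled at the same wavelengths; this is not SHARP’s code):

```python
import numpy as np

def luther_residual(cmfs, camera):
    """cmfs: (N, 3) color-matching functions at N wavelengths;
    camera: (N, 3) measured R, G, B sensitivities at the same wavelengths.
    Returns the relative RMS residual of the best linear fit; a value near
    zero means the camera satisfies the Luther-Ives condition."""
    M, *_ = np.linalg.lstsq(cmfs, camera, rcond=None)  # best-fit 3x3 matrix
    residual = camera - cmfs @ M
    return np.sqrt(np.mean(residual ** 2)) / np.sqrt(np.mean(camera ** 2))
```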
Today’s display systems cannot display many colors found in daily life, as can be seen by comparing their gamuts to the Pointer surface color gamut (“The Gamut of Real Surface Colors”, 1980). Although the Pointer gamut is relatively small compared to the gamut of human vision, it cannot be efficiently covered with three RGB primaries. SHARP set a goal to reproduce real-surface colors faithfully and efficiently with a five-primary system (“QuintPixel”) including RGB plus yellow and cyan. QuintPixel actually has six subpixels for each pixel – the red subpixel appears twice. This was necessary to get adequate coverage of reds. This display can efficiently reproduce 99.9% of Pointer’s gamut.
Why not just extend the three primaries? Mitsubishi has rear-projection laser TVs with really wide RGB gamuts. The reason SHARP didn’t take this approach is efficiency – the gamut is much larger than it needs to be. Another advantage of adding primaries is color reproduction redundancy, which can be exploited to have brighter reproduction at the same power consumption, lower power consumption with the same brightness, or improved viewing angle. The larger number of sub-pixels can also be used to greatly increase resolution (similarly to Microsoft’s “ClearType” technology). These advantages can be realized without losing the wide gamut.
The camera sends 10-bit XYZ signals at 30Hz to the display via the CameraLink protocol. The display does temporal up-conversion from 30 to 60 Hz as well as interpreting the XYZ signal.
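As an illustration of the kind of conversion the display has to perform (this is not SHARP’s algorithm, and the primary matrix below is invented for the example): with five primaries the XYZ-to-drive-values problem is underdetermined, and a simple non-negative least-squares solve picks one of the many valid combinations; the leftover freedom is the redundancy mentioned above.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: XYZ of R, G, B, Y, C at full drive (made-up values for illustration).
primaries_xyz = np.array([
    [0.45, 0.18, 0.15, 0.50, 0.20],
    [0.22, 0.60, 0.08, 0.55, 0.35],
    [0.02, 0.10, 0.80, 0.05, 0.45],
])

def drive_values(target_xyz):
    """One possible set of non-negative drive values reproducing target_xyz."""
    weights, residual = nnls(primaries_xyz, np.asarray(target_xyz, dtype=float))
    return np.clip(weights, 0.0, 1.0), residual

print(drive_values([0.5, 0.5, 0.4]))
```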
Q&A Session:
Question: Is the colorimetric camera available for purchase? Answer: yes, for 1M yen (about $13,000).
Question: 10 bits are not enough for XYZ, are they planning to address this? Answer: yes, they do plan to increase the bit-depth.
Question: what is the display resolution? Answer: They use a 4K panel and combine two pixels into one, cutting the resolution in half.
Is There Really Such a Thing As Color Space? Foundation of Uni-Dimensional Appearance Spaces
This talk was presented by Prof. Mark D. Fairchild of the Munsell Color Science Laboratory at the Rochester Institute of Technology.
Color is an attribute of visual sensation – not physical values. Color scientists seldom question the 3D nature of color space, but Prof. Fairchild thinks that it is more correct to think about color as a series of one-dimensional appearance spaces or scales, and not to try to link them together.
Color vision is only part of the visual sense, which is itself just one of five senses. Only in color vision is a multidimensional space commonly used to describe perception. All the other senses are described with multiple independent dimensions as appropriate, not with multi-dimensional Euclidean differences.
For example, taste has at least five separate scales: sweet, bitter, sour, salty, and umami. But there is no definition of “delta-Taste” which collapses taste differences into a single number. Smell has about 1000 different receptor types, and some have tried to reduce the dimensionality to about six such as flowery, foul, fruity, spicy, burnt, and resinous. Hearing is spectral – our ears can perceive the spectral power distribution of the sound. Touch might well be too complex to summarize in a single sentence.
Why should color vision be different? Perhaps researchers have been misled by certain properties of color vision such as low-dimensional color matching and simple perceptual relationships such as color opponency. The 3×3 linear transformations between color matching spaces really reinforce the feeling of a three-dimensional color space, but they have nothing to do with perception. Color scientists have spent a lot of effort looking for the “holy grail” of a global 3D color appearance space with Euclidean differences, to no avail.
Perhaps this is misguided and efforts should focus on a set of 1D scales instead. There have been examples of such scales in color science. The Munsell system has separate hue, value and chroma dimensions. Similarly, Guth’s ATD model of visual perception was typically described in terms of independent dimensions. Color appearance models such as CIECAM02 were developed with independent predictors of the perceptual dimensions of brightness, lightness, colorfulness, saturation, chroma, and hue. This was compromised by requests for rectangular color space dimensions which appeared as CIECAM97s evolved to CIECAM02. The NCS system treats hue separately from whiteness-blackness and chromaticness, though it does plot the latter two as a two dimensional space for each hue.
This insight leads to the hypothesis that perhaps color space is best expressed as a set of 1D appearance spaces (scales), rather than a 3D space, and that difference metrics can be effective on these separate scales (but not on combinations of them). The three fundamental appearance attributes for related colors are lightness, saturation, and hue. Combined with information on absolute luminance, colorfulness and brightness can be derived from these and are important and useful appearance attributes. Lastly, chroma can be derived from saturation and lightness if desired as an alternative relative colorfulness metric.
Prof. Fairchild has derived a set of color appearance dimensions following these principles. The first step is to apply a chromatic adaptation model to compute corresponding colors for reference viewing conditions (D65 white point, 315 cd/m2 peak luminance, 1000 lux ambient lighting). Then the IPT model is used to compute a hue angle (h) and then a hue composition (H) can be computed based on NCS. For the defined hue, saturation (S) is computed using the classical formula for excitation purity applied in the u’v’ chromaticity diagram. For that chromaticity, G0 is defined as the reference for lightness (L) computations that follow a “power plus offset” (sigmoid) function. Brightness (B) is Lightness (L) scaled by the Stevens and Stevens terminal brightness factor. Colorfulness (C) is Saturation (S) scaled by Brightness (B), and Chroma (Ch) is Saturation (S) times Lightness (L).
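The dependency chain at the end of that description can be summarized in a few lines. This is only a structural sketch with the hue, saturation and lightness values taken as given; the IPT hue computation, the excitation-purity saturation and the “power plus offset” lightness function are not reproduced here, and the example numbers are arbitrary.

```python
def appearance_attributes(hue_deg, saturation, lightness, terminal_brightness_factor):
    """Relationships between the attributes as described in the talk:
    B is L scaled by the Stevens terminal brightness factor, C = S * B, Ch = S * L."""
    brightness = lightness * terminal_brightness_factor   # B from L
    colorfulness = saturation * brightness                # C = S * B
    chroma = saturation * lightness                       # Ch = S * L
    return {"H": hue_deg, "S": saturation, "L": lightness,
            "B": brightness, "C": colorfulness, "Ch": chroma}

print(appearance_attributes(hue_deg=30.0, saturation=0.4,
                            lightness=0.7, terminal_brightness_factor=1.2))
```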
Prof. Fairchild plans to present his detailed formulation soon, and do testing and refinement afterwards.
HDR and UCS: Do HDR Techniques Require a New UCS Space?
This session was presented by Prof. Alessandro Rizzi from the Department of Information Science and Communication at the University of Milan. There was some overlap between this session and the “HDR Imaging in Cameras, Displays and Human Vision” course which Prof. Rizzi presented earlier in the week.
Colorimetry ends in the retinal cone outer segments; color appearance is at the other end of the human visual system. Appearance incorporates all the spatial processing of all the color responsive neurons. Thus color vision can be analyzed in two ways: bottom-up starting from the color matching response of retinal receptors accounting for pre-retinal absorption and glare (going through color matching tests, e.g. the CIE 1931 observer) or top-down starting from the color appearance generated by the entire human visual system (asking observers to describe the apparent distances between hues, chromas and lightnesses, e.g. the Munsell color space).
Recent work (“A Quantitative Model for Transforming Reflectance Spectra Into the Munsell Color Space Using Cone Sensitivity Functions and Opponent Process Weights”, 2003) has linked the two, solving for the 3-D color space transform that places LMS cone responses in the color-space positions measured for the Munsell Book of Color. The process includes a correction for veiling glare inside the eye, which causes the image on the retina to be different than the original scene intensities entering the cornea. The cone response is proportional to the logarithm of the retinal intensities, which (because of glare) is proportional to the cube root of scene intensities. This glare also limits the dynamic range of the retinal image. The link between cone responses and Munsell colors also involves a strong color-opponent process (creating signals differentiating opponent colors such as red-green or yellow-blue).
CIE L*a*b* also has a cube root response and opponent channel mechanism. L*a*b* handles the lightness component of HDR scenes with a two-component compression curve – the first component is a cube-root function in both lightness and chroma for high and medium light levels, and the second is a linear function for low light levels (the two components connect seamlessly). The sRGB and Rec.709 transfer functions are similarly constructed. CIE L*a*b* normalizes each of X, Y and Z to its maximum value over the image before further processing; this is equivalent to the way human vision effectively normalizes L, M and S cone responses (it processes differentials/ratios and not absolute values, as in Retinex theory). After normalization, the compression curve scales the large range of possible radiances into a limited range of appearances – 99% of possible lightnesses correspond to the top 1000:1 range of scene radiances – all remaining radiances (darker than 1/1000 of the white point) correspond to the bottom 1% of possible perceived lightness values. sRGB has similar behavior.
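The lightness part of that compression is easy to write down (standard CIE L* constants), and it also lets one check the 1000:1 claim numerically:

```python
import numpy as np

def cielab_lightness(Y_over_Yn):
    """CIE L*: cube-root segment for mid/high levels, linear segment for low
    levels; the two segments join continuously near Y/Yn = 0.008856."""
    t = np.asarray(Y_over_Yn, dtype=float)
    return np.where(t > 216.0 / 24389.0,
                    116.0 * np.cbrt(t) - 16.0,
                    (24389.0 / 27.0) * t)

# A scene value 1/1000 of the white point already maps to L* below 1,
# i.e. ~99% of the lightness scale covers the top 1000:1 of scene radiances.
print(cielab_lightness(1.0))      # 100.0
print(cielab_lightness(0.001))    # ~0.9
```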
Given these considerations, Prof. Rizzi does not believe that new uniform color spaces (UCSs) are needed for HDR imaging; existing spaces can handle the range that the human eye can perceive in a single scene (note that this analysis does not relate to intermediate images, such as HDR images used for IBL – UCSs are only used to describe the perceived colors in the final viewed image).
Digital HDR Color Separations
This session was presented by John McCann, an independent color and imaging consultant since 1996. Previously he led Polaroid’s Vision Research Laboratory for over 30 years, working on topics including Retinex theory, color constancy, very large-format photography, and perceptually-guided color reproduction. John is a co-author of the recently published book “The Art and Science of HDR Imaging”.
Many applications (HDR exposure bracketing, various computer vision and spatial image processing algorithms) need linear-light scene values. The JPEGs produced by cameras are very far from linear light; they are created with the intention of producing a preferred rendering of the scene, which looks pleasing but is not colorimetrically accurate. Regular color print and negative film were designed with a similar intent and produce similar results.
Although the sRGB standard specifies an encoding from scene values, and camera manufacturers follow some aspects of the sRGB standard in producing JPEGs, the processing differs in important ways from the sRGB encoding spec. The algorithms that perform demosaicing, color balance, color enhancement, tone scaling, and the post-LUT for display and printing create discrepancies between the sRGB output in practice and an idealized conversion of scene radiances to sRGB space.
Together with Vassilios Vonikakis (Democritus University of Thrace, Greece), John McCann did an experiment to measure these discrepancies. Images of a Macbeth ColorChecker chart were taken under varying exposures using three methods: digitally scanned traditional color separation photographs, standard JPEG images from a commercial camera, and “RAW* separations” from the same camera. Traditional color separation photographs use R, G and B filters and panchromatic black-and-white film to create separate single-channel R, G and B images that are combined into a single color image. “RAW* separations” are the authors’ name for linear RGB values generated from partially processed RAW camera data (read with LibRaw’s “unprocessed” function). This data does not even include demosaicing – it is a black-and-white image with the mosaic pattern (e.g., Bayer) in it. The authors did their own, carefully calibrated processing on these images to create normalized, linear RGB data.
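As an illustration of the starting point for such processing (a sketch only, using the rawpy binding to LibRaw as a stand-in for the authors’ pipeline, not their actual code): read the undemosaiced mosaic, subtract the black level, and normalize to linear values.

```python
import numpy as np
import rawpy  # Python binding for LibRaw

def linear_mosaic(path):
    """Return the undemosaiced sensor mosaic as normalized linear values in [0, 1]."""
    with rawpy.imread(path) as raw:
        mosaic = raw.raw_image_visible.astype(np.float64)  # Bayer-pattern B&W image
        black = np.mean(raw.black_level_per_channel)       # per-channel blacks averaged
        white = float(raw.white_level)
        linear = (mosaic - black) / (white - black)         # normalize to [0, 1]
        return np.clip(linear, 0.0, 1.0)
```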
The photographic separations were the most accurate – the chromaticity of the Macbeth chart squares remained very stable across all the exposure values. The JPEG images had the largest chroma errors – the chromaticities of the colored Macbeth squares varied greatly with exposure; this is part of the “preferred rendering” performed by these cameras to make the resulting image look good. The RAW* separations were similar to film (slightly less stable chromaticities, but close).
The conclusion is that any algorithm needing linear scene data should use RAW data with most of the in-camera processing turned off, and apply its own carefully calibrated processing.