Why don't people care about Wide Gamut?

Well, I do not have a display with an artificial gamut, only one that covers 'only' ~133% of the 1953 NTSC gamut area (as expressed in the CIE 1976 u'v' space, vs ~75% for sRGB), so my observations might not be very accurate. I would gladly test this on a monitor with a wider gamut, but somehow this is the largest-gamut monitor available to buy... unlucky me ;)



This is Rec. 2020, better but not by much:
[Image: CIE 1931 xy chromaticity diagram of the Rec. 2020 gamut]


And when I view this image in native gamut it actually seems to have less banding! When I use the saturation slider in the OSD custom mode to make it less saturated, I see more banding. How is that even possible? The monitor has a 10-bit panel and high internal precision, and this banding is definitely not caused by crappy internal processing, because if I use 'convert to greyscale' in IrfanView, far more banding becomes visible, with each new band spanning several of the original bands (the bands of one color are very wide). That is proof that the monitor is indeed calculating with far more than 8-bit precision, because the width of the bands stays the same in the monitor's greyscale mode vs. its native gamut.

I do not want to jump hastily to strong conclusions, but it seems to be pretty much the reverse of what you are suggesting it should be... So if the belief that a wide gamut display makes images more banding-prone is false, we can treat the issue of representing smaller color spaces in bigger ones on its own.

So I created the largest possible color space there could ever be (0.0001 instead of 0 because of a division-by-zero error):
R x 1.0000 y 0.00001
G x 0.0001 y 1.00000
B x 0.0001 y 0.00001

which gave this matrix
0.4338944, 0.3762235, 0.1898821, 0,
0.2126390, 0.7151785, 0.0721825, 0,
0.0177555, 0.1094555, 0.8727890, 0,
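
For reference, here is a minimal sketch of the standard way such an RGB-to-XYZ matrix is derived from the xy chromaticities of the primaries. It assumes numpy and a D65 white point (which I did not state above), so it won't necessarily reproduce the matrix my tool printed exactly:

```python
# Sketch: standard derivation of an RGB-to-XYZ matrix from xy chromaticities.
# Assumes numpy and a D65 white point; the result may differ slightly from the
# matrix shown above depending on the white point and rounding used.
import numpy as np

def xyz_from_xy(x, y):
    # xy chromaticity -> XYZ vector with Y normalized to 1
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

def rgb_to_xyz_matrix(r_xy, g_xy, b_xy, white_xy):
    P = np.column_stack([xyz_from_xy(*r_xy), xyz_from_xy(*g_xy), xyz_from_xy(*b_xy)])
    W = xyz_from_xy(*white_xy)
    S = np.linalg.solve(P, W)   # scale each primary so that R = G = B = 1 maps to white
    return P * S                # column-wise scaling

# The "largest possible" primaries from above (tiny offsets avoid division by zero),
# with an assumed D65 white point.
M = rgb_to_xyz_matrix((1.0, 0.00001), (0.0001, 1.0), (0.0001, 0.00001), (0.3127, 0.3290))
print(np.round(M, 7))
```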

And while it made your example exhibit serious banding, it is still less banding than converting this image to greyscale! So it seems that converting any video to greyscale adds more banding than the worst-case scenario, which is encoding an sRGB signal in an impossibly wide gamut that Rec. 2020 doesn't even come close to. Now, do you see that much added banding when going to greyscale in real-life images? There doesn't seem to be any banding, be it silky smooth skies or other fine gradients you normally see. Again, unlike any gamma manipulation, which immediately produces severe banding in those scenarios.

You can test whether this artificial gamut conversion is that bad yourself; you will just have to install MPC-HC if you do not have it already. Like I said before, if a display had this gamut natively it would show even less banding, because our perception treats colored images as more mentally blurred.

So where is the issue, if all real-life tests done on real images show there is none, and even artificial images that already look like crap in sRGB hardly show any deterioration in image quality when going to a much wider gamut that Rec. 2020 doesn't even come close to, using a really bad color space conversion algorithm that could easily be improved with dithering? :confused:
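
To be clear about what I mean by "could easily be improved with dithering", here is a rough sketch (assuming numpy; my own illustration of the idea, not the algorithm any particular renderer uses) of adding a fraction of an LSB of noise before rounding to 8 bit, so the conversion error turns into noise instead of bands:

```python
# Sketch: simple random dithering before 8-bit quantization of a converted image.
import numpy as np

def quantize_8bit(img_float, dither=True):
    """img_float: array of values in [0, 1] after the colorspace conversion."""
    levels = img_float * 255.0
    if dither:
        # +/- 0.5 LSB of uniform noise before rounding
        levels = levels + np.random.uniform(-0.5, 0.5, size=levels.shape)
    return np.clip(np.round(levels), 0, 255).astype(np.uint8)

# Example: a shallow ramp (worst case for banding) quantized with and without dither.
ramp = np.linspace(0.0, 0.1, 1920)
banded = quantize_8bit(ramp, dither=False)    # visible steps
dithered = quantize_8bit(ramp, dither=True)   # steps broken up into noise
```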

PS: the "10 bit is necessary for wide gamut" myth:
[Image: "Busted"]
 
I think the discussion has reached a deadlock - maybe I can help soothe feelings. First of all, it is essential to make a clear distinction between banding and tonal breaks. And we must clearly define the workflow and the components involved.

However: an 8-bit (per channel) signal can't provide completely smooth color transitions, independent of any thoughts about color gamut. But a precise transformation - depending on the situation we must also consider some kind of black point compensation established during (factory) calibration or via a CMM - with appropriate output will at least avoid tonal breaks. Further optimizations for delivering finished end-user material (apart from dithering already at this stage) include a perceptual gamma correction (L*) of the high-bit source prior to its 8-bit conversion. That only helps if it can be reproduced losslessly, of course.
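
As an illustration of that L* step, a minimal sketch only (assuming numpy and linear relative luminance as input; real products differ in detail):

```python
# Sketch: perceptual (L*) encoding of high-bit linear luminance before an 8-bit
# conversion, so code values are spaced roughly evenly in perceptual terms.
import numpy as np

def cie_lstar(Y):
    """CIE 1976 lightness L* (0..100) for relative luminance Y in [0, 1]."""
    delta3 = (6.0 / 29.0) ** 3
    return np.where(Y > delta3, 116.0 * np.cbrt(Y) - 16.0, (29.0 / 3.0) ** 3 * Y)

def to_8bit_via_lstar(Y_linear):
    code = cie_lstar(np.clip(Y_linear, 0.0, 1.0)) / 100.0 * 255.0
    return np.round(code).astype(np.uint8)
```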

Establishing a stable and defined 10-bit feed provides a visible advantage even for non-experts. But I wouldn't associate that wish mainly with constraints regarding the source and/or destination color gamut. When demanding even higher bit depths for the signal feed - I'm not referring to material bit depth during retouching or display engine/CMM precision - we should keep the law of diminishing returns in mind.

A last comment regarding gamut conversion: all these RGB-to-RGB transformations are comparatively "forgiving". Of course, in a scenario with a smaller destination gamut there will be clipping of out-of-gamut colors (unless a perceptual intent is baked into non-matrix profiles), but that can be taken into account during retouching (which shouldn't be based on 8-bit material). When working with CMYK transformations things get much more complicated (just think of the black generation) and the limits of the standard ICC workflow (device-dependent color => PCS (including gamut mapping) => device-dependent color) become visible. Often DeviceLink profiles are necessary to achieve the desired results.
 
And when I view this image in native gamut it actually seems to have less banding!

The only reason I can fathom that the same image would have more banding when viewed in a smaller gamut is processing artifacts: either rounding errors (quantization artifacts), or a reduction in the available bits of color resolution (you can still have lower color resolution but very high internal precision).

When I use the saturation slider in the OSD custom mode to make it less saturated, I see more banding. How is that even possible? The monitor has a 10-bit panel and high internal precision, and this banding is definitely not caused by crappy internal processing, because if I use 'convert to greyscale' in IrfanView, far more banding becomes visible, with each new band spanning several of the original bands (the bands of one color are very wide). That is proof that the monitor is indeed calculating with far more than 8-bit precision, because the width of the bands stays the same in the monitor's greyscale mode vs. its native gamut.

Probably because our JNDs (just-noticeable differences) are smaller at lower saturations.


So I created the largest possible color space there could ever be (0.0001 instead of 0 because of a division-by-zero error):
R x 1.0000 y 0.00001
G x 0.0001 y 1.00000
B x 0.0001 y 0.00001

which gave this matrix
0.4338944, 0.3762235, 0.1898821, 0,
0.2126390, 0.7151785, 0.0721825, 0,
0.0177555, 0.1094555, 0.8727890, 0,

And while it made your example exhibit serious banding, it is still less banding than converting this image to greyscale!

So you've just observed that increasing the color gamut increased the banding in my image. Which is exactly what I said. (unless I've misunderstood you).

As for grayscale, when you converted it, did you check the RGB values of the successive bands? Were they neighbours in the addressable space (i.e. [14 14 14] next to [15 15 15])? Also keep in mind that JNDs are different in grayscale than they are in other areas of the gamut.
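
If it helps, here is a quick sketch of that check (assuming Pillow/numpy and a hypothetical file name "gradient.png"): walk one row of the gradient and print each place the RGB value changes.

```python
# Sketch: list the successive band values along the first pixel row of a gradient.
import numpy as np
from PIL import Image

row = np.asarray(Image.open("gradient.png").convert("RGB"))[0]  # first pixel row, shape (W, 3)
change_points = np.where(np.any(np.diff(row.astype(int), axis=0) != 0, axis=1))[0]
for i in change_points:
    # e.g. "x=120: [14, 14, 14] -> [15, 15, 15]" would mean neighbouring code values
    print(f"x={i}: {row[i].tolist()} -> {row[i + 1].tolist()}")
```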
 
I think the discussion has reached a deadlock - maybe I can help soothe feelings. First of all, it is essential to make a clear distinction between banding and tonal breaks.

Can you elaborate on this difference?

Also, do you agree that, all else being equal, a larger color gamut contains more JNDs, and therefore requires a larger minimum number of discrete colors than a smaller color gamut in order to achieve smooth color transitions?
 
Can you elaborate on this difference?
As I said: an 8-bit (per channel) signal can't provide completely smooth color transitions - even if we presume high-bit source material and a perceptual gamma correction before the bit depth conversion (without dithering). "Tonal breaks", in the way I have introduced the term (so it is of course very context-specific), refers to a loss of tonal range during processing of the material.

Also, do you agree that, all else being equal, a larger color gamut contains more JNDs, and therefore requires a larger minimum number of discrete colors than a smaller color gamut in order to achieve smooth color transitions?
With respect to the tense atmosphere in this thread I don't want to heat it up further. There is a correlation - but the advantages of a >8-bit input signal bit depth are visible beyond all thoughts about color gamut. Therefore it's kind of an academic issue.
 
Yes, your image converted to greyscale has very large banding, successive bands differ by exactly one level of brightness, and the image looks exactly the same on an LCD and a CRT. On the other hand, the greyscale my monitor can produce has exactly the same number of bands as the colored image, and those bands are perceptually more visible as banding artifacts than in sRGB, and even less so in wide gamut. Those are not big differences, but they show the reverse order from what it would at first seem there should be. It also proves that my display has more than 8-bit precision.
Bad internal processing should produce bands of uneven brightness, but they are always very nice and even, so that is not it. Our eyes are the reason: we just do not differentiate colors as finely as we can differentiate greyscale.

As for the lossy nature of squeezing sRGB into a wide gamut: it is obvious it is a lossy process :)
What is apparently not obvious is the amount of this loss, the amount of banding it produces, which is nowhere near as bad as it is advertised.

There is no real reason to continue this topic; I tested everything there was to test and my conclusions are final. You can do the tests yourself if you want and have a wide-gamut monitor with proper sRGB emulation to do the comparison, which you obviously do have... Do you? :rolleyes:
 
What is apparently not obvious is the amount of this loss, the amount of banding it produces, which is nowhere near as bad as it is advertised.

I think we're arguing different things here. I'm not at all concerned about the banding produced by conversion artifacts.

So summarizing:
- wide gamut in itself does not need more bit depth than sRGB

This is the only thing I have issue with. It was, as sailor moon points out, an academic issue - a general principle. 9 or 10 bits may well be sufficient for wide gamut. My point is simply this:

Say your wide gamut display can handle completely smooth transitions from any color to any color with, say, a 10-bit framebuffer ((2^10)^3 = 1.07 billion addressable colors to span the gamut) and 16 bits of precision ((2^16)^3 possible shades to choose from among these 1.07 billion colors). Suppose that with anything fewer than 1.07 billion colors you start to see banding between some areas of the gamut, so 1.07 billion is the minimum number required.

If you take a radically smaller gamut, then less than 1.07 billion colors are required. That's the only point I've been trying to make here.

It doesn't matter how much precision you have - you could have a trillion shades to choose from, but if you only have a 2 bit frame buffer (64 colors defining the entire gamut), you will get banding in, for example, sRGB.

But 64 colors may be sufficient for smooth transitions in a tiny gamut.
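
For the record, the counts above are just powers of two - a quick sanity check:

```python
# Plain arithmetic behind the numbers used above.
print((2 ** 10) ** 3)   # 1073741824      -> ~1.07 billion addressable colors (10-bit framebuffer)
print((2 ** 16) ** 3)   # 281474976710656 -> shades available at 16 bits of precision
print((2 ** 2) ** 3)    # 64              -> a 2-bit framebuffer spans the gamut with only 64 colors
```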
 
Not superior if, in the real world, most of the content displayed is not wide gamut aware and sRGB emulation in most implementations sucks. There are many things that are superior on paper... but shattered by the harsh reality of sucky implementations. And unless at least half of the implementations are flawless (software included), I cannot claim, as you did, that "All monitors should be Wide Gamut". For now this would make things worse.

Wide gamut is not evil. What is evil is monitors not capable of showing the different color spaces correctly.
Wide gamut is superior: it can show you more colors.

Next time buy a monitor with good sRGB emulation, or buy a TV with good contrast and Rec. 709 support. You should probably also buy phones with better displays.
 
If you take a radically smaller gamut, then less than 1.07 billion colors are required. That's the only point I've been trying to make here.
Absolutely not.
If you take a radically smaller gamut but with three color primaries, you still need 10+ bit precision, because you already need 10+ bit precision for greyscale.

It doesn't matter how much precision you have - you could have a trillion shades to choose from, but if you only have a 2 bit frame buffer (64 colors defining the entire gamut), you will get banding in, for example, sRGB.

But 64 colors may be sufficient for smooth transitions in a tiny gamut.

That might make some sense if you had a YUV-type encoding with independent bit depth for luma, say 10 bit to be on the safe side, and then asked how much bit depth is required to represent chroma, which is saturation and hue. In that case the chroma bit depth requirement would definitely depend on the gamut: with a tiny gamut, spending a lot of bits wouldn't be justifiable, just as a very wide gamut would need more bit depth to be represented correctly. That is exactly what is done by all the compression algorithms out there, which also decrease chroma resolution significantly. They can, because our eyes are really bad at seeing subtle differences in color.
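
Here is a rough sketch of that luma/chroma split (assuming numpy and full-range BT.709 coefficients; the 6-bit chroma figure is purely illustrative, not a recommendation):

```python
# Sketch: keep luma at full bit depth and quantize chroma more coarsely.
import numpy as np

def rgb_to_ycbcr(rgb):
    """rgb: float array (..., 3) in [0, 1]; returns full-range BT.709 Y, Cb, Cr."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556   # in [-0.5, 0.5]
    cr = (r - y) / 1.5748   # in [-0.5, 0.5]
    return y, cb, cr

def quantize01(x, bits):
    levels = (1 << bits) - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

rgb = np.random.rand(8, 8, 3)          # stand-in for an RGB image in [0, 1]
y, cb, cr = rgb_to_ycbcr(rgb)
y_q = quantize01(y, 10)                # 10-bit luma
cb_q = quantize01(cb + 0.5, 6) - 0.5   # 6-bit chroma (illustration only)
cr_q = quantize01(cr + 0.5, 6) - 0.5
```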

But for displays we are using RGB, and you already need at least 10 bit for smooth greyscale gradients; to go to color you triple that, because from one white pixel you go to three red, green and blue subpixels, and each of those has exactly the same bit depth requirement.

Whether the gamut is small or wide, you still need to give 10+ bit to each subpixel, or otherwise you will get banding just as you would with a black-and-white display. So it's not that we need 10 bit for Rec. 2020; we need it for any gamut there is. And I assure you that if we had a really smooth greyscale, then going to color we would still have smooth color transitions. It's only because we do not have that luxury in the first place that we cut bits in half.

Besides, 'number of colors' is a concept that tells us how many permutations of colors we can get, and it has nothing to do with gamut. Sure, with a smaller gamut those colors will be closer together, but that doesn't mean you can lower the bit depth, because lowering the bit depth will inevitably lead to banding due to fewer levels of brightness, so you still need the same number of bits. Now, increasing the gamut does make the differences between colors larger, and that is what we discuss here, but the difference is not terrible because our eyes are not that sensitive to color anyway. Most of the time they only pick up brightness differences as banding, not chroma differences, if those are sufficiently small. In the picture you created this seems to be exactly the case, and the perceived banding is caused by differences in luma, not chroma. There are tests on the web for chroma-discerning ability, and they show that it can become an extremely difficult task to differentiate between similar colors if their brightness is the same.

So once again: the correlation between gamut and bit depth is insignificantly low, and the two should not be linked together the way they usually are. Bit depth has to do almost exclusively with brightness. And lastly: 8 bit is pretty much as crappy for Rec. 2020 as it already is for Rec. 709.
 
Ah, ok, I see where the confusion in our discussion lies. Yes, you're absolutely correct that even with a tiny gamut you need more than 8 bits of color depth for grayscale, assuming of course the luminance range is large enough to require this; between ~0 and 100 nits, 8 bits is clearly insufficient. I should have kept this in mind during the discussion.

So yes, even if you have a tiny gamut, if your luminance range is large enough, you will get banding along colors that differ in luminance even if you don't get banding along colors that differ purely in chromaticity.


It's often said that we're more sensitive to luminance variations than chromatic variations (hence the chroma subsampling schemes). This isn't quite the case: we have more spatial acuity for luminance variations than for chromatic variations. In other words, we're better able to distinguish differences in luminance across space than differences in chromaticity across space. To say that we're more sensitive to changes in luminance than to chromaticity is somewhat meaningless, as we'd have to find some way to equate them.

However, we can use practical considerations here. For a normal HD display that goes from ~0 to 100 nits, we can ask two questions:

a)
What is the maximum number of JNDs that exist, for any chromaticity, between 0 and 100 nits?

b)
What is the maximum number of JNDs that exist, for any given luminance, between the gray point and any point on the spectral locus?

If it turns out that the answer to a) is greater than the answer to b), then yes, the grayscale is always going to be the "perceptual bottleneck": your bit depth requirement is going to solely reflect this number and will have nothing to do with how wide the gamut is. But this is an empirical question, and it may turn out that in order for the grayscale to be the perceptual bottleneck you'd need a very high luminance range. But I do agree that once this luminance range is in effect, gamut width will not change bit depth requirements.


Incidentally, grayscale conversions aren't a good way to assess relative sensitivity. In the gradient provided earlier, I had kept the Lightness level constant across all bands (in HSL space). When I measured the actual luminance of the bands, it started at 1.91 nits at the red end, dropped to 1.24 nits in the center, and rose to 1.62 nits at the purely gray end. When I used IrfanView to convert it to grayscale, the corresponding measured luminances at those points were 0.5 nits at the (previously saturated) red end, 0.98 nits in the center, and 1.62 nits at the unsaturated end.

So IrfanView's conversion to grayscale doesn't even preserve the right direction of luminance change between the beginning, middle, and end of this gradient.

All this is to show that converting to grayscale is not a good way to compare our relative sensitivity to banding across luminance vs chromaticity. For example, if I had created a gradient that was truly equiluminant across its range (varying only in chromaticity), a "proper" grayscale conversion would result in a uniform gray patch. But in order to do a proper grayscale conversion, the software doing the conversion would need to have a model of how the display is actually calibrated.
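
As a sketch of what a "proper" conversion would involve, assuming the display behaves like sRGB (which is exactly the kind of display model the converting software would need):

```python
# Sketch: luminance-preserving grayscale conversion for an sRGB-like display.
# Linearize, take Rec. 709 relative luminance, re-encode.
import numpy as np

def srgb_to_linear(c8):
    c = np.asarray(c8, dtype=float) / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(y):
    y = np.clip(y, 0.0, 1.0)
    v = np.where(y <= 0.0031308, 12.92 * y, 1.055 * y ** (1 / 2.4) - 0.055)
    return np.round(v * 255.0).astype(np.uint8)

def grayscale_preserving_luminance(rgb8):
    """rgb8: uint8 array (..., 3). Returns a gray image with the same luminance."""
    lin = srgb_to_linear(rgb8)
    Y = 0.2126 * lin[..., 0] + 0.7152 * lin[..., 1] + 0.0722 * lin[..., 2]
    return linear_to_srgb(Y)
```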
 
Conversion to grayscale uses weights for the RGB values, and for red the weight is much lower than for green; because of this, after conversion the image goes from darker gray to lighter, while in colour it might seem the opposite. Obviously this conversion is not something to dwell on too much, but it is a good example of how such a popular process, which most people consider banding-free, actually produces more banding than conversion of an sRGB image to an artificial gamut wider than anything we will ever use. There is no reason to use primaries that far apart just to cover the whole perceivable gamut, and because of that there is not much to worry about when representing sRGB in gamuts that may be big compared to sRGB but are much smaller than this maximum mathematically attainable one.

IMHO, and according to my tests, a) is indeed much greater than b).

As for whether brightness or color differences are more perceptually visible, there is actually a simple answer: the CIE XYZ colorspace. In it, if I am not mistaken (I am not 100% sure it is XYZ; there are a few CIE representations), the same distance in any direction should produce the same perceptual difference, because CIE spaces are constructed exactly around how humans see colors and brightness levels.
 
the CIE XYZ colorspace. In it, if I am not mistaken (I am not 100% sure it is XYZ; there are a few CIE representations), the same distance in any direction should produce the same perceptual difference
The CIE XYZ color space isn't perceptually uniform. Even CIELAB doesn't achieve this target despite major improvements: the simple Euclidean color distance overvalues color differences in saturated colors compared to neutral colors. In order to attenuate this, modified formulas were introduced that weight luminance, hue and saturation differences with (reference-)color-dependent factors (the CIEDE2000 formula even rescales the a* axis).
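
For illustration, a minimal sketch (assuming numpy) of the plain CIE76 distance referred to above; CIE94/CIEDE2000 wrap reference-color-dependent weights around essentially the same terms:

```python
# Sketch: CIE76 color difference (plain Euclidean distance in CIELAB).
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triples."""
    lab1, lab2 = np.asarray(lab1, float), np.asarray(lab2, float)
    return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))

print(delta_e_76((50, 0, 0), (52, 1, -1)))  # a near-neutral pair, ~2.4
```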

However: the dispute is entirely superfluous - not only because the scenario (what is reproduced under what conditions) hasn't been exactly defined at all. It remains valid to note that a tonal range of 8 bit per channel isn't enough to ensure visually seamless transitions in general - even if we "only" consider luminance variations reproduced with an ideal distribution of tonal density.

conversion to grayscale [...]
should be done via a transformation to CIELAB, omitting the color information (a*, b*).
 
I opened this thread sincerely because in the last year I had no time to play with photos.
I don't have time to play with cameras and photos anymore, so is my monitor the right choice today?

I have an Eizo S2433W and I'm amazed by its contrast and its incredible black level when calibrated with my DTP94 colorimeter.

My PC usage has varied, and now I use this PC for developing (writing code), some films, some video games, and the internet.

For the internet and any other color-managed environment I solved it using an ICC profile, thanks to the X-Rite DTP94, but is it worth continuing with a wide gamut monitor?

Should I buy a standard gamut (sRGB) monitor instead? Will I see any improvements in my real-life scenario?
 
You should only buy a wide gamut monitor, specifically an Adobe RGB gamut monitor, if you're planning on doing professional or prosumer printing. Otherwise, a wide gamut monitor would not provide any benefit.

And, yes, to the person upthread, there are "standard gamut" 10-bit panels.
 
You should only buy a wide gamut monitor, specifically an Adobe RGB gamut monitor, if you're planning on doing professional or prosumer printing. Otherwise, a wide gamut monitor would not provide any benefit.

Are you sure about this sentence?
I can see many more colors than on a standard gamut monitor when I shoot a photo with my camera, that much is sure.
 
One thing is what you can "see". Another is what most common apps/games can "output". In the latter case, more often than not, many of them have problems with non-sRGB monitors - you'll still "see" a much worse, color-mismatched picture than on a non-wide-gamut one.
 