NVIDIA or AMD for color accuracy and calibration features

Frode B. Nilsen
What exactly do you need post-calibration banding artifacts for?

And what is this 'filter' thing you are talking about? AMD has no filter to remove banding; it just dithers the 8-bit output if I use 16-bit LUTs. I can still use 8-bit LUTs and it will look exactly as bad as it does on Intel/NV.

Wikipedia:
Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images. Dither is routinely used in processing of both digital audio and video data, and is often one of the last stages of "mastering" audio to a CD.

A typical use of dither is converting a greyscale image to black and white, such that the density of black dots in the new image approximates the average grey level in the original.
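
(For illustration only, a minimal sketch of that greyscale-to-black-and-white case, using Floyd-Steinberg error diffusion; the ramp size and threshold are arbitrary and not tied to any GPU driver.)

```python
import numpy as np

def dither_to_1bit(gray):
    """Quantize a float image in [0, 1] to {0, 1}, diffusing the quantization
    error to neighbouring pixels (Floyd-Steinberg)."""
    img = gray.astype(float)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# A horizontal grey ramp: after dithering, the density of black/white dots
# approximates the local grey level even though every pixel is 0 or 1.
ramp = np.tile(np.linspace(0.0, 1.0, 256), (64, 1))
bw = dither_to_1bit(ramp)
print(ramp.mean(), bw.mean())   # the two averages come out very close
```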

It's just adding noise to hide a defect. I'm not sure I want to hide those defects in the first place. And with such a noise filter you will also add noise to things that are not defects.

So, XoR:
What exactly do you need to add noise to everything on your screen for?

We are just back where we started: AMD or Nvidia most probably does not matter much for calibration.
 
monitor gamma 1.0, card gamma 0.5, 8-bit LUT: 3zlORw4.jpg
monitor gamma 1.0, card gamma 0.5, 16-bit LUT: poE74Aq.jpg
monitor gamma 0.5, card gamma 0.5, 8-bit LUT: wbuOeLc.jpg
monitor gamma 0.5, card gamma 0.5, 16-bit LUT: G4y6IpF.jpg
monitor gamma 1.0, card gamma 1.0: lQ9GwYD.jpg
monitor gamma 1.0, card gamma 0.5, 8-bit LUT: 3kbA9UD.jpg
monitor gamma 1.0, card gamma 0.5, 16-bit LUT: LhJaA0H.jpg
monitor gamma 2.0, card gamma 1.0: QQLtHmZ.jpg


One thing to note: the camera did not properly capture how the dithered image really looks. I would probably need to tinker with manual settings or something. It still shows that the improvement is there.

I counted individual bands: for every 1 band at 8-bit there are 4 at 16-bit, so AMD does its dithering at exactly 10-bit resolution.
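
(A quick check of that band-counting arithmetic, with nothing vendor-specific assumed:)

```python
import math

bands_at_8bit = 1
bands_with_16bit_lut = 4                  # counted from the photos above
extra_bits = math.log2(bands_with_16bit_lut / bands_at_8bit)
print(8 + extra_bits)                     # -> 10.0, i.e. ~10-bit effective resolution
```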

Obviously those photos are an extreme case and no one needs to apply that much gamma correction in dark tones for it to be that visible, but it is good to see how the dithering actually looks. And it looks pretty good: AMD seems to use subpixel rendering for this dithering, so it effectively runs at three times the screen resolution. It is also very random.

Going up from gamma 0.5 in 0.1 steps, this dithering stops being noticeable at normal viewing distance at gamma 1.2. Normal gamma is 2.2, which is much, much higher, and for me personally the Lagom LCD gradient looks exactly the same with monitor gamma 2.0 and Radeon gamma 0.5 as with monitor gamma 4.0 and Radeon gamma 1.0. When I set monitor gamma to 2.0 and card gamma to 0.5 but in 8-bit, there is a visible loss of steps.

I used CCC to set the 16-bit gamma and RivaTuner to set the 8-bit gamma. I confirmed it is 8-bit by setting monitor gamma to 0.5, which would reveal even the most subtle dithering. The gamma patterns also look the same at the same gamma settings as on an Intel iGPU, which, like Nvidia, uses 8-bit.

As for the EDID gamut change, here are photos of my FW900: without EDID and with EDID
and 2090UXi: without EDID and with EDID
The difference is not very drastic because these are fairly well implemented sRGB monitors. On some cheap W-LED it will be bigger. In real life the difference is also more visible, especially on some colors. It is not a substitute for a real CMS, but it is basically the only valid way to get a CMS-like experience in most programs and games, and it is very good for a multi-monitor setup because each monitor is color corrected independently, so it is definitely a nice feature to have.

And I am out of this thread. I won't waste my time convincing idiots that banding is not a desirable feature. Life is too precious to spend arguing with someone who is obviously trolling.

I did those comparisons as proof that AMD does have the features I talked about. If anyone needs more proof than that, that person is a troll, imho.
 
...
And I am out of this thread. I won't waste my time convincing idiots that banding is not a desirable feature. Life is too precious to spend arguing with someone who is obviously trolling. ...

Oh boy! You still do not seem to get that yes, in a lot of use cases, the expected result a calibration produces, like reduced color resolution, is best left as it is. You still do not seem to get that filters have both positive and negative effects. You only rave about the positive ones, flat out denying even the existence of any negative ones.

It's that denial, and factual distortion, that is being argued. Nothing else. No need for any name calling.

Thanks for sharing the pictures. They show what dithering is known to excel at. Its limitations, they do not.

I actually went on for hours with that gradient, and with a gradient I made myself. In the end I changed settings on my monitor, and actually ended up changing quite a bit. The results improved a lot. Banding is now only visible to my eye beyond about 200% enlargement (of the linked gradient), and then only slightly. That is roughly the equivalent of filling a 1080p screen before this becomes a real issue, in a worst-case scenario.

And I have gained more knowledge about the negative aspects of calibration, both on the PC and in the monitor, and some new tools to minimize them.

I would never have achieved that if you had not brought this up. So, thanks.

As for the name calling, that is not for me.
 
As I see it, calibration should produce a result in which all gradation steps are preserved, so that gradients do not band any more than they did before.
So I ask again: what do you need banding for?

Say you make a picture to publish on the web but have a monitor with a bad gamma curve, which you then correct, getting banding as a result. In what way does having banding help you make a better picture for the web or any other destination?

Do I, as a consumer of this picture, have banding? I do not, and if you, for example, tinker with the picture to remove banding, you will add noise or do a fool's job, because I do not have it. If you see banding on your monitor but no one else seeing the same picture does, then what is the point? No one will, because everyone else has banding-free displays, and if they do have banding then it is their banding and has nothing to do with yours.

If we all had your monitor, with exactly the same banding, it would make sense to leave it in, but we do not have the same monitors and the same calibration curves. We do, however, all have 8-bit displays, so the limitations that come from that remain, and they remain even with an AMD card doing 10-bit dithering, because you still produce 8-bit images. The dithering is only for the gamma correction, so you do not get banding after calibration: 8-bit still gives 256 steps of gradation, not 256 minus the steps eaten by the imperfections of your monitor.

Besides, if you require the luminance of a pixel to be, say, exactly 10.4 and it gets trimmed to 10, you also lose a tiny amount of accuracy. Not much, but it happens.

There is absolutely no advantage to having random, uncorrected banding. It looks bad and can make you believe there is bad gradation in places where the gradation is just fine.
 
Ideally, if the 1D LUT is applied and processed at a higher bit depth, there should be no perceivable loss in color resolution.

I think it is pretty silly if calibration always introduces banding.
 
I'll check next time I have time in the lab; I recently linearized the gamma of our VPIXX display using a Mac with OS X. The output definitely had more than 8-bit precision, so I'll check what video card it is. And I'm pretty sure I can check on some PCs with Nvidia cards.

Turns out I was wrong about this. I just did some measurements: an Nvidia GeForce GT 650M on OS X, using what I believe is a Mini DisplayPort to DisplayPort to dual-link DVI-D connection. Definitely 8-bit precision.

Wonder if this is a driver limitation.
 
Turns out I was wrong about this. I just did some measurements: an Nvidia GeForce GT 650M on OS X, using what I believe is a Mini DisplayPort to DisplayPort to dual-link DVI-D connection. Definitely 8-bit precision.
Wonder if this is a driver limitation.
From my previous testing, it looks like it's a driver limitation.

If the output is 8-bit via HDMI, and the software is 8-bit, I get banding if I use any LUT-based calibration features.
If the output is 12-bit via HDMI, and the software is 8-bit, there is no banding.

If the output is set to 8-bit, and software tries to output a 10-bit or greater signal (madVR and Alien: Isolation were tested) then NVIDIA seems to process in 10-bit and use temporal dithering when converting to 8-bit, reducing banding.
This output is not as smooth as when the desktop is set to a 12-bit output, since my display has a 10-bit native panel (and accepts a 12-bit input) but it is smoother than processing in 8-bit.

What NVIDIA needs to do is use a minimum of 10-bit internal precision (ideally 16-bit) and dither that to an 8-bit output.

Ideally, if the 1D LUT is applied and processed at a higher bit depth, there should be no perceivable loss in color resolution.
I think it is pretty silly if calibration always introduces banding.
And dither is the only solution for this.
If you just use rounding or truncation, you end up with banding even if you're processing in 16-bit.
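
(A rough sketch of that point, with made-up values rather than any vendor's actual pipeline: two source codes whose corrected values sit less than one 8-bit step apart collapse to the same code under plain rounding, while random dither preserves the difference in the patch average, which is what the eye sees.)

```python
import numpy as np

rng = np.random.default_rng(0)

def to_8bit(values, dither):
    """Quantize values on a 0..255 scale to integer codes, optionally adding
    +/-0.5 of uniform random dither before rounding."""
    values = np.asarray(values, dtype=float)
    if dither:
        values = values + rng.uniform(-0.5, 0.5, size=values.shape)
    return np.clip(np.round(values), 0, 255)

# Two neighbouring corrected values that are less than one 8-bit step apart
# (hypothetical numbers, e.g. the output of a 16-bit LUT scaled to 0..255).
a, b = 100.2, 100.4
patch = np.ones(100_000)   # a flat patch of each value

for dither in (False, True):
    mean_a = to_8bit(a * patch, dither).mean()
    mean_b = to_8bit(b * patch, dither).mean()
    print(f"dither={dither}: patch A averages {mean_a:.2f}, patch B averages {mean_b:.2f}")

# Without dither both patches render as exactly 100 (one visible band);
# with dither their averages stay near 100.2 and 100.4, so the levels remain distinct.
```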
 
...so I ask again: what do you need banding for?...

I never said I needed the banding. Banding is not a real-world issue for me, as it never shows up in my work. Like, never. Even looking at huge gradients meant to provoke this supposedly nasty banding. I have always made dithered gradients in the past anyway, and cannot recall any real-world use of non-dithered gradients.

After all, there are a lot of 6-bit panels out there.

And I do need the strengths that come with a non-dithered output, in particular sharpness and resolving power. I also want predictability and control.

Wikipedia has a very explanatory and informative page on dithering.

https://en.wikipedia.org/wiki/Dither

Once you understand what it is and how it works, its negative effects become apparent, not just its benefits. Only then can this be discussed in a meaningful manner.
 
It would perhaps be better to actually see how this particular dithering looks before forming an opinion about it being worse than non-dithered output.
 
I never said I needed the banding. Banding is not a real-world issue for me, as it never shows up in my work. Like, never. Even looking at huge gradients meant to provoke this supposedly nasty banding. I have always made dithered gradients in the past anyway, and cannot recall any real-world use of non-dithered gradients.
It doesn't matter if you have a perfectly dithered 8-bit gradient or not.
The processing that is done for display calibration will create banding in the image if it is not processed with sufficient internal precision (ideally 16-bit) and dithered down to 8-bit.
Yes, dither adds noise, but your options are:
  1. Don't dither: suffer from really bad banding/posterization.
  2. Dither: suffer from a slightly noisy image with minimal banding/posterization.
  3. Output a higher bit-depth: minimal dither or banding, but this requires hardware support in the display. ( >8-bit internal processing/LUT, a 10-bit panel, or both)
 
It doesn't matter if you have a perfectly dithered 8-bit gradient or not.
The processing that is done for display calibration will create banding in the image if it is not processed with sufficient internal precision (ideally 16-bit) and dithered down to 8-bit. ...

Dithering may also degrade the image visibly, as all filters do. And since I do not have a real-world issue with banding, I do not want the dithering artifacts.

Why people go on and on about this issue, which does not affect me in any meaningful way, is beyond me. Especially when they keep claiming that I get this nasty effect, which I most certainly do not.

Until this discussion gets balanced, with pros and cons weighed against each other, I am out. Magic-pill discussions are not my cup of tea.
 
You would fail miserably at ABXing 8-bit + dithering against a 10-bit display using any test pattern you could imagine.

There is no dithering visible to the naked eye without setting the monitor to some ridiculous values, like gamma below 1.2 (from 2.2!), just to show that the dithering is there, or without using way too much sharpening.

8-bit quantization errors such as those NV and Intel produce might be hard to notice in many situations because noisy images hide them. Noise is everywhere: in photos, in movies, even in some games. But banding will be visible in some situations. I used NV cards with calibration for a few years on three monitors in total, and in places it was fairly clearly visible that there was banding, especially on VA panels, whose gamma shift inflates the gamma of darker colors off-angle. If the banding disappears when you load a linear LUT, then it is due to 8-bit quantization errors.

Again, if the dithering is not noticeable at all and the banding sometimes is, there is no point in having banding over dithering. If this dithering were poorly implemented there might be some discussion, but it is not poorly implemented, which I can assure you. AMD did what I thought was impossible. And the fact that you think proper dithering is impossible only proves to me that you have not seen how it works :rolleyes:
 
You would fail miserably at ABXing 8-bit + dithering against a 10-bit display using any test pattern you could imagine.
There is no dithering visible to the naked eye without setting the monitor to some ridiculous values, like gamma below 1.2 (from 2.2!), just to show that the dithering is there, or without using way too much sharpening.
You shouldn't make that assumption. There's a clear difference between the two on my display, especially in darker tones.
If you're not seeing that difference, I suspect that the display you're using for this comparison does not have a 10-bit native panel, and merely accepts a 10-bit input and performs its own dither.

And 10-bit is not the limit of human perception.
We would need at least 12-bit panels before people stop seeing the difference.

Dithering may also degrade the image visibly, as all filters do. And since I do not have a real-world issue with banding, I do not want the dithering artifacts.
If you're not seeing banding, you won't see dither either.
Processing with a high internal precision and using dither to convert to 8-bit is the correct way to handle this. That's a fact, not an opinion.
 
You shouldn't make that assumption. There's a clear difference between the two on my display, especially in darker tones.
Is this dither that you see on AMD or NV?

Maybe my displays are slow; they definitely use their own dither too.
When I saw how AMD handles things I immediately tested it on a CRT via a DVI/HDMI-to-VGA adapter, and I was not able to discern any difference. Maybe I should try 640x480 at 60Hz with a very high brightness level to make it more obvious :rolleyes:
 
Maybe my displays are slow; they definitely use their own dither too.
When I saw how AMD handles things I immediately tested it on a CRT via a DVI/HDMI-to-VGA adapter, and I was not able to discern any difference. Maybe I should try 640x480 at 60Hz with a very high brightness level to make it more obvious :rolleyes:
If it is being done by the driver, it is limited to the current refresh rate, so you will typically have temporal dithering performed at 60Hz.
If you output 10-bit and it is converted to 8-bit by the display's internal FRC, it could be running at a much higher rate.

Either way, it should not be that difficult to see the difference between an 8-bit native panel and a 10-bit native one.

I don't think that a CRT is necessarily a good type of display to use for this test, and it's not clear to me what you're doing with the DVI/HDMI to VGA adapter.
You're introducing a lot of variables which could influence the results. (Does the adapter accept a 10-bit signal? Is the DAC good enough to resolve 10-bits clearly on a CRT? etc.)
 
Those are 8-bit-only adapters. I have them to connect HDMI devices such as consoles.
 
So I've bought a used 7970 WindForce.

Latest drivers on Win 7 x64, connected to a Sony TV via HDMI.

For some reason there's no EDID and color temperature feature. Why?
 
TVs might have the gamut information empty, and not all monitors are guaranteed to have it either, or to have it be accurate.

I bet there is some kind of software solution to this issue that avoids having to make (or buy) a hardware emulator like I did.

Even with a hardware EDID emulator, or a software way to force an EDID, you would need to measure the gamut of this TV to prepare the EDID file.
 
So, how do I ensure that the output is completely linear now? I've already unchecked all the post-processing junk for video players and set the RGB range to full in both the TV and the driver.

I don't have a colorimeter to check how accurate the colors are, but CLtest shows a linear gamma curve and minimal color tinting on the test image, indicating that the gamma is close to 2.2. Black levels are distinguishable down to 2 in the Lagom test. The saturation seems OK.

The thing that bothers me is that darker shades are still darker than I am used to. I've never tried this TV with an Nvidia GPU, and my laptop's HD 4000 apparently had an unfixable RGB range issue resulting in washed-out colors and other problems.

Could it be that it's just VA black crush, made more apparent by sRGB content?

Anyway, I'm waiting for a U2715H; it should be delivered in a few days.
 
I'm using an AMD R9 270, and gamma calibration with the LUT = no banding, none at all. Whatever 16-bit LUT + dither AMD uses, it's magic, and I am not sure I could live without it. There are no negative artefacts introduced, so I don't know what Frode is referring to; you get a perfectly smooth gradient with no lost shades that is 100% free of any artefacts whatsoever.

This is especially important to me since I tweak the shadow-detail gamma of the 0-10% range to a kind of BT.1886-esque curve, which is much needed on IPS panels so that IPS glow does not crush shadow detail after calibration (otherwise you tend to lose 1-3% near black), and that small range of dark colours can only be corrected without banding using a 16-bit + dither LUT.

So, I wanted to buy a GTX 970 for G-Sync, but this LUT issue is holding me back. Can anyone comment on whether they are still 8-bit? Thanks.
 
Yeah, I believe that if you are using Nvidia you're limited to an 8-bit LUT, unless you're using the analogue output.
 
Yeah, I believe that if you are using Nvidia you're limited to an 8-bit LUT, unless you're using the analogue output.

How sure are you? Some people on Doom9 are saying that if an application such as MPC is set to 10-bit output, Nvidia does the same thing as AMD, and on the EVGA forums one guy got an email from a German tech support manager saying "the GTX 970 do also work with 10 BIT LUT".

Maybe it depends on the LUT loader being used? I've got PowerStrip and Monitor Calibration Wizard, both of which load 8-bit LUTs in different ways (PowerStrip sticks for stubborn games that try to revert it, where the other does not), and both cause banding on my AMD; but dispwin.exe and the X-Rite calibration tester both load 16-bit LUTs, which don't band (but are more easily overridden by some games).

Maybe someone with a current-gen Nvidia card (preferably a 970) could test it with dispwin and the X-Rite calibration tester?

http://www.argyllcms.com/doc/dispwin.html
(usage: open a command prompt and type "dispwin.exe yourprofile.icc" without the quotes)

https://www.xrite.com/product_overview.aspx?ID=795&Action=support&SoftwareID=546
 
How sure are you? Some people on Doom9 are saying that if an application such as MPC is set to 10-bit output, Nvidia does the same thing as AMD, and on the EVGA forums one guy got an email from a German tech support manager saying "the GTX 970 do also work with 10 BIT LUT".

See the posts below, from this thread:

I just checked on my VG248QE + G-Sync and I'm positive that the output is 8-bit from my GTX 970's DisplayPort.

Turns out I was wrong about this. I just did some measurements: an Nvidia GeForce GT 650M on OS X, using what I believe is a Mini DisplayPort to DisplayPort to dual-link DVI-D connection. Definitely 8-bit precision.

Wonder if this is a driver limitation.
 
How sure are you? Some people on Doom9 are saying that if an application such as MPC is set to 10-bit output, Nvidia does the same thing as AMD, and on the EVGA forums one guy got an email from a German tech support manager saying "the GTX 970 do also work with 10 BIT LUT".

Maybe it depends on the LUT loader being used? I've got PowerStrip and Monitor Calibration Wizard, both of which load 8-bit LUTs in different ways (PowerStrip sticks for stubborn games that try to revert it, where the other does not), and both cause banding on my AMD; but dispwin.exe and the X-Rite calibration tester both load 16-bit LUTs, which don't band (but are more easily overridden by some games).

Maybe someone with a current-gen Nvidia card (preferably a 970) could test it with dispwin and the X-Rite calibration tester?

http://www.argyllcms.com/doc/dispwin.html
(usage: open a command prompt and type "dispwin.exe yourprofile.icc" without the quotes)

https://www.xrite.com/product_overview.aspx?ID=795&Action=support&SoftwareID=546
If the output is 8-bit, the LUT is processed in 8-bit for standard desktop applications.
If a full-screen exclusive DirectX application outputs more than 8-bit but the driver is set to 8-bit, the driver will take that input (e.g. 10-bit) and dither it to 8-bit. Only the madVR video renderer and the game Alien: Isolation use this, as far as I am aware.
If the output is 10-bit (DisplayPort) or 12-bit (HDMI), the LUT should be processed in 10-bit or 12-bit with no dither.

It sounds like AMD always processes in 10-bit and dithers to your specified output bit-depth, which is the correct way to do this if you are using a LUT. Dither is unnecessary if you do not have a LUT loaded.
 
zone, you may be confusing two different types of bit depth here. You can have an 8-bit LUT specified with 10-bit precision (i.e. 256 entries, but each entry can take on any of 1024 values). The last few posts have been about that precision.
 
zone, you may be confusing two different types of bit depth here. You can have an 8-bit LUT specified with 10-bit precision (i.e. 256 entries, but each entry can take on any of 1024 values). The last few posts have been about that precision.
It doesn't matter if you're using a 16-bit ICC profile; NVIDIA processes the LUT at whatever you select as the output bit-depth in the driver control panel.
So if you select 8-bit, that 16-bit LUT is processed in 8-bit without any dither and you get horrible banding.
If your output is 12-bit, it's processed in 12-bit without any dither and you get significantly less banding - but there will still be banding because it is not being dithered.

If you have an NVIDIA GPU, I would not use the GPU LUT at all. Just about anything you do will result in banding.
The only exception is D3D Full-Screen Exclusive Mode applications that are outputting >8-bit.
For some reason NVIDIA will then accept say a 10-bit input, and dither that to 8-bit.

XoR's examples are extreme, but accurate.
 
It doesn't matter if you're using a 16-bit ICC profile; NVIDIA processes the LUT at whatever you select as the output bit-depth in the driver control panel.

I'm using the latest Nvidia drivers and I don't have the option to select the output bit-depth. This may be because I'm using the DVI analogue output, though.

If you have an NVIDIA GPU, I would not use the GPU LUT at all. Just about anything you do will result in banding.
The only exception is D3D Full-Screen Exclusive Mode applications that are outputting >8-bit.

I can tell you for sure that I get 10-bit precision in my GPU LUT with Nvidia (without using D3D full-screen exclusive mode), but I think that's only because I'm using the DVI analogue output. My same tests on another machine using OS X and Mini DisplayPort out show only 8-bit precision.


For some reason NVIDIA will then accept say a 10-bit input, and dither that to 8-bit.

Are you talking about temporal dithering, or spatial dithering?
 
I'm using the latest Nvidia drivers and I don't have the option to select the output bit-depth. This may be because I'm using the DVI analogue output, though.

I can tell you for sure that I get 10-bit precision in my GPU LUT with Nvidia (without using D3D full-screen exclusive mode), but I think that's only because I'm using the DVI analogue output. My same tests on another machine using OS X and Mini DisplayPort out show only 8-bit precision.
I'd guess that the output is 10-bit for VGA. Is that not typical for the RAMDAC on most GPUs?
True 10-bit flat panel displays, or displays which properly support 12-bit via HDMI (Deep Color) are less common.

I've seen people report that DisplayPort allows the selection of 6/8/10 bit output depending on the display.
HDMI will show 8-bit or 12-bit. As far as I am aware, there is no 10-bit output via HDMI but I don't have a ton of displays here for testing.
If you don't have any bit-depth options and are using a digital output, assume 8-bit is being used.

Are you talking about temporal dithering, or spatial dithering?
Well, I would have called it temporal dithering, but I've heard conflicting explanations of what makes dither temporal vs spatial.
I would call it temporal because the dither pattern changes every refresh (it almost seems like it's alternating between two).
 
@flossy_cake
A friend of mine has a GTX 970.
I might go to him tomorrow and see how it looks with a gamma curve loaded by a program that supports 16-bit LUTs.
 
I'd guess that the output is 10-bit for VGA. Is that not typical for the RAMDAC on most GPUs?

I'm still confused about something.

Let's make sure we're on the same page here:

Do you agree that it's possible to have 8-bit color (256 values per channel) with 10-bit precision (1024 possible values to choose from for each of those 256 entries)? And if so, what do you call this: 8-bit or 10-bit?
 
I'm still confused about something.
Let's make sure we're on the same page here:
Do you agree that it's possible to have 8-bit color (256 values per channel) with 10-bit precision (1024 possible values to choose from for each of those 256 entries)? And if so, what do you call this: 8-bit or 10-bit?
It is not really clear what you are asking.
The signal chain is:
  • Source Application's Output bit-depth
  • Image Processing bit-depth
  • GPU's Output bit-depth
  • Display panel bit-depth
Any of these can be any bit-depth.
The processing bit-depth must be higher than the source bit-depth to avoid introducing banding if you are doing any image processing at all. (adjusting gamma values etc.)
The greater precision you use in the processing stage, the less chance there is for visible banding ending up in the output.
10-bit is probably not enough. As a general rule I'd say that the processing bit-depth should be greater than the output bit-depth.
It can be equal to the source bit-depth if you are simply passing through the source image. (no profiles loaded)

Any time you process the image, but especially any time you reduce the bit-depth, dither must be used.
If you are just passing the image through, e.g. 8-bit source, 10-bit processing/output/display with no adjustments being made, there is no need for dither.

There is an argument to be made for disabling dither if you are starting with an 8-bit source, are processing in 10-bit, and outputting/displaying that in 10-bit, but I don't think it's a good idea.
Ideally your processing step would have greater precision than 10-bit, and then we are back to the rule of: any time you reduce bit-depth, you must dither.


What appears to happen with NVIDIA is that the precision used for processing is equal to the specified output bit-depth.
So with an 8-bit source, you have lots of banding with an 8-bit output, less with a 10-bit output, and even less with a 12-bit output.
It's possible that they are still processing in a higher bit-depth and the banding is a result of them not using dither when reducing the bit-depth for output, but either way the result is a lot of banding when your output is 8-bit if you are applying any kind of color/gamma correction.

If your source bit-depth is greater than the output bit-depth, e.g. 10-bit D3D source image and an 8-bit output, the processing is applied using the source bit-depth (or so it appears) and dithered to the specified output bit-depth. This is the only scenario where NVIDIA are using dither.
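
(A rough numeric check of that pattern, assuming a deliberately strong gamma-style correction computed at high precision and then simply rounded to the output bit-depth with no dither; the curve is an arbitrary illustration, not NVIDIA's actual pipeline.)

```python
import numpy as np

src = np.arange(256) / 255.0          # one sample per 8-bit source code
corrected = src ** 2.2                # hypothetical, deliberately strong correction curve

for bits in (8, 10, 12):
    levels = 2 ** bits - 1
    out = np.round(corrected * levels)            # plain rounding, no dither
    kept = len(np.unique(out))
    print(f"{bits}-bit output, no dither: {kept} of 256 source steps survive")
```

The higher the output bit-depth, the more of the source steps survive plain rounding; dithering is what would let even an 8-bit output preserve all of them on average.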
 
Ok let me try to be clearer.

I'm using an Nvidia GPU, DVI-VGA connector to my CRT.

When I alter my videoLUT manually, I specify the value for each of the 256 entries. By doing careful measurements, I (and flod) have verified that each of these entries can be effectively specified with 10 bit precision (I'm guessing this means that I have 10 bit processing based on your last post?). In other words, I have 1024 different luminance levels to choose from for each of my 256 LUT entries.

Does this description make sense?

If it makes sense, then my question is this:

Given that the LUT only has 256 entries (2^8), yet can take on any of 1024 values (2^10), what would be the correct way to describe the LUT, in terms of bit depth? Is it an 8 bit LUT with 10 bit precision? Is it a 10 bit LUT? Is it an 8 bit LUT? is it an 8 bit LUT that is 10 bits wide?

If it doesn't make sense, can you be specific about what you don't understand?
 
This is my LUT according to X-Rite:

S0hvqmt.png
D6qxShC.png


It goes up to 65535, so it would be a 16-bit LUT from which the final 8-bit output that goes to the monitor is created.

How the GPU handles the conversion is a mystery to me. Maybe it downsizes it first to a 10-bit table, then reduces that to 8-bit by using dither, or maybe it just does it in one step from 16 to 8.
 
Given that the LUT only has 256 entries (2^8), yet can take on any of 1024 values (2^10), what would be the correct way to describe the LUT, in terms of bit depth? Is it an 8 bit LUT with 10 bit precision? Is it a 10 bit LUT? Is it an 8 bit LUT? is it an 8 bit LUT that is 10 bits wide?
I'd call that a 256-point 10-bit LUT. A 16-bit LUT would have values from 0-65535.
Think of the 256 points as 0-100% rather than specific input values.
If your display is 10-bit, those 256 points would still apply to the whole range, not just 1/4 of your range.
A LUT only needs a fraction of those points to be effective; in fact, using so many points often results in inaccuracies, in my experience.
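
(A small sketch of that terminology, with an assumed curve purely for illustration: 256 points spanning 0-100% of the input range, each holding a 10-bit value, so the same table can serve inputs of any bit-depth by interpolating between points.)

```python
import numpy as np

points = np.linspace(0.0, 1.0, 256)        # 256 points spanning 0-100% of the input range
entries = np.round((points ** (1 / 2.2)) * 1023).astype(int)   # hypothetical 10-bit values

def apply_lut(code, input_bits):
    """Look up an input code of any bit-depth by its relative position (0-100%)."""
    pos = code / (2 ** input_bits - 1)
    return np.interp(pos, points, entries)  # linear interpolation between the 256 points

print(apply_lut(128, 8))    # an 8-bit input code at ~50%
print(apply_lut(512, 10))   # a 10-bit input code at ~50% gives nearly the same result
print(apply_lut(255, 8), apply_lut(1023, 10))   # 100% input maps to the same entry either way
```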
 
This is my LUT according to X-Rite:

S0hvqmt.png
D6qxShC.png


It goes up to 65535, so it would be a 16-bit LUT from which the final 8-bit output that goes to the monitor is created.

Yea, this is another thing I forgot to mention.

Windows specifies the LUT with 16 bit precision, but that doesn't mean all 65,536 values are "active", in the sense that they each correspond to an incremental change in luminance for each channel.

How the GPU handles the conversion is a mystery to me. Maybe it downsizes it first to a 10-bit table, then reduces that to 8-bit by using dither, or maybe it just does it in one step from 16 to 8.

If your LUT is 10-bit, it means that of the 65,536 possible values, only every 64th has an impact on the actual image.
 
Yea, this is another thing I forgot to mention.

Windows specifies the LUT with 16 bit precision, but that doesn't mean all 65,536 values are "active", in the sense that they each correspond to an incremental change in luminance for each channel.

I notice that in the row for digital value 1, the output for green is 191 and for blue is 193, so it seems to be specifying a luminance correction to within only 2 points out of 65 thousand. Whether dithering at 8-bit is sufficient to create such a small step in luminance is something I simply don't know; I'm guessing it probably isn't. If 6-bit + FRC dithering can create the equivalent of 8-bit, i.e. a four-fold increase in the number of shades, then 8-bit x 4 = 10-bit. Maybe that's why AMD processes it as 10-bit? Perhaps 10-bit is the limit of what can be achieved with 8-bit + dither.
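
(A quick sketch of that arithmetic, purely illustrative: if the hardware only resolves 10 bits, a 16-bit LUT entry is effectively snapped to one of 1024 levels, i.e. steps of 65536 / 1024 = 64 on the 16-bit scale, which is why entries as close together as 191 and 193 cannot survive without either more precision downstream or dither.)

```python
def effective_10bit(entry_16bit):
    """The 10-bit code a 16-bit LUT entry collapses to if only 10 bits are resolved."""
    return round(entry_16bit / 65535 * 1023)

for v in (191, 193, 255, 320):     # example 16-bit entries, including the ones above
    print(v, "->", effective_10bit(v))

# 191 and 193 land on the same 10-bit code (3), so a difference that small only
# survives if the pipeline keeps more than 10 bits of precision or dithers it in.
```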

Regarding dithering, I think it has to be spatial dithering only, because if it were temporal dithering (flashing pixels on and off) the resulting brightness would depend on the pixel response time characteristics of the display and could produce strange and variable results on different displays.
 