getting to the bottom of panel bit depth vs. LUT bit depth, etc.

spacediver

I was going through this thread and trying to make sense of a few things, so this post is to test whether my understanding is on the right track.

Let's make things simple.

Suppose we have an LCD panel. The number of shades a particular pixel can produce in any given color channel is determined by the following things:

1: How finely the voltage can be quantized (the voltage is applied to the molecules and causes them to shift/rotate, changing the amount of light that is let through).

2: How finely these molecules respond to voltage. If they're very volatile, then it's going to be hard to get them to reliably move by tiny amounts.

Suppose we can quantize our voltage very finely with arbitrary precision.

But suppose that our molecules are so volatile that they can only achieve 256 different orientations/positions between their most extreme positions.

This means that our panel is a very crappy 8-bit display. This is what its properties would be:

1: It would only allow 256 different shades of gray (to keep things simple, let's restrict our discussion to the neutral axis and not talk about individual RGB channels).

2: At any given backlight level, if you tried to remap the relationship between input video level and applied voltage (i.e. if you altered the LUT of the panel), all you could achieve would be clipping/crushing.

This can be illustrated below:

[Figure: five input levels in the LUT, each mapped to one of five possible molecular orientations]


If the molecules can only adopt these five particular orientations, then no matter what applied voltage we use for each position in the LUT, and no matter how precisely we specify that voltage, each of the five video input signals will be linked to only one of those five molecular orientations. So either we can display the full five shades of gray, or we get a quantization artifact and end up with only four shades of gray, as shown below:

[Figure: a remapped LUT crushing two input levels into the same molecular orientation, leaving only four shades]


So the only things we can do with this panel are to raise the level of the backlight, which will just uniformly increase the luminance across all video input levels, or to crush input levels together. In the case of the above image, we have crushed levels 4 and 5.
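To make the crushing concrete, here's a rough Python sketch of this hypothetical 5-level panel (the remapped LUT values are made up for illustration):

```python
# Hypothetical panel that can only hold 5 discrete orientations (levels 0-4).
# Whatever voltage the LUT requests, the panel snaps to the nearest state.
ORIENTATIONS = [0, 1, 2, 3, 4]

def apply_lut_and_snap(level, lut):
    """Remap an input level through the LUT, then snap the requested
    drive value to the nearest orientation the panel can actually hold."""
    requested = lut[level]
    return min(ORIENTATIONS, key=lambda o: abs(o - requested))

identity = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4}
remapped = {0: 0, 1: 1.4, 2: 2.6, 3: 3.6, 4: 4}  # a made-up "correction"

print([apply_lut_and_snap(v, identity) for v in range(5)])  # [0, 1, 2, 3, 4]
print([apply_lut_and_snap(v, remapped) for v in range(5)])  # [0, 1, 3, 4, 4] - top two levels crushed
```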

So, given this model of how things work (and this is how I understand them to work - I could be well off on many parts), we're now in a position to have a meaningful discussion on what panel bit depth really means.

For the next part of the discussion, let's ignore spatial / temporal dithering, such as frame rate control (FRC).

It should be clear that any decent LCD panel is not as limited as the one described earlier. If a panel did indeed only have 256 possible molecular orientations, then any form of correction done to the luminance function of the display (e.g. gamma correction, white point adjustment, dynamic mode, cinema mode, etc.), whether implemented through the OSD or through the video card, would either result in no change whatsoever, or varying degrees of quantization artifacts.

If my thinking so far is correct, then this means that virtually all LCD panels are capable of more than 8 bit precision, in the sense that the molecules can reliably adopt more than 256 orientations/positions between their most extreme orientations/positions.

Suppose that, in fact, all LCD panels are capable of 1024 discrete molecular orientations, but that the circuitry of the panel is such that only 256 different voltages can be applied at any given frame/refresh.

In this case, one could greatly reduce quantization artifacts that are introduced by changes to the LUT, but we'd be limited to only 256 simultaneous shades of gray.

Have I got this right?
 
1: How finely the voltage can be quantized (the voltage is applied to the molecules and causes them to shift/rotate, changing the amount of light that is let through).

2: How finely these molecules respond to voltage. If they're very volatile, then it's going to be hard to get them to reliably move by tiny amounts.

Suppose we can quantize our voltage very finely with arbitrary precision.

But suppose that our molecules are so volatile that they can only achieve 256 different orientations/positions between their most extreme positions.

The quantization doesn't happen at the molecular level of the actual display elements. This has nothing to do with "volatile molecules". The range of brightnesses that an individual pixel can take on is a continuous, not discrete, scale. LCD pixels are more than big enough to stay out of the range of quantum effects. :p

The quantization happens at the level of the DAC (digital to analog converter) that converts the digital display signals into an analog voltage. An 8-bit DAC will produce 256 discrete voltage levels, while a 10-bit DAC will produce 1024 discrete voltage levels.
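As a minimal sketch of what that means (assuming an illustrative 0-5 V drive range; the real range is panel-specific):

```python
# Map a digital code to one of 2**bits evenly spaced voltages.
V_MIN, V_MAX = 0.0, 5.0  # assumed drive-voltage range, purely illustrative

def dac_output(code, bits):
    levels = 2 ** bits
    return V_MIN + (V_MAX - V_MIN) * code / (levels - 1)

print(dac_output(128, 8))   # one of 256 possible voltages (~2.51 V)
print(dac_output(512, 10))  # one of 1024 possible voltages (~2.50 V)
```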

This means that our panel is a very crappy 8-bit display. This is what its properties would be:

1: It would only allow 256 different shades of gray (to keep things simple, let's restrict our discussion to the neutral axis and not talk about individual RGB channels).

2: At any given backlight level, if you tried to remap the relationship between input video level and applied voltage (i.e. if you altered the LUT of the panel), all you could achieve would be clipping/crushing.

This can be illustrated below:

[Figure: five input levels in the LUT, each mapped to one of five possible molecular orientations]


If the molecules can only adopt these five particular orientations, then no matter what applied voltage we use for each position in the LUT, and no matter how precisely we specify that voltage, each of the five video input signals will be linked to only one of those five molecular orientations. So either we can display the full five shades of gray, or we get a quantization artifact and end up with only four shades of gray, as shown below:

There is nothing about the molecular structure of a pixel that only allows it to adopt N discrete orientations. The pixel brightness is roughly proportional to the applied voltage.


This picture has a few major errors:

1) The input signal will never adopt fractional values (assuming 4 is full scale here) because it deals in integers only. You can't have a level of 3.7 on an integer scale of 0..4. Internally, your video card would either truncate to 3 or round up to 4 depending on the implementation details.

2) Likewise, the applied voltage is limited to values that the DAC can output. The DAC output is quantized, so you can't have a 3.7 if your LSB (least significant bit) is equal to 1 unit on your voltage scale.

3) Assuming you could somehow produce a 3.7 on your voltage scale, the pixel would have no problem rotating to 3.7/4 * 90 degrees. The pixel isn't quantized.
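To illustrate points (1) and (2) - whether a fractional request is truncated or rounded is an implementation detail:

```python
import math

def quantize(value, mode="truncate"):
    """Snap a fractional level onto the integer grid, as the hardware would."""
    return math.floor(value) if mode == "truncate" else round(value)

print(quantize(3.7, "truncate"))  # -> 3
print(quantize(3.7, "round"))     # -> 4
```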

It should be clear that any decent LCD panel is not as limited as the one described earlier. If a panel did indeed only have 256 possible molecular orientations, then any form of correction done to the luminance function of the display (e.g. gamma correction, white point adjustment, dynamic mode, cinema mode, etc.), whether implemented through the OSD or through the video card, would either result in no change whatsoever, or varying degrees of quantization artifacts.

If my thinking so far is correct, then this means that virtually all LCD panels are capable of more than 8 bit precision, in the sense that the molecules can reliably adopt more than 256 orientations/positions between their most extreme orientations/positions.

Suppose that, in fact, all LCD panels are capable of 1024 discrete molecular orientations, but that the circuitry of the panel is such that only 256 different voltages can be applied at any given frame/refresh.

In this case, one could greatly reduce quantization artifacts that are introduced by changes to the LUT, but we'd be limited to only 256 simultaneous shades of gray.

Have I got this right?

Okay, I couldn't follow your logic here.

But going back to my earlier point: The LCD panel pixel brightness (orientation) is not quantized. It can take on a continuous range of positions in response to the applied voltage.

The voltage applied is, however, quantized. This is limited by the precision of the DAC. An 8-bit DAC can produce 256 discrete voltage levels. A 10-bit DAC can produce 1024 discrete voltage levels.

The purpose of a LUT is to map the input video signal on to the range of DAC values the monitor can support.

It's called a LUT (Look-Up Table) because that's precisely how it behaves. The LUT is just a large table of input values on the left and corresponding DAC values on the right. For each color of each pixel, the processor looks up the input brightness on the left and then sends the DAC whatever value is on the right.
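In code terms, it really is nothing more than this (a made-up 3-bit mapping, purely illustrative):

```python
# The LUT: input video level on the left, DAC code on the right.
lut = {0: 0, 1: 1, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 7}  # made-up mapping

def process_subpixel(input_level):
    """Look up the input brightness and return the DAC code to send."""
    return lut[input_level]

print([process_subpixel(v) for v in [0, 3, 7, 5]])  # -> [0, 4, 7, 6]
```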

There is additional processing in the overall display pipeline for functions such as dithering and pixel response time compensation, but that's the general idea.
 
Thanks for the reply - appreciate the discussion.

The quantization doesn't happen at the molecular level of the actual display elements. This has nothing to do with "volatile molecules". The range of brightnesses that an individual pixel can take on is a continuous, not discrete, scale. LCD pixels are more than big enough to stay out of the range of quantum effects. :p

Fair enough - the molecules can adopt a continuous position/orientation as a function of voltage, but I have spoken to display manufacturers who say that the problem with TN panels, for example, is that the molecules are quite volatile compared to those in an IPS panel, and getting them to respond cleanly is a problem. I suppose that because of this, the applied voltage is purposely quantized at a coarser level to avoid noisy and unreliable results on the molecular end.

1) The input signal will never adopt fractional values (assuming 4 is full scale here) because it deals in integers only. You can't have a level of 3.7 on an integer scale of 0..4. Internally, your video card would either truncate to 3 or round up to 4 depending on the implementation details.

2) Likewise, the applied voltage is limited to values that the DAC can output. The DAC output is quantized, so you can't have a 3.7 if your LSB (least significant bit) is equal to 1 unit on your voltage scale.

So a 10-bit display panel DAC would process 1024 possible input values into 1024 possible voltages. The input values would be coded 0 through 1023, and the voltage values would span the range between the lowest and highest voltage.

The relationship between the input values and the output voltages is determined by the combination of the video card LUT and the display panel LUT. If the LUT in Windows is specified in 16 bits, the values will be rounded down to 10 bits because of the display panel DAC bit depth bottleneck (or the video card DAC in the case of a CRT).
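Something like this, I imagine (a sketch assuming the simplest case of truncating the low 6 bits; actual rounding behavior would vary):

```python
def lut16_to_dac10(value_16bit):
    """Reduce a 16-bit LUT value to a 10-bit DAC code by dropping
    the 6 least significant bits."""
    return value_16bit >> 6

print(lut16_to_dac10(65535))  # -> 1023 (full scale maps to full scale)
print(lut16_to_dac10(32768))  # -> 512 (mid scale)
```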


Is this correct?
 
Fair enough - the molecules can adopt a continuous position/orientation as a function of voltage, but I have spoken to display manufacturers who say that the problem with TN panels, for example, is that the molecules are quite volatile compared to those in an IPS panel, and getting them to respond cleanly is a problem. I suppose that because of this, the applied voltage is purposely quantized at a coarser level to avoid noisy and unreliable results on the molecular end.

I'm not familiar with the details of a TN panel, but what you're referring to as volatility is usually called noise. Noise is always additive.

Quantizing the input signal won't decrease the noise. In fact, quantizing the input signal actually adds noise. This is referred to as quantization noise. Wikipedia has some nice figures that explain why this is the case: http://en.wikipedia.org/wiki/Quantization_(signal_processing)#Quantization_noise_model

So no, the input signal isn't quantized to reduce the noise. At some point it wouldn't make much sense to use higher DAC precision, though. Once the discrete steps are substantially smaller than the total noise amplitude you reach the point of diminishing returns.
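Here's a rough sketch of that point of diminishing returns, using the standard LSB/sqrt(12) figure for the RMS of uniform quantization error (all numbers illustrative):

```python
import math

PHYS_NOISE = 0.5   # assumed physical pixel noise (RMS), arbitrary units
FULL_SCALE = 90.0  # assumed output range, same units

for bits in (6, 8, 10, 12):
    lsb = FULL_SCALE / (2 ** bits - 1)
    q_noise = lsb / math.sqrt(12)             # RMS quantization noise
    total = math.hypot(PHYS_NOISE, q_noise)   # independent noise sources add in quadrature
    print(f"{bits:2d}-bit: step={lsb:.4f}, total noise={total:.4f}")
```

Finer quantization always lowers the total, but past the point where the step is much smaller than the physical noise the improvement becomes negligible.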
 
Thanks for the link - looking forward to studying it. I've heard the following terms associated with the molecular response of LCD panels:

volatility, viscosity, and momentum. The less viscous the molecules are, the more responsive they are, but because of their momentum, they are noisier (this is what I believe is meant by volatility).

What I mean by quantizing to reduce noise is that instead of quantizing the voltage to 1024 discrete values, it is quantized to 256 discrete values (while keeping the range of voltage values the same). This way, a noisy panel's luminance errors will be lower relative to the luminance distance between successive input values.

Consider, for example, what happens if you have 90 possible voltage values that drive the molecules between 0 and 90 degrees, and the (normally distributed) noise for the molecular response has a standard deviation of 0.5 degrees.

This means that the +/-1 standard deviation spread of the noise spans one increment of the video input signal.

Now consider what happens if you instead have 900 possible voltage values.

This time the +/-1 standard deviation spread of the noise spans 10 increments of the video input signal.

In other words, a panel like this would fail miserably if one attempted to drive it with a precision of 900 voltage values. By quantizing the voltage more coarsely (i.e. in 90 steps), you can reduce the relative error of each input level.
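Here's the same arithmetic as a quick sketch, using the illustrative numbers from above:

```python
NOISE_STD = 0.5  # degrees, from the example above

for steps in (90, 900):
    step_size = 90.0 / steps  # degrees per voltage increment
    print(f"{steps} levels: +/-1 std dev of noise spans "
          f"{2 * NOISE_STD / step_size:g} increments")
```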

Does this make sense?
 
In other words, a panel like this would fail miserably if one attempted to drive it with a precision of 900 voltage values. By quantizing the voltage more coarsely (i.e. in 90 steps), you can reduce the relative error of each input level.

Does this make sense?

You're still thinking of this backwards.

The final display noise is the sum of the quantization noise and all of the physical pixel noise sources.

"Reducing the relative error of each input level" isn't a useful metric, because we only care about the absolute noise level.

Since quantization noise only adds to the noise of the pixels, it's never advantageous to add more quantization noise by using fewer discrete values. However, there is a point of diminishing returns.

There's another, more complex angle to all of this: The quantization noise is relatively low frequency, for the most part. For a static image, it's actually 0Hz.

Meanwhile, your eyes are very good integrators. They will integrate noise over time and send an averaged signal to your brain. The higher the noise frequency, the less of it you'll actually sense. In other words, your eyes actually see a moving average of the brightness in any one spot.

With that in mind, consider this: For a static image, the instantaneous luminance error relative to the intended image signal is the sum of the quantization noise plus the physical noise sources from the DAC output to the physical pixel. If you're viewing a static image, the quantization noise will be 0Hz. Meanwhile, the physical noise sources will extend across a broad range of frequencies. Your eyes, however, will take the average luminance value because they integrate luminous signals over time. As a result, the higher-frequency noise will average out and won't be sensed, leaving the 0Hz signal plus some lower-frequency noise.

Your eyes are taking an average over time, so you will still be able to discern the difference between closely-spaced color levels (to a point) even if the signal is noisy.

Let's say you have a 10-bit DAC, giving 1024 values. Let's also assume that you have a surprisingly noisy system with noise evenly distributed +/- 4 LSBs (least significant bits).

Sending color 512 to the screen will randomly show colors anywhere from 508 to 516 (512 +/- 4).
Sending color 513 to the screen will randomly show colors anywhere from 509 to 517 (513 +/- 4).

Even with the noise, the average over time of the first color is still going to be 512.
Likewise, the average over time of the second color will be 513.

As such, you'll still be able to tell the two colors apart despite the noise (this is assuming that you could tell the colors apart without the noise).
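A quick simulation sketch of that averaging, using the same hypothetical +/- 4 LSB noise:

```python
import random

def perceived_level(level, n_frames=10000):
    """Average the noisy output over many refreshes, as the eye would."""
    return sum(level + random.randint(-4, 4) for _ in range(n_frames)) / n_frames

print(perceived_level(512))  # ~512.0 despite the noise
print(perceived_level(513))  # ~513.0 - still one level apart on average
```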
 
The stuff about temporal summation makes perfect sense (related to Bloch's Law) - thanks for the explanation.

Based on that figure in the wiki link, is quantization noise related to the idea that rounding errors are introduced when you perform transformations on a set of data with limited precision (which is why it's always preferable to perform the "background" calculations with high precision)?

So one of the questions I'm trying to get at is this:

Why are so many panels purportedly 8-bit when, according to what you've said, it's preferable to quantize more finely? I've heard that some monitors have a 10-bit LUT, but the panel only responds at 8 bits of precision. Is this something to do with the idea that only 256 different voltages can be applied simultaneously, but the values of these 256 voltages can be "tuned" with a precision of 10 bits?
 
So one of the questions I'm trying to get at is this:

Why are so many panels purportedly 8-bit when, according to what you've said, it's preferable to quantize more finely? I've heard that some monitors have a 10-bit LUT, but the panel only responds at 8 bits of precision. Is this something to do with the idea that only 256 different voltages can be applied simultaneously, but the values of these 256 voltages can be "tuned" with a precision of 10 bits?

We would need a display engineer to get an answer to the actual technical limitations, but as with most things I would guess it boils down to cost.

Note that panels are addressed in rows and columns. Each pixel can't have its own DAC, so the driver addresses one row at a time in sequence. This still requires a large number of DACs. Barring any clever buffering and multiplexing, it requires one on-chip DAC and output buffer per column sub-pixel. (Alternatively, scan columns and have DACs on the rows instead.)

Each pixel has three sub-pixel cells, so if the panel is 1600 pixels across (for example) the driver would need 1600 * 3 = 4800 DACs! I wouldn't be surprised if driver IC designers used clever tricks to push that number down, but it's still a lot of DACs.

When dealing with thousands of DACs, it's much cheaper and easier to implement 6-bit DACs than 8-bit DACs. 10-bit DACs are even more expensive.


To answer your other question: Having a higher LUT resolution than your output resolution is desirable for exactly the reason you suggested: You want enough extra precision in your calculation steps to minimize quantization errors at each processing step. This is why audio is (or should be) mastered at 24 bits even though real-world noise floors are in the 20-bit (or so) range. Those extra bits prevent losing data at each translation step. You then truncate the least significant bits only after the processing is complete.
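As a toy sketch of the idea (a made-up gamma-plus-gain chain; the point is quantizing once at the end rather than after every step):

```python
def chain_low_precision(level):
    """Quantize back to 8 bits after every processing step."""
    x = round((level / 255) ** 1.1 * 255)  # gamma tweak, then quantize
    return min(round(x * 1.05), 255)       # gain, then quantize again

def chain_high_precision(level):
    """Carry full precision through, quantize once at the end."""
    x = (level / 255) ** 1.1 * 1.05
    return round(min(x, 1.0) * 255)

diffs = sum(chain_low_precision(v) != chain_high_precision(v) for v in range(256))
print(f"{diffs} of 256 levels differ due to intermediate rounding")
```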

Here's a great paper on these subjects that you might enjoy: http://cdn.intechopen.com/pdfs-wm/11273.pdf
 
Thanks AgentQ, appreciate your patience and insights.

No problem, and thank you for starting such an interesting thread.

Disclaimer: I'm not a display engineer. Would love to have one chime in on this thread, though.
 