LG 48CX

I think you'd have to have the same 10 bit capable monitor and feed it 10bit images and 10bit renders over a 10bit signal from, for example, an amd gpu (or nvidia studio drivers in photoshop) - then compare the same material sent to the same 10 bit capable monitor model fed by regular nvidia gaming drivers. The nvidia gaming one would be below 10 bit, so it would rely on dithering. So you'd be comparing dithering, which is smoothing to avoid banding, to a source-accurate per-pixel 10bit pipeline. Making some kind of dithering gradient to smooth "rough" banding lines might look "ok" but it isn't accurate to the source material.

I haven't dug into any examples or 10bit test materials since I don't have such a panel yet. The frustrating thing is, as was mentioned in this thread - a lot of us thought we'd finally have hdmi 2.1 support on an OLED, so no more compromising with bandwidth limitations and workarounds.

EDIT: Rtings might have some here -- https://www.rtings.com/tv/tests/picture-quality/gradient

I think the biggest beef here is HDMI 2.1 was supposed to be the solution to all our current problems. Today we have to choose between several pros & cons in regards to resolution, refresh rate, color bit depth, etc. because you can't have it all with either HDMI 2.0b or DP 1.4 due to bandwidth limitations.

With the 48 to 40 Gbps downgrade, we still can't have our cake and eat it too. We yet again need to fiddle with settings and sacrifice something, and some people's choices are different than others'. This could probably be solved with a TV software update if LG chooses - maybe the full bandwidth can be unlocked while the TV is set to "game mode" if enough people are loud enough about it.

Nvidia could just start supporting 10bit output on gaming GPUs' gaming drivers :yuck:

- and LG could start supporting passing through uncompressed hdmi audio formats over its eARC on the CX series....

Hopefully they will both come through.
 
I mean if that's absolutely important to you, there is always the C9 OLED. 48" is still too big for many to use as a desktop monitor anyways, so you might as well save some cash, pick up a discounted C9, and have your no-compromises gaming display. Heh, then again there will always be some compromise because BURN-IN :rolleyes:
 
https://www.rtings.com/tv/tests/picture-quality/gradient

"we connect our test PC to the TV via HDMI, with the signal output via an NVIDIA GTX 1060 6GB graphics card. We display our gradient test image via the Nvidia ‘High Dynamic Range Display SDK’ program, as it is able to output a 1080p @ 60 Hz @ 10-bit signal, bypassing the Microsoft Windows environment, which is limited to an 8-bit. "

"10-bit color depth means a TV uses 10 bits for all three subpixels of each pixel, compared to the standard 8 bits. This allows 10-bit to specifically display many more colors; 8-bit TVs can display 2^(8*3) colors (16.7 million colors), versus 10-bit’s 2^(10*3) (1.07 billion colors). "

"10-bit is capable of capturing more nuance in the colors being displayed because there is less of a 'leap' from unique color to unique color. "

gradient-steps.png

"The final gradient score is determined from the test result and calculated to give the final gradient rating. The maximum rating for an 8-bit TV is 9.0 since a 1.0 point penalty is automatically given to any 8-bit having visible 8-bit gradient banding. "
 
I mean if that's absolutely important to you, there is always the C9 OLED. 48" is still too big for many to use as a desktop monitor anyways, so you might as well save some cash, pick up a discounted C9, and have your no-compromises gaming display. Heh, then again there will always be some compromise because BURN-IN :rolleyes:

I'll probably just wait it out until potential Black Friday price drops and/or until nvidia "3080ti" 7nm hdmi 2.1 GPUs are out. Hopefully Nvidia will enable 10bit hdmi 2.1 output and LG will enable uncompressed hdmi 2.1 audio pass-through over its eARC by then.

If both issues are left unsupported indefinitely then I could consider a 55" C9, though I'd really rather not jump up a size (and distance) since I'd have to upscale the other monitor(s) in my array alongside it a bit more, losing some screen real estate and having to sit my peripheral desk and chair farther back in my modest-sized pc room.
 
I have owned:
2 LG Curved C6's
1 LG Flat C7
1 LG Flat C9
1 AW55

and knock on wood, never had any burn in problems.....
....then again I practice safe sex with OLED displays.

It amazes me how people are currently using 6 bit TN's and lowly 8 bit IPS displays, yet a 10 bit OLED display is all of a sudden NOT GOOD ENOUGH lmao.... please, the fidelity experience on the 48CX is gonna blow every other gaming monitor away.
 
So with 120hz on PC, is the CX going to dynamically drop refresh rates like any Gsync display to accommodate FPS down to 60hz (or lower)? I can see most gaming running around the 60<->90hz range with high settings anyway.

Also LG oleds have supported DolbyVision for years, even in HDMI 2.0. I highly doubt LG would make the latest models incompatible with 12bit signals in any way. This idea that a 2.1 port will “downsample” better doesn’t make sense. Perhaps the tv processor can calculate more accuracy with actual 12bit mastered content, I’m not sure. I wouldn’t be surprised if the display works fine with nvidia’s driver set to 12bit option too.
 
It'll be great, but hopefully nvidia will eventually enable actual 4k 120Hz 10bit 444 to the 10 bit panel off a gaming gpu with gaming drivers, to avoid 8bit banding or the detail loss from using dithering to blend out the banding - in the 3000 series with hdmi 2.1 at least.

LG could eventually enable pass-through of uncompressed hdmi 2.1 audio formats in firmware updates too. Definitely 7.1 PCM uncompressed, since you can already enable it with a CRU EDID hack. DTS-HD and Atmos aren't out of the realm of possibility either since, according to Rtings, the C9 can do both.
 
So with 120hz on PC, is the CX going to dynamically drop refresh rates like any Gsync display to accommodate FPS down to 60hz (or lower)? I can see most gaming running around the 60<->90hz range with high settings anyway.

Also LG oleds have supported DolbyVision for years, even in HDMI 2.0. I highly doubt LG would make the latest models incompatible with 12bit signals in any way. This idea that a 2.1 port will “downsample” better doesn’t make sense. Perhaps the tv processor can calculate more accuracy with actual 12bit mastered content, I’m not sure. I wouldn’t be surprised if the display works fine with nvidia’s driver set to 12bit option too.

Yep.
 
LG oleds have supported DolbyVision for years, even in HDMI 2.0. I highly doubt LG would make the latest models incompatible with 12bit signals in any way. This idea that a 2.1 port will “downsample” better doesn’t make sense. Perhaps the tv processor can calculate more accuracy with actual 12bit mastered content, I’m not sure. I wouldn’t be surprised if the display works fine with nvidia’s driver set to 12bit option too.

Support for each depends on what Hz you are running, what chroma you are running, and how much bandwidth the port has. We are only recently able to do 4k 120hz natively on TVs.

Those of us complaining are saying we want 4k 120hz 4:4:4 10bit support off of an nvidia gaming gpu using gaming drivers. The LG CX supports 10bit 4:4:4 at 4k, but Nvidia currently doesn't, except off of an RTX gpu using Studio drivers rather than gaming drivers, and only with certain apps. AMD supports 10bit output on their gaming drivers. Earlier generations of hdmi 2.0 LG OLEDs were limited to 60hz and in some pipelines were probably using lower chroma too.

For example,
4k 60hz (or less for most movies) running 4:4:4 8bit requires 18Gbps uncompressed (the limit of current good hdmi 2.0b cables)
while
4k 120hz 4:4:4 10Bit uncompressed requires 40Gbps
4k 120hz 4:4:4 12Bit uncompressed requires 48Gbps
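
Those bandwidth figures can be reproduced (roughly) from the video timing. The sketch below assumes CTA-861-style blanking for 4k (4400x2250 total pixels per frame) and HDMI 2.1's 16b/18b FRL coding overhead - both assumptions on my part, and the exact numbers shift a little with different timings, but they land right around the 40 / 48 Gbps values above:

```python
def hdmi_link_rate_gbps(h_total, v_total, refresh_hz, bit_depth, chroma="444"):
    """Approximate HDMI link rate for an uncompressed video mode (illustrative only)."""
    bits_per_pixel = {"444": 3.0, "422": 2.0, "420": 1.5}[chroma] * bit_depth
    pixel_clock_hz = h_total * v_total * refresh_hz
    payload_bps = pixel_clock_hz * bits_per_pixel
    return payload_bps * 18 / 16 / 1e9   # add FRL 16b/18b coding overhead

# 3840x2160 active with standard blanking -> 4400x2250 total
for depth in (8, 10, 12):
    print(f"4k 120hz 4:4:4 {depth:>2}bit = {hdmi_link_rate_gbps(4400, 2250, 120, depth):.1f} Gbps")

# 4k 120hz 4:4:4  8bit = 32.1 Gbps
# 4k 120hz 4:4:4 10bit = 40.1 Gbps  (right at the CX's 40Gbps ceiling)
# 4k 120hz 4:4:4 12bit = 48.1 Gbps  (needs the C9's full 48Gbps)
```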


The C9's HDMI 2.1 has a 48Gbps limit, so it can accept a 12bit signal at 120hz even though it's a 10bit panel, meaning nvidia gaming drivers should work by sending 12bit.
The CX's HDMI 2.1 has a 40Gbps limit, so it cannot accept a 12bit signal at 120Hz. It can accept a 10bit one, but Nvidia currently doesn't support 10 bit on gaming gpus with gaming drivers.

345686_2HIg99W.png



-----------------------------------------------------------

So the video front is limited by Nvidia not supporting 10bit output really. The LG CX will accept 10bit. Running 8bit will result in banding which can be covered up by using dithering but dithering loses 1:1 detail compared to the source. Running lower chroma also results in a lower quality.

So I wouldn't blame LG but rather nvidia for the 10bit video issue. However, LG is missing support on the audio front, as they currently do not support passing uncompressed HDMI audio formats through the TV's eARC port like hdmi 2.1 TVs should.

345593_89pVD56.jpg

--------------------------------------
 
10-bit over DP is available and at least as far as the actual connection goes it works just fine. This can be tested on an 8-bit gradient like http://www.lagom.nl/lcd-test/gradient.php while changing the gamma slider in the Nvidia control panel. Nvidia for some unknown reason does not enable dithering like AMD does (even though the hardware supports it and dithering can be kinda enabled), so on an 8-bit connection there will be visible banding when changing gamma.

A separate issue is support for actual 10-bit output in all sorts of applications. We have these basic types:
- games and other DirectX applications - will not have banding even on an 8-bit connection; not sure if they output actual 10-bit colors or not on a 10-bit connection. It also probably depends on whether they run windowed or full screen.
- DirectX with HDR - I have not tested this.
- professional applications using OpenGL like Photoshop - true 10-bit should require the pro driver. Not sure if it has banding or not on the normal driver.
- others like maybe Vulkan, Vulkan with HDR, etc. - requires testing.

The Windows API is 8-bit and we are only beginning to see actual 10-bit panels, and (true) HDR is still not really here yet, so we must expect some issues. Especially since dithering masks 8-bit deficiencies so well it is really hard to say whether what we are looking at is 8-bit with dithering or true 10-bit. At least this means that if an application actually displays 8-bit + dithering and not true 10-bit we won't be able to see the difference 🙂

The first step to sort out this mess is to bother Nvidia to enable support for 10-bit 444 modes over HDMI.
Then comes finding/writing some test applications for DirectX/Vulkan (+HDR) stuff and figuring out exactly where to look to see whether it is true 10-bit or fake, and if it is not true 10-bit then bother Nvidia (and AMD/Intel).
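
If anyone wants a quick way to eyeball banding vs. dithering without hunting for test content, here is a rough sketch (assumes numpy and Pillow are installed; the dither here is plain random noise added before quantization, not whatever the driver or panel actually uses):

```python
import numpy as np
from PIL import Image

W, H = 1024, 256
# A smooth, shallow dark-grey ramp in floating point - "the source material".
ramp = np.tile(np.linspace(0.0, 0.25, W), (H, 1))

# Straight 8-bit quantization: visible banding on a shallow gradient.
banded = np.round(ramp * 255).astype(np.uint8)

# 8-bit with +/- 0.5 LSB random dither: the steps get broken up, but
# individual pixels no longer match the source values 1:1.
noise = (np.random.rand(H, W) - 0.5) / 255
dithered = np.clip(np.round((ramp + noise) * 255), 0, 255).astype(np.uint8)

Image.fromarray(np.vstack([banded, dithered])).save("banding_vs_dither.png")
print("Top half: plain 8-bit. Bottom half: 8-bit + dither. View at 100% zoom.")
```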
 
Has anyone tested 4k120 444 10bit on an AMD card? I have an HTPC that I stuck a 1060 in for VP9 support going to my CX65 (bought before I was aware of the 40Gbps nerf), and now I'm wishing I went AMD. I have to drop 4k to 24hz to get RGB/4:4:4 at 12bit (10bit not selectable).

Also I saw some people discussing this early but nobody gave their results - if you duplicate display and send one HDMI out to CX and one HDMI out to AVR does this work fluidly?
AMD can't run 4k/120 10bit - no card can currently. You need HDMI 2.1 or DP 2.0 (up to 77Gbps!).
Surely could drop to 4k100 or so with 12bit instead via CRU or similar?
 
10-bit over DP is available and at least as far as the actual connection goes it works just fine. This can be tested on an 8-bit gradient like http://www.lagom.nl/lcd-test/gradient.php while changing the gamma slider in the Nvidia control panel. Nvidia for some unknown reason does not enable dithering like AMD does (even though the hardware supports it and dithering can be kinda enabled), so on an 8-bit connection there will be visible banding when changing gamma.

A separate issue is support for actual 10-bit output in all sorts of applications. We have these basic types:
- games and other DirectX applications - will not have banding even on an 8-bit connection; not sure if they output actual 10-bit colors or not on a 10-bit connection. It also probably depends on whether they run windowed or full screen.
- DirectX with HDR - I have not tested this.
- professional applications using OpenGL like Photoshop - true 10-bit should require the pro driver. Not sure if it has banding or not on the normal driver.
- others like maybe Vulkan, Vulkan with HDR, etc. - requires testing.

The Windows API is 8-bit and we are only beginning to see actual 10-bit panels, and (true) HDR is still not really here yet, so we must expect some issues. Especially since dithering masks 8-bit deficiencies so well it is really hard to say whether what we are looking at is 8-bit with dithering or true 10-bit. At least this means that if an application actually displays 8-bit + dithering and not true 10-bit we won't be able to see the difference 🙂

The first step to sort out this mess is to bother Nvidia to enable support for 10-bit 444 modes over HDMI.
Then comes finding/writing some test applications for DirectX/Vulkan (+HDR) stuff and figuring out exactly where to look to see whether it is true 10-bit or fake, and if it is not true 10-bit then bother Nvidia (and AMD/Intel).

You would need HDMI 2.1 gpus to really test 4k 444 120hz 10bit but would also need NVIDIA to enable 10-bit 444 in the first place.

The LG OLED TVs might have dithering enabled by default as necessary. If so, it could complicate testing unless you can disable it. The C series apparently had a dithering effect introduced in an update "as a method to fix the near black chrominance overshoot," for example:

Reddit: LG C9 - Dithering/Grainy Greys?

"This was introduced with the update from 4.10.15 to 4.10.31 as a method to fix the near black chrominance overshoot. Before that update, when things moved very close to black, you would get a flashing brighter gray color artifact. For example in a scene with dark hair in a shadow, you would see gray flashing artifacts in the darkest part of the hair. Another example is the main menu of the game Control. There are a bunch of rotating gray cube things, and in the shadows of the cubes (look at top right of menu) you would get gray flashing artifacts.
Since the update near black grays have a dithered pattern to them, which completely fixed the overshoot and there are no more flashing artifacts. If you look closely (like where you sitting from) you can notice the dithering, but from a normal distance the dithering is patterned in such a way that the dither blends into the appropriate blended color.
Tldr: it's supposed to be there, they added it to fix an artifact bug"


Review: LG C9 OLED 28 May 2019

"Before we switch our attention to HDMI 2.1, we want to address the subject of brightness flashing and blocking issues. LG has released a solution for its existing TVs. We never had a chance to examine it on a 2018 LG OLED but based on what we have heard this solution seems to have been employed in 2019 LG OLED TVs as well. It works in the sense that we did not observe the issues with any of our real scene or pattern tests. But the solution bothers us. As you can see in the shot to the right, the panel now uses dithering to reproduce shades of grey. And no, it is not erotic in any way, in fact it appears to reduce resolution in the darkest tones, which can be observed on a black to white gradient that produces some banding. The same gradient looked smoother (at the dark end) and produced less banding on the Sony A9G standing next to LG C9 (more side-by-side comparisons will be included in our upcoming Sony A9G review). LG's smooth gradation setting did not resolve it. We hope that this is a temporary solution and that LG continues to explore a real fix."

--------------------------------------------------

.... but I am talking about overall dithering on an 8bit signal sent to a 10bit panel to avoid banding, not the near-black fix.

I suspect that overall there would be a fidelity hierarchy starting with 1:1 uncompressed original 10bit content shown over a 10bit pipeline to the 10 bit panel, (at 120hz 4k 444 10bit for example off of a 3000 series gpu).

Something like:

1:1 Uncompressed 10bit > DSC compressed 12bit (3:1 down to 18gbps) > dithered 8bit > 4:2:2 or 4:2:0 chroma > upscaled lower resolution

I don't know if that is the actual order after the source material at the top of the hierarchy, that was just guesswork, but for me I'd like to have full uncompressed original 10bit material support at some point.
 
You would need HDMI 2.1 gpus to really test 4k 444 120hz 10bit but would also need NVIDIA to enable 10-bit 444 in the first place.
To test it at 120Hz, yes, but for testing whether an application that is supposed to have 10-bit output is actually using 8-bit with dithering it is better to use the lowest possible refresh rate, e.g. 24Hz
 
It cannot be true lossless compression.
VESA claims it is "visually lossless", which only means they did a bunch of tests with various test pictures and people generally could not tell when it was used.

DSC operates on a per-line basis so it should add only a minuscule amount of input lag, and how visible the compression is depends on the content of the whole line; in the case of most images it should look very good. In videos you have mostly gradients and the color changes within a line are not very sharp. In a typical desktop scenario you do have sharp color changes within a line, but usually also a lot of pixels which have the same or similar color/pattern, and those are easily compressible.

DSC will probably be visible only on specially prepared test images designed to exploit its weaknesses. It should be better than dropping color resolution and/or bit depth.

It's possible to have even less input lag with DSC because you cut down the transmission time.

I suppose that's also a disadvantage of 40 gbps instead of 48. It takes an extra millisecond to transmit the entire signal.
 
To test it at 120Hz, yes, but for testing whether an application that is supposed to have 10-bit output is actually using 8-bit with dithering it is better to use the lowest possible refresh rate, e.g. 24Hz

Yes, you are right. I think you could for example test 10 bit 4k 444 at 60hz or lower off an AMD gpu over an 18Gbps hdmi cable and compare it to 4k 444 at an 8bit setting, dithered. I don't know exactly how you would enable and disable dithering on the GPU side and the TV side but it should be possible.

I also don't know if you can force DSC (3:1 compression) to compare but I think modern DSC needs hdmi 2.1 anyway so that would have to wait.

Then there is 4:2:2 chroma and 4:2:0 chroma, and upscaling a lower resolution as comparisons.

Each of those subsequent settings is by definition some step below displaying the 1:1 uncompressed 10bit 444 source material.

As mentioned, LG had some kind of dithering fw update to "fix" (a work-around really) near black greys so they don't flicker/flash too. I'm not sure if that is a setting in the OSD but you probably have to leave it on anyway to avoid a much worse issue. I think it only affects and applies dithering to the "near black" grey pixels.

-----------------------------------------------------

1:1 uncompressed original source material is best (even if it's using dithering on near-black greys). I'm sure 8bit dithered entirely looks ok - depending on the content and especially the viewing distance. I'm still curious how they all compare, but really I'd rather have nvidia supporting 10bit 444 4k 120hz output on their hdmi 2.1 gpus to look forward to... (and LG supporting uncompressed HDMI 2.1 audio formats via eARC pass-through).
 
10-bit over DP is available and at least as far as the actual connection goes it works just fine. This can be tested on an 8-bit gradient like http://www.lagom.nl/lcd-test/gradient.php while changing the gamma slider in the Nvidia control panel. Nvidia for some unknown reason does not enable dithering like AMD does (even though the hardware supports it and dithering can be kinda enabled), so on an 8-bit connection there will be visible banding when changing gamma.

A separate issue is support for actual 10-bit output in all sorts of applications. We have these basic types:
- games and other DirectX applications - will not have banding even on an 8-bit connection; not sure if they output actual 10-bit colors or not on a 10-bit connection. It also probably depends on whether they run windowed or full screen.
- DirectX with HDR - I have not tested this.
- professional applications using OpenGL like Photoshop - true 10-bit should require the pro driver. Not sure if it has banding or not on the normal driver.
- others like maybe Vulkan, Vulkan with HDR, etc. - requires testing.

The Windows API is 8-bit and we are only beginning to see actual 10-bit panels, and (true) HDR is still not really here yet, so we must expect some issues. Especially since dithering masks 8-bit deficiencies so well it is really hard to say whether what we are looking at is 8-bit with dithering or true 10-bit. At least this means that if an application actually displays 8-bit + dithering and not true 10-bit we won't be able to see the difference 🙂

The first step to sort out this mess is to bother Nvidia to enable support for 10-bit 444 modes over HDMI.
Then comes finding/writing some test applications for DirectX/Vulkan (+HDR) stuff and figuring out exactly where to look to see whether it is true 10-bit or fake, and if it is not true 10-bit then bother Nvidia (and AMD/Intel).

Well that's the thing, if they already support it through DP1.4, then why would they NOT support it through HDMI 2.1? Just doesn't make a whole lot of sense, yeah let's give 444 10bit support on DP1.4 but totally ignore that for HDMI 2.1. I'm sure they'll come around.
 
Well that's the thing, if they already support it through DP1.4, then why would they NOT support it through HDMI 2.1? Just doesn't make a whole lot of sense, yeah let's give 444 10bit support on DP1.4 but totally ignore that for HDMI 2.1. I'm sure they'll come around.
It is not about HDMI 2.1; they just do not have 10bit 444 over HDMI at all, and it already affects users who connect their HDTVs to their PCs. Most HDTVs downsample everything to 4:2:2 or 4:2:0 so it is not such a big issue, but still.
It does not make any sense but this is how it is.
You will probably actually need an HDMI 2.1 card (Ampere?) to get it...
 
It's possible to have even less input lag with DSC because you cut down the transmission time.

I suppose that's also a disadvantage of 40 gbps instead of 48. It takes an extra millisecond to transmit the entire signal.
It does not work like this.
Frame transmission time is always 1000 / refresh_rate, so e.g. for 100Hz it will always be 10ms. This is also true when you use VRR.

Normally the picture is transmitted pixel by pixel, which is pretty much an exact digital representation of the analogue VGA output used for cathode ray tubes.
Where DSC seems to differ is that it encodes a whole line of the picture, compresses it, and the decoder can only decode that line after it has received every single bit of it. This means we do get increased input lag of exactly one line. This is of course an insignificant amount of input lag and perhaps not even worth mentioning, but technically it is there, so it is not true that DSC does not introduce any input lag 🙂 An entirely different question is whether the actual implementation of the signal processor that decodes the DSC stream introduces input lag. We all know that by Murphy's Law, if something in an implementation can be botched it probably will be.

Reducing the amount of data you need to send for a line/frame does not by itself mean you will have reduced input lag, but it can be used to run faster refresh rates on the same physical connection, and a faster refresh rate will reduce input lag.
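
To put "one line" of added delay in perspective, a quick calculation (assuming a CTA-861-style 4k timing with 2250 total lines per frame - the real figure depends on the actual blanking used):

```python
refresh_hz = 120
total_lines = 2250   # 2160 active lines + blanking, CTA-861-style 4k timing

frame_time_us = 1_000_000 / refresh_hz
line_time_us = frame_time_us / total_lines

print(f"Frame time at {refresh_hz} Hz: {frame_time_us:.0f} us")
print(f"One line of DSC buffering: {line_time_us:.1f} us")

# Frame time at 120 Hz: 8333 us
# One line of DSC buffering: 3.7 us
```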
 
It is not about HDMI 2.1; they just do not have 10bit 444 over HDMI at all, and it already affects users who connect their HDTVs to their PCs. Most HDTVs downsample everything to 4:2:2 or 4:2:0 so it is not such a big issue, but still.
It does not make any sense but this is how it is.
You will probably actually need an HDMI 2.1 card (Ampere?) to get it...

That's totally fine as current cards are limited to 4k120Hz at 4:2:0 anyways. As long as nvidia decides to support 10bit 444 over HDMI with Ampere then I'm sure that will be enough to satisfy just about every CX owner, as I don't think many will be pairing them up with an older non-HDMI 2.1 card at all.
 
https://www.cnet.com/news/nvidia-st...geforce-30-bit-color-unto-photoshop-and-more/ (July 29, 2019)

"Photoshop has long given you the option to turn on a 30-bit color pipe between it and the graphics card. But if you enabled it on a system with a consumer-targeted GeForce or Titan graphics card, it didn't do anything. That's why there's always been such confusion as to whether you could display 30-bit color with a GeForce card. I mean, there's a check box and you can check it! "

----------------------------------------------------------

"To properly take advantage of this, you still need all the other elements -- a color-accurate display capable of 30-bit (aka 10-bit) color, for one. The ability to handle a 30-bit data stream is actually pretty common now -- most displays claiming to be able to decode HDR video, which requires a 10-bit transform, can do it -- but you won't see much of a difference without a true 10-bit panel, which are still pretty rare among nonprofessionals.

That's because most people associate insufficient bit depth with banding, the appearance of visually distinguishable borders between what should be smoothly graduated color. Monitors have gotten good at disguising banding artifacts by visually dithering the borders between colors where necessary. But when you're grading HDR video or painting on 3D renders, for example, dithering doesn't cut it.

And the extra precision is surely welcome when your doctor is trying to tell the difference between a tumor and a shadow on his cheap system. From Nvidia's own white paper in 2009: "While dithering produces a visually smooth image, the pixels no longer correlate to the source data. This matters in mission-critical applications like diagnostic imaging where a tumor may only be one or two pixels big."
The studio driver update was only for windowed OpenGL applications. NVIDIA added 10-bit color support for DirectX applications on GeForce cards with Maxwell "2" (9-series cards). 10-bit color support was added to GeForce cards for fullscreen OpenGL applications with Pascal.
 
^ over displayport

... but thanks, I didn't know that. That post still notes how banding is a problem and that using dithering as a workaround to hide banding caused by running a lower bit depth is not as good as just supporting the transmission of the 10bit source material 1:1.

4k 10bit 444 over hdmi is needed for these TVs (especially since it was revealed they can't send 12bit 4k 444 over their 40Gbps limit) --- I think AMD gpus can send 10bit 444 over hdmi and Nvidia can't, at least not yet.

-----------------------------------------------------------------------------------------------

Need Nvidia to support 10bit 444 output over 18Gbps hdmi 2.0b where bandwidth allows now, and later 10bit 4k 444 120hz on the 7nm hdmi 2.1 (40Gbps) gpus - source material sent as-is to the 10 bit 4k 444 (60Hz - 120hz) LG CX TV's hdmi 2.1 input, 1:1 uncompressed to the 10bit LG CX panel.

Also need LG to update firmware to support uncompressed hdmi audio formats (7.1 PCM, DTS-HD, ATMOS) pass-through via eARC to an eArc capable receiver or other sound device.

Then I'd be able to send 4k (60hz - 120hz) 10bit 444 material to the 4k 120hz 10bit 444 panel 1:1, and I'd be able to send uncompressed hdmi audio formats to my receiver. I'd rather not have to compress my video or audio signals on material where I shouldn't have to, and wouldn't have to on other displays.
 
I wonder if 4K 100Hz would be possible using 8-bit 4:2:2 or even 4:2:0 on current RTX GPU's
 
It does not work like this.
Frame transmission time is always 1000 / refresh_rate, so e.g. for 100Hz it will always be 10ms. This is also true when you use VRR.

Normally the picture is transmitted pixel by pixel, which is pretty much an exact digital representation of the analogue VGA output used for cathode ray tubes.
Where DSC seems to differ is that it encodes a whole line of the picture, compresses it, and the decoder can only decode that line after it has received every single bit of it. This means we do get increased input lag of exactly one line. This is of course an insignificant amount of input lag and perhaps not even worth mentioning, but technically it is there, so it is not true that DSC does not introduce any input lag 🙂 An entirely different question is whether the actual implementation of the signal processor that decodes the DSC stream introduces input lag. We all know that by Murphy's Law, if something in an implementation can be botched it probably will be.

Reducing the amount of data you need to send for a line/frame does not by itself mean you will have reduced input lag, but it can be used to run faster refresh rates on the same physical connection, and a faster refresh rate will reduce input lag.

It can reduce input lag. The start of the frame is received at the same time, but since the frame is smaller it's transmitted faster so the scan out of the frame can happen faster. So the further down the frame the less input lag compared to an uncompressed signal.

If the screen isn't being updated as the signal is received and instead waits for the entire frame, then input lag for the entire frame would be less.
 
I wonder if 4K 100Hz would be possible using 8-bit 4:2:2 or even 4:2:0 on current RTX GPU's
Doing that or running 3840x1600 ultrawide at lower fidelity types would be interesting to try for now..

idk the answer to your question specifically, but as you know, each time you go down a notch you reduce the original source material signal from 1:1.

It'll downgrade the source material by some amount whether:
-dropping from 10bit to 8bit and using dithering to smudge out the cruder color steps (dithering adds "noise" to cover the resulting 8bit banding)
- compressing 3:1 using DSC if available on hdmi 2.1
-dropping from 4:4:4 to 4:2:2 or 4:2:0 (poorer, as evidenced by text quality tests, which also relates to texture detail and other fine color detail)
-upscaling 1440p (without DLSS "AI upscaling")

The lower the ppi relative to your perspective (i.e. the shorter the viewing distance), the worse the downgrade will look with some of these "workarounds".

-------------------------------------------------------------------------------

In a way this reminds me of the samsung TV coating to "fix" viewing angles.. where people started wondering if it should still be considered an 8k screen compared to competing screens if you can't see the individual pixels anymore (since they are blended by the viewing angle coating).

To be fair, the TV can accept a 10bit 444 signal over HDMI; Nvidia apparently is not able to send it over HDMI from gaming gpus/drivers currently. So that ball is in nvidia's court, though on a different hdmi 2.1 TV with the full 48Gbps of HDMI 2.1 lanes for 12bit 4k 120hz 444 signal capability (translated down to 10bit panels like the C9), this probably wouldn't be an issue once we have hdmi 2.1 gpus.

LG needs to support uncompressed HDMI audio formats over eARC passthrough though. (PCM 7.1 uncompressed, DTS-HD , ATMOS)


 
I wonder if 4K 100Hz would be possible using 8-bit 4:2:2 or even 4:2:0 on current RTX GPU's
4:2:2 subsampling reduces bandwidth by a third, if I remember correctly. So you should be able to hit that target on HDMI 2.0.
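
Back-of-the-envelope on that, using the same kind of raw data-rate math as earlier in the thread. This ignores HDMI 2.0's TMDS clock limit and pixel-packing rules entirely (which is part of why 4:2:0 is the mode that tends to actually work in practice), so treat it as a rough sanity check, not a compatibility guarantee. The ~14.4 Gbps usable payload figure and the timing used are my assumptions:

```python
HDMI20_PAYLOAD_GBPS = 14.4   # 18 Gbps raw minus 8b/10b TMDS coding overhead

def payload_gbps(h_total, v_total, refresh_hz, bit_depth, chroma):
    # Raw uncompressed video data rate, no link-coding overhead included.
    bpp = {"444": 3.0, "422": 2.0, "420": 1.5}[chroma] * bit_depth
    return h_total * v_total * refresh_hz * bpp / 1e9

# 4k 100hz 8bit with standard CTA blanking (4400x2250 total)
for chroma in ("444", "422", "420"):
    rate = payload_gbps(4400, 2250, 100, 8, chroma)
    fits = "yes" if rate < HDMI20_PAYLOAD_GBPS else "no"
    print(f"4k 100hz 8bit {chroma}: {rate:.1f} Gbps -> fits in 14.4 Gbps? {fits}")

# 4k 100hz 8bit 444: 23.8 Gbps -> fits in 14.4 Gbps? no
# 4k 100hz 8bit 422: 15.8 Gbps -> fits in 14.4 Gbps? no   (marginal even with reduced blanking)
# 4k 100hz 8bit 420: 11.9 Gbps -> fits in 14.4 Gbps? yes
```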
 
It is not about HDMI 2.1; they just do not have 10bit 444 over HDMI at all, and it already affects users who connect their HDTVs to their PCs. Most HDTVs downsample everything to 4:2:2 or 4:2:0 so it is not such a big issue, but still.
It does not make any sense but this is how it is.
You will probably actually need an HDMI 2.1 card (Ampere?) to get it...
This appears to be right since I don't have the option for 10 bit in this picture but I do have the option for 12 bit.

No 10Bit.png
 
It can reduce input lag. The start of the frame is received at the same time, but since the frame is smaller it's transmitted faster so the scan out of the frame can happen faster. So the further down the frame the less input lag compared to an uncompressed signal.

If the screen isn't being updated as the signal is received and instead waits for the entire frame, then input lag for the entire frame would be less.
Like I said, you must increase the refresh rate to get reduced input lag.
The frame gets transmitted in exactly the same time irrespective of the maximum bandwidth allowed by a given connection, the digital color format, or even the resolution. The only thing affecting how long it takes to transfer a frame from GPU to monitor is the vertical frequency. Strictly it is the per-line time multiplied by the number of lines, but since the number of lines is fixed and the pixel frequency is derived from the vertical refresh, it is the refresh rate that determines frame transmission time.
So for example 640x480 @ 8bit 4:2:0 will be transferred in exactly the same time as 3840x2160 @ 12bit 4:4:4 if both run at 60Hz.
VRR does not affect frame transmission time; it only increases the vertical blanking time to avoid waiting when your frame is not ready before the next frame would start. Otherwise the only way to avoid waiting is to switch what you are sending to the new frame mid-stream (V-Sync OFF), and this causes tearing.
If you play a game which is capped at 60fps you will get slightly more input lag at 120Hz than at 240Hz because in the latter case the frame is transmitted twice as fast. This will mostly affect the bottom of the screen though.

So again: reducing the bandwidth requirement for a given video mode can lead to reduced input lag, but this can only happen when you increase the refresh rate.
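
The 60fps-cap example above is just scan-out time; a rough sketch of the arithmetic:

```python
# The cable always scans a frame out over one full refresh period,
# regardless of the game's internal frame rate.
for hz in (60, 120, 240):
    print(f"{hz:3d} Hz link: frame scan-out takes {1000 / hz:5.2f} ms")

#  60 Hz link: frame scan-out takes 16.67 ms
# 120 Hz link: frame scan-out takes  8.33 ms
# 240 Hz link: frame scan-out takes  4.17 ms
# So a 60fps-capped game gets each frame fully delivered ~4ms sooner over a
# 240Hz link than over a 120Hz one, mostly felt at the bottom of the screen.
```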
 
How about we all just wait until Ampere launches and then see what Nvidia does? My guess is that they will support 10bit over HDMI but ONLY on Ampere cards; everyone with Turing and below will be screwed over. But for now NOBODY is getting 4k120Hz 10bit 444 - and that means nobody, not AMD and not nvidia - so how about we stop complaining about something we can't even have regardless? Once Ampere launches with HDMI 2.1 and Nvidia still refuses to allow 10bit over HDMI, then go ahead and complain all you want.
 
Problem is, some of us wanted to buy a TV with hdmi 2.1 thinking we'd be covered for the future hdmi 2.1 scenarios/pipelines.. unlike hdmi 2.0b displays. Now that isn't certain between the 40Gbps limitation of the display and Nvidia not supporting 10bit output on hdmi. Also, the lack of support from LG for uncompressed hdmi audio formats over the TV's eARC. Now I have no choice but to wait and see. I was leaning that way anyway but it would have been nice to have the option to buy a CX "early". Once again display manufacturers with gaming geared models and NVIDIA are fragmenting and piecemealing feature support.
 
Problem is, some of us wanted to buy a TV with hdmi 2.1 thinking we'd be covered for the future hdmi 2.1 scenarios/pipelines.. unlike hdmi 2.0b displays. Now that isn't certain between the 40Gbps limitation of the display and Nvidia not supporting 10bit output on hdmi. Also, the lack of support from LG for uncompressed hdmi audio formats over the TV's eARC. Now I have no choice but to wait and see. I was leaning that way anyway but it would have been nice to have the option to buy a CX "early". Once again display manufacturers with gaming geared models and NVIDIA are fragmenting and piecemealing feature support.

LG probably needs to have some sort of upgrade justification for 2021's OLED sets. "Hey we got so much backlash by reducing 48Gbps to 40Gbps on the CX that we decided to bring it back on the C11 OLED! Plus uncompressed HDMI audio too!" I mean if they already did all of that on the CX then what reason is there to run out and buy a C11? Still doesn't change the fact that it's a lame move but regardless for 2020 nothing is beating the CX for PC gaming.
 
Nvidia could end up supporting 10bit 444 off of hdmi, at least with the 7nm "3000" series .. there is hope, just not for sure if you buy the CX ahead of time. Then if 10bit 444 were supported, the lack of the last 8Gbps shouldn't matter on the 10bit panel.

LG could end up supporting full uncompressed audio formats (7.1 PCM uncompressed, DTS-HD, ATMOS) pass-through over the CX's eARC eventually too - there is hope, especially since you can already do a CRU EDID hack to get 7.1 PCM uncompressed audio... It's just not a sure bet overall if you buy the CX ahead of time.

Where a display with hdmi 2.1 standard was supposed to be a sure thing feature support wise, I have to wait to see what's up until 7nm hdmi 2.1 gpus, potential nvidia 10bit 444 support updates and LG audio eARC firmware updates if any.

Otherwise buy a C9 I guess but the jump from 48" to 55" is a lot for a desk island type set up and it also would be more of a stretch from my other monitor(s) in the array.

It could all pan out for the CX (and nvidia gaming) on supporting the highest uncompressed sources of video and audio it is supposed to, I'm just not buying a CX (for $1500+ $131 tax) until I know for sure.
 
Nvidia could end up supporting 10bit 444 off of hdmi, at least with the 7nm "3000" series .. there is hope, just not for sure if you buy the CX ahead of time. Then if 10bit 444 were supported, the lack of the last 8Gbps shouldn't matter on the 10bit panel.

LG could end up supporting full uncompressed audio formats (7.1 PCM uncompressed, DTS-HD, ATMOS) pass-through over the CX's eARC eventually too - there is hope, especially since you can already do a CRU EDID hack to get 7.1 PCM uncompressed audio... It's just not a sure bet overall if you buy the CX ahead of time.

Where a display with hdmi 2.1 standard was supposed to be a sure thing feature support wise, I have to wait to see what's up until 7nm hdmi 2.1 gpus, potential nvidia 10bit 444 support updates and LG audio eARC firmware updates if any.

Otherwise buy a C9 I guess but the jump from 48" to 55" is a lot for a desk island type set up and it also would be more of a stretch from my other monitor(s) in the array.

It could all pan out for the CX (and nvidia gaming) on supporting the highest uncompressed sources of video and audio it is supposed to, I'm just not buying a CX (for $1500+ $131 tax) until I know for sure.

Yeesh, yeah. I checked this thread for the first time today in a month or so and there's so much negativity! This was supposed to be our holy savior of gaming displays! Maybe I WILL just get a 55 C9 and wall mount it above my desk. I love my C9 65 in the living room. Wouldn't trade it for anything (except a 77" C9).
 
Yeesh, yeah. I checked this thread for the first time today in a month or so and there's so much negativity! This was supposed to be our holy savior of gaming displays! Maybe I WILL just get a 55 C9 and wall mount it above my desk. I love my C9 65 in the living room. Wouldn't trade it for anything (except a 77" C9).

Well, depending on your use case scenario, the CX will still be the best gaming display bar none. Me personally, I don't care at all about HDMI audio and I plan to use this thing at 4k120Hz 444 8bit SDR, which is how I'm currently using my Acer X27, so really none of the "negatives" mean anything to me. By the time 48Gbps is ACTUALLY needed... I'll just buy the next OLED set that has it. :)
 
^ over displayport
4k 10bit 444 over hdmi is needed for these TVs (especially since it was revealed they can't send 12bit 4k 444 over their 40Gbps limit) --- I think AMD gpus can send 10bit 444 over hdmi and Nvidia can't, at least not yet.
Yup, what Armenius said is true.
This is one of the reasons I'm happy to run hot, crappy AMD gpus with drivers from the 90s but somehow at 10bit unlike their competition.
Nvidia didn't change their 10bit fuckery (used to be no 10bit outside of DX windows according to their official forum response) till the 10 series, when they relaxed it a little once more people became aware of the gimping they were doing. It looks like you nvidia users will need to raise hell again to get the flag enabled for HDMI 2.1 on your expensive GPUs.
 
Couldn’t you just use the studio drivers? In the big scheme of things a few FPS probably isn’t going to matter.

Do the studio drivers even allow you to set 10bit on the desktop? I never tried.
 
NVIDIA cards have 10-bit color support over DisplayPort. NVIDIA partnered with LG to get G-SYNC into their televisions. LG's televisions do not have DisplayPort. Every indicator points to 10-bit color support coming to HDMI.
 
I wonder if it's a better idea to just wait until November to buy. The thought of having to use it @ 60hz on desktop for 5-6 months makes me want to puke.

To each his own I suppose. I've been using a Samsung for 3 years now @ 60Hz. I got over giving up Gsync after moving to a TV with better color than shit Acer/Asus monitors. OLED is even better. The image quality alone is worth the price tag without Gsync in my opinion. I'd happily wait, but I'd like to know that it will end up actually being supported correctly in the near-ish future. But either way, OLED is the only upgrade path I'd consider at this point with a display.
 
I wonder if it's a better idea to just wait until November to buy. The thought of having to use it @ 60hz on desktop for 5-6 months makes me want to puke.

Well you technically could use it at 120Hz BUT at 4:2:0 chroma which equally sucks for desktop usage. I'm just going to keep on using my Acer X27 as my desktop display for the foreseeable future while the CX gets used for all gaming.

NVIDIA cards have 10-bit color support over DisplayPort. NVIDIA partnered with LG to get G-SYNC into their televisions. LG's televisions do not have DisplayPort. Every indicator points to 10-bit color support coming to HDMI.

Exactly. People just need to hold their horses and wait instead of crying about it when it doesn't even matter right now due to the lack of HDMI 2.1 cards in the first place.
 
Couldn’t you just use the studio drivers? In the big scheme of things a few FPS probably isn’t going to matter.

Do the studio drivers even allow you to set 10bit on the desktop? I never tried.

1.
I think originally I was confused by the articles I found, which seemed to say that the studio driver added 10bit support to nvidia gpus across the board. Some of the articles were about RTX-based laptops and others failed to mention whether they were talking about displayport connections exclusively or not. According to Armenius, nvidia 10bit has only been available over displayport (and doesn't require studio drivers).

The studio driver has potential trade-offs anyway, like not getting timely game stability fixes for crashing, glitching, and bugs - not just missing minor frame rate improvements (though sometimes games have areas with horrible frame rate bugs that get fixed in the more frequent Game Ready drivers too, so it's not just "a few fps").
-You can send 12 bit 444 60 Hz over HDMI 2.0b with an 18Gbps cable, so the studio driver isn't necessary for now even if that scenario would have worked.

- The problem is that in the future, the LG CX's 40Gbps limit (instead of 48Gbps) on hdmi 2.1 is not enough to send 12bit 4k 120hz 444 natively. It is enough for 10bit 4k 120hz 444 source material (which is the native bit depth of the panel anyway, so it would be 1:1) - but nvidia currently only supports 8bit or 12bit and not 10bit over HDMI.
-Nvidia "should" support 10bit over HDMI, at least in the upcoming 7nm hdmi 2.1 gpus, but that is an assumption and has no confirmation as of yet.
2HIg99W.png

2. The Windows 10 desktop is normally 8bit by default I think, but from what I've read you can switch it to 10bit with a fully supported pipeline of port, cables, display, display adapter/GPU, and drivers. Windows 10's current HDR mode support, however, apparently looks terrible, so people turn HDR off for the desktop for now. You can still run supported media, apps, games, etc. in HDR with HDR turned off on the windows desktop. For 10bit SDR and 10bit HDR over hdmi, it again would currently require sending 12bit from nvidia gpus or using an AMD GPU at 10bit.

-----------------------------------------------------------------------------------------------------------------
NVIDIA how to enable 10 bit color
(8-2019 with a still somewhat relevant answer concerning HDMI bandwidth limitations)
-----------------------------------------------------------------------------------------------------------------

https://nvidia.custhelp.com/app/answers/detail/a_id/4847/~/how-to-enable-30-bit-color/10-bit-per-color-on-quadro/geforce?

Q: I have 10/12 bit display/ TV but I am not able to select 10/12 bpc in output color depth drop down even after selecting use NVIDIA settings on change resolution page.
A: 10/12 bpc need more bandwidth compared to default 8bpc, so there would be cases where we are out of bandwidth to populate 10/12 bpc on NVIDIA control panel. One typical case is using HDMI HDR TV which are capable of 10/12bpc but due to Bandwidth limitation of HDMI 2.0 higher color depths are not possible with 4k@60hz. To accommodate such cases you can try lowering refresh rate or lowering resolution to get those options.

Q: I have HDMI TV connected, I have lowered the resolution but cannot see 10bpc even though 12 bpc is available.
A: NVIDIA GPUs for HDMI do not support 10bpc, you can use 12 bpc to enable 30-bit color.

*
This is where 40Gbps rather than 48Gbps comes into play at 4k 120hz 444: 40 isn't enough for 12bit at those values, which makes the fact that nvidia doesn't have 10bit over hdmi (at least, not yet?) stand out as an issue for the LG CX going forward with nvidia's upcoming 7nm hdmi 2.1 gpus capable of 4k 120hz 444, in regard to uncompressed 1:1 source material incl. 10bit HDR.

--------------------------------------------------------------------------------------------------------
STUDIO DRIVER COMPARED TO GAME READY DRIVER Q & A (1 yr ago)
-just thought I'd add this for clarification
---------------------------------------------------------------------------------------------------------

https://www.nvidia.com/en-us/geforce/forums/discover/296943/nvidia-studio-driver-vs-game-ready-driver-which-should-i-install-/
Q: Hi. I still don't fully get it - will the Studio Driver somehow be inferior to the GRD driver?
At this time, both drivers have the same version according to the download page, 430.60.
So does that mean the GRD will somehow suck at rendering and stuff, and the SD will somehow suck in games (or only in new games) even if they have the same version number?

I often use my gpu for OpenCL rendering and stuff, but I also often play games (only sometimes brand new ones).

Which should I install?


A: Studio driver is not inferior in its intended purpose - Creative App stability. It will not, however, immediately solve issues experienced while playing games (hotfixes) nor be optimized for both Creative Apps and Games released past the date of issue.

If there are no optimizations or corrective actions to its use with Creative Apps, merely containing game fixes or 3D Profile additions or changes, then it's issued as a Game Ready Driver.

That is the very specific reason to using the Game Ready drivers if you play games either regularly or solely. If, however, you only play a little Minesweeper during your breaks while creating the next fan-based LEGO Avengers clip, then you'll be good until the next version labelled Studio is released.

Because Creative Apps couldn't care less about Gaming's 3D Profiles unless you're running both at the same time, and then they only care about affected system resources...


Q: Hi, thanks for the answer. Sorry if I'm a little dumb about this question.
You mean that GRD and SD are just markings on a released driver indicating whether it contains game optimizations or creative optimizations?

At least at first I thought so, but now I see that the GRD and SD have the same version.
Yesterday I installed the SD driver, and now GeForce Experience told me a new GRD came out.

So if the GRD is now the latest and I install it, will I get the GRD features from this driver and still have the features present in the previous SD? But they had the same version number... does that mean it was the same driver?
A: Studio drivers are off-cycle snapshots of the Game Ready branch that get a little more testing, and are only tested on Pascal and Turing products.


-------------------------------------------------------------------------------------------------

8bit Banding vs 10bit on 10 bit content


"From Nvidia's own white paper in 2009: "While dithering produces a visually smooth image, the pixels no longer correlate to the source data "

From nvidia's studio drivers pages, showing the difference between 24bit (8bit) color banding and 30bit (10bit) color on a 10bit panel. You can use dithering to add "noise" as a workaround on 8bit in order to smooth/smudge/haze the banding out but it will degrade from 1:1 "lossless" source material fidelity.

"By increasing to 30-bit color, a pixel can now be built from over 1 billion shades of color, which eliminates the abrupt changes in shades of the same color. "

(Have to click the thumbnails to really see it)
24-bit-32-bit-gfe.png

300567_Colour_banding_example01.png




-----------------------------------------------------------------------------------------------
AUDIO FORMATS, LOSSLESS AUDIO
-----------------------------------------------------------------------------------------------

The LG CX currently doesn't support uncompressed audio format pass-through over its hdmi-in and through its eARC hdmi output: PCM 7.1 lossless, DTS-HD Master Audio, Atmos. It only supports lossy, compressed audio formats. There is a CRU EDID hack to get PCM 7.1, but nothing in official firmware. That does leave the potential (I'm hoping) of fw fixes for future support of all three types.

excerpts from: https://homedjstudio.com/audio-bitrates-formats/

16bit Audio Sources:

CDs have a bitrate of 1,411 kbps at 16 bit. This was first established by Philips and Sony all the way back in 1980. After a few discussions on details, it was adopted as a standard in 1987.

High-quality WAV files have an audio bitrate exactly the same as CDs at 1,411 kbps at 16 bit. But that isn’t the end of the story for WAV files. There are variations. The actual bitrate is determined by a specific formula which multiplies the sampling rate with the bit depth and the number of channels.

The highest quality MP3 bitrate is 320 kbps at 16 bit. You can encode MP3s as low as 96 kbps. MP3s use a compression codec that removes frequencies while trying to preserve as much of the original recording as possible. This does result in a reduction in sound quality

24bit High KHz Audio Sources:
Hi-resolution audio can be recorded at double the standard CD rate or even as high as 192kHz. The question often comes up if this is needed. There are instances where a higher sampling rate does help to improve the listening experience. Analog to digital converters have an in-built low pass filter. This filter processes out frequencies that are not within the sampling limit. For example, if the sampling rate is 44.1kHz anything below half that will be accurately rendered. Anything above that will introduce fake samples which is where the low-pass filter kicks in to process them out. By increasing the sampling rate you move the low-pass filter higher into the frequency range. This moves it further from our hearing range resulting in cleaner sounding audio.

The second component is bit depth. Bit depth is the number of bits available to capture sound. For each extra bit beyond the first the accuracy and number of bits doubles. Each bit is a slice of the sound you are hearing. The more bits available the greater options in the information that can be stored. The end result is greater accuracy in hearing subtle details which might be lost at lower bit depth. With 16-bit audio, there are 65,536 possible levels that can be captured. 24-bit audio has the capacity of 16,777,216 possible levels.

To help visualize the difference imagine if you were watching a movie and you only got to see every 10th second of the image. You would still be able to get an idea of what was going on but you would miss out on the subtle changes in the movement of the actors on screen. If you then got to watch every 3rd second you would have a greater sense of their movement. Bit depth works the same way allowing more refined detail to be captured.
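
Both the CD bitrate and those level counts fall straight out of the formula the excerpt mentions (sampling rate x bit depth x channels); quick check:

```python
def pcm_bitrate_kbps(sample_rate_hz, bit_depth, channels):
    # Uncompressed PCM bitrate = sampling rate x bit depth x channel count.
    return sample_rate_hz * bit_depth * channels / 1000

print(pcm_bitrate_kbps(44_100, 16, 2))    # CD audio: 1411.2 kbps
print(pcm_bitrate_kbps(192_000, 24, 2))   # 24-bit / 192 kHz stereo: 9216.0 kbps

print(2 ** 16)    # 16-bit: 65,536 possible levels per sample
print(2 ** 24)    # 24-bit: 16,777,216 possible levels per sample
```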

Uncompressed Audio Formats

Uncompressed audio formats capture the original recording without any further modifications. They take the soundwaves and convert them into digital format. These formats do offer maximum quality but it does result in much larger file sizes.

---------------------------------------------------------------------------------------------------

https://www.digitaltrends.com/home-theater/ultimate-surround-sound-guide-different-formats-explained/

DTS: DTS uses a higher bit rate and, therefore, delivers more audio information. Think of it as similar to the difference between listening to a 256kbps and 320kbps MP3 file. The quality difference is noticeable

DTS also has two 7.1 versions, which differ in the same manner as Dolby’s versions.

DTS-HD is a lossy, compressed 7.1 surround format

DTS-HD Master Audio is lossless and meant to be identical to the studio master.

Dolby ATMOS
Movies with Dolby Atmos soundtracks are now very common on Blu-ray and Ultra HD Blu-ray discs and streaming sites like Netflix, Vudu, Amazon Prime Video, Disney+, and Apple TV+ all offer a selection of Atmos movies and shows. Atmos is even starting to appear in some live broadcasts. Recent examples include the 2018 Winter Olympics, the NHRA’s live drag-racing events, and music festivals too.

“object-based” or “3D” surround. :

For viewers, “3D” offers the best description of this technology because of its ability to make sounds feel as though they are moving through space

Dolby ATMOS Music
Dolby Labs has been working with major record labels and streaming services to develop the use of Dolby Atmos technology for music production. The concept is simple: Dolby Atmos Music uses all of the same object-oriented 3D audio tools as the movie soundtrack version of Dolby Atmos but puts them in the hands of professional music producers.
The result is immersive music that goes well beyond what traditional two-channel stereo or even Quadrophonic sound can achieve.

DTS X
DTS has its own version of object-based audio, DTS:X, unveiled in 2015. While Dolby Atmos limits objects to 128 per scene in theaters, DTS:X imposes no such limits

Auro 3D
similar to Dolby Atmos in some respects, Auro-3D uses three “layers” of sound to achieve its immersive effect. Those layers typically require more speakers — up to 11 in an ideal setup

------------------------------------------------------------------------------------------------------------------

How do Dolby TrueHD and Atmos Differ? https://www.soundandvision.com/content/how-do-dolby-truehd-and-dolby-atmos-differ

"Dolby TrueHD is a lossless audio codec that supports up to eight audio channels on Blu-ray Disc. Dolby Atmos soundtracks, in contrast, consist of audio objects — up to 128 of them — that are mixed in a 3D soundfield during the production process. When the soundtrack is played back in a movie theater or home environment, the audio objects are rendered by an Atmos decoder to the available speaker set, which includes overhead ceiling speakers.

While Dolby Atmos and Dolby TrueHD are two separate soundtrack formats,
Atmos data on Ultra HD Blu-ray is actually an extension to TrueHD that is folded into the bitstream to maintain backwards compatibility. Here’s how that works: If you play a disc with an Atmos soundtrack, the Atmos extension data is decoded by an Atmos-compatible receiver. If your receiver isn’t Atmos compatible, the extension data is ignored and the soundtrack is decoded as regular Dolby TrueHD."


-----------------------------------------------------------------------------------------------------------------
 