DisplayPort 1.4 Can Drive 8K Monitors Over USB Type-C

Megalith

I believe that HDMI 2.0, in comparison, tops out at 4K 60Hz. It remains to be seen just how good DSC's compression is and whether the image-quality differences are truly imperceptible, however.

The new standard drives higher-resolution displays with better color support using Display Stream Compression (DSC), a "visually lossless" form of compression that VESA says "enables up to [a] 3:1 compression ratio." This data compression, among other things, allows DisplayPort 1.4 to drive 60Hz 8K displays and 120Hz 4K displays with HDR "deep color" over both DisplayPort and USB Type-C cables (note that DisplayPort 1.4 doesn't add USB Type-C support; the two have been compatible from the beginning thanks to the USB Alternate Mode spec). USB Type-C cables can provide a USB 3.0 data connection, too.
 
So I don't know too much about data compression, how much data can travel through certain cables, which metal has the best conductivity, and so on. But seeing as they found a way to send huge amounts of data over copper wires, why does it appear that we are struggling to send data to our monitors? Could we not just update the output and receiving methods of the actual components? And will there come a time when two separate cables are required?
 
So I'm going to have another compression layer between the PC and monitor, on top of whatever was used to stream or store the content already. No.
 
This is lossless compression if you know what it means.
They are compressing the image 3:1. They say "visually lossless," which is a marketing term (if you know what that means), and probably means most people can't tell the difference from the original image. But when such degradations accumulate, like re-saving a BMP as a JPG over and over, it becomes an issue. The "visually lossless" label comes from testing with human viewers; if it were truly lossless, you would verify it computationally.
 
This is lossless compression if you know what it means.

It's not lossless. If it were lossless, you could never guarantee a compression ratio; only lossy compression can guarantee a specific compression ratio or bit rate.
Also, "visually lossless" is not the same as bitwise lossless, as others have pointed out.


The method has multiple sub-algorithms that it can switch between on the fly to increase its efficiency.
The worst-case scenario for this seems to be pure noise, though.

I'm not a fan of the idea either, really, and hope it's a very short-lived temporary solution.
 
Any type of compression/decompression will add latency, so no competitive gaming at 8K.
 
It's not lossless. If it were lossless, you could never guarantee a compression ratio; only lossy compression can guarantee a specific compression ratio or bit rate. Also, "visually lossless" is not the same as bitwise lossless, as others have pointed out. The method has multiple sub-algorithms that it can switch between on the fly to increase its efficiency. The worst-case scenario for this seems to be pure noise, though. I'm not a fan of the idea either, really, and hope it's a very short-lived temporary solution.

Note that a lossless compression step is part of every compression scheme in use. Sure, lossless compression assumes that bit patterns are not equally probable, and in that sense it fails for pure noise and similar signals. Now tell me, who is watching pure noise :wtf:? For video signals with big structured areas, lossless compression works fine, and a factor of 2-4 is easily achievable. Even a factor of 2 is very useful given the enormous bandwidth of 8K signals.

Any type of compression/decompression will add latency, so no competitive gaming at 8K.

Since you do not know much about lossless compression, you may wish to accept that its latency is negligible. In the simplest case it amounts to reading one bit pattern and outputting another, with latency measured in nanoseconds.

The only argument against lossless compression is that additional processing steps have to be performed for signal transmission, which increases hardware cost. Those costs are negligible, though, compared to other processing steps. For example, H.265 video encoding/decoding is very complex, and it includes sophisticated lossless compression.
 
wirk
I think you confused my objective statement for a claim of negative opinion. It was not. If anything, it was a positive statement pointing out that the method has an adaptive way of keeping up quality,
i.e., that the worst case is pure noise, which is not something we really look at. So we totally agree there.


However, you seem to show a bit of a lack of understanding of compression yourself, so you might not want to point fingers at others in this regard.
Data decompression is more than just seeing one pattern and then outputting another. If that were true, you would need a seriously huge lookup table. The data pattern has to be analyzed in relation to other parts of the stream and then recalculated.
Compression is even worse, as there are many ways to compress the same data within the same kind of scheme, so you need to find the most optimal one, which will take some time on the fly.

A good simple example would be plain old lossless RLE.

AAAABBBBAABBAAABBB

Now you could go:
3a1a3b1b2a2b3a3b

or:
4a4b2a2b3a3b

You would think the last one is more optimal and easier, since we just took the maximum length of each run of a character. However, you need to realize there are several compression steps; this is just the modeling step. Then comes the entropy step, and there I will play around with a simple Huffman binary tree.
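That modeling step is easy to sketch. Here is a toy RLE in Python (my own illustration of the idea above, not what DSC actually does):

```python
def rle_encode(s):
    # Collapse each maximal run of a repeated character into a (count, char) pair.
    pairs = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        pairs.append((j - i, s[i]))
        i = j
    return pairs

def rle_decode(pairs):
    # Expand the (count, char) pairs back into the original string.
    return "".join(ch * n for n, ch in pairs)

data = "AAAABBBBAABBAAABBB"
print(rle_encode(data))  # [(4, 'A'), (4, 'B'), (2, 'A'), (2, 'B'), (3, 'A'), (3, 'B')]
assert rle_decode(rle_encode(data)) == data
```

That output is exactly the "4a4b2a2b3a3b" form: maximal runs, each stored as a length plus a character.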



Let's assume we have 3 bits to play with.
Our alphabet is therefore ABCDEFGH, corresponding to the values 1-8 (just like our ANSI 8 bits correspond to 1-256).

So let's take the runs above and turn the RLE output into symbols instead of raw counts:

3a1a3b1b2a2b3a3b = CAAACBABBABBCACB
4a4b2a2b3a3b = DADBBABBCACB

Now, each of these "bytes" consists of 3 bits; here they are in order:
000 = A
001 = B
010 = C
011 = D
100 = E
101 = F
110 = G
111 = H

So:
Case 1 (CAAACBABBABBCACB) is 16 bytes of 3 bits each, i.e. 48 bits in storage.
Case 2 (DADBBABBCACB) is 12 bytes of 3 bits, so only 36.

Compared to our original 18 bytes of 3 bits, for a total of 54 bits.



The Huffman compression starts with a bit of statistics on occurrences.

Case 1: CAAACBABBABBCACB
A: 6
B: 6
C: 4
Rest is 0

Now Huffman builds a binary tree with the most frequent characters nearest the top:

    *
 1 / \ 0
  A   *
   1 / \ 0
    B   C

The rest don't occur, so we don't need them in the tree.

From this we can now map bit values to byte values:

1:  A
01: B
00: C

So the above case 1 in bits would be:
00111000110101101010010001
i.e. we are down to only 26 bits, but it is still 16 bytes of information; it's just that one byte is no longer an exact number of bits.



Case 2: DADBBABBCACB
A: 3
B: 5
C: 2
D: 2
Rest is 0

For the binary tree we get:

    *
 1 / \ 0
  B   *
   1 / \ 0
    A   *
     1 / \ 0
      C   D

Bits per byte:
1:   B
01:  A
001: C
000: D

So case 2 in bits would be:

00001000110111001010011
i.e. 23 bits for those 12 bytes.
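Those 26-bit and 23-bit totals can be checked with a small Huffman sketch (a toy illustration, not a production coder; with tied frequencies the algorithm may hand the short code to a different symbol than my hand-built tree, but the total length comes out the same):

```python
import heapq
from collections import Counter

def huffman_total_bits(s):
    # Build a Huffman tree over the symbol frequencies and return the
    # total encoded length in bits (sum of frequency * code length).
    freq = Counter(s)
    # Heap entries: (weight, tiebreaker, symbols in this subtree).
    heap = [(w, i, [sym]) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    depth = {sym: 0 for sym in freq}   # depth in the tree == code length
    counter = len(heap)
    while len(heap) > 1:
        w1, _, syms1 = heapq.heappop(heap)
        w2, _, syms2 = heapq.heappop(heap)
        for sym in syms1 + syms2:
            depth[sym] += 1            # each merge pushes these symbols one level down
        heapq.heappush(heap, (w1 + w2, counter, syms1 + syms2))
        counter += 1
    return sum(freq[sym] * depth[sym] for sym in freq)

print(huffman_total_bits("CAAACBABBABBCACB"))  # 26, matching the hand count above
print(huffman_total_bits("DADBBABBCACB"))      # 23
```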



As you can see, if you read these patterns 3 bits at a time, which would be one byte of data, you don't always get one byte of information. Sometimes one byte of data, say 000, is one byte of value (D); other times it will be 101, which is still only one byte of data, but information-wise it is two bytes (BA).


Now, in these small cases we only got shorter bit patterns. But in a bigger data set, say 8 bits with a fairly even distribution, you have too many characters/bytes to give them all shorter bit patterns, and you will need some that are longer than 8 bits.

What that means is that once you read in an 8-bit data byte, you can't just say "for this byte we need this output,"
because it might be the start of a 10-bit information byte, so you need to look at the first bits of the next 8-bit data byte.
Then from that next data byte you can only use, say, 5 bits, and you might need the one after that to finish the bit pattern.

So this constant comparing from one data byte to the next and/or previous to determine what the real information value is DOES in fact add a lot of overhead: you need to read through the bits to figure out where each byte is before you can use that byte to decompress it with the RLE method.
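That bit-by-bit boundary hunting is exactly what a prefix-code decoder has to do. A toy sketch, using the case 1 code table above:

```python
def prefix_decode(bits, code_to_symbol):
    # Consume the stream one bit at a time; emit a symbol as soon as the
    # accumulated bits match a codeword. Because no codeword is a prefix
    # of another, greedy matching is safe.
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in code_to_symbol:
            out.append(code_to_symbol[buf])
            buf = ""
    assert buf == "", "stream ended in the middle of a codeword"
    return "".join(out)

codes = {"1": "A", "01": "B", "00": "C"}  # the case 1 tree
print(prefix_decode("00111000110101101010010001", codes))  # CAAACBABBABBCACB
```

Note the decoder can never skip ahead: symbol boundaries only become known as the bits are consumed, which is the overhead being described.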


SO your statement that it is simply a pattern-to-pattern in/out shows a CLEAR misunderstanding of how compression modeling and especially entropy coding work.


Now, these were all lossless examples, which is what I deal with the most.

But lossy algorithms have tons of other choices they need to make to reduce quantization errors, etc.


Look at how even the simple S3TC worked. Even though it was pretty simple and fast, you still need to look at the data a lot more times than with raw data.

S3TC had some very strict requirements: it has to achieve the same compression factor everywhere so data can be easily fetched
(you know exactly how far away point X is).

So you had two 16-bit values that worked as the two endpoints of a color range,

and each texture pixel was determined by a 2-bit index giving the interpolation point between the two 16-bit color values.

So in an S3TC compression block you could only have four different colors, and all of them have to be equally spread between the two 16-bit color values.

Reading two 16-bit color values, then reading the per-pixel index bits and comparing against the two 16-bit colors, was a lot more work than simply reading a pixel value and outputting a color.
But in this case the increased GPU load was preferred over the increased data I/O to VRAM.
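For reference, here is a toy decoder for one such block (assuming the opaque four-color DXT1/S3TC mode; my own sketch, not vendor code):

```python
def decode_rgb565(v):
    # Expand a 16-bit RGB565 color to 8 bits per channel.
    r, g, b = (v >> 11) & 0x1F, (v >> 5) & 0x3F, v & 0x1F
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def dxt1_palette(c0, c1):
    # The four colors a block can use: the two 16-bit endpoints plus two
    # colors interpolated at 1/3 and 2/3 between them.
    p0, p1 = decode_rgb565(c0), decode_rgb565(c1)
    p2 = tuple((2 * a + b) // 3 for a, b in zip(p0, p1))
    p3 = tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))
    return [p0, p1, p2, p3]

def decode_block(c0, c1, indices):
    # indices: sixteen 2-bit values, one per texel of the 4x4 block,
    # each selecting one of the four palette entries.
    palette = dxt1_palette(c0, c1)
    return [palette[i] for i in indices]

# White and black endpoints: the palette is white, black, and two greys.
print(dxt1_palette(0xFFFF, 0x0000))
# [(255, 255, 255), (0, 0, 0), (170, 170, 170), (85, 85, 85)]
```

Even this toy version shows the point: every texel read involves unpacking endpoints and an index lookup, trading GPU work for VRAM bandwidth.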
 
Am I the only one concerned about the rash of shoddy USB Type-C cables, even from reputable manufacturers, tanking a system with an 8K monitor attached? Forget all the technical crap for a second and remember that it may not even be safe to plug in.
 
As a selfish gamer, they really need to line up the release and implementation in time for the video card manufacturers to get it into their new hardware. We're still without DisplayPort 1.3 at this point, and now 1.4 is already being readied?
wirk
I think you confused my objective statement for a claim of negative opinion. It was not. If anything, it was a positive statement pointing out that the method has an adaptive way of keeping up quality, i.e., that the worst case is pure noise, which is not something we really look at. So we totally agree there.
*snip*
All that work on a wonderful technical explanation, and you can't be bothered to use
Code:
code blocks
to line up the tree examples.
 
So I don't know too much about data compression, how much data can travel through certain cables, which metal has the best conductivity, and so on. But seeing as they found a way to send huge amounts of data over copper wires, why does it appear that we are struggling to send data to our monitors? Could we not just update the output and receiving methods of the actual components? And will there come a time when two separate cables are required?


We ARE sending massive amounts of data over copper wires. DisplayPort reduces costs by having four lanes running at much lower speeds, but even those are hitting their physical limits.

Version 1.2 = 5.4 Gbps per channel
Version 1.3 (not shipping yet) = 8.1 Gbps per channel

We can't get above that limit without adding the cost and complexity of active cables (think Thunderbolt), so they added lossy compression. It will only be necessary for the crazy people who require 8K resolution (DP 1.3 can do 4K at 120 Hz, and 5K at 60 Hz, with no compression).
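A rough back-of-the-envelope for why 8K doesn't fit (my own numbers, assuming 10-bit-per-channel color and the 8b/10b line coding used by DisplayPort up through HBR3):

```python
# Raw pixel payload of an 8K 60 Hz stream at 30 bits per pixel.
pixels_per_frame = 7680 * 4320             # 33,177,600 pixels
bits_per_pixel = 30                        # 10 bits per channel "deep color"
refresh_hz = 60
payload_gbps = pixels_per_frame * bits_per_pixel * refresh_hz / 1e9
print(round(payload_gbps, 1))              # 59.7 (Gbps, before blanking overhead)

# Four DP 1.3 HBR3 lanes at 8.1 Gbps each, minus 8b/10b coding overhead.
link_gbps = 4 * 8.1 * 0.8                  # 25.92 Gbps usable
print(round(payload_gbps / link_gbps, 2))  # 2.3 -- comfortably inside DSC's 3:1
```

So even before blanking intervals, the raw signal needs roughly 2.3x what the link can carry, which is exactly the gap DSC's up-to-3:1 compression is meant to close.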
 
wirk
.....
Data decompression is more than just seeing one pattern and then outputting another.

....
SO your statement that it is simply a pattern-to-pattern in/out shows a CLEAR misunderstanding of how compression modeling and especially entropy coding work.

To make the story short: the principle of the VESA compression standard is described here, see Fig. 1. The details are in the standard document, which costs money, but what is said above Fig. 1 is that the algorithm requires a single line of pixel storage and a rate buffer. This, plus some other details described below Fig. 1, indicates that a single-pixel-line algorithm cannot introduce any delay worth worrying about. It is more correct to think of the method as capable of on-the-fly compression and decompression.
 