AMD's RX 7900 XT cards allegedly support unannounced DisplayPort 2.1 connectors

kac77

2[H]4U
Joined
Dec 13, 2008
Messages
3,319
AMD's RX 7900 XT cards allegedly support unannounced DisplayPort 2.1 connectors

Kyle Bennett from HardOCP is reporting that multiple insider sources have confirmed AMD's inclusion of the DP 2.1 spec on the Navi 31 flagship GPUs. However, the exact specs for the 2.1 iteration are still unknown. VideoCardz suggests that DP 2.1 would support the Ultra-High Bit Rate standard with 80 Gb/s of bandwidth. For reference, DP 2.0 offers 77.37 Gb/s, which can support a 16K display at a 60 Hz refresh rate, two 4K displays @ 144 Hz, or three 4K displays @ 90 Hz. HDMI 2.1, in comparison, offers 48 Gb/s with a maximum supported spec of 10K @ 120 Hz.
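For anyone who wants to sanity-check those figures, here is a rough back-of-the-envelope sketch in Python. It assumes uncompressed RGB (10-bit for the 4K cases, 8-bit for the 16K case) and ignores blanking overhead and DSC, so treat it as an illustration rather than the exact VESA math:

```python
# Back-of-the-envelope check of the bandwidth figures quoted above.
# Assumes uncompressed RGB and ignores blanking overhead and DSC, so
# real-world requirements are somewhat higher unless compression is used.

def uncompressed_gbps(width, height, refresh_hz, bits_per_pixel=30):
    """Raw pixel data rate in Gb/s (default 30 bpp = 10-bit RGB)."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

DP20_PAYLOAD_GBPS = 77.37  # usable DP 2.0 payload at UHBR20 (80 Gb/s raw)
HDMI21_RAW_GBPS = 48.0     # HDMI 2.1 FRL raw rate

print(f"DP 2.0 payload:      {DP20_PAYLOAD_GBPS:6.1f} Gb/s")
print(f"HDMI 2.1 raw:        {HDMI21_RAW_GBPS:6.1f} Gb/s")
print(f"two 4K @ 144 Hz:     {2 * uncompressed_gbps(3840, 2160, 144):6.1f} Gb/s")
print(f"three 4K @ 90 Hz:    {3 * uncompressed_gbps(3840, 2160, 90):6.1f} Gb/s")
print(f"16K @ 60 Hz (8-bit): {uncompressed_gbps(15360, 8640, 60, 24):6.1f} Gb/s  (only fits with DSC)")
```

The two- and three-display cases come in under the 77.37 Gb/s payload even uncompressed; the 16K @ 60 Hz case only fits with DSC in play, which is how that configuration is usually quoted.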
 
ALL of DP 2.1, or will they leave bits of it out like TV makers have been doing?
 
Oh great, now they are making an article about some random guy on Twitter lol... /sarcasm
You made me laugh, because it is true. :)

On a serious note, when I saw 1.4a on the 4090 yesterday morning I was extremely surprised, so I started asking around about AMD since even Intel's lethargic GPUs have DP2.0. I did verify DP2.1, whatever the hell that is, on Navi 31. I still know a few guys to reach out to that know some things.
 
I have not seen the full 2.1 spec yet, so I am not sure.
LTT had a video about it this spring. Apparently, a lot of the new stuff is optional, and cable and monitor makers are skipping the optional stuff (high bandwidth, etc.) and labelling their gear as 2.1 without listing what they don't support.

 
LTT had a video about it this spring. Apparently, a lot of the new stuff is optional, and cable and monitor makers are skipping the optional stuff (high bandwidth, etc.) and labelling their gear as 2.1 without listing what they don't support.


We are talking about DisplayPort 2.1, not HDMI.

Edit: For the record, DP2.1 has not been announced by VESA publicly.
 
VESA has barely managed to finish certification on DP 2.0 hardware, and AMD is going to release a GPU with a pre-release of a yet unreleased and unannounced standard that has no certified devices to test it with.
That seems a bit far-fetched…
I mean, AMD has barrelled ahead with things ahead of their time plenty of times before, so they could. But I just wonder about longevity and feature support when you jump the gun on an incomplete, unconfirmed standard.
 
VESA has barely managed to finish certification on DP 2.0 hardware, and AMD is going to release a GPU with a pre-release of a yet unreleased and unannounced standard that has no certified devices to test it with.
That seems a bit far-fetched…
I mean, AMD has barrelled ahead with things ahead of their time plenty of times before, so they could. But I just wonder about longevity and feature support when you jump the gun on an incomplete, unconfirmed standard.
Just want to quote this so I can easily look it up in the future.
 
Just want to quote this so I can easily look it up in the future.
Hopefully for a good thing? I mean... I'm happy to be wrong about new useful features. Somebody has to do it first; it just seems like an odd play to me.
Unless AMD has a slightly more Intel-style play in mind, where they want something of theirs that is exclusive to them in the spec: they jump the gun, vendors build to their exclusive solution, and when it comes time to deal with that part of the actual specification they get a leg up, because it's what the vendors are already doing. It's an old Intel/Microsoft tactic, but a valid one for sure.
 
Hopefully for a good thing? I mean... I'm happy to be wrong about new useful features. Somebody has to do it first; it just seems like an odd play to me.
Unless AMD has a slightly more Intel-style play in mind, where they want something of theirs that is exclusive to them in the spec: they jump the gun, vendors build to their exclusive solution, and when it comes time to deal with that part of the actual specification they get a leg up, because it's what the vendors are already doing. It's an old Intel/Microsoft tactic, but a valid one for sure.
One thing I can tell you about what I say publicly: if I state it as fact, my batting average is tremendously high.

Now, when I give my opinions in the form of an editorial, my track record is somewhat bad, and most of the topics I have opined on have been ones that would be detrimental to the industry overall. The first one I ever wrote was about 3dfx and NVIDIA going down different API paths and how that would be a very bad thing. That obviously did not happen. The last one I wrote was here, about GPU MSRP pricing and the overall hardware review ecosystem, and it painted a very bad picture. That again did not happen, which I am glad about, as I stated in that editorial. When I wrote both of the aforementioned editorials, I had knowledge that those options were being looked at by the players at hand. Both of those articles brought out a tremendous amount of public outcry... because both of those options sucked for the end user. I also know whose attention I get when I write things like that. I get the hardware enthusiast community to react publicly, and generally very vocally, which is to its benefit. I know that there are some CEOs and SVPs who read those editorials as well and see the backlash online. I am still a hardware enthusiast and gamer, and I want what is best for our hard-earned dollars. I do have a bully pulpit in some regards, and I try to use it wisely when it truly counts.

But back on topic as to N31 and DP2.1: I will eat my hat (and I will go buy a very tiny one this week just in case) if I am wrong. ;)
 
One thing I can tell you about what I say publicly: if I state it as fact, my batting average is tremendously high.

Now, when I give my opinions in the form of an editorial, my track record is somewhat bad, and most of the topics I have opined on have been ones that would be detrimental to the industry overall. The first one I ever wrote was about 3dfx and NVIDIA going down different API paths and how that would be a very bad thing. That obviously did not happen. The last one I wrote was here, about GPU MSRP pricing and the overall hardware review ecosystem, and it painted a very bad picture. That again did not happen, which I am glad about, as I stated in that editorial. When I wrote both of the aforementioned editorials, I had knowledge that those options were being looked at by the players at hand. Both of those articles brought out a tremendous amount of public outcry... because both of those options sucked for the end user. I also know whose attention I get when I write things like that. I get the hardware enthusiast community to react publicly, and generally very vocally, which is to its benefit. I know that there are some CEOs and SVPs who read those editorials as well and see the backlash online. I am still a hardware enthusiast and gamer, and I want what is best for our hard-earned dollars. I do have a bully pulpit in some regards, and I try to use it wisely when it truly counts.

But back on topic as to N31 and DP2.1: I will eat my hat (and I will go buy a very tiny one this week just in case) if I am wrong. ;)
Well, if it does have it, I certainly wouldn't be upset; 1.4 has stuck around entirely too long, and DSC, while a good compression method, has managed to complicate things, as many displays support DP 1.4 but not DSC, and that is just bad for the consumer.
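To put rough numbers on that DSC point, here is a minimal sketch (assuming 10-bit RGB, no blanking overhead, and an illustrative ~3:1 DSC ratio, which are assumptions rather than spec citations) of why high-refresh 4K does not fit on DP 1.4 without compression:

```python
# Why DSC matters on DP 1.4: a rough sketch. Uncompressed 10-bit RGB,
# blanking overhead ignored; the ~3:1 DSC ratio is an illustrative assumption.

DP14_PAYLOAD_GBPS = 25.92  # HBR3 payload after 8b/10b encoding (32.4 Gb/s raw)

def gbps(width, height, refresh_hz, bpp=30):
    """Raw pixel data rate in Gb/s."""
    return width * height * refresh_hz * bpp / 1e9

raw = gbps(3840, 2160, 144)  # ~35.8 Gb/s: does not fit in the DP 1.4 payload
with_dsc = raw / 3           # ~11.9 Gb/s: fits comfortably once DSC is applied

print(f"4K @ 144 Hz uncompressed:  {raw:.1f} Gb/s (DP 1.4 payload: {DP14_PAYLOAD_GBPS} Gb/s)")
print(f"4K @ 144 Hz with ~3:1 DSC: {with_dsc:.1f} Gb/s")
```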

VESA has been stepping up as of late to clear up their fragmented "optional" features, and that's good for everyone.

I obviously haven't seen the 2.1 spec; I just hope it doesn't do anything weird or convoluted.

And if you are so inclined to eat a hat, as you say, might I recommend something along the lines of these:

[image attachment]
 
One thing I can tell you about what I say publicly: if I state it as fact, my batting average is tremendously high.

Now, when I give my opinions in the form of an editorial, my track record is somewhat bad, and most of the topics I have opined on have been ones that would be detrimental to the industry overall. The first one I ever wrote was about 3dfx and NVIDIA going down different API paths and how that would be a very bad thing. That obviously did not happen. The last one I wrote was here, about GPU MSRP pricing and the overall hardware review ecosystem, and it painted a very bad picture. That again did not happen, which I am glad about, as I stated in that editorial. When I wrote both of the aforementioned editorials, I had knowledge that those options were being looked at by the players at hand. Both of those articles brought out a tremendous amount of public outcry... because both of those options sucked for the end user. I also know whose attention I get when I write things like that. I get the hardware enthusiast community to react publicly, and generally very vocally, which is to its benefit. I know that there are some CEOs and SVPs who read those editorials as well and see the backlash online. I am still a hardware enthusiast and gamer, and I want what is best for our hard-earned dollars. I do have a bully pulpit in some regards, and I try to use it wisely when it truly counts.

But back on topic as to N31 and DP2.1: I will eat my hat (and I will go buy a very tiny one this week just in case) if I am wrong. ;)

"Did you ever care what chip was used inside your VCR?"

Hmmm never thought about that....

Our first VCR was a Sony behemoth with knobs and switches I never understood (it looked like a piece of legit '80s stereo equipment)

- I recall the biggest feature was time-based recording of some sort

Now, I just feel old....
 
It is a post on a Chinese forum, so maybe something was lost in translation!?

I would expect AMD to at least match raster performance. It should not be too hard.
I would love to be proven wrong, but my opinion is that AMD won't even come close this generation. The generational leap the 4090 displays is hefty, and I don't think anyone was expecting this amount of uplift from both the new architecture and the move to TSMC's 4nm node.
 
I would love to be proven wrong, but my opinion is that AMD won't even come close this generation. The generational leap the 4090 displays is hefty, and I don't think anyone was expecting this amount of uplift from both the new architecture and the move to TSMC's 4nm node.
I don't think it tries to take the crown. No reason it should. That TAM is TINY. The meat is right below the halo.
 
I would love to be proven wrong, but my opinion is that AMD won't even come close this generation. The generational leap the 4090 displays is hefty, and I don't think anyone was expecting this amount of uplift from both the new architecture and the move to TSMC's 4nm node.

Realistically, they just made the 4090 as big and power-hungry as possible to perform; it really is not that big of a leap in performance otherwise. The 4080 and down is not a big leap in performance. I think AMD can be right there with Nvidia's 4090 if they also decide to make a power-hungry card, or close enough anyway.
 
Realistically, they just made the 4090 as big and power-hungry as possible to perform; it really is not that big of a leap in performance otherwise. The 4080 and down is not a big leap in performance. I think AMD can be right there with Nvidia's 4090 if they also decide to make a power-hungry card, or close enough anyway.
Yup. RDNA2 was extremely competitive, all while using significantly less power.
 
"Did you ever care what chip was used inside your VCR?"

Hmmm never thought about that....

Our first VCR was a Sony behemoth with knobs and switches I never understood (it looked like a piece of legit '80s stereo equipment)

- I recall the biggest feature was time-based recording of some sort

Now, I just feel old....
GI Joe at 8:30, check. He-Man at 9:00, check. ThunderCats at 10:00, check.
Those things were gold for lazy children.

Ahhhh, no, some news interruption pushed everything back 10 min... and now I'm missing 10 min at the start and end of every show. I taught myself to always start the recording 5 min early and let it run 5 min after... or just look for a block of good toons on one channel. lol
 
GI Joe at 8:30, check. He-Man at 9:00, check. ThunderCats at 10:00, check.
Those things were gold for lazy children.

Ahhhh, no, some news interruption pushed everything back 10 min... and now I'm missing 10 min at the start and end of every show. I taught myself to always start the recording 5 min early and let it run 5 min after... or just look for a block of good toons on one channel. lol

And then one day you catch hell because you forgot to check what tape was in the device, and it turns out someone had swapped in a movie that you ended up recording over.
 
But it is worth noting RDNA 2 had a significant node advantage over Ampere. This time they are on even footing.

Samsung 8nm was and still is by most accounts a terrible process.
You also have to remember AMD is going with an MCM design. This will allow them to make more money per GPU, with no need to overprice their cards, while possibly getting close to 4090 speeds and keeping power usage down.

I've got a feeling they will be close to 4090 speeds while being cheaper and using less power.
 
You also have to remember AMD is going with an MCM design. This will allow them to make more money per GPU, with no need to overprice their cards, while possibly getting close to 4090 speeds and keeping power usage down.

I've got a feeling they will be close to 4090 speeds while being cheaper and using less power.
I want that to be true... but lately, I am not so optimistic.
Power usage tends to be as much about the process node as it is about architecture. This time they are both on the same node, so if nothing else it will really show the differences in their architectures, which from an engineering standpoint I am all for.
 
But it is worth noting RDNA 2 had a significant node advantage over Ampere. This time they are on even footing.

Samsung 8nm was and still is by most accounts a terrible process.
Still a longshot to claim AMD won't even be close.
 
I want that to be true... but lately, I am not so optimistic.
Power usage tends to be as much about the process node as it is about architecture. This time they are both on the same node, so if nothing else it will really show the differences in their architectures, which from an engineering standpoint I am all for.
The problem with rumors is, well, they are rumors. Everyone was claiming that the 4090s would use 450+ watts, possibly 500, at stock, only to find that they are about as power hungry as a 3090 Ti.

So, before we go and have doom and gloom for the new AMD cards, let's be patient and wait and see. As a hardware nerd, I am really interested to see how CrossFire on a single GPU die works! Either way, we know that this generation AMD can keep costs and power down.
 
Nice, but maybe AMD can fix the HDMI glitching on their 6000 series... which has existed since launch...
Funny, because since I got a nice soundbar I have found that NVIDIA has Dolby Atmos problems with their cards. It's a known problem in Windows when using it. There are many forum posts about it.

And it's been that way for a while now. So remember, neither company is perfect.
 
Still a longshot to claim AMD won't even be close.
I'm sure they will put out something that will be close. How much of it they put out is another matter, but I am sure on some level a "7950 XT" will exist and get something like 90% of the performance at 80% of the power usage with more manageable thermals, though it would probably be about the same as a power-tuned, downclocked 4090.
I'm just advising caution that MCM is not the magic bullet a lot of people think it is.
 
And then one day you catch hell because you forgot to check what tape was in the device, and it turns out someone had swapped in a movie that you ended up recording over.
Dad's own fault for leaving his dad flicks in the machine. lol
 
The problem with rumors is, well, they are rumors. Everyone was claiming that the 4090s would use 450+ watts, possibly 500, at stock, only to find that they are about as power hungry as a 3090 Ti.

So, before we go and have doom and gloom for the new AMD cards, let's be patient and wait and see. As a hardware nerd, I am really interested to see how CrossFire on a single GPU die works! Either way, we know that this generation AMD can keep costs and power down.
It's not really CrossFire on a single die. RDNA 3 has two chips, yes, so it is a "chiplet design," but all the GPU computing happens on one die and all the IO operations happen on the other. So technically, from a computing standpoint, it is still a monolithic design, because the GPU computing is still all happening on a single die.
Unless they have something that hasn't leaked where they ship one with multiple GPU dies, which is entirely possible. In that case I do expect scaling to be good: TSMC and Apple developed an interconnect for the M1 series that is capable of moving 2.5 TB/s, which is more than fast enough to handle an internal CrossFire situation; then it's all up to the IO die to schedule and batch the jobs accordingly. Apple got ~95% scaling there, so there's no reason to believe AMD couldn't achieve a similar result using that.
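To make the "more than fast enough" point concrete, here is an illustrative sketch. The link bandwidth is the figure quoted above, and the frame format and refresh rate are assumptions for illustration; real multi-die rendering traffic (geometry, textures, synchronization) would be much larger than just finished frames:

```python
# Rough feasibility check of the interconnect argument above.
# All numbers are illustrative assumptions, not leaked specs.

INTERCONNECT_GB_PER_S = 2500   # roughly the 2.5 TB/s quoted for Apple's M1 Ultra link
FRAME_W, FRAME_H = 3840, 2160  # assumed 4K render target
BYTES_PER_PIXEL = 4            # e.g. an RGBA8 framebuffer
FPS = 144                      # assumed frame rate

frame_bytes = FRAME_W * FRAME_H * BYTES_PER_PIXEL
traffic_gb_per_s = frame_bytes * FPS / 1e9

print(f"One 4K frame:             {frame_bytes / 1e6:.1f} MB")
print(f"Every frame at {FPS} fps:  {traffic_gb_per_s:.1f} GB/s")
print(f"Share of the link:        {traffic_gb_per_s / INTERCONNECT_GB_PER_S:.2%}")
```

Even naively shipping every completed 4K frame across the link would use a fraction of a percent of that kind of bandwidth, so the scheduling and data-sharing strategy, not raw link speed, would be the hard part.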
 
I'm sure they will put out something that will be close. How much of it they put out is another matter, but I am sure on some level a "7950 XT" will exist and get something like 90% of the performance at 80% of the power usage with more manageable thermals, though it would probably be about the same as a power-tuned, downclocked 4090.
I'm just advising caution that MCM is not the magic bullet a lot of people think it is.
I have a feeling there is a massive difference this gen between NV's 90 and 80 cards. I bet AMD will compete and may even beat the 80s a little bit. The 90 is a freak, with 40% more tensor cores than the 80s. I suspect the chip in the 4090 was never intended for a consumer GPU; it is probably the full datacenter chip that MAY have made it to a Titan product. Nvidia is likely, and correctly, concerned that the 4080 might get beat. If I were placing money down, I would say AMD is probably going to be slightly faster than the 4080 and slightly slower than the 4090.
 
The problem with rumors is, well, they are rumors. Everyone was claiming that the 4090s would use 450+ watts, possibly 500, at stock, only to find that they are about as power hungry as a 3090 Ti.

So, before we go and have doom and gloom for the new AMD cards, let's be patient and wait and see. As a hardware nerd, I am really interested to see how CrossFire on a single GPU die works! Either way, we know that this generation AMD can keep costs and power down.
I'm pretty certain the rumors were real. They just couldn't get the cards to reliably draw that level of power without melting and lighting your system on fire. If they could have done it, they would have, and charged us another grand on top of what they're already asking.

https://www.techspot.com/news/96261-nvidia-rtx-titan-ada-reportedly-canceled-after-melted.html
 
It's not really CrossFire on a single die. RDNA 3 has two chips, yes, so it is a "chiplet design," but all the GPU computing happens on one die and all the IO operations happen on the other. So technically, from a computing standpoint, it is still a monolithic design, because the GPU computing is still all happening on a single die.
Unless they have something that hasn't leaked where they ship one with multiple GPU dies, which is entirely possible. In that case I do expect scaling to be good: TSMC and Apple developed an interconnect for the M1 series that is capable of moving 2.5 TB/s, which is more than fast enough to handle an internal CrossFire situation; then it's all up to the IO die to schedule and batch the jobs accordingly. Apple got ~95% scaling there, so there's no reason to believe AMD couldn't achieve a similar result using that.
That will probably sum up RDNA 4. This gen is probably more conservative: prove the concept, save some money on production. Next gen goes all in on compute chiplets, more like Ryzen. I could see AMD designing an RDNA 4 part where each chiplet is a specific number of cores and a controller sends bits to the complexes, à la Zen. I can't think of any reason it wouldn't work out just fine; as you say, interconnects are now plenty fast enough to handle all the data... it's not like they would have to split frames or anything like that.
 
I'm pretty certain the rumors were real. They just couldn't get the cards to reliably draw that level of power without melting and lighting your system on fire. If they could have done it, they would have, and charged us another grand on top of what they're already asking.

https://www.techspot.com/news/96261-nvidia-rtx-titan-ada-reportedly-canceled-after-melted.html
I think there is also a real possibility that the current 4090 chips were always borderline for consumer cards. AD102-300-A1 may well have been intended to be the datacenter chip, where power concerns are different. The 4080... may have been intended to be pushed a bit harder as a 4090 part. I mean, the 3090/3080 were essentially the same chip... this gen the 4090 has 40% more tensor cores, and 76 billion transistors in the 4090 vs. 46 billion in the 4080. IMO, Nvidia got some spy info about 6 months back on what AMD was cooking and started figuring out how to tame their datacenter chip for a halo card so they can keep the benchmark-win mindshare they love. I don't expect AMD will match the 76-billion-transistor datacenter version of Ada; they were planning to go up against the 46-billion-transistor 4080 in the consumer space. I have a feeling AMD is going to kick the 4080 around... but Nvidia will still be able to say "ya, ya, but we have the 4090," which I suspect will only trickle out after this initial stock dries up.
 