VirtualMirage
I just watched the AMD RX 6000 series announcement and I have to say, even as an RTX 3090 owner with no regrets about buying it, I am impressed and glad to see some decent competition again in the graphics card world. They are certainly pushing hard, with some pretty aggressive pricing too. While that may be bad for us early adopters, it may be good in the long run for getting Nvidia back to price-competitive levels, if the performance of the new AMD cards holds true.
As we know from any PR launch, the figures shown can vary from what reviews later find, since PR teams will do what they can to present results that favor their product. Having said that, do you think it's possible that Nvidia expected this and sandbagged their performance numbers via drivers/firmware to misdirect AMD as they worked toward their performance targets? Then, once the AMD product is released and its actual performance figures start coming out, Nvidia magically releases a driver or firmware update (or both) that delivers a performance boost to stay a step ahead?
I guess many would call this "driver optimizations." But instead of tweaking and scraping here and there to eke out performance, maybe, as I'm proposing, they purposefully capped performance so they could release the headroom at the right moment?
I suppose AMD could be doing the same thing, and this is a game of poker the two are playing with each other.
Just something I was thinking about while watching the 6900XT versus the 3090 up there (granted, their chart also showed Rage Mode and Smart Access Memory enabled to meet those targets, which was not listed on the 6800XT charts). As is probably still fresh in our minds, the 3090 versus the 3080 has 2.4x the memory, 23% more bandwidth, 20.5% more cores and shader performance, and clock speeds within spitting distance of each other, yet it yields only around a 10% average improvement in frame rate at 4K over the 3080. While I don't think it is realistic to expect the full 20% improvement over the 3080 due to clock speeds, power caps, etc., you would think there is still some performance left on the table that could be squeezed out to widen the gap between the two.
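For anyone who wants to double-check those 3090-versus-3080 deltas, here's a quick back-of-the-envelope script. The inputs are Nvidia's published Founders Edition specs (memory in GB, bandwidth in GB/s, CUDA core counts), not numbers from any review:

```python
# Published FE specs: memory (GB), memory bandwidth (GB/s), CUDA cores
rtx_3080 = {"mem": 10, "bw": 760.3, "cores": 8704}
rtx_3090 = {"mem": 24, "bw": 936.2, "cores": 10496}

mem_ratio = rtx_3090["mem"] / rtx_3080["mem"]          # 2.4x the memory
bw_gain   = rtx_3090["bw"] / rtx_3080["bw"] - 1        # ~23% more bandwidth
core_gain = rtx_3090["cores"] / rtx_3080["cores"] - 1  # ~20.6% more cores

print(f"{mem_ratio:.1f}x memory, {bw_gain:.0%} more bandwidth, {core_gain:.1%} more cores")
```

Despite all of that extra hardware on paper, the 4K frame-rate gap in reviews is only around 10%, which is what raises the question above.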
Now as for the 3080, maybe it is at its limits, if the rumored cancellation of the 3080 20GB and development of a 3080 Ti are true: a sign of a quick counterattack against AMD to try to maintain the performance crown. I guess we will have to wait and see.
And just for curiosity, here are some side by side specs:
Single Precision Compute Performance:
- 6800XT: 20.74 TFLOPS
- 3080 FE: 29.77 TFLOPS
- 6900XT: 23.04 TFLOPS
- 3090 FE: 35.58 TFLOPS

Pixel | Texture Fill Rates (at rated clocks):
- 6800XT: 257.9 GP/s | 580.3 GT/s at gaming frequency of 2,015MHz (128 ROPs, 288 TMUs)
- 3080 FE: 164.2 GP/s | 465.1 GT/s at advertised minimum boost of 1,710MHz (96 ROPs, 272 TMUs)
- 6900XT: 257.9 GP/s | 644.8 GT/s at gaming frequency of 2,015MHz (128 ROPs, 320 TMUs)
- 3090 FE: 189.8 GP/s | 556.0 GT/s at advertised minimum boost of 1,695MHz (112 ROPs, 328 TMUs)

Pixel | Texture Fill Rates (at peak observed clocks):
- 6800XT: 288.0 GP/s | 648.0 GT/s at max boost frequency of 2,250MHz (128 ROPs, 288 TMUs)
- 3080 FE: 190.1 GP/s | 538.6 GT/s at peak boost of 1,980MHz as captured by Gamers Nexus (96 ROPs, 272 TMUs)
- 6900XT: 288.0 GP/s | 720.0 GT/s at max boost frequency of 2,250MHz (128 ROPs, 320 TMUs)
- 3090 FE: 224.0 GP/s | 656.0 GT/s at peak boost of 2,000MHz as captured by Gamers Nexus (112 ROPs, 328 TMUs)
AMD has some impressive numbers on paper there. It looks like they gave up compute performance (trailing Nvidia by 44%-54%) in exchange for focusing on fill rates, pixel fill rate even more than texture fill rate. Their bump in clock speeds gives them an added advantage there. It'll be interesting to see how much of that paper advantage translates into performance on the screen. Also, none of this takes ray tracing or tensor performance into consideration.
Lastly, I am impressed with how compact AMD managed to keep the 6800XT and 6900XT. Both are 2.5-slot cards at only 267mm in length. That is going to be a boon for those who were having trouble finding an RTX 3080 or 3090 that would fit in more compact cases. However, I am a little concerned about what the temps will be, as the AMD cooler designs don't exhaust any hot air outside the case. I also find it interesting that AMD recommends a 750W power supply for the 6800XT and an 850W power supply for the 6900XT, yet both are 300W-rated cards. Meanwhile, Nvidia recommends a 750W power supply for both the 3080 and 3090, even though they carry higher power ratings of 320-350W. As we know, heat and power are the ultimate limiters of peak performance. So I guess we will see how AMD fares in sustaining performance during prolonged sessions, or whether the throttling is aggressive.
I am interested to hear what your thoughts might be.