Is it possible Nvidia/AMD are sandbagging their numbers for the 30xx and 6000 series? Plus some other ramblings

I just watched the AMD RX6000 series announcements and I have to say, even as an RTX 3090 owner with no regrets in buying it, I am impressed, and I am glad to see some decent competition again in the graphics card world. They are certainly pushing hard with pretty aggressive pricing, too. While that may be bad for us early adopters, it may be good in the long run to get Nvidia back to price-competitive levels if the performance of the new AMD cards holds true.

As we know from any PR launch, the figures they display versus what we will find out via reviews can vary, since PR is going to do what it can to show the results in favor of their product. Having said that, do you think it may be possible that Nvidia was expecting this and, as such, may have sandbagged their performance numbers via drivers/firmware to misdirect AMD in their development to reach performance targets? Then once their product is released and their actual performance figures start coming out, Nvidia magically releases a driver or firmware (or both) that gives performance boosts to keep it a step ahead of AMD?

I guess many would call this "driver optimizations." But instead of it being tweaking and finding scraps and pieces here and there to eke out performance, the question I'm proposing is whether they purposefully capped performance so they could release it later at the right moment.

I suppose AMD could be doing the same thing and this is a game of poker both are playing with each other.

Just something I was thinking about when seeing the 6900XT versus the 3090 on there (granted, their chart also showed they enabled Rage mode and their Smart Cache settings to meet those targets, which was not listed on the 6800XT charts). As is probably still fresh in our minds, the 3090 vs. the 3080 has 2.4x the memory with 23% more bandwidth, 20.5% more cores and shader performance, and clock speeds within spitting distance of each other, yet it yields only around a 10% average improvement in frame rate at 4K over the 3080. While I don't think it is realistic to see an exact 20% improvement over the 3080 due to clock speed, power caps, etc., you would think that maybe there is still some performance left on the table that could be squeezed out to increase the performance gap between the two.
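If anyone wants to sanity check those ratios, here is a quick Python scratchpad using the commonly cited Founders Edition launch specs (the bandwidth and core counts below are from memory, so treat the exact decimals as approximate):

```python
# Quick sanity check of the 3090-vs-3080 spec ratios quoted above,
# using the commonly cited Founders Edition launch specs.
specs = {
    "RTX 3080 FE": {"vram_gb": 10, "bandwidth_gbps": 760.3, "cuda_cores": 8704},
    "RTX 3090 FE": {"vram_gb": 24, "bandwidth_gbps": 936.2, "cuda_cores": 10496},
}

a, b = specs["RTX 3080 FE"], specs["RTX 3090 FE"]
print(f"Memory:    {b['vram_gb'] / a['vram_gb']:.1f}x")                      # ~2.4x
print(f"Bandwidth: +{(b['bandwidth_gbps'] / a['bandwidth_gbps'] - 1):.1%}")  # ~+23%
print(f"Cores:     +{(b['cuda_cores'] / a['cuda_cores'] - 1):.1%}")          # ~+20.6%
```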

Now as for the 3080, maybe it is at its limits, if the reports of the cancelled 3080 20GB and the development of a 3080 Ti are true, a sign of a quick counterattack against AMD to try and maintain the performance crown. I guess we will have to wait and see.

And just for curiosity, here are some side by side specs:
Single Precision Compute Performance:
  • 6800XT: 20.74 TFLOPS
  • 3080 FE: 29.77 TFLOPS
  • 6900XT: 23.04 TFLOPS
  • 3090 FE: 35.58 TFLOPS
Peak Pixel Fill Rate and Texture Fill Rate at expected minimum stock boost speed:
  • 6800XT: 257.9 GP/s | 580.3 GT/s at gaming frequency of 2,015MHz (128 ROPs, 288 TMUs)
  • 3080 FE: 164.2 GP/s | 465.1 GT/s at advertised minimum boost of 1,710MHz (96 ROPs, 272 TMUs)
  • 6900XT: 257.9 GP/s | 644.8 GT/s at gaming frequency of 2,015MHz (128 ROPs, 320 TMUs)
  • 3090 FE: 189.8 GP/s | 556 GT/s at advertised minimum boost of 1,695MHz (112 ROPs, 328 TMUs)
Peak Pixel Fill Rate and Texture Fill Rate at expected maximum stock boost speed:
  • 6800XT: 288 GP/s | 648 GT/s at max boost frequency of 2,250MHz (128 ROPs, 288 TMUs)
  • 3080 FE: 190.1 GP/s | 538.6 GT/s at peak boost of 1,980MHz as captured by Gamers Nexus (96 ROPs, 272 TMUs)
  • 6900XT: 288 GP/s | 720 GT/s at max boost frequency of 2,250MHz (128 ROPs, 320 TMUs)
  • 3090 FE: 224 GP/s | 656 GT/s at peak boost of 2,000MHz as captured by Gamers Nexus (112 ROPs, 328 TMUs)
Now I want to note something here. The fill rates for the Radeons were initially taken from AMD's spec page. AMD calculates their fill rates based on their maximum boost frequency instead of their gaming frequency, whereas the Nvidia fill rates are taken from their minimum boost clock speed. According to AMD, their boost frequency is the maximum clock speed the GPU will reach for burst workloads, while their gaming frequency seems to be more in line with how Nvidia calculates their advertised boost speeds. I went ahead and expanded on this by adding AMD's figures using their gaming frequency as well. We know that stock Nvidia cards will typically boost quite a bit higher than their advertised numbers, but without knowing right at this moment what that peak number is, I used what Gamers Nexus found during their testing. Of course, we should be able to take these minimums and maximums and develop an average. How much headroom AMD left on the table for overclocking remains to be seen.
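For anyone who wants to recreate or extend the fill-rate figures above, the math is just ROPs (or TMUs) multiplied by the clock in GHz. A quick Python sketch using the unit counts and clocks from the lists above:

```python
# Peak pixel fill rate (GP/s) = ROPs x clock (GHz); texture fill rate (GT/s) = TMUs x clock (GHz).
# Unit counts and clocks are the ones used in the lists above; swap in other clocks to taste.
cards = [
    # (name, ROPs, TMUs, clock_mhz)
    ("6800XT @ game clock",        128, 288, 2015),
    ("3080 FE @ advertised boost",  96, 272, 1710),
    ("6900XT @ game clock",        128, 320, 2015),
    ("3090 FE @ advertised boost", 112, 328, 1695),
]

for name, rops, tmus, mhz in cards:
    ghz = mhz / 1000
    print(f"{name:30s} {rops * ghz:6.1f} GP/s | {tmus * ghz:6.1f} GT/s")
```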

AMD's got some impressive numbers on paper there. It looks like they gave up ground on compute performance (trailing by 44%-54%) in exchange for focusing on fill rates, particularly pixel fill rate more than texture fill rate. Their bump in clock speeds is giving them an added advantage there. It'll be interesting to see how much of that on-paper advantage translates into performance on the screen. Also, this doesn't take into consideration ray tracing or tensor performance between the two.
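For reference, those FP32 figures fall out of shader count x clock x 2 (one fused multiply-add per shader per clock), and the 44%-54% gap is just the ratio of the two columns. A quick sketch using the commonly cited shader counts (4608/5120 for the Radeons, 8704/10496 for the GeForces):

```python
# FP32 TFLOPS = shaders x clock (GHz) x 2 (one fused multiply-add per shader per clock).
def tflops(shaders, clock_mhz):
    return 2 * shaders * clock_mhz / 1e6

r6800xt = tflops(4608, 2250)   # ~20.74
r6900xt = tflops(5120, 2250)   # ~23.04
g3080   = tflops(8704, 1710)   # ~29.77
g3090   = tflops(10496, 1695)  # ~35.58

print(f"3080 vs 6800XT: +{g3080 / r6800xt - 1:.0%} compute")  # ~+44%
print(f"3090 vs 6900XT: +{g3090 / r6900xt - 1:.0%} compute")  # ~+54%
```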

Lastly, I am impressed with how compact AMD was able to keep the 6800XT and 6900XT. Both are 2.5-slot cards at only 267mm in length. That is going to be a boon for those who were having trouble finding an RTX 3080 or 3090 that would fit in more compact cases. However, I am a little concerned about what the temps will be, as well as the AMD cooler designs not exhausting any hot air outside the case. I also find it interesting that the 6800XT recommends a 750w power supply while the 6900XT recommends an 850w power supply, yet both are 300w rated cards. Meanwhile, Nvidia recommends a 750w power supply for both the 3080 and 3090, but both are rated at higher power ratings of 320-350w. As we know, heat and power are the ultimate limiters of peak performance. So I guess we will see how the AMD cards fare in performance sustainability during prolonged sessions, or whether the throttling is aggressive.
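To put the PSU recommendations in rough perspective, here is a hand-wavy power budget sketch. The CPU and rest-of-system wattages are my own assumptions, not anything from AMD or Nvidia, so take the exact percentages with a grain of salt:

```python
# Rough system power budget vs. PSU recommendation (all component numbers are assumptions).
def psu_check(gpu_board_power, psu_rating, cpu_peak=200, rest_of_system=75, target_load=0.8):
    """Estimate sustained DC load and compare it against a PSU rating."""
    load = gpu_board_power + cpu_peak + rest_of_system
    return load, load / psu_rating, load <= target_load * psu_rating

for name, gpu_w, psu_w in [("RX 6800XT + 750w", 300, 750),
                           ("RX 6900XT + 850w", 300, 850),
                           ("RTX 3080 + 750w",  320, 750),
                           ("RTX 3090 + 750w",  350, 750)]:
    load, frac, ok = psu_check(gpu_w, psu_w)
    print(f"{name:18s} ~{load}w sustained -> {frac:.0%} of rating ({'ok' if ok else 'tight'})")
```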

I am interested to hear what your thoughts might be.
 
This is a pretty big guess, but I am guessing Rage mode increases power usage in order to effectively maintain the peak boost clock and maybe even exceed it. And since it is an official feature, that's why the recommended power supply wattages are what they are. We are talking about clock speeds far beyond the competition, and maintaining that on such big cards requires power.
 
This is a pretty big guess, but I am guessing Rage mode increases power usage in order to effectively maintain the peak boost clock and maybe even exceed it. And since it is an official feature, that's why the recommended power supply wattages are what they are. We are talking about clock speeds far beyond the competition, and maintaining that on such big cards requires power.
I wonder how long the stock cooler can keep up with the added load and what the thresholds are before throttling kicks in. With the OEM cards being such a small size, I would think those cooling setups would heat soak pretty quickly, which would prevent them from holding onto high clock speeds for too long. Especially if it is in an enclosed case and all the heat is being exhausted inside the case. But we will have to wait for reviews to truly see what it is capable of, as well as AIB solutions.
 
Then once their product is released and their actual performance figures start coming out, Nvidia magically releases a driver or firmware (or both) that gives performance boosts to keep it a step ahead of AMD?
One possible way to look at it is to find a game whose performance didn't change much from driver optimization over time (say, from the release of the 5700 XT until now), maybe Doom Eternal in Vulkan?

If such a game exists, you could see whether the advantage of the 3090 over the 2080 Ti is larger in it than on average, and that could give an indication of what things would look like with mature drivers/games in the future.
 
I wonder how long the stock cooler can keep up with the added load and what the thresholds are before throttling kicks in. With the OEM cards being such a small size, I would think those cooling setups would heat soak pretty quickly, which would prevent them from holding onto high clock speeds for too long. Especially if it is in an enclosed case and all the heat is being exhausted inside the case. But we will have to wait for reviews to truly see what it is capable of, as well as AIB solutions.
I wonder if they are using liquid metal, like they did with the PS5.
 

AMD is just recommending a realistic power supply; I would never recommend a 750w for a 3090 if you remotely want to OC it. Do you really believe Nvidia's recommendation? A noob grabbing a 750w to plug into a 3090 is treating every 750w unit as the same, lol. Nvidia should have been more realistic about those recommendations. AMD is being more realistic about real-world scenarios and giving it more headroom. Nvidia should have said to buy a 750w Gold for the 3080 or a Platinum for the 3090 to be on the safe side, lol.

The 6800XT likely boosts higher on average than the 6900XT, and the 6900XT probably sticks closer to its 2,015MHz game clock. I think that is probably why you are seeing 300w for both.

Also, Gamers Nexus talked about Rage mode; it's not really doing much, just increasing the power budget a little, and AMD mentions it doesn't use much more power and is only a 1-2% uplift. So not much; it's not an actual OC, just more aggressive boost behavior.
 
The AMD numbers being with Smart Access Memory (requires a 5000 series CPU) and Rage Mode (essentially an OC) compared with stock Nvidia products is a bit meh. I would much prefer to see apples-to-apples, but I suspect that is not as impressive. Plus, nothing about RT performance or a DLSS equivalent.

I think AMD still has some very competitive products here, but we still need to see real reviews with direct comparisons.
 
nothing about ...a DLSS equivalent.
They told us the name of the tech, that it will be an open technology, and that it is still being worked on. Indeed, they didn't show it to us. But official confirmation like this was a pretty solid thing, in my opinion. And I'm OK that it's not ready yet. It took Nvidia about 11 months after the release of Turing to get DLSS, actually based on their A.I. model, into one game.

I wonder if AMD might have a preliminary version of it ready, when the Ray Tracing updates come out for the PS5/Xbox versions of Cyberpunk, in early 2021.
 
Having said that, do you think it may be possible that Nvidia was expecting this and, as such, may have sandbagged their performance numbers via drivers/firmware to misdirect AMD in their development to reach performance targets?

Nope. Such things only exist in fantasy and don’t happen in the real world.
 
AMD is just recommending a realistic power supply; I would never recommend a 750w for a 3090 if you remotely want to OC it. Do you really believe Nvidia's recommendation? A noob grabbing a 750w to plug into a 3090 is treating every 750w unit as the same, lol. Nvidia should have been more realistic about those recommendations. AMD is being more realistic about real-world scenarios and giving it more headroom. Nvidia should have said to buy a 750w Gold for the 3080 or a Platinum for the 3090 to be on the safe side, lol.

The 6800XT likely boosts higher on average than the 6900XT, and the 6900XT probably sticks closer to its 2,015MHz game clock. I think that is probably why you are seeing 300w for both.

Also, Gamers Nexus talked about Rage mode; it's not really doing much, just increasing the power budget a little, and AMD mentions it doesn't use much more power and is only a 1-2% uplift. So not much; it's not an actual OC, just more aggressive boost behavior.
While I, personally, wouldn't pair a 750w PSU with a 3090, nor did I, I can say that my UPS has never recorded over 650w when under load (and that includes my monitor, modem, and router). So a decent quality 750w may work with a 3090, but that is too close for my comfort since I like to allow headroom.

For a 3080, sure. For a 3090, I would feel more comfortable with an 850w to allow room for aging and quality variance. I went with a 1000w Titanium instead.
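For anyone curious, converting a wall reading into PSU load is just multiplying by the supply's efficiency at that load. A rough sketch using my 650w UPS reading and an assumed ~90% efficiency (the efficiency figure is a guess, and the 650w also included my monitor, modem, and router, so the real PC load is lower still):

```python
# Convert a measured wall-power reading into an approximate DC load on the PSU,
# then see what fraction of the PSU's rating that represents.
def psu_headroom(wall_watts, psu_rating, efficiency=0.90):
    dc_load = wall_watts * efficiency          # power actually delivered to the components
    return dc_load, dc_load / psu_rating

for rating in (750, 850, 1000):
    dc, frac = psu_headroom(650, rating)
    print(f"{rating}w PSU: ~{dc:.0f}w DC load -> {frac:.0%} of rating")
```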

As for the 300w for both, while what you say is probably true, I just find it odd, especially since the 6900XT has more compute/ray accelerators and more TMUs. For it to be their flagship and be advertised at the same clock rate as the 6800XT, yet be limited to the same exact power envelope, just seems a little off.

I wonder if it was due to trying to meet a certain power target or a thermal limitation, either at the chip level or the cooling system.
 
Nope. Such things only exist in fantasy and don’t happen in the real world.
While uncommon, it is not pure fantasy. Nvidia did something similar with their Titan XP back in 2017. A driver release unlocked up to 3x performance in certain productivity applications around the release of Vega.

https://blogs.nvidia.com/blog/2017/07/31/titan-xp-drivers-new-levels-of-performance-for-creatives/

While it wasn’t a 3x performance increase in games, it is a sign of sandbagging and means that while improbable, it is possible.

There are probably other instances in the past between the two, but figured this was a decent example. Many may come across as driver optimizations.
 
While uncommon, it is not pure fantasy. Nvidia did something similar with their Titan XP back in 2017. A driver release unlocked up to 3x performance in certain productivity applications around the release of Vega.

https://blogs.nvidia.com/blog/2017/07/31/titan-xp-drivers-new-levels-of-performance-for-creatives/

While it wasn’t a 3x performance increase in games, it is a sign of sandbagging and means that while improbable, it is possible.

I wouldn’t call that sandbagging. It’s well known that Nvidia doesn’t support productivity optimizations on its gaming SKUs. They weren’t trying to fool anyone because you could simply look at Quadro performance for the same chip.
 
I don't think Nvidia is sandbagging. It seems clear they squeezed to get the performance they did: high power draw and bleeding-edge memory tech to get there. And the high power draw is likely why their frequencies aren't very high.

I wonder if the 3070 might have a bit more headroom due to the lower base power. And I wonder if Nvidia might be using the extra time from pushing back the release to work with partners on tweaking the products to ensure better clocks, and maybe some versions will compete closer to the 6800?
 
While I, personally, wouldn't pair a 750w PSU with a 3090, nor did I, I can say that my UPS has never recorded over 650w when under load (and that includes my monitor, modem, and router). So a decent quality 750w may work with a 3090, but that is too close for my comfort since I like to allow headroom.

For a 3080, sure. For a 3090, I would feel more comfortable with an 850w to allow room for aging and quality variance. I went with a 1000w Titanium instead.

As for the 300w for both, while what you say is probably true, I just find it odd, especially since the 6900XT has more compute/ray accelerators and more TMUs. For it to be their flagship and be advertised at the same clock rate as the 6800XT, yet be limited to the same exact power envelope, just seems a little off.

I wonder if it was due to trying to meet a certain power target or a thermal limitation, either at the chip level or the cooling system.

The most common explanation here would be that the 6800XT actually boosts a little higher while the 6900XT sticks to its clock or doesn't boost as high. That is the only thing I can think of. I don't think AMD would straight up bullshit about total power, but who knows, maybe the 6900XT is highly binned? It's all a guess, but something is clearly up to keep them at the same power envelope.
 
While uncommon, it is not pure fantasy. Nvidia did something similar with their Titan XP back in 2017. A driver release unlocked up to 3x performance in certain productivity applications around the release of Vega.

https://blogs.nvidia.com/blog/2017/07/31/titan-xp-drivers-new-levels-of-performance-for-creatives/

While it wasn’t a 3x performance increase in games, it is a sign of sandbagging and means that while improbable, it is possible.

There are probably other instances in the past between the two, but figured this was a decent example. Many may come across as driver optimizations.

That is common though. Nvidia holds back pro drivers from consumer cards unless they need them. It wasn't sandbagging; it's just not a normal practice for them.
 
The Infinity Cache really gives AMD an edge. Nvidia is forced to use GDDR6X and wide bus widths to be competitive, and that's expensive. They are doubly screwed, as they are still limited to 8Gb chips on GDDR6X, so they are limited in memory pool as well.
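For context on the raw bandwidth gap the Infinity Cache has to make up for, bandwidth is just bus width times per-pin data rate divided by 8. A quick check using the published memory configs (256-bit 16Gbps GDDR6 on the Radeons, 320-bit 19Gbps and 384-bit 19.5Gbps GDDR6X on the GeForces):

```python
# Raw memory bandwidth (GB/s) = bus width (bits) x data rate (Gbps per pin) / 8.
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

configs = [
    ("6800XT/6900XT (GDDR6)", 256, 16.0),
    ("3080 FE (GDDR6X)",      320, 19.0),
    ("3090 FE (GDDR6X)",      384, 19.5),
]
for name, bus, rate in configs:
    print(f"{name:24s} {bandwidth_gbs(bus, rate):5.0f} GB/s")
```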

The AMD numbers being with Smart Access Memory (requires a 5000 series CPU) and Rage Mode (essentially an OC) compared with stock Nvidia products is a bit meh.

SAM doesn't give an edge in gaming. It allows the system to use gpu memory when short on system ram. As others have stated, RAGE mode doesn't really give you that much performance either.
 
The Infinity Cache really gives AMD an edge. Nvidia is forced to use GDDR6X and wide bus widths to be competitive, and that's expensive. They are doubly screwed, as they are still limited to 8Gb chips on GDDR6X, so they are limited in memory pool as well.

SAM doesn't give an edge in gaming. It allows the system to use gpu memory when short on system ram. As others have stated, RAGE mode doesn't really give you that much performance either.
Eh, it's very much about increasing game performance (time stamped).


It has nothing to do with being short on system RAM. It basically means a lot less redundant copying back and forth between system memory and GPU memory (or, in the old style, the 256MB accessible space) so that the CPU and GPU can talk to each other. The CPU is not using the VRAM as extra system memory. Instead, it is accessing the VRAM directly to see what the GPU is doing. Being able to see all of the VRAM skips wait states and the copying of GPU data to system memory (or the managing of the antiquated, much smaller 256MB pool of accessible VRAM) just so the CPU can look at what the GPU is doing. They've been doing stuff like that in consoles since at least the PS4.

It has generally applicable utility for virtually any game, but AMD also said it's possible for developers to code specifically for it, further increasing the benefit. And I expect their next motherboard platform will take this idea even further.
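Purely as a toy illustration (this is not how AMD's implementation actually works under the hood), here is the difference between juggling the legacy 256MB aperture and having the whole VRAM visible to the CPU:

```python
import math

# Toy illustration only: how many 256MB window remaps would the CPU need to walk a VRAM buffer
# of a given size, versus having the entire VRAM visible (resizable BAR / SAM style)?
APERTURE_MB = 256

def window_remaps(buffer_mb, aperture_mb=APERTURE_MB):
    return math.ceil(buffer_mb / aperture_mb)

for buffer_mb in (128, 1024, 4096, 16384):   # up to the 6800XT/6900XT's 16GB of VRAM
    print(f"{buffer_mb:>6} MB buffer: {window_remaps(buffer_mb):>3} remaps via the 256MB aperture, "
          f"vs. one persistent mapping with full-VRAM visibility")
```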
 
SAM doesn't give an edge in gaming. It allows the system to use gpu memory when short on system ram
The marketing around it is saying that it does more than that:

https://www.tomshardware.com/news/amds-rx-6000-gpus-to-boost-perf-with-ryzen-5000-cpus-via-smart-memory-access#:~:text=By enabling the Smart Memory,on-card 16GB of VRAM.

Historically, some forms of memory operation between the CPU and GPU can be extremely costly (especially for buffers that are in read/write mode), so anything that helps with that should help certain workloads.
 
SAM doesn't give an edge in gaming. It allows the system to use gpu memory when short on system ram. As others have stated, RAGE mode doesn't really give you that much performance either.

It says right on their slides that there is up to like a 13% uptick in performance - so I would argue that is not the case. As far as how much difference it makes in reality, we shall see.
 
It says right on their slides that there is up to like a 13% uptick in performance - so I would argue that is not the case. As far as how much difference it makes in reality, we shall see.

Ah, I did miss that. It looks like Forza got 13%, most other games were around 5%. Also, the 6900xt looked to be the only one with a Rager on.
 
Yeah, I think it's clear that NVIDIA rushed the launch for two reasons:

- Q3 earnings
- Intel that AMD had something very competitive this time.

Now we see big, hot cards from NVIDIA and barely any to buy...

So I do not think sandbagging is at play.

I love that AMD is killing it. I, personally, will never deal with their gaming drivers until I hear from you all that things are solid. My 290X was the last one after a good run with them.
 
I think AMD is pushing the GPU and CPU combo. Heck, for those buying Zen 3 it's a good feature for extra performance. Performance is performance. Since I am getting Zen 3 and already have X570, I'll be picking up a 6800XT or 6900XT with a 5950X.
 
Yeah, I think it's clear that NVIDIA rushed the launch for two reasons:

- Q3 earnings
- Intel that AMD had something very competitive this time.

Now we see big, hot cards from NVIDIA and barely any to buy...

So I do not think sandbagging is at play.

I love that AMD is killing it. I, personally, will never deal with their gaming drivers until I hear from you all that things are solid. My 290X was the last one after a good run with them.

I think drivers will be fine. When the same architecture was in consoles, the drivers were really solid. RDNA1, according to MLID, was a bitch to make drivers for because it was a hybrid and wasn't a fully baked architecture like RDNA2, and it had a bug that they had to work around, hence the shitty drivers for the first 6 months.

Given that RDNA2 is now everywhere and is a new architecture that doesn't have to deal with the part-GCN stuff, the drivers should come out way better.
 
For a 3080, sure. For a 3090, I would feel more comfortable with an 850w to allow room for aging and quality variance. I went with a 1000w Titanium instead.

I think you're right that the quality of the PSU is important. Optimum Tech on YT found the Corsair SF600 Gold can handle the 3080 just fine, paired with an overclocked 10900K.

I'd still feel better with a 750 watter, but it sounds like Nvidia baked some padding into their recommendation.
 
I'd still feel better with a 750 watter, but it sounds like Nvidia baked some padding into their recommendation.

They have to...if for nothing else to account for some idiot running some off brand $40 750W PSU.
 
It's def a really impressive show from AMD.. good for them. But they are really only edging over the 3090 due to the Smart Cache..

Nvidia's R&D and the machine learning that's been chugging away for god knows how long is untouchable; DLSS makes it worth the price of entry alone. AMD just doesn't have that kind of clout.

I'd be in the market for a 6900XT if it wasn't for the fact that every 3D renderer worth its salt uses CUDA cores. Need the 3090 for work too.. just need to fucking find one in stock, I am not paying scalper prices.
 