RTX 3xxx performance speculation

I foresee a ~40-50% performance gain when utilizing RT and DLSS over the 20x0 non-Super cards, as they'll be better tailored for those now more mature technologies. But without those features enabled, the overall performance gain will only be ~30-35% for each card tier.

I also predict the performance of the fastest "Big Navi" card will barely nip at the heels of a 2080 Super when DXR features aren't used, while still sitting a solid 10-20% behind once RT is enabled. And that's me being more than a little optimistic on AMD's behalf.

If that's the case, then that's underwhelming to say the least.
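
For what it's worth, here's the back-of-envelope arithmetic behind those guesses as a quick Python sketch, normalizing a 20x0 non-Super card to 100 (every number here is my speculation, nothing confirmed):

```python
# Sketch of the speculated Ampere gains above.
# Baseline: equivalent 20x0 non-Super card = 100. All gains are guesses.
baseline = 100.0

raster_gain = (0.30, 0.35)    # speculated gain without RT/DLSS, per tier
rt_dlss_gain = (0.40, 0.50)   # speculated gain with RT + DLSS enabled

for label, (lo, hi) in (("raster", raster_gain), ("RT+DLSS", rt_dlss_gain)):
    print(f"30x0 {label}: {baseline * (1 + lo):.0f}-{baseline * (1 + hi):.0f}"
          f" (20x0 = {baseline:.0f})")
```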
 
I think we're beginning to reach the limits of what GPUs built on silicon are capable of, just as we have been with CPUs post-Sandy Bridge.

With current GPU designs, I agree. If either AMD or Nvidia can get MCM designs to work, it might open the door for more improvements and potentially reduce costs in the process.
 
I think we're beginning to reach the limits of what GPUs built on silicon are capable of, just as we have been with CPUs post-Sandy Bridge.
That's why everyone is moving to chiplets: keep scaling without the costs of massive monolithic dies.
 
To run more instructions in a given amount of time. It's a great way to "brute force" improved performance.
 
And GTC goes virtual:
https://blogs.nvidia.com/blog/2020/03/02/gtc-san-jose-online-event/
"NVIDIA has decided to shift GTC 2020 on March 22-26 to an online event due to growing concern over the coronavirus. "


WTF? They are now cancelling the GTC Webcast keynote because of CV19:
https://nvidianews.nvidia.com/news/...-shared-on-march-24-followed-by-investor-call

NVIDIA today announced that, in light of the spread of the coronavirus, it is deferring plans to deliver a webcast keynote as part of the digital version of its GPU Technology Conference later this month.

The company will, instead, issue on Tuesday, March 24, news announcements that had been scheduled to be shared in the keynote.

How does a webcast increase the virus threat?

Don't say it's running on Intel chips. ;)
 
Doesn't matter. No RTX 3000 series announcement until after they see what AMD does.

And all these new GPUs I keep reading hoopla about in benchmark leaks are most certainly enterprise GPUs.
 
Doesn't matter. No RTX 3000 series announcement until after they see what AMD does.

And all these new GPUs I keep reading hoopla about in benchmark leaks are most certainly enterprise GPUs.

It's been more the other way round lately: Nvidia releases first, followed by AMD. AMD waits to see how Nvidia prices its GPUs and comes in under that.
 
It's been more the other way round lately: Nvidia releases first, followed by AMD. AMD waits to see how Nvidia prices its GPUs and comes in under that.

Both of them release their cards as soon as they are ready, and their timetables are internal.

If you have a hot new product you release as soon as you can, to sell for the longest period you can, with no competing part.

It never really makes sense to hold off and wait to see what a competitor does.

This applies to actual new products, not marketing adjustments like the "Super" cards, which are just repurposing existing chips.
 
This applies to actual new products, not marketing adjustments like the "Super" cards, which are just repurposing existing chips.

Your argument fails because the Super cards perform better... if more performance = "marketing", we have gone waaaay beyond "silly season".
 
If you have a hot new product you release as soon as you can, to sell for the longest period you can, with no competing part.

It never really makes sense to hold off and wait to see what a competitor does.

This really isn't correct. If the existing line is selling well, there is no benefit to releasing a new line. This is because you'd be doing little more than cannibalizing sales you were already going to get, but you'd be spending money to do so. This needs to be weighed against how many new customers you'd acquire via the new release. How many undecideds are there at this point?
 
This really isn't correct. If the existing line is selling well, there is no benefit to releasing a new line. This is because you'd be doing little more than cannibalizing sales you were already going to get, but you'd be spending money to do so. This needs to be weighed against how many new customers you'd acquire via the new release. How many undecideds are there at this point?

Nonsense. No one sits on their hands waiting for competitors.

Your argument fails because the Super cards perform better... if more performance = "marketing", we have gone waaaay beyond "silly season".

The Super cards are the exact same cards with a different set of tweaks. Totally a marketing driven move.
 
The Super cards are the exact same cards with a different set of tweaks. Totally a marketing driven move.

If the Super cards, which offer substantial performance increases over their non-Super counterparts, were nothing more than marketing, why didn't Nvidia launch with them?
 
If the Super cards, which offer substantial performance increases over their non-Super counterparts, were nothing more than marketing, why didn't Nvidia launch with them?

Process refinement. It's not like they've never done that before.
 
If the Super cards, which offer substantial performance increases over their non-Super counterparts, were nothing more than marketing, why didn't Nvidia launch with them?

Because the profit margin was higher on what they launched with (the Super cards are really mostly stealth price cuts), and marketing didn't require the margin cut until AMD shipped Navi.

What's your theory?
 
Nonsense. No one sits on their hands waiting for competitors.



The Super cards are the exact same cards with a different set of tweaks. Totally a marketing driven move.
Everyone knows race car drivers wait at the starting line to see how fast their rivals go first, geesh... :D
 
Because the profit margin was higher on what they launched with (the Super cards are really mostly stealth price cuts), and marketing didn't require the margin cut until AMD shipped Navi.

What's your theory?
Nvidia knew they had the capability but also knew they didn’t need that capability at launch, and they also knew AMD would be releasing later. So, they held back on the initial launch with the intent of the refresh upon AMD’s launch. Two sets of headlines, one set of R&D.

This is a pretty standard approach across industries. Even AMD has tried to do it on paper recently with their post-launch frequency boosts.

Everyone knows race car drivers wait at the starting line to see how fast their rivals go first, geesh... :D

Uh. Where do you think the term "sandbagging" came from? That is literally when you drive more slowly, as if your vehicle were ballasted up with bags of sand, so that you stay faster than your competition without ever showing them your full capability.

Why are there decades-old terms for an approach you guys say has never existed?
 
Nvidia knew they had the capability but also knew they didn’t need that capability at launch, and they also knew AMD would be releasing later. So, they held back on the initial launch with the intent of the refresh upon AMD’s launch. Two sets of headlines, one set of R&D.

Similar to what I said, but mine fits reality better, since these are more like price cuts in response to an AMD release.
 
Similar to what I said, but mine fits reality better, since these are more like price cuts in response to an AMD release.

Well, yeah. They were nothing more than price cuts if you're willing to ignore the two tiny little details of the change in silicon and the fact that the prices didn't change.

An orange is a potato if you're willing to ignore the fact that it isn't.
 
Well, yeah. They were nothing more than price cuts if you're willing to ignore the two tiny little details of the change in silicon and the fact that the prices didn't change.

An orange is a potato if you're willing to ignore the fact that it isn't.
For AMD, sandbagging would most likely not work well. When AMD released the 480 for $199, they got great press reviews, even with all the cards initially exceeding the PCIe power limit. The 1060 was not to be seen for a couple of months, and although it had lower thermals and better performance when it arrived, the reviews were already in. That gave AMD an endlessly positive review stance, since all those same reviews are still around today.

Cut to today: if AMD released a reasonably priced card that performed better than a 2080 Ti, they would once again get all the glory for history to tell and to convince. In this case Snowdog is right. Sticking it to Nvidia, even for a short while, would have a lasting positive effect.

Now, Nvidia did sandbag the GF2 and allowed the GF1 to continue on and virtually sell out, maintaining production well after the GF2 was ready. If I remember right, 6-9 months after the GF2 was ready to go. ATi's initial Radeon woke Nvidia up some, since it outperformed the GF2 until Nvidia unleashed the Forceware driver update that increased performance by something like 10% (I just remember it was huge, so that number is an estimate).
 
Well, yeah. They were nothing more than price cuts if you're willing to ignore the two tiny little details of the change in silicon and the fact that the prices didn't change.

An orange is a potato if you're willing to ignore the fact that it isn't.

2060 Super is a price-reduced 2070 (using a slightly more cut-down 2070 die).
2070 Super is a price-reduced 2080 (using a more cut-down 2080 die).

These are essentially a price-reduced 2070 and 2080.
 
2060 Super is a price-reduced 2070 (using a slightly more cut-down 2070 die).
2070 Super is a price-reduced 2080 (using a more cut-down 2080 die).

These are essentially a price-reduced 2070 and 2080.

Orange = potato except it's citrus
Banana = airplane except smaller and without wings
 
I am stating the true situation, and you're just babbling nonsense. Awesome contribution there. :rolleyes:

You're literally stating that there was no price reduction and that there was a change in silicon... but still insisting that the two things you just said never happened. 10 years ago, this would be called dementia.

For your dementia trifecta, you're also insisting that Nvidia would never sandbag because that would be disadvantageous, while at the same time literally stating that they took good silicon and reduced its functionality to gain a competitive advantage.

Enjoy your flight.
 
You're discussing how to describe the same thing... less specifically.

At some point, yields for the various dies used in the 2000-series improved, and at some point, Nvidia 'refreshed' their lineup by:
  • Lowering prices for the same performance, using next higher dies in some cases
  • Renaming higher-end dies with lower-end model numbers along with the 'Super' designation
  • Timing this release with their competition's latest release
About all we can say is that the yield improvement happened before AMD's release. And for Nvidia's part, it wasn't a terrible move overall, pushing more performance to lower prices while, from a business perspective, providing a reasonable alternative to new competition.
 
You're discussing how to describe the same thing... less specifically.

At some point, yields for the various dies used in the 2000-series improved, and at some point, Nvidia 'refreshed' their lineup by:
  • Lowering prices for the same performance, using next higher dies in some cases
  • Renaming higher-end dies with lower-end model numbers along with the 'Super' designation
  • Timing this release with their competition's latest release
About all we can say is that the yield improvement happened before AMD's release. And for Nvidia's part, it wasn't a terrible move overall, pushing more performance to lower prices while, from a business perspective, providing a reasonable alternative to new competition.


Yield may or may not have improved. We can't tell, because for the most part, they were selling dies that were more disabled.

Ignore the marketing name for a moment, as some people get childish when the marketing name is mentioned. Focus on the silicon:

TU106: 2304 CUDA cores for $499 becomes TU106 with 2176 CUDA cores for $399.
TU104: 2944 CUDA cores for $699 becomes TU104 with 2560 CUDA cores for $499.

That isn't strongly indicative of improved yield to me, since Nvidia started selling parts that were more disabled than they previously were, though at a nice discount.

You don't need a yield improvement to do this. Which is why this is much more like a marketing move. They certainly could have done this on day 1 if they wanted less margin per chip.
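
If you want the "stealth price cut" claim in a single number, dollars per CUDA core works. Here's a quick sketch using the list prices cited above (the cents-per-core framing is just my own back-of-envelope, not anything Nvidia publishes):

```python
# Launch MSRP per CUDA core, original Turing vs. the Super refresh.
# Core counts and prices are the figures cited above.
cards = [
    ("2070 (TU106)",       2304, 499),
    ("2060 Super (TU106)", 2176, 399),
    ("2080 (TU104)",       2944, 699),
    ("2070 Super (TU104)", 2560, 499),
]

for name, cores, price in cards:
    print(f"{name}: ${price} / {cores} cores = "
          f"{price / cores * 100:.1f} cents per core")
```

Per die, the cost per enabled core drops from about 21.7 to 18.3 cents on TU106 and from 23.7 to 19.5 cents on TU104, which is exactly what a price cut looks like.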
 
Yield may or may not have improved. We can't tell, because for the most part, they were selling dies that were more disabled.
I'm not going to argue with 'may'; at best, we can say that since yields usually improve over time, that probably happened here too. But since we can't prove that yield improvements were a major reason that Nvidia released the Super cards... yeah :).
Ignore the marketing name for a moment
This is really how I approach GPU releases, and something everyone should be aware of when discussing the technical aspects of GPUs. Not just what something is named, but what it actually is composed of.
You don't need a yield improvement to do this. Which is why this is much more like a marketing move. They certainly could have done this on day 1 if they wanted less margin per chip.
Based on what you've posted, I'm inclined to agree.

Overall the RTX release mostly makes sense; the GPUs are larger, memory is faster, so initial unit cost is higher, and with non-shader resources taking die space, the performance jump is lower than it could have been.

At the same time, Nvidia had to know that the MSRP increases (put another way, the flat price per unit of performance from the 1000- to the 2000-series at release, with a single faster SKU at an accompanying higher price) would result in fewer unit sales.

Which means that they were willing to settle for fewer unit sales. Makes me wonder if they sold as many as they had initially expected.

Saw this today. Taking it with a grain of salt, but honestly I'd keep expectations low.

[Attachment 229321: purported spec chart for the full RTX 30-series lineup]
This is supportable to a degree, given that TSMC is tapped, but it's also enough of a departure from the norm to be questionable on its face on the fab and performance points. As for SLI, if it requires NVLINK, then it would make sense that it's limited to SKUs shared with compute parts that would also be likely to be teamed. And yeah, RTX should hit every SKU this time around.
 
If this is what Ampere is offering, then it's time to quit PC gaming for 2 years and just get a PS5 or Xbox Series X. 3080 is basically a 2080 Ti with lower TDP.
 
Saw this today. Taking it with a grain of salt, but honestly I'd keep expectations low.

View attachment 229321

Pretty sure that is complete bullshit. Multiple red flags here. There is no way the whole lineup would leak; they will have a staggered release. Someone just made up the whole thing, starting with the top entry and subtracting a bit each time till they hit bottom.

RTX for everyone? Only someone who doesn't understand RTX resource usage would even make up that claim for a card with 4GB.

Samsung 10nm? WTF, Samsung is supposed to have very good 7nm now. Plus, Nvidia walked back Samsung usage; IIRC they were indicating the flagship would be on TSMC.

How in the world are they going to get 95% of 2080 Ti performance on a 256-bit memory bus?
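
The bandwidth math alone makes that hard to believe. A quick sketch, assuming the 14 Gbps GDDR6 Turing shipped with (16 Gbps shown as the optimistic case):

```python
# Peak memory bandwidth: bus width (bits) / 8 * data rate (Gbps) = GB/s.
def bandwidth_gb_s(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(f"2080 Ti, 352-bit @ 14 Gbps: {bandwidth_gb_s(352, 14):.0f} GB/s")  # 616
print(f"Rumored 256-bit @ 14 Gbps:  {bandwidth_gb_s(256, 14):.0f} GB/s")  # 448
print(f"Rumored 256-bit @ 16 Gbps:  {bandwidth_gb_s(256, 16):.0f} GB/s")  # 512
```

Even at 16 Gbps that's 512 GB/s against the 2080 Ti's 616 GB/s, a ~17% bandwidth deficit.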

etc....

Bullshit.
 