RTX 3xxx performance speculation

Then why did you say at 4K, rather than get off into the weeds with Hz?

Because 4K on BioShock (which I upscale like crazy) is different from 4K on BFV.

Anyway - Ampere should be interesting. I think some might be right about an RT increase, but the 2080 Ti is a huge chip. It'll be hard to brute-force past it in rasterized performance. Although nVidia has pulled some decent increases out of nowhere in the past.
 
Well, your facts just aren't adding up. Must be that new math they are teaching these days.

If they don't add up...a counter-argument would be the proper response.

Not the failed comedy act you are doing right now.
Put up or shut up...
 
30% isn't lackluster. In fact, it's about average/typical for the last decade. It's just that Pascal was a huge jump that spoiled everyone, and they set that as the normal expectation.

As I said before. I expect about 30%.
Yes, 30% is roughly average, but that average did not include a massive price hike. In other words, for the same price, the percent increase in performance was virtually zero with the 2080 and 2080 Ti.
 
Yes, 30% is roughly average, but that average did not include a massive price hike. In other words, for the same price, the percent increase in performance was virtually zero with the 2080 and 2080 Ti.

Even AMD is stating that price per transistor is going up...welcome to the real world.
 
Not necessarily going up, just not going down proportionally with the node.

AMD says otherwise:
[Image: AMD chart comparing cost and yield across process nodes]
 
That's not cost per transistor.
Smaller nodes get more transistors per mm2.

But cost per transistor is pretty flat. So as they add more transistors to brute-force performance, the BOM costs more. nVidia only added 10-15% rasterized performance with Turing (ignoring the transistor increase) IIRC.
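(A rough sketch of that point, since it trips people up: density improves every node, but cost per mm2 climbs, so cost per transistor barely moves. The density figures below are ballparks from shipping GPUs; the cost-per-mm2 ratio is purely an assumption, not a foundry quote.)

```python
# Toy numbers: density improves each node, but cost per mm2 rises,
# so cost per transistor stays roughly flat.
nodes = {
    # node: (million transistors per mm2, relative cost per mm2 -- assumed)
    "16/12nm": (25, 1.0),
    "7nm":     (41, 1.6),
}

for node, (mtr_per_mm2, rel_cost_mm2) in nodes.items():
    rel_cost_per_mtr = rel_cost_mm2 / mtr_per_mm2
    print(f"{node}: {mtr_per_mm2} MTr/mm2 -> relative cost per MTr = {rel_cost_per_mtr:.3f}")
```

Run that and the two nodes land within a couple percent of each other on cost per transistor, which is the whole problem.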
 
Even AMD is stating that price per transistor is going up...welcome to the real world.
Nvidia did not go to a 7nm process, it stayed on 12nm -> with your logic, as usual, I guess a 7nm Ampere 3080 Ti will be over $2000 :D
 
Nvidia did not go to a 7nm process, it stayed on 12nm -> with your logic, as usual, I guess a 7nm Ampere 3080 Ti will be over $2000 :D

You think AMD's big 7nm die will be cheap?
And although NVIDIA has the high end to themselves...they price SKUs at what gives them the best revenue.

Gaming is a luxury hobby...stop whining.
 
I don't know what that means or why it's relevant. We're talking about the speed of developer adoption of new 3D API features.

If you think DXR adoption is lackluster you must also think most DirectX feature releases were a disaster.

I do think DX12 was.
 
What is really increasing is R&D costs.

Both production and up-front costs have been increasing with each process shrink.

It's a huge deal that we have completely upset the main driver of decades of silicon economics. In the past, each generation would bring a massive increase in transistor budget for free.

That free ride is over. Now if you want to increase transistor counts by 50%, you will likely pay close to 50% more.

Get used to increasing GPU prices if they require big increases in transistor count.

Performance will likely come from a modest increase in transistors, some architecture enhancements, and some clock speed boost.
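(A toy comparison of the two regimes, with made-up numbers just to show the shape of it:)

```python
# Contrast of the two regimes described above. All numbers are illustrative
# assumptions; the point is the shape of the trade-off, not the exact dollars.
base_transistors = 12.0  # billions, arbitrary starting chip
base_cost = 300.0        # dollars of silicon, arbitrary

# Old regime: a node shrink roughly doubled the transistor budget "for free".
old = (base_transistors * 2.0, base_cost)

# New regime: cost tracks transistor count, so +50% transistors costs ~+50%.
new = (base_transistors * 1.5, base_cost * 1.5)

for label, (xtors, cost) in (("old regime", old), ("new regime", new)):
    print(f"{label}: {xtors:.0f}B transistors for ~${cost:.0f} of silicon")
```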
 
What is really increasing is R&D costs.

What's really increasing is the maturity level of GPUs as a product. Massive percentage gains are normal early in the existence of anything, and they will always taper as that product becomes more mature. The Voodoo2 was massively faster than the Voodoo as a percentage. Now that graphics processing has matured, both of them combined are less powerful than an Apple Watch. If Nvidia released a new line that was 500% faster than the Voodoo2 today, we'd all LOL as Nvidia went out of business. Despite this, internet posters seem stuck on comparing the generational improvements [as a percentage] from 15 years ago to generational improvements today.

The flipside to this is that what matters to the end user is the percentage gain. A GPU that's 5% faster makes no practical difference, whereas one that is 50% faster enables new uses and makes a significant impact on existing ones.
 
You think AMD's big 7nm die will be cheap?
And although NVIDIA has the high end to themselves...they price SKUs at what gives them the best revenue.

Gaming is a luxury hobby...stop whining.
Looks like that will be the case if you are a future Nvidia user. $1200 for 12nm (a modified 16nm node), then Nvidia on 7nm -> $2000, and the justification will be what? It is a luxury, lol. Ahem, the AMD Radeon VII, even when 7nm was relatively new, with lower yields, expensive HBM, 16GB, etc., was what? Less than $800. It is more Nvidia wanting you to pay more to line their pockets than any increase in the cost of making the card. I really don't think prices will go higher than Nvidia's current pricing, but lower, due to competition. The AMD 5xxx series is getting Nvidia to become a little bit more reasonable, and Nvidia immediately released the Super line -> better performance and cheaper pricing once AMD had competitive performance at a much lower cost.
 
Looks like that will be the case if you are a future Nvidia user. $1200 for 12nm (a modified 16nm node), then Nvidia on 7nm -> $2000, and the justification will be what? It is a luxury, lol. Ahem, the AMD Radeon VII, even when 7nm was relatively new, with lower yields, expensive HBM, 16GB, etc., was what? Less than $800. It is more Nvidia wanting you to pay more to line their pockets than any increase in the cost of making the card. I really don't think prices will go higher than Nvidia's current pricing, but lower, due to competition. The AMD 5xxx series is getting Nvidia to become a little bit more reasonable, and Nvidia immediately released the Super line -> better performance and cheaper pricing once AMD had competitive performance at a much lower cost.

No one is forcing you to buy?
I could buy a 2080 Ti every month with my "for fun money"...get a better job perhaps, but do stop whining about stuff you will never buy.
 
No one is forcing you to buy?
I could buy a 2080 Ti every month with my "for fun money"...get a better job perhaps, but do stop whining about stuff you will never buy.
lol, you have no clue what I can and cannot buy :ROFLMAO:. I buy AMD/Nvidia at my pleasure, not theirs.
 
My point still stands...stop whining about the price of a (cheap) luxury hobby...
Not whining, laughing, big difference. When I go to a restaurant, I don't go to the highest-priced one with crap food because it costs more, but to the one that has the best service and food. Pay the same amount for the same performance as my 1080 Ti - that would be stupid. Spend almost double the amount for 30%+ - lol, more stupid. I expect Nvidia to learn from their mistakes and provide something better, is all, or I will just eat someplace else. Now the tech finally shows it has potential, which I also expect to be realized more with Ampere - if so, I may buy several, like I bought 4 Pascal cards in the past (and have two now).
 
Not whining, laughing, big difference. When I go to a restaurant, I don't go to the highest-priced one with crap food because it costs more, but to the one that has the best service and food. Pay the same amount for the same performance as my 1080 Ti - that would be stupid. Spend almost double the amount for 30%+ - lol, more stupid. I expect Nvidia to learn from their mistakes and provide something better, is all, or I will just eat someplace else. Now the tech finally shows it has potential, which I also expect to be realized more with Ampere - if so, I may buy several, like I bought 4 Pascal cards in the past (and have two now).

You keep treating the 1080 Ti as the "norm" when in fact it is one of the biggest outliers in performance jumps...zzzzZZZZZzzzz.

If your memory is only good for one generation, you are an utter waste of time...
 
Now some better information is coming out:
https://www.nextplatform.com/2020/0...rst-production-cray-shasta-supercomputer/amp/


That is compared to V100 (Volta)
Cool! Epyc and next-gen Nvidia Tesla. AMD had better support Nvidia, and vice versa, in the long run. There are probably some very good reasons for keeping Nvidia for AI: software, experience, performance, etc. AMD so far has taken more of a wait-and-see approach to dedicated AI hardware like Tensor cores, relying more on their platform capability to push AI. Now, will this supercomputer use NVLink, or just PCIe 4?
 
You keep treating the 1080 Ti as the "norm" when in fact it is one of the biggest outliers in performance jumps...zzzzZZZZZzzzz.

If your memory is only good for one generation, you are an utter waste of time...
Maxwell was also a big leap from Kepler - those two jumps pushed Nvidia way ahead, particularly in the mobile market where ATI/AMD had a good standing. Yes, Pascal was a big jump -> no need to jump to Turing for little to no gain at the same price. I don't think that is too hard to comprehend, and many took it the same way. The many promises of Turing have been very slow to materialize; while I like Nvidia taking chances, I do not like how Turing was promoted, and the overall performance did not warrant an upgrade. People have many reasons for not going ahead, so everyone will be different. Ampere makes more sense on my end to look at when available, which also means AMD as well. In my case it will be a high-end, high-resolution, high-refresh monitor/GPU combination. Zero reason to upgrade with current monitors, and the monitor I really want is more like the 2nd half of this year. We will see.
 
Now some better information is coming out:
https://www.nextplatform.com/2020/0...rst-production-cray-shasta-supercomputer/amp/


That is compared to V100 (Volta)

Yeah, that was the max I was expecting to see. 50-70% is realistic for a jump to 7nm!

But I'm curious about how big the die size is - if it's as huge as its predecessor, the Ampere chips could be significantly smaller (and a bit closer to a 50% bump).

NVIDIA took over a year before they turned something the size of Volta into a $1200 consumer product! This is a cutting-edge process node, so large chip yields will be lower!
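(For anyone wondering why big dies on a fresh node are painful: a simple Poisson defect model makes the point. The defect densities below are assumed for illustration, not real TSMC figures.)

```python
import math

def poisson_yield(area_mm2: float, d0_per_cm2: float) -> float:
    """Fraction of defect-free dies under a simple Poisson model: Y = exp(-A*D0)."""
    return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)

# Defect densities are assumed values for illustration, not real TSMC numbers.
D0_MATURE, D0_FRESH = 0.1, 0.4  # defects per cm^2

for area in (300, 600, 815):  # mm2: midrange die, big die, V100-sized die
    print(f"{area}mm2: mature node ~{poisson_yield(area, D0_MATURE):.0%}, "
          f"fresh node ~{poisson_yield(area, D0_FRESH):.0%} good dies")
```

With those assumed numbers, an 815mm2 die goes from roughly 44% good dies on a mature node to under 4% on a fresh one, which is why the monster chips wait.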
 
Yeah, that was the max I was expecting to see. 50-70% is realistic for a jump to 7nm!

But I'm curious about how big the die size is - if it's huge, the Ampere chips could be significantly smaller (and a bit slower).

NVIDIA took over a year before they turned something the size of Volta into a $1200 consumer product! This is a cutting-edge process node.
I didn't think that something the size of V100 was possible. Wasn't it like 815mm2?
An 815mm2 Ampere card would be nuts (as would a similarly sized Navi).
 
Yeah, that was the max I was expecting to see. 50-70% is realistic for a jump to 7nm!

But I'm curious about how big the die size is - if it's as huge as its predecessor, the Ampere chips could be significantly smaller (and a bit closer to a 50% bump).

NVIDIA took over a year before they turned something the size of Volta into a $1200 consumer product! This is a cutting-edge process node.
Yields for TSMC 7nm are said to be very good; even 5nm yields are above the curve for this point in the ramp compared to 7nm. TSMC's 7nm fabs open up much more to everyone once Apple moves to 5nm, so Nvidia should have plenty of access for both big and smaller chips. Will Nvidia wait to push their larger Ampere chips down to gaming? Usually with Nvidia on a new node, the bigger GPUs come later, but if AMD has something very competitive, I hope Nvidia pushes up the timetable (Nvidia usually refuses to take 2nd place, which is one quality I do like; I don't like some of the marketing BS that goes along with it). March will be interesting, or could be.
 
I didn't think that something the size of V100 was possible. Wasn't it like 815mm2?
An 815mm2 Ampere card would be nuts (as would a similarly sized Navi).


Yeah, that's always been my concern with guesstimating Ampere performance - if they go the same size as Turing, you'll get at least 70%. But that's more realistic for the 7nm process refresh (in two years, just like Turing).

If you go with more affordable dies (600mm2 and under) like Pascal, you drop down to 50%.
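(Ballparking what those die sizes buy you in transistors, using densities worked out from shipping GPUs; rough numbers only:)

```python
# Ballpark transistor budgets for the die-size scenarios above, using
# densities derived from shipping GPUs (TU102: 18.6B in 754mm2 at 12nm;
# Navi 10: 10.3B in 251mm2 at 7nm). Treat everything as a rough estimate.
DENSITY_12NM = 18.6e9 / 754  # transistors per mm2
DENSITY_7NM = 10.3e9 / 251   # transistors per mm2

for label, area in [("Turing-sized (754mm2)", 754),
                    ("Pascal-sized (471mm2)", 471),
                    ("600mm2 cap", 600)]:
    budget = area * DENSITY_7NM
    print(f"{label} at 7nm: ~{budget / 1e9:.0f}B transistors "
          f"(vs TU102's 18.6B at 12nm)")
```

Even the 600mm2 case lands around 25B transistors, a solid step over TU102, which is roughly where the 50% guess comes from.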
 
Maxwell was also a big leap from Kepler - those two jumps pushed Nvidia way ahead, particularly in the mobile market where ATI/AMD had a good standing. Yes, Pascal was a big jump -> no need to jump to Turing for little to no gain at the same price. I don't think that is too hard to comprehend, and many took it the same way. The many promises of Turing have been very slow to materialize; while I like Nvidia taking chances, I do not like how Turing was promoted, and the overall performance did not warrant an upgrade. People have many reasons for not going ahead, so everyone will be different. Ampere makes more sense on my end to look at when available, which also means AMD as well. In my case it will be a high-end, high-resolution, high-refresh monitor/GPU combination. Zero reason to upgrade with current monitors, and the monitor I really want is more like the 2nd half of this year. We will see.
980 Ti was only 30% faster than 780 Ti. What was impressive about Maxwell was the performance they were able to squeeze out of 28nm when everyone thought it was already being pushed to its limit.
 
980 Ti was only 30% faster than 780 Ti. What was impressive about Maxwell was the performance they were able to squeeze out of 28nm when everyone thought it was already being pushed to its limit.
30%, but then Maxwell overclocked so well it pushed that past 40% easily - slaughtering AMD's Fury cards to no end - it was an embarrassing time for AMD.
 
30%, but then Maxwell overclocked so well it pushed that past 40% easily - slaughtering AMD's Fury cards to no end - it was an embarrassing time for AMD.
True, but this was also the last generation from NVIDIA where one could easily up the power limit. Even without breaking the power limit, though, I believe most easily overclocked to 1.5 GHz from 1.05 GHz.
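(Quick math on that:)

```python
# Quick check on the clock numbers above.
base_ghz, oc_ghz = 1.05, 1.5
print(f"{base_ghz} -> {oc_ghz} GHz is a {(oc_ghz - base_ghz) / base_ghz:.0%} clock bump")
```

A ~43% clock bump lines up with the "past 40% easily" claim above.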
 
True, but this was also the last generation from NVIDIA where one could easily up the power limit. Even without breaking the power limit, though, I believe most easily overclocked to 1.5 GHz from 1.05 GHz.
Maxwell killed AMD in the mobile market, followed up by Pascal - literally slaughtered them. Now, with Turing versus AMD's Navi, AMD is making a comeback. How long and how successful that comeback is will depend on Ampere, which should be more efficient than Turing. Then again, AMD will have refreshes as well, plus RDNA 2 will come out, so AMD may continue its recovery in the mobile market. Long way to go there.
 