RTX 3xxx performance speculation

Is this what you used? If so, it's an extremely rough estimate based on theoretical performance rather than actual performance. You can probably tell that by looking at where the 590 and 6990 sit in the lineup.
Both of these put out great performance, but only one of them had great frametimes ;)
 
Here is the data table behind the contested chart. I used the performance summaries from TPU, comparing the highest common resolution tested between successive generations, so it is a "best case" scenario of GPU-bound performance. (A quick sketch of the math follows the table.)

Card          Factor from 6800 Ultra    Factor from Prior Card
6800 Ultra    1.00                      1.00
7900 GTX      1.76                      1.76
8800 GTX      3.06                      1.74
9800 GTX      3.03                      0.99
GTX 285       4.81                      1.59
GTX 480       6.68                      1.39
GTX 580       7.28                      1.09
GTX 680       8.66                      1.19
GTX 780 Ti    12.03                     1.39
GTX 980 Ti    17.19                     1.43
GTX 1080 Ti   31.83                     1.85
RTX 2080 Ti   44.20                     1.39

The chart is pretty useless though. It's not very accurate and even if it was accurate it doesn't really show the whole story.

First, let's look at the generations that drag the average down below 40%, and the reasons for that.

8800 GTX to 9800 GTX - The 9800 GTX was just a rebranded 8800 GTS.
GTX 480 to GTX 580 - No real changes; the 580 was basically a fixed 480.
GTX 580 to GTX 680 to 780 Ti to 980 Ti - I'll take all of these together. First-generation Kepler had no high-end card; the 680 was really a midrange card. The 780 Ti was second-generation Kepler. The 980 Ti was on the same 28nm process but a new architecture.
1080 Ti to 2080 Ti - Different process and architecture, but not a die shrink.

Now let's look at the other side of things.

6800 to 7900 GTX - Die shrink and new architecture.
7900 GTX to 8800 GTX - New architecture. Same process, but the 8800 GTX had over double the die size and transistor count of the 7900 GTX.
9800 GTX to GTX 285 - Die shrink and new architecture.
980 Ti to 1080 Ti - You could call it a double die shrink: it skipped 20nm and went from 28nm to 16nm (and 14nm), with a new architecture.

I hope you can see the point I am making. Every time there has been a die shrink and a new architecture, there has been a 40%+ increase in performance. Now the 3xxx cards are coming, and they are going to bring both a die shrink and a new architecture. So if we are trying to estimate their performance, should we really be looking at past generations where there was either no die shrink or no new architecture?

This is why I will be disappointed if the performance increase isn't around 40% over Turing; it's what has happened nearly every time there has been both a die shrink and a new architecture.
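
To put a rough number on it, here is a quick back-of-the-envelope sketch using the per-generation factors from the table above and the grouping from my breakdown (the GTX 285 to GTX 480 jump isn't discussed either way, so it's left unclassified; adjust the tags if you read the history differently).

```python
from statistics import geometric_mean

# Generational speedups from the table, tagged with the grouping above:
# True  = the die-shrink / major-redesign generations,
# False = the generations called out as dragging the average down,
# None  = not discussed either way (GTX 285 -> GTX 480).
generations = {
    "6800 Ultra  -> 7900 GTX":    (1.76, True),
    "7900 GTX    -> 8800 GTX":    (1.74, True),   # same process, double the die
    "8800 GTX    -> 9800 GTX":    (0.99, False),  # rebrand
    "9800 GTX    -> GTX 285":     (1.59, True),
    "GTX 285     -> GTX 480":     (1.39, None),
    "GTX 480     -> GTX 580":     (1.09, False),  # fixed Fermi
    "GTX 580     -> GTX 680":     (1.19, False),  # midrange Kepler die
    "GTX 680     -> GTX 780 Ti":  (1.39, False),
    "GTX 780 Ti  -> GTX 980 Ti":  (1.43, False),  # new arch, same 28nm
    "GTX 980 Ti  -> GTX 1080 Ti": (1.85, True),
    "GTX 1080 Ti -> RTX 2080 Ti": (1.39, False),  # new arch, no real shrink
}

big_jumps  = [f for f, tag in generations.values() if tag is True]
weak_jumps = [f for f, tag in generations.values() if tag is False]

print(f"shrink/redesign generations: {geometric_mean(big_jumps):.2f}x on average")
print(f"drag-down generations:       {geometric_mean(weak_jumps):.2f}x on average")
```

With these numbers the shrink/redesign group averages roughly 1.7x per generation and the rest roughly 1.2x, which is basically the 40%+ split I'm describing.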
 
I hope you can see the point I am making. Every time there has been a die shrink and a new architecture, there has been a 40%+ increase in performance. Now the 3xxx cards are coming, and they are going to bring both a die shrink and a new architecture. So if we are trying to estimate their performance, should we really be looking at past generations where there was either no die shrink or no new architecture?

Ultimately, NVidia isn't forced to deliver on anyone's expectations, and the economics of silicon have changed drastically since past releases. Those poor releases where NVidia did little are also instructive: often they were deliberate choices not to chase significant improvements.

It's up to NVidia how big and how expensive they want to push dies this time, and how they want to arbitrarily segment the market on pricing.

With each generation they have been pushing up the pricing on each numerical tier. Is it really a great benefit if the 3060 increases by 40%, but then costs $500?

It's all up in the air right now.

But it should be an interesting release cycle with new AMD and NVidia parts dropping nearly together.
 
I predict 10% faster in general, but 30% increase with RT enabled.... just pulled it out of my @$$, but hey, this is a speculation thread :).
 
This is why I will be disappointed if the performance increase isn't around 40% over Turing; it's what has happened nearly every time there has been both a die shrink and a new architecture.
Why would they push it that far? There is no reason for them to do this. They just have to stay ahead of AMD (and they are far ahead already).
They are really competing against themselves right now.
 
With each generation they have been pushing up the pricing on each numerical tier. Is it really a great benefit if the 3060 increases by 40%, but then costs $500?
I think this is an important component of the discussion. Can you really compare the 1080 Ti to the 2080 Ti when the 2080 (non-Ti) launched at a higher price than the 1080 Ti? (Rough perf-per-dollar numbers after the list below.)

2013 - 780 Ti launches at $700
2015 - 980 Ti launches at $650
2017 - 1080 Ti launches at $700
2018 - 2080 Ti launches at "$1000" but sells for $1200-1300
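
To make that concrete, here is a rough perf-per-dollar sketch using the launch prices above and the per-generation factors from the earlier table. The $1,200 for the 2080 Ti is the low end of the street-price range quoted above, so treat it as an assumption.

```python
# Launch price and speedup over the previous card in the earlier table,
# per the figures quoted in this thread. The 2080 Ti price uses the low
# end of the quoted street-price range rather than the "$1000" MSRP.
flagships = [
    # (card, price in USD, speedup vs previous card in the table)
    ("GTX 780 Ti",  700,  1.39),
    ("GTX 980 Ti",  650,  1.43),
    ("GTX 1080 Ti", 700,  1.85),
    ("RTX 2080 Ti", 1200, 1.39),
]

prev_price = None
for card, price, speedup in flagships:
    if prev_price is None:
        print(f"{card}: ${price} (baseline)")
    else:
        price_factor = price / prev_price          # how much the price moved
        perf_per_dollar = speedup / price_factor   # value change vs prior card
        print(f"{card}: ${price}, {speedup:.2f}x faster, "
              f"{price_factor:.2f}x the price, "
              f"{perf_per_dollar:.2f}x perf/$ vs prior flagship")
    prev_price = price
```

With those inputs the 2080 Ti comes out well behind on perf per dollar, even though the raw performance jump looks respectable.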
 
Why would they push it that far? There is no reason for them to do this. They just have to stay ahead of AMD (and they are far ahead already).
They are really competing against themselves right now.

Correct - right now they are competing against themselves. And the only way for them to stay in this dominant position is to keep pushing themselves to the max. They can take a look at Intel and see what happens when you get too comfortable.
 
Correct - right now they are competing against themselves. And the only way for them to stay in this dominant position is to keep pushing themselves to the max. They can take a look at Intel and see what happens when you get too comfortable.
They can still push themselves but only release a little bit faster card and save some for a refresh. I just do not see them pushing it to the max and selling it all at once.
 
Correct - right now they are competing against themselves. And the only way for them to stay in this dominant position is to keep pushing themselves to the max. They can take a look at Intel and see what happens when you get too comfortable.

Not really the same at all. Intel imploded when trying to advance to the next process node. By some analyses they were too aggressive, pushing some feature sizes smaller than was sensible without EUV, and trying to do it before EUV was an option. That's not a case of holding back; that's a case of pushing too hard, before the technology was ready.

On the outside it looks like Intel resting on its laurels, but the reality is they bet everything on being early and denser on the new node, and that bet failed spectacularly.
 
Not really the same at all. Intel imploded when trying to advance to the next process node. By some analyses they were too aggressive, pushing some feature sizes smaller than was sensible without EUV, and trying to do it before EUV was an option. That's not a case of holding back; that's a case of pushing too hard, before the technology was ready.

On the outside it looks like Intel resting on its laurels, but the reality is they bet everything on being early and denser on the new node, and that bet failed spectacularly.

Right, so they were pushing (aggressively) one path forward without having a backup plan or plans. Clearly they were too comfortable in their ability to push the new node, and they failed. They did not strategically create other ways out for themselves. What happened to Andy Grove's "only the paranoid survive" mantra? Clearly they were not paranoid enough...
 
Right, so they were pushing (aggressively) one path forward without having a backup plan or plans. Clearly they were too comfortable in their ability to push the new node, and they failed. They did not strategically create other ways out for themselves. What happened to Andy Grove's "only the paranoid survive" mantra? Clearly they were not paranoid enough...

What's the alternative way out? Either you can get the new process working well enough, or you can't. Ask GlobalFoundries about their 7nm backup plan...
 
What's the alternative way out? Either you can get the new process working well enough, or you can't. Ask GlobalFoundries about their 7nm backup plan...

Clearly Intel f***ed up against AMD; I hope you are not going to argue that point. Obviously there are things Intel could have done better so as not to lose such a big competitive lead over AMD. Their CEO admitted as much at Fortune's Brainstorm Tech conference: "The short story is we learned from it," Swan said, and Intel has reportedly reorganized itself for greater transparency and better information exchange between business units to avoid these problems in the future. So clearly, at the very least, there was a communication problem within the company that contributed to this, by Intel's own admission. I am not an insider, so I do not know exactly what and how many things they f***ed up on, but clearly they did, or AMD would not be where they are today.

My point is: if AMD had been hot on their trail, and if Intel had felt they were fighting for survival back in 2011, 2012, 2013, 2014, 2015, etc., would they have made as many of the mistakes that placed them in this situation? Also, unlike GlobalFoundries, Intel could have had a backup plan to outsource some of their chip manufacturing until they figured out their node issues. If your traditional area of core strength (fabs) becomes a liability, would it be too crazy to have a strategic plan for how to get around that until you either fix the issues or spin the fabs off? As I said, I am not an insider, so I can't list all the internal failures that led to this, but clearly there could have been a better / faster way to address these issues. And a bit of extra paranoia induced by AMD could have spurred Intel to act faster and smarter on whatever internal issues they were dealing with.
 
Why would they push it that far? There is no reason for them to do this. They just have to stay ahead of AMD (and they are far ahead already).
They are really competing against themselves right now.
They need to make RT fast enough to push sales. People want RT, but not the current crop of cards, because the performance is too low or the price is too high.
The next gen has to be a serious jump, otherwise only the very top card will be worth having, which again will be priced too high.
And more RT features will be implemented, which will need better performance.
They can't slouch on this.
 
They need to make RT fast enough to push sales. People want RT, but not the current crop of cards, because the performance is too low or the price is too high.
The next gen has to be a serious jump, otherwise only the very top card will be worth having, which again will be priced too high.
And more RT features will be implemented, which will need better performance.
They can't slouch on this.

That's assuming that RT is actually implemented in a timely fashion. This pattern of not releasing RT features until months after a game's release (if at all) makes Jensen's August 2018 launch presentation look like a sick joke to anyone who bought a 2XXX card expecting new and useful features anytime before the 3XXX parts launched. If nothing else, I hope that the new console releases bring some urgency to the release of RT because the 2XXX cards certainly didn't.
 
That's assuming that RT is actually implemented in a timely fashion. This pattern of not releasing RT features until months after a game's release (if at all) makes Jensen's August 2018 launch presentation look like a sick joke to anyone who bought a 2XXX card expecting new and useful features anytime before the 3XXX parts launched. If nothing else, I hope that the new console releases bring some urgency to the release of RT because the 2XXX cards certainly didn't.
Which reinforces the point.
For NVidia this must be a success; nothing can be allowed to let RT fail or be put on the back burner.
Next-gen cards have to perform well enough, at a price that tempts people.
 
Anyone who is not is shortsighted. It's absolutely where things are headed. Nice job taking things too literally, though!
Sure, they can, but is it worth it? I do not care if it had full RT at 1000000 fps, if the game is fast moving, who cares!!!!!
 
Sure, they can, but is it worth it? I do not care if it had full RT at 1000000 fps, if the game is fast moving, who cares!!!!!
Most people love more graphical fidelity regardless of the game. Nice goalpost move, though.
 
Most people love more graphical fidelity regardless of the game. Nice goalpost move, though.
The goalposts are the same; they always have been. It's not great for fast-paced games. Unless you stop when everything is dead and look around, it's not going to be seen.
 
Well I guess you could, but then you will die in the game. But you will have pretty stuff to look at while on the ground bleeding........
Your original complaint though was devs don't care about raytracing. Let's return to that.
 
I was implying they ALL are not doing it since it's still new.
Just ask if you do not understand.
Oh, everyone understood, given your post history of bashing raytracing. All studios have it on their radar, which is what was claimed. He didn't say everyone was working on it right now.
 
Oh, everyone understood, given your post history of bashing raytracing. All studios have it on their radar, which is what was claimed. He didn't say everyone was working on it right now.
Bashing, no. I would not mind it in slow games where FPS and paying attention to the action is not important.
 
Bashing, no. I would not mind it in slow games where FPS and paying attention to the action is not important.
It's not like people don't love graphics on fast games too. That's your opinion however so I can't really argue it... :).
 
Well I guess you could, but then you will die in the game. But you will have pretty stuff to look at while on the ground bleeding........

Do higher resolutions, tessellation, high-res textures, AA, AO, shadows, particle effects, physics simulation, etc. also make you die? In that case you should really only play games using 2D sprites. That way you will never be distracted by "graphics".
 
Do higher resolutions, tessellation, high-res textures, AA, AO, shadows, particle effects, physics simulation, etc. also make you die? In that case you should really only play games using 2D sprites. That way you will never be distracted by "graphics".
If it makes your fps drop too much, then yes, it could. ;)
 