RTX 3xxx performance speculation

The anatomy of a typical RT effect is one part intersection testing (RT cores - the actual tracing of rays), one part shading those results (traditional cores), and one part denoising (contrary to NVidia's claims, usually traditional cores). More of that RT effect's frame time is spent dealing with the output of the RT cores than actually in the RT cores.

It also highlights why spending precious die space mostly on boosting RT HW has poor payback. A balanced approach makes much more sense given how much more traditional HW affects frame rates. You want enough RT HW that it isn't the bottleneck. That has already been accomplished, to the point that traditional cores are more the bottleneck now.

For Ampere, or whatever the next gen GPUs are called, maybe Nvidia has finally moved the denoising part of the pipeline over to the Tensor cores.

RT cores only take up a tiny amount of Die Space.

Surely the 3060 matching the 2080Ti in ray traced performance is a realistic target? Especially when you consider how small a jump Turing was over Pascal in normal rendering. Unlike others in this thread, I think Nvidia are going to go big on ray traced performance this generation. I think they are going to make sure that the owners of Pascal and lower cards will want to make the switch. And I bet they are going to make sure there is enough RT performance to entice Turing owners to upgrade as well.
 
For Ampere, or whatever the next gen GPUs are called, maybe Nvidia has finally moved the denoising part of the pipeline over to the Tensor cores.

NVidia already promotes denoising on Tensor cores; developers prefer alternatives using conventional cores.

RT cores only take up a tiny amount of Die Space.

Like 10%? Now if you add 4x as many RT cores, how much space are they taking up?

Surely the 3060 matching the 2080Ti in ray traced performance is a realistic target?

You didn't really read the previous posts, did you? You could magically do the actual ray tracing part for free, and you still might only get 10% faster ray tracing performance in games.
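To make that concrete, here is a minimal Amdahl's-law style sketch. The 20% RT share of frame time is an assumption for illustration, taken from the rough breakdown described earlier in the thread:

```python
# Amdahl's-law style estimate: making only the RT-core portion of a
# frame faster has limited effect when most frame time is spent elsewhere.

def frame_speedup(rt_fraction: float, rt_speedup: float) -> float:
    """Overall frame speedup when only the RT portion gets faster."""
    return 1.0 / ((1.0 - rt_fraction) + rt_fraction / rt_speedup)

# Assume intersection testing is 20% of an RT-enabled frame (illustrative).
print(f"{frame_speedup(0.20, 4.0):.2f}x")           # 4x faster RT -> ~1.18x
print(f"{frame_speedup(0.20, float('inf')):.2f}x")  # RT for free -> 1.25x cap
```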

I think they are going to make sure that the owners of Pascal and lower cards will want to make the switch. And I bet they are going to make sure there is enough RT performance to entice Turing owners to upgrade as well.

The biggest thing they can do to entice holdouts is increase standard raster performance, which many felt didn't get the upgrade it needed in Turing.

The best approach is a balanced one. They may tweak the RT core balance a bit, but they really need to boost traditional performance in proportion to any RT boosts, because traditional cores are the main occupants of frame time in both ray traced and raster games.

If you concentrate too much on RT cores, that comes at the detriment of traditional cores, and weaker traditional cores bottleneck every kind of performance.
 
Bottom line is the more we can move away from rasterization, the better ray tracing will become.
 
What I'd be curious to know is how seriously nVidia is taking the threat of the Series X and PS5. Gaming is no longer a niche hobby, and while the high end market is still reasonably small, it's competing with consoles more than ever before (mid-range cards even more so). I know many gamers (especially older ones whose time has become far more limited in recent years) who are struggling with which direction to go. Do they buy the new console for $500/$600, get near identical performance for the next 18 months, and then upgrade the video card when the PC starts pulling ahead of consoles again, or just upgrade the video card? nVidia is going to have to give (most) people a real good reason to spend an extra $300 on a 3080 or the equivalent on a 3070.
 
What I'd be curious to know is how seriously nVidia is taking the threat of the Series X and PS5.
Series X is likely to boost Nvidia's sales as games will run on desktops and Nvidia has the better hardware... the PS5 is no different than the previous PlayStation consoles.
 
Series X is likely to boost Nvidia's sales as games will run on desktops and Nvidia has the better hardware... the PS5 is no different than the previous PlayStation consoles.

nVidia does have the better hardware, but the overhead of running a game on a PC vs a console is much higher, and every new console generation closes the hardware gap between it and the PC. Saying the PS5 is no different than previous PlayStation consoles is like saying nVidia's 3000 series cards will be no different than any of the previous cards. Feature for feature, these new consoles are closer to high-end video cards than ever before. With the IO improvements in both, you're probably looking at consoles actually having improved performance over current PCs.

I've been gaming on PCs and consoles since 1986; this generation is not like all the other generations.
 
nVidia does have the better hardware, but the overhead of running a game on a PC vs a console is much higher, and every new console generation closes the hardware gap between it and the PC. Saying the PS5 is no different than previous PlayStation consoles is like saying nVidia's 3000 series cards will be no different than any of the previous cards. Feature for feature, these new consoles are closer to high-end video cards than ever before. With the IO improvements in both, you're probably looking at consoles actually having improved performance over current PCs.

I've been gaming on PCs and consoles since 1986; this generation is not like all the other generations.
At this time we cannot judge what Nvidia will have next generation, nor the performance of the next generation consoles. The next gen consoles do look very promising, but they could also be overhyped at this stage. Nvidia usually does extremely well on major node changes, so we may be pleasantly surprised with Ampere, or not. That is just how I see it at this time.

I agree with you that the jump from one gen to the next for the coming consoles looks rather major. Still, PC gamers usually have an arsenal of games ready to play on the PC, plus the investment, time, etc. I'm not sure many will just jump ship; maybe some will be enticed to try it out, and I may. The last time I bought a console, an Xbox 360, I never completed a game on it and it sat unused most of the time until I sold it. For the hardcore PC gamer, I don't see a big move, at least yet; maybe some will try it out is all.

As for the less PC-centric crowd of gamers running mid tier and below GPUs, I can see the next gen consoles upsetting things. For the non-builders, those who buy prebuilt and usually subpar gaming PCs, yeah, the next gen consoles may put those prebuilt PCs out of business, except for the extreme high end ones.
 
NVidia already promotes denoising on Tensor cores; developers prefer alternatives using conventional cores.

Nvidia's AI-based denoising using Tensor cores wasn't good enough, just like the original DLSS. If they can improve it like they did with DLSS 2.0, then developers might use it and get better performance.

Like 10%? Now if you add 4x as many RT cores, how much space are they taking up?

You didn't really read the previous posts, did you? You could magically do the actual ray tracing part for free, and you still might only get 10% faster ray tracing performance in games.

You didn't read my post; you just got up on your high horse to do your usual shouting down at people. Where did I mention 4x the number of RT cores? Where did I say anything about magical free performance?

The biggest thing they can do to entice holdouts is increase standard raster performance, which many felt didn't get the upgrade it needed in Turing.

The best approach is a balanced one. They may tweak the RT core balance a bit, but they really need to boost traditional performance in proportion to any RT boosts, because traditional cores are the main occupants of frame time in both ray traced and raster games.

If you concentrate too much on RT cores, that comes at the detriment of traditional cores, and weaker traditional cores bottleneck every kind of performance.


Again, you plainly just wanted to talk down to somebody and didn't bother reading my post. I didn't mention concentrating on RT cores; I said that they were going to go all in to make a big improvement in ray tracing, whatever that takes. I even specifically mentioned the poor jump in normal rendering between Pascal and Turing as a reason the 3060 could possibly match the 2080Ti's ray tracing performance.
 
You didn't read my post; you just got up on your high horse to do your usual shouting down at people. Where did I mention 4x the number of RT cores? Where did I say anything about magical free performance?

I read your post. Since you're defending the rumor we are discussing, which claimed 4X RT performance (AKA 3060 = 2080Ti RT performance), it's fair game to point out that it doesn't make sense and would occupy too much die area.

Again, you plainly just wanted to talk down to somebody and didn't bother reading my post. I didn't mention concentrating on RT cores; I said that they were going to go all in to make a big improvement in ray tracing, whatever that takes. I even specifically mentioned the poor jump in normal rendering between Pascal and Turing as a reason the 3060 could possibly match the 2080Ti's ray tracing performance.

It's not talking down to clarify that you need to basically match a 2080Ti in both raster and RT units to deliver on that claim.

It's not reasonable to think the 3060 is going to deliver full 2080Ti performance. That would pretty much be the biggest jump in GPU performance ever.

That is NOT going to happen. Zero chance.
 
It's not talking down to clarify that you need to basically match a 2080Ti in both raster and RT units to deliver on that claim.

It's not reasonable to think the 3060 is going to deliver full 2080Ti performance. That would pretty much be the biggest jump in GPU performance ever.

That is NOT going to happen. Zero chance.
Looking at the image below of a single frame being rendered in Metro Exodus, those RT cores are doing nothing most of the time. That is die space that sits idle most of the time:

[Image: frame-time breakdown of a single frame in Metro Exodus]


If AMD or Nvidia integrated other useful functionality into them, usage would be higher than the current rather limited use case: RT cores that are 90% inactive, and that also leave the rest of the GPU idle or much less active, would become die space that is much more useful. Everything stops while the lighting intersections are being determined after geometry setup, plus there is the communication of data from the RT cores to the shader cores. I would think integrating that functionality and data into the shader cores themselves (if the performance can be maintained), or using it throughout more of the rendering process, would increase efficiency, keep more of the die space active, and increase performance.
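As a toy model of that stall (all stage times below are invented for illustration, not measured from any real GPU), compare fully serialized stages against overlapping the RT work with shading:

```python
# Toy frame-time model in milliseconds; the numbers are made up purely
# to illustrate the stall described above.
geometry, rt_trace, shading, denoise = 3.0, 2.5, 6.0, 1.5

# Today (per the frame capture): shader cores wait while RT cores trace.
serialized = geometry + rt_trace + shading + denoise

# If tracing could run concurrently with shading, its cost would hide.
overlapped = geometry + max(rt_trace, shading) + denoise

print(f"serialized: {serialized:.1f} ms ({1000 / serialized:.0f} fps)")
print(f"overlapped: {overlapped:.1f} ms ({1000 / overlapped:.0f} fps)")
```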

Since you said it is more rasterization limited, why wouldn't a 3060 at least beat a 2080 Super in RT performance? The 2080 Super is 29% faster than a 2060 Super in non-RT games (straight rasterization), and the 2080 Ti is 43% faster than a 2060 Super (TechPowerUp GPU database). Is Ampere's rasterization improvement going to be less than 30%?
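Plugging those TechPowerUp numbers into a quick sketch (the generational uplifts tried below are assumptions, not leaks):

```python
# Where a hypothetical 3060 would land, using the TPU relative-performance
# figures quoted above (2060 Super = 1.00 baseline).
baseline = {"2060 Super": 1.00, "2080 Super": 1.29, "2080 Ti": 1.43}

for gen_gain in (1.30, 1.40, 1.50):  # assumed tier-for-tier Ampere uplifts
    rtx3060 = baseline["2060 Super"] * gen_gain
    print(f"+{gen_gain - 1:.0%} over the 2060 Super -> {rtx3060:.2f}x")
# +30% roughly matches the 2080 Super (1.29x); about +43% would be
# needed to match the 2080 Ti (1.43x) in rasterization.
```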

The real answer is we do not know; we are all just guessing. I do expect the 3060 will at least outperform the 2080 Super in RT performance. Also a much better, easier to implement DLSS, making the performance useful. AMD's solution may be far more heuristic in approach; we just don't know.
 
But how does it look in a game with good ray tracing? All Metro has is global illumination, added so they could tick the RTX checkbox. It's probably the worst example.
 
But how does it look in a game with good ray tracing? All Metro has is global illumination, added so they could tick the RTX checkbox. It's probably the worst example.
Good question: what game has good ray tracing? The only one I know of is Minecraft RTX, with a very rich feature set using ray tracing, which also clearly shows current hardware limitations given the rather simple overall scene geometry.
 
Good question: what game has good ray tracing? The only one I know of is Minecraft RTX, with a very rich feature set using ray tracing, which also clearly shows current hardware limitations given the rather simple overall scene geometry.

Battlefield V would be a good game to test. It has ray traced reflections. That's the most impactful RTX feature.
 
Looking at the image below of a single frame being rendered in Metro Exodus, those RT cores are doing nothing most of the time. That is die space that sits idle most of the time:
If AMD or Nvidia integrated other useful functionality into them, usage would be higher than the current rather limited use case: RT cores that are 90% inactive, and that also leave the rest of the GPU idle or much less active, would become die space that is much more useful.

If (magic hand waving) happened, would that be good? Sure. But there is no sign of (magic hand waving). Intersection testing is highly intensive and has highly specialized HW to accelerate it. If you want general HW, that is what CUDA cores are for.

Since you said it is more rasterization limited, why wouldn't a 3060 at least beat a 2080 Super in RT performance? The 2080 Super is 29% faster than a 2060 Super in non-RT games (straight rasterization), and the 2080 Ti is 43% faster than a 2060 Super (TechPowerUp GPU database). Is Ampere's rasterization improvement going to be less than 30%?

That is moving the goal posts. The rumor claim was vs the 2080 Ti; compare that to where the 2060 sits today. Moving the x60 series to x80 Ti level in a single generation would be an unprecedented jump, at least in the last decade or so of slowing process gains.

I wouldn't even count on the 3060 beating a 2080 Super. We have nothing but sketchy rumors right now.

And if the 3060 beat the 2080 Super, NVidia would probably charge at least $600 for it.

Whatever performance gains are coming, I wouldn't expect big jumps in perf/$, unless AMD has similar parts and is willing to get more aggressive than it has been on price in recent years.
 
If (magic hand waving) happened, would that be good? Sure. But there is no sign of (magic hand waving). Intersection testing is highly intensive and has highly specialized HW to accelerate it. If you want general HW, that is what CUDA cores are for.



That is moving the goal posts. The rumor claim was vs the 2080 Ti; compare that to where the 2060 sits today. Moving the x60 series to x80 Ti level in a single generation would be an unprecedented jump, at least in the last decade or so of slowing process gains.

I wouldn't even count on the 3060 beating a 2080 Super. We have nothing but sketchy rumors right now.

And if the 3060 beat the 2080 Super, NVidia would probably charge at least $600 for it.

Whatever performance gains are coming, I wouldn't expect big jumps in perf/$, unless AMD has similar parts and is willing to get more aggressive than it has been on price in recent years.
Did you not state it is rasterization limited rather than RT core limited? I'm just saying that if the 3060's rasterization performance improves 30%, which would be disappointing for a large node change, why wouldn't RT improve with it? That would be 2080 Super performance level. I would think more in line with a 40% improvement this time around. I never said 2080 Ti level.

No, I don't think Nvidia will charge the same price for the same performance level... wait, they did that already - never mind.
 
Edit: keep politics out of this thread please.

I think the establishment will limit the 3080ti. If Trump wins, it will be 50% faster. If Biden wins, maybe 40% faster as the liberals will try to give some of your hard-earned performance to the illegal aliens through forced distributed computing.

More than likely, SJWs and the mainstream media will push for a more 'gender neutral' design for the card.
 
I think the establishment will limit the 3080ti. If Trump wins, it will be 50% faster. If Biden wins, maybe 40% faster as the liberals will try to give some of your hard-earned performance to the illegal aliens through forced distributed computing.

More than likely, SJWs and the mainstream media will push for a more 'gender neutral' design for the card.
Epic.
 
I think the establishment will limit the 3080ti. If Trump wins, it will be 50% faster. If Biden wins, maybe 40% faster as the liberals will try to give some of your hard-earned performance to the illegal aliens through forced distributed computing.

More than likely, SJWs and the mainstream media will push for a more 'gender neutral' design for the card.

Thanks for this. When can we get a Brady Bunch style panel of 10 people on the screen who each have 3 seconds to tell us why the 3080ti will be good or bad?
 
What I'd be curious to know is how seriously nVidia is taking the threat of the Series X and PS5. Gaming is no longer a niche hobby, and while the high end market is still reasonably small, it's competing with consoles more than ever before (mid-range cards even more so). I know many gamers (especially older ones whose time has become far more limited in recent years) who are struggling with which direction to go. Do they buy the new console for $500/$600, get near identical performance for the next 18 months, and then upgrade the video card when the PC starts pulling ahead of consoles again, or just upgrade the video card? nVidia is going to have to give (most) people a real good reason to spend an extra $300 on a 3080 or the equivalent on a 3070.
[reaction gif]
 
I read your post. Since you're defending the rumor we are discussing, which claimed 4X RT performance (AKA 3060 = 2080Ti RT performance), it's fair game to point out that it doesn't make sense and would occupy too much die area.



It's not talking down to clarify that you need to basically match a 2080Ti in both raster and RT units to deliver on that claim.

It's not reasonable to think the 3060 is going to deliver full 2080Ti performance. That would pretty much be the biggest jump in GPU performance ever.

That is NOT going to happen. Zero chance.

Oh, and is it fair game to make up things that I said? 4X RT cores? Nope, never said that. Focus on RT cores? Nope, never said that either.

By your own arguments, 4 times the RT performance does not require 4 times the number of RT cores. So please tell me why it would occupy too much die area?

You don't even seem to think that the 3060 will reach 2080 Super levels of performance in ray tracing. That would mean that the 3xxx cards are a complete fail. It would mean that the raster performance improvement over Turing would be less than 40% and there would be no improvements over Turing in any aspect of their design.

And don't forget this isn't just a new process, it's a decent die shrink as well.
 
Oh, and is it fair game to make up things that I said? 4X RT cores? Nope, never said that. Focus on RT cores? Nope, never said that either.

Already explained. You were defending a rumor that said that. Can't you read?

By your own arguments, 4 times the RT performance does not require 4 times the number of RT cores. So please tell me why it would occupy too much die area?

It doesn't require only 4 times the RT cores, it would require that and 4 times the shader cores as well.

You don't even seem to think that the 3060 will reach 2080 Super levels of performance in ray tracing. That would mean that the 3xxx cards are a complete fail. It would mean that the raster performance improvement over Turing would be less than 40% and there would be no improvements over Turing in any aspect of their design.

It wouldn't be a fail, it would be typical.

Edit: Armenius posted this earlier in the thread. In over a decade, only two NVidia GPU generations increased performance over 40%. Only one was greater than 50%, and that was the GTX 10 series.
[Chart: NVidia generation-over-generation performance gains]

If the average is less than 40% increases, why do people develop expectations of 40% gains or more? Because that's what they want to believe.

Falling for stories, without any real evidence, just because they tell you things you want to hear, is how you get conned.
 
Already explained. You were defending a rumor that said that. Can't you read?



It doesn't require only 4 times the RT cores, it would require that and 4 times the shader cores as well.



It wouldn't be a fail, it would be typical.

Edit: Armenius posted this earlier in the thread. In over a decade, only two NVidia GPU generations increased performance over 40%. Only one was greater than 50%, and that was the GTX 10 series.
[Chart: NVidia generation-over-generation performance gains]

If the average is less than 40% increases, why do people develop expectations of 40% gains or more? Because that's what they want to believe.

Falling for stories, without any real evidence, just because they tell you things you want to hear, is how you get conned.

That's a fail chart.
Back when those cards existed, 1440p and especially 4K were not even considered.
Most people gamed at 720p and 1080p (and pro gamers did 1200p).
4K is massively more demanding.
 
Already explained. You were defending a rumor that said that. Can't you read?



It doesn't require only 4 times the RT cores, it would require that and 4 times the shader cores as well.



It wouldn't be a fail, it would be typical.

Edit: Armenius posted this earlier in the thread. In over a decade, only two NVidia GPU generations increased performance over 40%. Only one was greater than 50%, and that was the GTX 10 series.
[Chart: NVidia generation-over-generation performance gains]

If the average is less than 40% increases, why do people develop expectations of 40% gains or more? Because that's what they want to believe.

Falling for stories, without any real evidence, just because they tell you things you want to hear, is how you get conned.


Just looking at that graph, at first glance it already looks off. The 8800 GTX was the biggest gain in performance vs. the previous gen, period. Anywhere from 50-100% over the 7900 GTX. The 285 to 480 looks a tad low as well. Still have to go through the rest...
 
Just looking at that graph, at first glance it already looks off. The 8800 GTX was the biggest gain in performance vs. the previous gen, period. Anywhere from 50-100% over the 7900 GTX. The 285 to 480 looks a tad low as well. Still have to go through the rest...
Yep. That chart is total BS and should be disregarded simply on account of that.
 
All I want is the new 3080ti or 3090X or whatever the fuck they call it to have twice the power of my 1080ti.

Likewise, the leaks didn't say that the 3060 is gonna have the same performance as a 2080Ti; they said it will have the same RAY TRACING performance. Meaning the rasterization performance likely isn't gonna be at 2080Ti levels, it'll just have more, or more efficient, RT cores. So, more fancy light reflections at less of a performance detriment, but still less performance overall.

Also, nothing on that chart makes any sense. I think it was an attempt at conveying data, but it failed miserably.

It also brought me to this hilarious comparison from Google.
https://hardforum.com/threads/6800-ultra-vs-8800gts-gtx.1181225/

(I think I went from a 6600GT to an 8800GTS, if I remember correctly. What a fucking change that was.)
 
Just looking at that graph, at first glance it already looks off. The 8800 GTX was the biggest gain in performance vs. the previous gen, period. Anywhere from 50-100% over the 7900 GTX. The 285 to 480 looks a tad low as well. Still have to go through the rest...

You are probably reading it wrong.

The graph shows the 8800GTX was >1.7x the previous generation. That is a huge jump, one of the biggest ever. The 7900GTX also had a similarly huge jump, but it isn't as well remembered as the 8800GTX, because that jump only caught up with AMD, whereas the 8800GTX jump pulled ahead of AMD.
 
You are probably reading it wrong.

The graph shows the 8800GTX was >1.7x the previous generation. That is a huge jump, one of the biggest ever. The 7900GTX also had a similarly huge jump, but it isn't as well remembered as the 8800GTX, because that jump only caught up with AMD, whereas the 8800GTX jump pulled ahead of AMD.

If that is the case then there are 4 cards with over a 50% performance gain over their predecessor. Also, the 1080Ti is 85% faster compared to the 980Ti...
 
If that is the case then there are 4 cards with over a 50% performance gain over their predecessor. Also, the 1080Ti is 85% faster compared to the 980Ti...

Note that I was talking about the last decade. In case you haven't noticed, Moore's Law isn't what it was 10 to 20 years ago.

In the last decade only the 1080Ti had a >50% jump. That's the anomaly now, not the norm.
 
Already explained. You were defending a rumor that said that. Can't you read?

So I think there is a possibility that the 3060 will get close to matching the 2080Ti in ray traced performance. I still never said that it would require 4 times the RT cores or that Nvidia should focus on RT cores. Argue against the points I actually make, not the ones you make up in your head. I never mentioned 4X anything until you brought it up.


It doesn't require only 4 times the RT cores, it would require that and 4 times the shader cores as well.

For that statement to be true, there would have to have been no improvement in raster performance, ray tracing performance, or the denoising process.

Just want to add: you are the one who's focused on 4x everything in our discussion. I never mentioned it; I will come back to this later.

Edit: Armenius posted this earlier in the thread. In over a decade, only two NVidia GPU generations increased performance over 40%. Only one was greater than 50%, and that was the GTX 10 series.
[Chart: NVidia generation-over-generation performance gains]

OK, I could go through this chart and point out the obvious flaws, but I will keep it simple. How many times in the last decade has there been both a meaningful die shrink and a new process?

The cards that are coming from Nvidia will again be on a new process and a decent die shrink.

But getting back to what I said earlier: I never mentioned 4x anything. What I said was that the 3060 matching the 2080Ti is a realistic target. The 2080Ti isn't 4 times the performance of the 2060 when it comes to ray tracing. The 2080Ti has 10 Gigarays, the 2060 has 5. In the latest Minecraft RTX, the 2080Ti gets ~22 fps at 4K, the 2060 gets 11 fps.
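Those two figures scale together almost exactly; a trivial check using only the numbers quoted above:

```python
# The Gigaray ratings and the quoted Minecraft RTX 4K framerates
# scale together 2:1 in both columns -> fps tracks the Gigaray rating.
specs = {"2060": (5, 11), "2080 Ti": (10, 22)}  # (Gigarays, ~fps at 4K)

gigaray_ratio = specs["2080 Ti"][0] / specs["2060"][0]
fps_ratio = specs["2080 Ti"][1] / specs["2060"][1]
print(gigaray_ratio, fps_ratio)  # 2.0 2.0
```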

And I still think it's a target that Nvidia will hit.

Falling for stories, without any real evidence, just because they tell you things you want to hear, is how you get conned.

Spare me your condescending nonsense, please.
 
  • Like
Reactions: noko
like this
What I said was that the 3060 matching the 2080Ti is a realistic target.

It's an absurdly out-to-lunch target.

x60 to x80 Ti in one generation is a bigger jump than even the GTX 10 series.

At best that is wishful thinking, AKA believing ridiculous stuff because it's what you want to happen.
 
I can buy the 3060 being somewhere around a vanilla 2080. It's a bit too much to think it's going to beat a 2080 Ti. The last die shrink is probably the optimistic benchmark, and even then the GTX 1060 was only within 10% of the GTX 980.
 
It's an absurdly out-to-lunch target.

x60 to x80 Ti in one generation is a bigger jump than even the GTX 10 series.

At best that is wishful thinking, AKA believing ridiculous stuff because it's what you want to happen.
He is talking about ray tracing performance in this context. You're the one who indicated raster performance was more or less the limiting factor. Are you indicating that rasterization performance will improve less than 30% next generation, with a new arch and a much faster node? Anyway, I expect a minimum of 40% improvement, and more like 50%. As you indicated, Pascal was around 85%, which is not impossible to achieve again. If Nvidia can fine-tune more and keep RT from stalling the rendering process while finding the intersections, then there is more room for RT performance improvements. With a 40% rasterization improvement, a 3060 should exceed 2080 Super rasterization performance and be closer to 2080 Ti ray tracing performance than to the 2080 Super. With a 50% rasterization improvement, I don't think reaper12 is being absurd at all. Other factors such as available RAM and bandwidth will also come into play.

I would like to see at least a 40%-50% rasterization performance improvement and much more for RT performance. If it is mediocre again, Nvidia will probably get the middle finger again, at least from me.
 
I would like to see at least a 40%-50% rasterization performance improvement and much more for RT performance. If it is mediocre again, Nvidia will probably get the middle finger again, at least from me.

I'd like to see lots of things; that doesn't make them more likely. Basing your beliefs on what you would like to happen is wishful thinking, not rational thinking.

To get much more RT performance (in total isolation, not in games), you need to alter the balance of cores much more in favor of RT cores vs raster cores, meaning you spend less transistor budget on raster cores. That makes it much harder to get a big raster boost, and you just end up moving the actual RT gaming performance bottleneck further onto the raster cores.

You need to carefully balance RT/Raster cores, and increase Raster/RT performance by similar proportions.
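As a crude sketch of that trade-off (the budget split and stage times below are invented purely for illustration):

```python
# Crude transistor-budget model: 100 units split between raster and RT.
# Assume each stage's time scales inversely with the units it gets, and
# the 8 ms / 2 ms split mirrors the raster-heavy frames described above.
BUDGET = 100.0
BASE_RASTER_MS, BASE_RT_MS = 8.0, 2.0  # at a 90/10 raster/RT split

def frame_ms(raster_units: float) -> float:
    rt_units = BUDGET - raster_units
    return (BASE_RASTER_MS * 90.0 / raster_units
            + BASE_RT_MS * 10.0 / rt_units)

for units in (90.0, 70.0, 50.0):
    print(f"raster {units:.0f}/100 of budget -> {frame_ms(units):.1f} ms/frame")
# Tilting the budget toward RT slows the dominant raster stage and
# lengthens the whole frame: 10.0 -> 11.0 -> 14.8 ms in this toy model.
```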
 
To get much more RT performance (in total isolation, not in games), you need to alter the balance of cores much more in favor of RT cores vs raster cores, meaning you spend less transistor budget on raster cores. That makes it much harder to get a big raster boost, and you just end up moving the actual RT gaming performance bottleneck further onto the raster cores.
This is what needs to happen, though. Or at least whatever it is going to take to significantly increase RT performance.

This time around, even the RTX35x0 cards will need to be able to do RT well at 1080p, and the 3080Ti needs to be able to rock 4k120.
 
This is what needs to happen, though. Or at least whatever it is going to take to significantly increase RT performance.

This time around, even the RTX35x0 cards will need to be able to do RT well at 1080p, and the 3080Ti needs to be able to rock 4k120.

People keep missing the point that raster cores, not RT cores, are more of a bottleneck on current RTX cards in RT enabled games.

If you spend 80% of a frame's time doing raster work and 20% doing RT work, you don't shift your GPU drastically in favor of reducing the 20% work.

That just doesn't make sense, especially when you factor in that most games are 100% raster and 0% RT. Concentrating on reducing the 80% work will give bigger frame-rate gains in RT games (and obviously raster-only games, where it's the only way to improve).

As far as RT enabled 3050 cards go, that is a reasonable assumption, but all that likely means is giving them overall 2060 (non-Super) performance. That's one step up the OVERALL performance ladder, which is a fairly normal generational upgrade.
 
My guess in the dark is an 18% improvement for gaming. The ray tracing cores, though? That is what will skyrocket. I'm gonna guess a 50%-100% improvement.
 
I'm guessing DLSS is what really salvages the performance, and it will be applicable to way more games. I think they're going to put more real estate into RT and give a pretty moderate bump to raster.
 
Here is the data table on the contested chart. I used the performance summary from TPU, comparing the highest common resolution tested between prior generations, so it is a "best case" scenario of GPU-bound performance.

Card          Factor from 6800 Ultra   Factor from Prior Card
6800 Ultra    1                        1
7900GTX       1.7560975                1.7560975
8800GTX       3.058536362              1.7416666
9800GTX       3.027950998              0.99
GTX 285       4.806271162              1.5873015
GTX 480       6.675376186              1.3888888
GTX 580       7.276160043              1.09
GTX 680       8.662094631              1.1904761
GTX 780 Ti    12.03068622              1.3888888
GTX 980 Ti    17.18669425              1.4285714
GTX 1080 Ti   31.82721069              1.8518518
RTX 2080 Ti   44.20445646              1.3888888
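For anyone who wants to sanity-check the averages being argued about, here is a quick script over the "Factor from Prior Card" column above (the "last decade" cutoff at the GTX 480 is my assumption):

```python
from statistics import geometric_mean

# "Factor from Prior Card" column, copied from the table above.
per_gen = {
    "7900GTX": 1.7560975, "8800GTX": 1.7416666, "9800GTX": 0.99,
    "GTX 285": 1.5873015, "GTX 480": 1.3888888, "GTX 580": 1.09,
    "GTX 680": 1.1904761, "GTX 780 Ti": 1.3888888, "GTX 980 Ti": 1.4285714,
    "GTX 1080 Ti": 1.8518518, "RTX 2080 Ti": 1.3888888,
}

print(f"all generations: {geometric_mean(list(per_gen.values())):.2f}x mean jump")

recent = list(per_gen.values())[4:]  # GTX 480 (2010) onward
print(f"last decade:     {geometric_mean(recent):.2f}x mean jump")
# Both means land around 1.4x, which is why >1.5x generational jumps
# are the exception in this data, not the rule.
```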
 
Here is the data table on the contested chart. I used the performance summary from TPU, comparing the highest common resolution tested between prior generations, so it is a "best case" scenario of GPU-bound performance.

Card          Factor from 6800 Ultra   Factor from Prior Card
6800 Ultra    1                        1
7900GTX       1.7560975                1.7560975
8800GTX       3.058536362              1.7416666
9800GTX       3.027950998              0.99
GTX 285       4.806271162              1.5873015
GTX 480       6.675376186              1.3888888
GTX 580       7.276160043              1.09
GTX 680       8.662094631              1.1904761
GTX 780 Ti    12.03068622              1.3888888
GTX 980 Ti    17.18669425              1.4285714
GTX 1080 Ti   31.82721069              1.8518518
RTX 2080 Ti   44.20445646              1.3888888


Is this what you used? If so, it's an extremely rough estimate that's based on theoretical performance and not actual performance. You can probably determine that by looking at where the 590 and 6990 sit in the lineup.
 

[Attachment: 7A16AE85-BCF1-4341-9D62-0EC2C834C7F4.jpeg]