RTX 3xxx performance speculation

I've said this ad nauseam, but it isn't the consumer's problem that Nvidia can't design a GPU that fits within the budget of expected GPU pricing. By passing that price increase directly to the consumer at an alarming ~75% markup over the previous generation (1080Ti to 2080Ti), all Nvidia did was breed ill will for many (myself included), especially when the next best option was $799 and didn't perform significantly faster than the 1080Ti, which was $100 less. And 15 months after release we still only have a handful of games with RT. By the time it is mainstream, the 2080Ti will be irrelevant. If I can get a 3080Ti in the ballpark of $899? I might change my mind, but I'm not paying $1000+.
700 to 1,000 is a 43% increase.
 

Try $1200 launch price (comparing FE to FE) not the phantom $999 EVGA black card that was nowhere to be seen for 6 months and used chips that weren't as highly binned.
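
For reference, a quick check of the percentages being argued here, assuming the commonly cited FE launch prices ($699 for the 1080 Ti FE, $1199 for the 2080 Ti FE) and the $999 figure for the cheapest partner cards:

Code:
def pct_increase(old, new):
    """Percent increase going from one price to another."""
    return (new - old) / old * 100

print(f"$700 -> $1000: {pct_increase(700, 1000):.0f}%")   # ~43%, the figure quoted above
print(f"$699 -> $999:  {pct_increase(699, 999):.0f}%")    # cheapest partner card vs. 1080 Ti FE
print(f"$699 -> $1199: {pct_increase(699, 1199):.0f}%")   # FE to FE, ~72%

So the ~75% figure earlier in the thread roughly matches the FE-to-FE jump, while the 43% number only holds against the $999 cards.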
 

But you also have to factor in that this was a naming screwup on NVidia's part.

In previous generations, they launched the biggest chip as a Titan first, for about $1000.

By naming it x80Ti right from the start, they gave people looking for the worst possible comparison something to endlessly gripe about.

Here we are nearly a year later, and the endless griping continues.

All NVidia had to do to quell this was call it a Titan, and then six months later release a slightly cut-down version for $899, and the 2080Ti wouldn't have provided any more fodder (than the rest of the RTX lineup) for people looking for something to complain about.

I wonder if NVidia learned a lesson here. Will we get a Titan Ampere and a 3080 at launch, with a 6-month wait for the 3080 Ti? I would certainly shift back to that old pattern, given how hung up people seem to be on naming.
 

I agree with that for the most part. Nvidia could have released a cut-down 2080Ti this generation as the 2080 Super, but didn't. The lackluster performance of the 2070 and 2080 compared to the 1080Ti, though, left something to be desired, especially in light of the 1070's near-equivalent performance against the 980Ti. And the lack of RT performance or games available... when you think about it, there was lots to gripe about ;).
 
GTX 1000 series was the standout of the decade, and probably the last hurrah of the big process perf/$ jumps. I would not expect that kind of jump again.
 
I think Nvidia dropped the ball with the RTX naming and pricing model. It's been corrected somewhat with the Super versions, but maybe a tad late.
 
I guess, but the idea of just minor performance improvements stayed.
Okay. Starting both from their last 130nm card:

[attached chart of generational performance gains]


:cool:
 

That graphic really shows what a massive outlier the GTX 1000 series was. Yet everyone keeps treating this anomaly as the normal situation.

Take the GTX 1000 series out of the picture and the RTX 2000 series has about normal performance gains lining up with everything in the last decade.

People hoping for another GTX 1000 series type gain are going to be disappointed, likely forever going forward.
 
Take the 1080ti: on launch day I bought two, two FE cards direct from Nvidia. Paid, if I remember correctly, 550 each, so running SLI for 1100. One 2080ti isn't an upgrade since I would need two, and that was 2400 at launch. Fuck it, say 2000, which wasn't doable for what, six months? That was my potential upgrade path. Ya, no.
Let's say I was only running one 1080ti and wanted to upgrade. The only upgrade would have been to one 2080ti. That would have cost me double what I paid for one 1080ti. That didn't work for me. It didn't work for a lot of people, and Nvidia knows it. They will correct it somewhat this time. They aren't stupid. They wanna move their stuff en masse.
No matter the reasons for the generational price jump, money is money. That didn't sit well with a lot of us.
I will spend for top-tier shit. But I will not throw money away. The situation I just mentioned felt like throwing money away to me.
Bear in mind I'm looking to get away from SLI. I've finally had enough. There's hardly any support for it anymore. It's obvious Nvidia doesn't give a fuck about it anymore, so ya, I'm out on that.
So hopefully I'm gonna do a new build around a 3080ti card at launch. We'll see.
 
New 1080 Ti FE from NVidia were $699, not $550.


I stand corrected. You prompted me to look through my emails. Apparently I didn't buy my 1080ti's from Nvidia, I bought them from Best Buy. They were FEs, and I bought 2, and you're correct, they were not 550 each. I paid 1399.98 for the pair, which is 700.00 each. My bad. I think I had 550 each in mind because that's what my 3 980's cost each, I believe. I could be wrong about that too, so much time has passed, but anyway. You are correct, sir.

Edit: still doesn't change what I said, though, regarding the 1000-1200 2080ti pricing in relation to the 1080ti's. Still way, way out of whack to make sense to me, money-wise vs performance.
 

Sure, but the 1080ti was likely NVidia's biggest-ever generational performance jump.

It also arrived right at the end of the easy gains from Moore's Law in transistors/$.

Judging things against the 1080Ti is only going to be a letdown.

If I had bought a 1080Ti early on, I wouldn't be griping about the lack of another standout successor; I would be smiling happily about the excellent longevity that purchase delivered.
 

Yes, glad they have had such a useful shelf life and I got my money's worth out of them for sure. That said, it does not change the fact that the 2080ti was way, way overpriced and still is for what you get. No way a flagship enthusiast card should run 1000-1200 each. No, sorry.

As far as being let down: I work hard for my money and I will be the judge of what I feel is worth it or not. Apparently I was not alone in my thinking, because a lot of people didn't jump on the 2080ti bandwagon for the same reasons. I don't expect a huge performance gain every gen, but I do expect reasonable pricing in line with what we have been paying for other flagship cards. That is, if they want my money. Or I can pass; it's really simple. Like I passed on the 2080ti.
 

While I can agree that they are really expensive, and the ASUS, MSI etc. bigger-brand models are excessively high priced at around 1200-1300 euros, despite all this I bought a 2080 Ti early this year. Why?

Well, I had sold my 980 Ti last year and in January was waiting for Sekiro to release. With no GPU I had to pick from what was available at the time, and the situation was that a decent 2080 cost something like 800-900 euros, a used 1080 Ti was 500-600, and then there was the cheapest 300A-chip Palit Gaming Pro 2080 Ti for about 1100 euros. I went with the 2080 Ti simply because I felt it was the best value for money at the time despite its high price, and I was planning to go for 4K gaming, which needs all the fps you can get.

Now? I would buy a 2070S or 2080S without question as their price has dropped and they are faster than the original versions.

I think people just got too used to the 1080 Ti being such a good card. I wish I had bought one before the mining craze made availability and pricing ridiculous. People were expecting just as big improvements from the 20xx series but as you can see from the graphs posted above, it was an unprecedented increase. Who knows, we might get one like that with Ampere but the more likely thing is that it's another 15-20% bump across the board with a more significant improvement in RT performance.
 
Nvidia wants to sell as many cards as possible across their whole lineup. They are going to make the value-vs-performance-vs-cost balance better this time, more in line with how the 1080ti was. They know they have to do that to attract as many customers as possible. Where exactly it's gonna fall I don't know, but it will for sure be better than the 2080ti was.
 

I am not saying the 2080Ti is a good deal. It isn't. I wouldn't buy one.

The unfortunate reality is that NVidia pushed too hard to try to gain a performance jump when Moore's Law is stalling out. The chip is ~750 mm^2 in an era where silicon cost/area is shooting up. It's the most expensive-to-produce consumer GPU chip ever made.

I don't think they should have put the current form in a 2080Ti. They should have released it first as a Titan, and then made a slightly more cut down one, maybe 6 months later, as a 2080Ti for a more reasonable price ($899 FE), if they could afford it.

That would have better followed the past pattern, and appeared a bit more reasonable. But a chip that size was NEVER going to be reasonably priced.


As for next gen being better value: sure, it will be a little better, and they may return to the Titan-first model, so it looks even better. But there isn't going to be another standout card like the 1080ti, because silicon economics are not improving much anymore.
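
To put a rough number on that silicon-economics point, here's a back-of-envelope dies-per-wafer and yield sketch comparing TU102 (~754 mm²) with GP102 (~471 mm²). The wafer cost and defect density below are assumed, illustrative values, not known figures:

Code:
import math

WAFER_COST = 6000        # assumed 16/12nm-class wafer price in USD (illustrative only)
DEFECTS_PER_CM2 = 0.1    # assumed defect density (illustrative only)

def gross_dies(area_mm2, wafer_d_mm=300):
    """Classic dies-per-wafer approximation (ignores scribe lines and edge exclusion)."""
    r = wafer_d_mm / 2
    return math.pi * r ** 2 / area_mm2 - math.pi * wafer_d_mm / math.sqrt(2 * area_mm2)

def poisson_yield(area_mm2):
    """Simple Poisson yield model: exp(-die area x defect density)."""
    return math.exp(-(area_mm2 / 100) * DEFECTS_PER_CM2)

for name, area in [("GP102 / 1080 Ti (~471 mm^2)", 471),
                   ("TU102 / 2080 Ti (~754 mm^2)", 754)]:
    good = gross_dies(area) * poisson_yield(area)
    print(f"{name}: ~{good:.0f} good dies per wafer, ~${WAFER_COST / good:.0f} per good die")

Whatever the real inputs are, the bigger die gets hit twice, with fewer candidates per wafer and lower yield on each, so the per-chip cost roughly doubles before the larger board, cooler and GDDR6 are even counted.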
 
If they had made that a Titan, wouldn't people have expected it to be more powerful than it was? It was barely worthy of the Ti moniker as it is. Calling it a Titan at that level of performance doesn't really help it, imo. It just makes it worse.

The truth is the card doesn't offer enough performance for the cost, no matter what title you slap on it. Whether that's their fault or not doesn't matter.
 
You may be right about the silicon economics, and if you are, they have a problem long term. Sure, they'll still sell cards to high-end enthusiasts who will pay almost whatever they charge, but that's a much smaller market than they want or need. We'll see, I guess.
 

On the Titan expectations point: outside of the Pascal series, it would have been a fairly normal performance bump for a Titan card.



If the big-die flagship were the only card they made, that might be a problem. But they have cards for every price point, so the volume will be there on the other cards.
 

The 2080ti is ~40-45% faster when not CPU limited. The problem is that most reviews used settings where the 1080ti was already doing 70-90 fps, and then the 2080ti would be CPU limited.

I am curious whether AMD (or Intel, or nVidia) will try to use chiplet designs with GPUs, but the bandwidth requirements are way higher than for CPUs, so I think most people are skeptical. It would resolve some of the cost issues.
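
On the CPU-limit point, here's a tiny illustration of how a CPU or engine ceiling compresses the measured gap; the frame rates below are invented, only the mechanism matters:

Code:
def observed_fps(gpu_fps, cpu_fps):
    """The reported frame rate can't exceed what the CPU/engine can feed the GPU."""
    return min(gpu_fps, cpu_fps)

gpu_1080ti, gpu_2080ti = 80, 115      # hypothetical GPU-limited frame rates, ~44% apart
for cpu_fps in (250, 100):            # effectively unlimited vs. a ~100 fps CPU ceiling
    a = observed_fps(gpu_1080ti, cpu_fps)
    b = observed_fps(gpu_2080ti, cpu_fps)
    print(f"CPU limit {cpu_fps} fps: 1080Ti {a}, 2080Ti {b} -> measured gap {(b - a) / a:.0%}")

With a 100 fps ceiling the same two cards measure 25% apart instead of 44%, which is exactly the review-settings problem described above.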
 
At 4K the 2080 Ti is not CPU limited.

It's the fastest card ever made, no contest there. Whether it's worth the extra dough is another issue.
 

Maybe if you try to run RTX on a 1080Ti it is. Short of that I think you’re being 10-20% too generous. If I’m wrong I’d love to see a meaningful set of benchmarks backing up your claim, not an outlier or two.
 

From what I've seen, the 2080Ti is about 30-40% faster than the 1080Ti at 4K in most recent games. Yes, the difference is much lower at 1440p and lower still at 1080p, but then again, why run an RTX 2080Ti at those resolutions?

https://www.guru3d.com/articles_pages/xfx_radeon_rx_5700_xt_thicc_iii_ultra_review,13.html

For example.

[4K benchmark chart from the linked Guru3D review]
 

Well, I went through the rest of the games, and at 4K there are more games that hit around 40% better on a 2080Ti than I initially thought, but that's certainly not the average; more like the high side. I'm GPU limited at 1440p in many games, the vast majority in fact, with a 1080Ti and a 3900X, so there's plenty of reason to go 2080Ti at resolutions below 4K.
 
Your example is a 27% gap, FYI. In my graphs above I used 4K frame rates to come up with the average that is presented, and it worked out to 38%. Gears 5 is actually an outlier itself, where the gap between different hardware isn't as pronounced as is typically seen in other games.
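
For anyone following along, this is all the averaging amounts to; the fps pairs below are invented stand-ins, not the actual charted numbers:

Code:
# Hypothetical 4K average fps pairs (1080 Ti, 2080 Ti); the real per-game
# numbers are in the charts attached/linked above.
results = {
    "Game A":  (50, 70),
    "Game B":  (55, 78),
    "Game C":  (45, 64),
    "Gears 5": (59, 75),   # roughly the 27% outlier discussed above
}

gaps = {game: (new - old) / old for game, (old, new) in results.items()}
for game, gap in gaps.items():
    print(f"{game}: +{gap:.0%}")
print(f"Average: +{sum(gaps.values()) / len(gaps):.0%}")

A single low-gap title like Gears 5 pulls the average down a few points, which is why one game is a poor basis for the overall claim.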
 
Let's get back to the speculation of where RTX 3000 series is going.

Let's suppose there is a 30% transistor performance budget increase (some combination of more transistors and higher clock speed). 30% may be optimistic. For the last decade, only the GTX 1000 series offered average generational gains of more than 30%. GTX 1000 was the anomaly, not the norm.

Some seem to expect a lot of that budget will go into more RT hardware. But a push into more RT HW will come at the expense of traditional Shader performance.

There are estimates that RT HW is only 15% of the die. Double that, and you have just blown half of your 30% performance budget on RT, so you only get a ~15% Shader gain. This seems like a recipe for angry forum rants for another couple of years until RTX 4000....

So let's play with some options for re-balancing the split between RT and Shader HW, along with a 30% performance budget increase (the short script after the list just redoes this arithmetic):

A) 15:85 * 1.3 = 19.5:110.5 = 30% RT boost, 30% Shader boost.
B) 20:80 * 1.3 = 26:104 = 73% RT boost, 22% Shader boost.
C) 25:75 * 1.3 = 32.5:97.5 = 117% RT boost, 15% Shader boost.
D) 30:70 * 1.3 = 39:91 = 160% RT boost, 7% Shader boost.
E) 35:65 * 1.3 = 45.5:84.5 = 203% RT boost, ~0% Shader boost.
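
Here is that arithmetic in script form; the 15:85 baseline split and the 1.3x budget are the same assumptions stated above:

Code:
BASE_RT, BASE_SHADER = 15, 85   # assumed current die split (RT% : shader%)
BUDGET = 1.30                   # assumed 30% transistor/clock performance budget increase

for rt in (15, 20, 25, 30, 35):
    shader = 100 - rt
    new_rt, new_shader = rt * BUDGET, shader * BUDGET
    print(f"{rt}:{shader} * {BUDGET} = {new_rt:.1f}:{new_shader:.1f} -> "
          f"{new_rt / BASE_RT - 1:+.0%} RT, {new_shader / BASE_SHADER - 1:+.0%} shader")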


It looks like re-balancing the RT:Shader mix really only leaves two options: A) the status quo split, giving 30% gains across the board, or B) a slight RT increase that gives a ~73% RT core increase and a ~22% Shader core increase.

I would bet most people would prefer A) for the 30% across the board, but B) looks semi-reasonable, trading a small Shader loss for a large RT gain, though that may be deceiving.

Even with RTX effects on, Shaders appear to still take the majority of frame time. So boosting RT cores may not pay as much in dividends as expected, even on RT loads.

Let's compare A) and B) for reducing the frame time of a load that is 40% RT, 60% Shader:

A) 40 * 0.7 + 60 * 0.7 = 70

B) 40 * 0.27 + 60 * 0.88 = 64

So while there is a big jump in RT cores, they don't represent the majority of a frame's time. You only get about ~10% more RT performance for re-balancing in favor of RT HW.

But Shader-only game performance suffers a ~25% loss because of the re-balance (88/70).

I can't see NVidia significantly re-balancing the RT:Shader mix. RT performance gains will likely come across the board, from increasing everything evenly within the budget, or from efficiency tweaks if those can be found.
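
The same A-vs-B comparison as a script, with one caveat on my part: this version scales each portion of the frame by 1/(1 + boost), i.e. a unit that is X% faster takes proportionally less time, rather than the rougher "multiply by one minus the boost" shorthand used above, so the exact totals come out a bit different, but the direction of the conclusion is the same:

Code:
def frame_time(rt_share, shader_share, rt_boost, shader_boost):
    """New frame time (old frame = 100) if each portion speeds up by its boost."""
    return rt_share / (1 + rt_boost) + shader_share / (1 + shader_boost)

# Hypothetical RTX-on frame: 40% of the time in RT work, 60% in shader work.
t_a = frame_time(40, 60, 0.30, 0.30)   # option A: even 30% boost everywhere
t_b = frame_time(40, 60, 0.73, 0.22)   # option B: the 20:80 die split
print(f"A: {t_a:.1f}, B: {t_b:.1f} -> B only {t_a / t_b - 1:.0%} faster on an RT-heavy frame")

# Pure rasterization frame: all of the time in shader work.
print(f"Raster-only: A {frame_time(0, 100, 0.30, 0.30):.1f} vs B {frame_time(0, 100, 0.73, 0.22):.1f}")

Either way, tilting the die toward RT buys only a modest win on RT-heavy frames while giving up raster-only performance, which is the trade-off being argued above.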
 
RT hardware does not sit by itself; the whole chip has to support it, so the traces and other components (read/write and other HW processing elsewhere on the chip) should also be counted. 15% for just the RT cores is probably right, but the real figure would be higher once you include everything else needed to support those RT cores.

I see real-time RT limited to enhancements such as reflections (probably the best use case), GI, shadows (which may not be noticeably better), and caustics (a limited use case). Full-quality lighting and shadowing done purely with ray tracing is a long way off. Most real ray tracing today is baked in: light maps, HDRI maps, shadow maps and so on, not real time, and real-time RT is not even close to matching that quality level with current or next-gen tech, and may never be. A hybrid is probably the best that can happen, meaning most of the performance may come from smarter software plus limited use of real-time ray tracing.

It would be interesting to see the hardware design considerations and software for the next-gen consoles: a holistic approach to using RT. I also do not see it as viable if performance levels differ substantially depending on the GPU, which makes development almost pointless when RT performance varies dramatically (far more than with traditional methods, where top and mid-level cards are maybe 50% apart). In other words, everything will need to be able to use RT efficiently, not just a very small percentage of cards. Basically, $200-and-down cards need to be able to do it well without a severe performance hit.

Off-core, as in a coprocessor, is my first thought, which could give a very consistent level of performance for that aspect of rendering: process real-time light maps, shadow maps and caustics (possibly using reduced geometry info) which get fed into the rasterization engine in real time (could be interpolated at half or quarter rate). Making a huge chip will probably just be too costly, inefficient and limited in the end. Huge-chip GPUs in cards that sell for over $1200 do not cut it, and the performance levels of the lesser cards are just plain lacking.

Another consideration for more rasterization performance: to render reflections you have to render much more of the scene, and the same goes for shadows, GI etc. If everything being rendered doubles or triples in order to support RT, rasterization will need to improve too.
 

Remember there are tensor cores too, not just RT cores.
 
I hope the raster bump is considerable, because at this point that's all my 2080ti is being used for. I cranked RT on in BF5 to have a look, then turned it right back off; for me it's not worth the frame-rate handicap for what it provides.

Maybe if I were playing an RPG or hybrid shooter with beautiful RT everywhere it might be worth it, but not in an FPS where I'm just running past and shooting at things, with no time to stop and admire all the shiny scenery. Faster fps at ultra settings will still be king for me.

I'll see how Cyberpunk 2077 runs RT and see if maybe my opinion towards it changes.
 