RTX 3xxx performance speculation

Remember there are tensor cores too, not just RT cores.

Sure, but I'm trying a bit of a thought experiment and streamlining a bit, and it doesn't look like Tensor cores really play much of a part, though they do eat some portion of die space.

I just wanted to cover a point a lot of people bring up: that there will be a big boost in RT performance next release, with some thinking the RT performance hit will become negligible.

But in reality that doesn't seem feasible. Even "pure" RT effects still hit the shaders hard, and there is no free lunch: any extra performance poured into RT will come out of the transistor/performance budget put into shader performance.

I really think an across the board approach is best.
 
Sure, but I'm trying a bit of a thought experiment and streamlining a bit, and it doesn't look like Tensor cores really play much of a part, though they do eat some portion of die space.

I just wanted to cover a point a lot of people bring up: that there will be a big boost in RT performance next release, with some thinking the RT performance hit will become negligible.

But in reality that doesn't seem feasible. Even "pure" RT effects still hit the shaders hard, and there is no free lunch: any extra performance poured into RT will come out of the transistor/performance budget put into shader performance.

I really think an across the board approach is best.

That may be true for the current architecture, but a future one may be less shader reliant.

Turing feels like a bridge to a more RTX focused architecture.
 
How is an architecture going to be less shader reliant, when all games are primarily shader reliant, even RTX games?

Yeah, we can say that’s definitely not happening for the 3000 series. Maybe the 6000 series.

I don’t expect much of a change because we would need RTX tech to be viable at the low/mid range. Anything under $500 can’t run RTX for a damn. There’s going to be no massive shift anytime soon.
 
20% faster at Rasterization, 80% faster at Ray Tracing.
80% performance increase in RT is not enough. You'll need at least twice the performance if you want to get anywhere near 60fps at 4k, which would be the holy grail.
 
80% performance increase in RT is not enough. You'll need at least twice the performance if you want to get anywhere near 60fps at 4k, which would be the holy grail.

80% would get you there in a lot of RTX games.

I maintain 60Hz in basically all RTX games at 3440x1440, though I might have to knock down a setting or two. 4K is about 67% more pixels, so given the proper amount of VRAM, 80% faster RT would be great.
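For reference, the pixel math behind that comparison (nothing assumed beyond the two standard resolutions):

```python
# Pixel-count comparison: 3440x1440 ultrawide vs 4K UHD.
uw = 3440 * 1440    # 4,953,600 pixels
uhd = 3840 * 2160   # 8,294,400 pixels
print(f"4K has {uhd / uw - 1:.0%} more pixels than 3440x1440")  # ~67%
```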

I don't know if I'd expect gains that high though... but the tech is new, so it's not impossible. A chiplet design for RT might do it, but I haven't seen any info hinting at that.
 
When is the 3080 Ti coming out? Approximately?

If you mean Navi 12 (e.g. the 3800 XT), it's definitely in the pipeline and breadcrumbs have been turning up in drivers and support docs for months now, as well as comments from Lisa Su (albeit I've seen nothing appear in shipping manifests yet). All things considered, I'd guess Jan-March is most likely; late this year is very unlikely at this point.

Performance is anyone's guess, but with the rumored 4096 SPs (64 CUs) on the 3800 XT, I'd guess somewhere around or just above the 2080 Super. As for pricing, I'm guessing ~$650.
 
If you mean Navi 12 (e.g. the 3800 XT), it's definitely in the pipeline and breadcrumbs have been turning up in drivers and support docs for months now, as well as comments from Lisa Su (albeit I've seen nothing appear in shipping manifests yet). All things considered, I'd guess Jan-March is most likely; late this year is very unlikely at this point.

Performance is anyone's guess, but with the rumored 4096 SPs (64 CUs) on the 3800 XT, I'd guess somewhere around or just above the 2080 Super. As for pricing, I'm guessing ~$650.

He was asking about the nVidia 3080 Ti, which rumors have coming in Q2 2020.

I hope big Navi isn’t that expensive... it better match nVidia in features if it is.
 
If you mean Navi 12 (e.g. the 3800 XT), it's definitely in the pipeline and breadcrumbs have been turning up in drivers and support docs for months now, as well as comments from Lisa Su (albeit I've seen nothing appear in shipping manifests yet). All things considered, I'd guess Jan-March is most likely; late this year is very unlikely at this point.

Performance is anyone's guess, but with the rumored 4096 SPs (64 CUs) on the 3800 XT, I'd guess somewhere around or just above the 2080 Super. As for pricing, I'm guessing ~$650.

Are you high?

Please, be honest.
 
I would predict Nvidia does a more traditional launch with the 3080 and 3070 first, holding off on big Ampere, then the 3060 3-6 months later and the 3080 Ti 6-9 months after that. That's unless Samsung can make big EUV 7nm chips with decent yields right off the bat, or Nvidia beats AMD to chiplet designs for GPUs, as in a separate RT co-processor. It's not only a new node Nvidia has to overcome, but also the performance of the very large 12nm chips it's already making. So Nvidia can brag that the 3080 (marketing gimmick) performs the same as or slightly better than a 2080 Ti at a $400 savings, and wreak pure havoc on AMD with price cuts on the 2080 Super and 2070 Super on down. Big Ampere can retain the high prices Nvidia set with Turing unless AMD can pull a rabbit out of a hat.
 
80% performance increase in RT is not enough. You'll need at least twice the performance if you want to get anywhere near 60fps at 4k, which would be the holy grail.

I hear you, but it's not a matter of it being enough or not; Nvidia will have to either make a massive GPU with an incredibly large die to double their RTX performance, or make an RTX co-processor dedicated to RT computing. Both options are very expensive and quite a gamble. Ray tracing is not guaranteed to be the future. Momentum is in its favor, but nothing is 100% certain yet.

I anticipate they will add more RT cores on their next GPU by giving a larger chunk of the die to them, and I think they'll keep taking this approach until RTX becomes the norm and solidifies itself as the future methodology.

Then, possibly by the generation after that, they'll dedicate 80-90% of the GPU die to ray tracing as rasterization phases out.

The good thing with this approach is they can go the opposite way if ray tracing doesn't take off as they anticipate, and allocate fewer RT cores on the GeForce RTX 4000 series.

I expect Nvidia to take careful baby steps and allocate a little more die area to ray tracing each time.

So I predict something like this:

RTX 3000: +80% ray tracing boost, +20% rasterization boost
RTX 4000: +90% ray tracing boost, +10% rasterization boost
RTX 5000: dedicated to ray tracing. Rasterization still possible, but 95-100% of the GPU will be dedicated to ray tracing performance. At this point most/all games coming to market will be fully ray traced, and 4K 240Hz and 8K 120Hz will be the holy grail of gaming.
 
I hear you, but it's not a matter of it being enough or not; Nvidia will have to either make a massive GPU with an incredibly large die to double their RTX performance, or make an RTX co-processor dedicated to RT computing. Both options are very expensive and quite a gamble. Ray tracing is not guaranteed to be the future. Momentum is in its favor, but nothing is 100% certain yet.

I anticipate they will add more RT cores on their next GPU by giving a larger chunk of the die to them, and I think they'll keep taking this approach until RTX becomes the norm and solidifies itself as the future methodology.

Then, possibly by the generation after that, they'll dedicate 80-90% of the GPU die to ray tracing as rasterization phases out.

The good thing with this approach is they can go the opposite way if ray tracing doesn't take off as they anticipate, and allocate fewer RT cores on the GeForce RTX 4000 series.

I expect Nvidia to take careful baby steps and allocate a little more die area to ray tracing each time.

So I predict something like this:

RTX 3000: +80% ray tracing boost, +20% rasterization boost
RTX 4000: +90% ray tracing boost, +10% rasterization boost
RTX 5000: dedicated to ray tracing. Rasterization still possible, but 95-100% of the GPU will be dedicated to ray tracing performance. At this point most/all games coming to market will be fully ray traced, and 4K 240Hz and 8K 120Hz will be the holy grail of gaming.


Misses the point. Even pure RT effects use shaders a LOT. RT HW just detects intersections. You still need to shade those pixels after you determine the intersection, and you still need to texture them.

If a pure RT frame is 40% RT testing and 60% shading, you could have a 200% RT boost and get less than a 40% FPS boost.
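To make that concrete, here's a quick back-of-envelope sketch; the 40/60 split and the 3x RT speedup are purely the illustrative numbers from above, not measurements:

```python
# Back-of-envelope: FPS gain when only the RT portion of a frame gets faster.
# Assumes an illustrative 40% RT / 60% shading split of frame time.

def fps_gain(rt_fraction, rt_speedup):
    """Overall FPS gain when the RT portion runs rt_speedup times faster."""
    shading_fraction = 1.0 - rt_fraction
    new_frame_time = shading_fraction + rt_fraction / rt_speedup
    return 1.0 / new_frame_time - 1.0

# A "200% RT boost" means the RT work runs 3x faster.
print(f"{fps_gain(0.40, 3.0):.1%}")  # ~36.4% more FPS despite a 3x RT speedup
```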
 
Ray Tracing is not guaranteed to be the future. Momentum is in its favor, but nothing is 100% certain yet.

Ray tracing is absolutely the future. Refinements like path tracing included, but we'll get to ray tracing and similar before we get to whatever follows ray tracing.

I expect Nvidia to take careful baby Steps and allocate a little more Die size to Ray Tracing each time.

It really depends on how well they can optimize things per generation; we might see more consolidation of cores like we did with the 8800 GTX for DX10.

Note that raster rendering likely isn't going away anytime soon, nor are shaders. Quite likely the architectures we'll be looking at in a decade will still be a combination of rasterization and ray tracing, with traditional rendering techniques shifting more toward fillrate for higher render resolutions as well as various scaling and anti-aliasing effects, while ray tracing largely takes over the real-time effects that shaders currently perform.
 
Misses the point. Even pure RT effects use shaders a LOT. RT HW just detects intersections. You still need to shade those pixels after you determine the intersection, and you still need to texture them.

If a pure RT frame is 40% RT testing and 60% shading, you could have a 200% RT boost and get less than a 40% FPS boost.

Perhaps we should define a goal: maybe 4k120 / 8.3ms frametimes in 10bit HDR, regardless of the mix of rasterization and ray tracing?
 
So I predict something like this:

RTX 3000: +80% ray tracing boost, +20% rasterization boost
RTX 4000: +90% ray tracing boost, +10% rasterization boost
RTX 5000: dedicated to ray tracing. Rasterization still possible, but 95-100% of the GPU will be dedicated to ray tracing performance. At this point most/all games coming to market will be fully ray traced, and 4K 240Hz and 8K 120Hz will be the holy grail of gaming.

There is currently no roadmap for the other parts of the system (primarily the monitor and video bus) to support 8K120. That would be around 120Gbps of data, with various transfer overhead on top of that. 4K240 has the same data-rate issue (~64Gbps) with the additional issue of no 4K 240Hz panels. Heck, even a true 120Hz panel with response times fast enough that frames don't bleed into each other is still some time away.
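For reference, a rough uncompressed-bandwidth sketch, assuming 10 bits per channel RGB (30 bits/pixel) and ignoring blanking and link-encoding overhead; the real link needs somewhat more, which is where figures like the above come from:

```python
# Rough uncompressed video bandwidth, assuming 10-bit RGB (30 bits/pixel)
# and ignoring blanking intervals and link encoding overhead.

def gbps(width, height, refresh_hz, bits_per_pixel=30):
    return width * height * refresh_hz * bits_per_pixel / 1e9

print(f"4K120: {gbps(3840, 2160, 120):.0f} Gbps")  # ~30 Gbps
print(f"4K240: {gbps(3840, 2160, 240):.0f} Gbps")  # ~60 Gbps
print(f"8K120: {gbps(7680, 4320, 120):.0f} Gbps")  # ~119 Gbps
```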

Over the next three generations from Nvidia, which is likely just 4 years, 4K120 is going to be the maximum goal for pixels per second even on their highest level cards. What will happen over the three generations is that the pixels are going to be massively higher quality - but there aren't going to be more of them.
 
Over the next three generations from Nvidia, which is likely just 4 years, 4K120 is going to be the maximum goal for pixels per second even on their highest level cards. What will happen over the three generations is that the pixels are going to be massively higher quality - but there aren't going to be more of them.

I agree in general, but I also wouldn't disregard 5k and >8MP ultrawides with sub-120Hz maximum refresh rates as possibilities here. The panels are easy enough to make now, they're just waiting on a decent interconnect for gaming use so VRR may be employed.

Also, OLED can do what we're looking for if LG is up to producing them or someone else gets their stuff together. I'd love to see a 5120 x 1600 43" OLED at 120Hz.
 
LOL WCCFTech.
 
I suspect that much like how geometry and pixel shaders were unified, RT will become a unified, dynamic feature, as well. So, RT won't be a completely separate duty.
 
I think the higher clock speed due to 7nm is going to be the game changer for the 3 series, the same formula as when we went from the 9 series to the 10 series.
 
I think the higher clock speed due to 7nm is going to be the game changer for the 3 series, the same formula as when we went from the 9 series to the 10 series.
It's going to come from the increased transistor density, which at the same time makes it harder to cool, meaning clock speed may even go down compared to 16nm. We're talking about ~50 billion transistors in the same amount of die area the 2080 Ti has for 18.6 billion.
 
I think the higher clock speed due to 7nm is going to be the game changer for the 3 series, the same formula as when we went from the 9 series to the 10 series.

I think you will be disappointed. 7nm (10nm Intel) hasn't really seen much clock speed boost. Intel actually went backwards. AMD got a few percent.
 
Well, normally I would say 25%-35%, game dependent, but if what Lisa Su says is true and AMD does go after Nvidia at the top GPU end next year, then my normal estimate could be low. After all, unlike Intel, Nvidia will see this coming and could decide to move the bar much higher. Timing could matter though. Word is 3xxx comes in the first half of 2020. Does AMD challenge in the first half or the second half? Does Nvidia do their normal increase or fight AMD early? That is yet unknown. :::prays::: Please GPU gods, let there be a GPU war!! We consumers need it badly!!
 
It's going to come from the increased transistor density, which at the same time makes it harder to cool, meaning clock speed may even go down compared to 16nm. We're talking about ~50 billion transistors in the same amount of die area the 2080 Ti has for 18.6 billion.

We’re getting a 50 billion transistor GPU? Or is that just an example?
 
It's going to come from the increased transistor density, which at the same time makes it harder to cool, meaning clock speed may even go down compared to 16nm. We're talking about ~50 billion transistors in the same amount of die area the 2080 Ti has for 18.6 billion.

In practice:
16nm (and 12nm FFN) TSMC is ~25 million transistors/mm^2
7nm TSMC is ~40 million transistors/mm^2

40/25 * 18.6 = 29.8 billion

But there is ZERO chance of even that. 1.6 times the transistors will cost close to 1.6 times as much money, on top of an already ludicrously expensive die.

I'd expect closer to 20 billion to keep costs under control.
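As a quick sketch of that math, using the rough densities above and the 2080 Ti's roughly 754 mm^2 die as the reference point, plus, for comparison, the die area the earlier ~50 billion figure would imply at the same 7nm density:

```python
# Back-of-envelope transistor/die-area math using the rough densities above.
TU102_TRANSISTORS_B = 18.6   # billions, RTX 2080 Ti
TU102_AREA_MM2 = 754         # mm^2
DENSITY_16NM = 25e6          # transistors per mm^2 (TSMC 16/12nm, rough)
DENSITY_7NM = 40e6           # transistors per mm^2 (TSMC 7nm, rough)

# Same die area, 7nm density:
same_area_7nm = TU102_TRANSISTORS_B * DENSITY_7NM / DENSITY_16NM
print(f"Same-size die at 7nm: ~{same_area_7nm:.1f}B transistors")  # ~29.8B

# Die area a 50B-transistor chip would need at that 7nm density:
area_for_50b = 50e9 / DENSITY_7NM
print(f"50B transistors at 7nm: ~{area_for_50b:.0f} mm^2")  # ~1250 mm^2, far larger than TU102
```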
 
In practice:
16nm (and 12nm FFN) TSMC is ~25 million transistors/mm^2
7nm TSMC is ~40 million transistors/mm^2

40/25 * 18.6 = 29.8 billion

But there is ZERO chance of even that. 1.6 times the transistors will cost close to 1.6 times as much money, on top of an already ludicrously expensive die.

I'd expect closer to 20 billion to keep costs under control.

In that case let’s hope the rumors of bargain basement 7nm prices at Samsung are true.
 
How is an architecture going to be less shader reliant, when all games are primarily shader reliant, even RTX games?
Dem newfangled fuel injected voxel stuyff ya earin'me!?
 
Dem newfangled fuel injected voxel stuyff ya earin'me!?

Voxels, the mythical unicorn we've all dreamed about. I think everyone is putting too much emphasis on RT's importance in the near future. Unless the PS5 can do RT that blows your panties off at 60 fps, don't expect devs to put much effort into it for the next 5-10 years. Give me a gigantic leap in raster performance; I couldn't care less for RT right now or even 5 years from now. Shit, some of you old bastards on this forum won't even be alive by the time RT is truly ubiquitous lol!!

If Ampere is another overpriced dud like Turing, I'm definitely going with whatever AMD has unless they really screw up too. Then I'll have to cry in a corner and hope Intel pulls a miracle out of its big blue ass.

P.S. We should have a HardOCP betting pool on future gpu releases and the winner gets a nice chunk of change. My bet is AMD will catch up to nVidia by 2021-2022 across the board because of vulkan/dx12 and RT won't be as big of a deal as some think it will. The writing is already on the wall, nVidia can't pull driver magic anymore and they're only slightly better in efficiency now. Games like Call of Duty MW and RDR2 are indicative of what's to come from AAA devs.
 
Dem newfangled fuel injected voxel stuyff ya earin'me!?

Nope, you still need to shade voxels.

Shading is dynamic lighting + dynamic shadows + anything that isn't geometry (fog, clouds, fire, smoke, etc.) + post-processing (DoF, motion blur, AA).

Raytracing doesn't reduce the need for shading in any way. It just changes what data you pass to the shader.
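A purely illustrative sketch of that point (toy code, not any real graphics API): the intersection step and the shading step are separate, and speeding up the first does nothing to the second.

```python
# Purely illustrative sketch (not any real graphics API): RT hardware only answers
# "what did this ray hit?"; turning the hit into a pixel color is still shader work.

def intersect(ray_hits):
    """Stand-in for fixed-function RT cores: pick the closest hit along a ray, if any.
    Here a 'hit' is just a (distance, material_albedo) pair for illustration."""
    return min(ray_hits, default=None)

def shade(hit, light_intensity):
    """Stand-in for shader work: lighting, texturing, fog, post-processing inputs.
    This cost doesn't shrink no matter how fast the intersection test gets."""
    _, albedo = hit
    return albedo * light_intensity

def render_pixel(ray_hits, light_intensity=1.0, background=0.0):
    hit = intersect(ray_hits)                                   # accelerated by RT cores
    return shade(hit, light_intensity) if hit else background   # still runs on shaders

# Example: a ray crossing two surfaces; the nearer one (distance 2.0) wins.
print(render_pixel([(5.0, 0.3), (2.0, 0.8)]))  # 0.8
```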
 
P.S. We should have a HardOCP betting pool on future gpu releases and the winner gets a nice chunk of change. My bet is AMD will catch up to nVidia by 2021-2022 across the board because of vulkan/dx12 and RT won't be as big of a deal as some think it will. The writing is already on the wall, nVidia can't pull driver magic anymore and they're only slightly better in efficiency now. Games like Call of Duty MW and RDR2 are indicative of what's to come from AAA devs.

I agree efficiency is very close now, but I don't just mean the power efficiency most people point out (though that is also close now); I mean the kind that matters most in delivering enthusiast cards: perf/transistor, which helps on perf/$.

If you look back at Vega vs Pascal:

Vega 64 (12.5 billion transistors) vs GTX 1080 (7.2 billion transistors). Very similar performance, but AMD had to throw almost double the transistors at it to match NVidia. The Vega architecture was so far behind that there was no way they could aim for the top.

Now look at Navi vs Turing:

5700 XT (10.3 billion transistors) vs RTX 2070 (10.8 billion transistors). Very similar performance, but now AMD has completely erased the huge perf/transistor deficit.
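A quick sketch of that comparison using the transistor counts quoted above, with each pairing treated as roughly performance-equal (which is the premise of the argument):

```python
# Rough perf/transistor comparison using the counts quoted above,
# treating each pairing as roughly performance-equal.

matchups = {
    "Vega 64 vs GTX 1080": (12.5, 7.2),   # billions of transistors
    "5700 XT vs RTX 2070": (10.3, 10.8),
}

for name, (amd, nvidia) in matchups.items():
    print(f"{name}: AMD uses {amd / nvidia:.2f}x the transistors for similar performance")
# Vega 64 vs GTX 1080: ~1.74x
# 5700 XT vs RTX 2070: ~0.95x
```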

So it looks like the most even playing field in a LONG time. AMD can challenge for the top again. Though there are some caveats:

1) AMD doesn't have RT HW yet, so it isn't paying the "RT tax". Regardless of what anyone thinks of RT's importance, it is pretty much table stakes going forward, and it's expected that the next AMD high-end GPU will have RT HW, so AMD will start paying the "RT tax".
2) AMD is already benefiting from the move to 7nm, NVidia isn't. I don't expect huge gains from 7nm, but it is a benefit NVidia has "in the bank".

So AMD has a penalty yet to come (the RT tax), and NVidia a benefit yet to come (the 7nm shrink).

Mitigating AMD's pending RT tax somewhat is that it looks like NVidia placed a bad bet on Tensor cores. They were supposed to be used for two things: denoising RT and DLSS. Both are largely a bust, so AMD can likely leave out Tensor cores and just have RT intersection-testing cores. Fewer wasted transistors.

My expectation for the immediate future (2020) is that AMD will draw blood first:

The RX 5800, or whatever it ends up being called, will be RTX 2080-beating, perhaps a bit behind the 2080 Ti, but at a much better price, and it will include AMD RT HW. It will be extremely well reviewed and market success will follow.

Though I expect that soon after, NVidia will launch RTX 3000 on 7nm, which will tip the balance back.

2021/2022 get murkier. AMD might edge out more wins in 2021 with a response to RTX 3000, which NVidia will be riding until 2022, when RTX 4000 arrives to shift the balance back.

Bottom line: I expect much more of a race than we have had in a while, primarily owing to AMD closing the perf/transistor gap, which allows them to aim high again.

I really don't expect much from Intel in this time frame. I certainly won't be a beta tester for Intel GPUs.
 
Besides pricing and 7nm, AMD still brought a knife to a gunfight.

Top 3 cards are still Nvidia.

I think it's going to be a while still before we see a sub-$800 AMD card with 2080 Ti-beating specs. AMD finally came close to the 1080 Ti this year...

RTX will play a role if price differences level out between AMD and Nv.
 