RTX 3xxx performance speculation

I hope big Navi isn’t that expensive... it better match nVidia in features if it is.

Based on what AMD just did with the TR3 pricing, they are going to charge out the ass for Big Navi, most likely with no value incentive other than maybe 50 bucks off for parity with Nvidia performance.

An 80% performance increase in RT is not enough. You'll need at least twice the performance if you want to get anywhere near 60 fps at 4K, which would be the holy grail.

For now... people also lost their shit when gunpowder was invented... they couldn't even fathom a nuclear explosion at the time, and neither can we for graphics :(
 
Based on what AMD just did with the TR3 pricing, they are going to charge out the ass for Big Navi, most likely with no value incentive other than maybe 50 bucks off for parity with Nvidia performance.



For now... people also lost their shit when gunpowder was invented... they couldn't even fathom a nuclear explosion at the time, and neither can we for graphics :(

Well, historically their high end card is $50-100 too much. I guess we shouldn’t expect different.
 
Not for current 2080 Ti owners, but if they could deliver 10% more perf with a 2080 Ti Super and maybe offer an entry price of $899, that would be a big thing to a lot of non-owners.
As a 1080 Ti owner I've been looking for a decent upgrade (without being scammed), but I wouldn't get an RT card now with version 2 around the corner.
I expect a large jump in RT performance to keep market momentum; this will make first-gen cards look a bit weak.
I have no problem waiting a bit longer.
 
I wouldn't hold my breath for next gen; it could be a whole two years before you have one in your hands. I mean, people have this damn thing called money and they don't know what to do with it, basically. So they stare at the screen until next gen is pumped out at record pace. I never use my cards as long as I think I'm going to. I used my 1080 Ti less for gaming than my 980 Ti, 970, and 670. I used my RTX 2080 more than the last four cards in a shorter time frame. I got a ton of use out of my 550 Ti even though that was the cheapest card of the lot.

The only game I really spent a lot of time on with my 1080 Ti was Kingdom Come: Deliverance; I spent almost 60 hours with that game.
 
I wouldn't hold my breath for next gen; it could be a whole two years before you have one in your hands. I mean, people have this damn thing called money and they don't know what to do with it, basically. So they stare at the screen until next gen is pumped out at record pace. I never use my cards as long as I think I'm going to. I used my 1080 Ti less for gaming than my 980 Ti, 970, and 670. I used my RTX 2080 more than the last four cards in a shorter time frame. I got a ton of use out of my 550 Ti even though that was the cheapest card of the lot.

The only game I really spent a lot of time on with my 1080 Ti was Kingdom Come: Deliverance; I spent almost 60 hours with that game.

Yeah lots of people buy expensive hardware for the thrill and don’t actually use it for anything. More power to them.

I for one am still getting lots of use from my 1080 tearing through the old Steam backlog at 1440p/144Hz.

I can wait patiently for Ampere.
 
Voxels, the mythical unicorn we've all dreamed about. I think everyone is putting too much emphasis on RT's importance in the near future. Unless the PS5 can do RT that blows your panties off at 60 fps, don't expect devs to put much effort into it for the next 5-10 years. Give me a gigantic leap in raster performance; I couldn't care less about RT right now or even 5 years from now. Shit, some of you old bastards on this forum won't even be alive by the time RT is truly ubiquitous lol!!

If Ampere is another overpriced dud like Turing, I'm definitely going with whatever AMD has unless they really screw up too. Then I'll have to cry in a corner and hope Intel pulls a miracle out of its big blue ass.

P.S. We should have a HardOCP betting pool on future GPU releases where the winner gets a nice chunk of change. My bet is AMD will catch up to nVidia by 2021-2022 across the board because of Vulkan/DX12, and RT won't be as big of a deal as some think it will be. The writing is already on the wall: nVidia can't pull driver magic anymore and they're only slightly better in efficiency now. Games like Call of Duty MW and RDR2 are indicative of what's to come from AAA devs.

In the past we often didn't see eye to eye, but I think you called it pretty well there. They are getting close to or as good as Nvidia in some of the main titles. It will be very interesting to see how the 5500 does in mainstream sales, as it's already making a splash in the PR world by beating the 1650 soundly across the board. A 1650 superpooper won't close that gap. AMD's weakness is a smaller driver team; I hope they put some of the extra cash they're making into that. Even another 10-20 staff would be great, plus closer collaboration with the major publishing houses behind popular titles. I remember the 5700 Destiny 2 kvetching... wew.

I was pessimistic on Navi but got it wrong; it really did better than I expected, and I think many of us here are in the same boat. I was going to hold out for the 'next gen' after Navi, where they hopefully go chiplet... this is also the main thing that interests me about Intel's design. I have a feeling that with Raja on board, the Intel designs we have been teased with are going to take the same approach as AMD in a similar timeframe.
Scalability was one of the main things on the earlier roadmaps, and the only way they get 'scalability' with ease is via chiplets.

I agree fully re: RT. Aside from a few titles it's really been a bit of a dud, and that will continue until consoles get it into widespread use. Even so, the in-game experience is a little underwhelming for the entry price, or highly exaggerated, like 3D in movies... gimmicky. It's still partial-frame; not all of it is RT, which to me gives an 'uncanny valley' effect where some parts have very realistic lighting (when done well) and other parts look even flatter because they lack RT.


Nope, you still need to shade voxels.

Shading is dynamic lighting + dynamic shadows + anything that isn't geometry (fog, clouds, fire, smoke, etc.) + post-processing (DoF, motion blur, AA).

Raytracing doesn't reduce the need for shading in any way. It just changes what data you pass to the shader.
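
To make that concrete, here's a minimal, self-contained Python sketch (toy names and values, not any real engine's API): the shading math is one and the same function, whether the visible surface came from a rasterized fragment or a ray hit.

```python
# Hypothetical illustration: shading is needed either way; ray tracing only
# changes how the surface point was found, not the lighting math applied to it.

def shade(normal, light_dir, albedo):
    # Plain Lambert diffuse term standing in for "the shader".
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(channel * n_dot_l for channel in albedo)

# Surface data produced by a rasterizer (depth test picked this fragment)...
fragment = {"normal": (0.0, 1.0, 0.0), "albedo": (0.8, 0.2, 0.2)}
# ...and the same surface found by intersecting a ray with the scene instead.
ray_hit = {"normal": (0.0, 1.0, 0.0), "albedo": (0.8, 0.2, 0.2)}

light_dir = (0.0, 1.0, 0.0)

# Identical shading call for both visibility methods.
print(shade(fragment["normal"], light_dir, fragment["albedo"]))
print(shade(ray_hit["normal"], light_dir, ray_hit["albedo"]))
```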

I was taking the piss, but it's great to learn in the process, so thank you! I still find them the most interesting tech in a long time. RT is cool and all, but the near-fractal scalability of voxels is mind-blowing... imagine a game where your POV goes from really, really small to really big... some cool shit could be done there. I miss games at that scale; there were a few TFC/CS maps like that in the pre-Steam days, and A Bug's Life on PSX and a few other games where you are a small critter... very neat.
 
Well, historically their high end card is $50-100 too much. I guess we shouldn’t expect different.
Based on what metric? The 290X was a 780 Ti competitor in actual performance and it was $100 cheaper.
The Fury X was a 980 Ti competitor in actual performance and it released at the same price point.
Vega 64 was a 1080 competitor in actual performance and it released at the same price point.

This is the NVIDIA subforum and I always enjoy a good circlejerk, but come on.
 
Based on what metric? The 290X was a 780 Ti competitor in actual performance and it was $100 cheaper.
The Fury X was a 980 Ti competitor in actual performance and it released at the same price point.
Vega 64 was a 1080 competitor in actual performance and it released at the same price point.

This is the NVIDIA subforum and I always enjoy a good circlejerk, but come on.

Excluding the 290X, that was a common thought; IIRC even the [H] review said the same thing about the Fury X.

There's a chance I am thinking of the watercooled Vega 64 and not the air-cooled ones, which were a lot cheaper.

We’ll see what happens...
 
It will be very interesting to see how the 5500 does in mainstream sales, as it's already making a splash in the PR world by beating the 1650 soundly across the board. A 1650 superpooper won't close that gap.

It's because AMD essentially built a 1660 competitor and aimed it at the 1650. 1650 doesn't stand a chance, as I pointed out in another post. Perf/core is very close between Navi and Turing:

1650: 4.7B Transistors: 896 cores
5500: 6.4B Transistors: 1408 cores
1660: 6.6B Transistors: 1408 cores

No one should buy 1650 or even 1650 super if they are near 5500 pricing. Given the performance delta, I expect AMD to price 5500 over 1650.
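
Quick back-of-the-envelope on those figures, nothing more than dividing the numbers quoted above (whole-die transistor counts include memory controllers, video blocks, etc., so treat it as a very rough density comparison):

```python
# Transistors per shader core, using the figures listed above.
cards = {
    "GTX 1650 (TU117)": (4.7e9, 896),
    "RX 5500 (Navi 14)": (6.4e9, 1408),
    "GTX 1660 (TU116)": (6.6e9, 1408),
}
for name, (transistors, cores) in cards.items():
    print(f"{name}: {transistors / cores / 1e6:.1f}M transistors per core")
# ~5.2M, ~4.5M and ~4.7M respectively: Navi 14 and TU116 land in the same ballpark.
```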
 
It's because AMD essentially built a 1660 competitor and aimed it at the 1650.

5500: 6.4B Transistors: 1408 cores
1660: 6.6B Transistors: 1408 cores

I expect AMD to price 5500 over 1650.

That says it all. Didn't realise the transistor counts were so close, looks like AMD sure is 'years behind' Nvidia now... ;)

Would expect pricing between the 1650 and 1660 but closer to the 1650; there is plenty of room between the two price points to do it. This is executed similarly to the 5700s.
 
Does the 5500 have all CUs enabled? Not sure if Navi 14 has 22 or 24 CUs total.

A 5500XT with 24 CUs and 8 GB GDDR6 for $200 would be a kickass mainstream card and reinvigorate the market.
 
I wouldn't hold my breath for next gen; it could be a whole two years before you have one in your hands. I mean, people have this damn thing called money and they don't know what to do with it, basically. So they stare at the screen until next gen is pumped out at record pace. I never use my cards as long as I think I'm going to. I used my 1080 Ti less for gaming than my 980 Ti, 970, and 670. I used my RTX 2080 more than the last four cards in a shorter time frame. I got a ton of use out of my 550 Ti even though that was the cheapest card of the lot.

The only game I really spent a lot of time on with my 1080 Ti was Kingdom Come: Deliverance; I spent almost 60 hours with that game.

The new console releases will finally bring a new era of game quality and, with it, advanced graphics that the hardware can't quite push to their potential, so the graphics cards released around or after the next console generation launches should be the next viable upgrade path for virtually anybody out there. Anything since the 9xx series (Pascal) has just been pushing the current tech and graphics at a slightly higher resolution.
 
Ray tracing's future will be interesting.

Hopefully we'll see more decent software only solutions, to lessen the burden of expensive hardware.

RTX will move forward for at least one more generation, and either become more popular or fade away.
I feel this will be NV's biggest challenge: find a way to make it a thing with devs and publishers that doesn't cost too much to implement.
Just staying ahead of the competition with enough raster horsepower won't be much of a challenge; AMD still has to beat the 1080 Ti...

Most likely the 070-080-080ti variants of 2020 will still be the top 3 to beat, unless Intel can come up with something.

Really looking forward to seeing where RTX is heading tho. I played around with it and personally I would like to see more.
The visual difference it makes is substantial, at least for me.
 
Does the 5500 have all CUs enabled? Not sure if Navi 14 has 22 or 24 CUs total.

A 5500XT with 24 CUs and 8 GB GDDR6 for $200 would be a kickass mainstream card and reinvigorate the market.

Apparently it's 24 CUs, which yields 1536 cores:
https://videocardz.com/newz/amd-radeon-pro-5500m-gets-full-navi-14
Today rogame spotted a new entry at the Geekbench database. This is not the RX 5500 XT though, but Radeon PRO (a workstation card) named 5500M. Unlike the Radeon RX variant, the PRO is listed with 24 CUs enabled.

This is identical to the 1660 Ti. So fully enabled it has the exact core count of the 1660 Ti, and the cut-down version has the exact core count of the 1660 non-Ti. Interesting coincidence.

The only issue compared to the 1660 Ti/Super is the lower memory bandwidth due to the 128-bit bus vs the 192-bit bus on the 1660s.
 
Apparently it's 24 CUs, which yields 1536 cores:
https://videocardz.com/newz/amd-radeon-pro-5500m-gets-full-navi-14


This is identical to the 1660 Ti. So fully enabled it has the exact core count of the 1660 Ti, and the cut-down version has the exact core count of the 1660 non-Ti. Interesting coincidence.

The only issue compared to the 1660 Ti/Super is the lower memory bandwidth due to the 128-bit bus vs the 192-bit bus on the 1660s.
Not necessarily.

GTX 1660 = 192 * (8 / 8) = 192 GB/s
GTX 1660 Ti = 192 * (12 / 8) = 288 GB/s

One of the leaks shows the 5500 XT will have 14 Gbps GDDR6.

RX 5500 XT = 128 * (14 / 8) = 224 GB/s

That would put it between the two 1660 cards in memory bandwidth.
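
Same arithmetic as a tiny helper, if anyone wants to plug in other configurations (the 5500 XT's 14 Gbps figure is still from a leak, not confirmed):

```python
def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # GB/s = (bus width in bits / 8) * effective data rate per pin in Gbps
    return bus_width_bits / 8 * data_rate_gbps

print(mem_bandwidth_gbs(192, 8))   # GTX 1660:    192 GB/s
print(mem_bandwidth_gbs(192, 12))  # GTX 1660 Ti: 288 GB/s
print(mem_bandwidth_gbs(128, 14))  # RX 5500 XT (leaked 14 Gbps): 224 GB/s
```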
 
RTX will move forward for at least one more generation, and either become more popular or fade away. I feel this will be NV's biggest challenge: find a way to make it a thing with devs and publishers that doesn't cost too much to implement.

Nvidia isn’t solely responsible for the success of raytracing. Intel, AMD, Microsoft and Sony all have a role to play.

There are two main challenges facing RT right now. It’s new so hardware support isn’t widespread and it’s slow. Both problems will go away over time.

For some high-quality effects, e.g. shadows, raytracing is actually simpler to implement and probably cheaper than rasterized versions of the same thing.

Take the shadows in Call of Duty, for example. In order to achieve the same level of precision you would need to render multiple very high-resolution shadow map cascades, which could well be slower than raytracing. Not to mention it would blow out your memory usage.
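
To put a rough number on the memory point (purely illustrative cascade count and resolutions below, not Call of Duty's actual settings):

```python
# Hypothetical cascaded-shadow-map budget: 4 cascades of 4096x4096,
# 32-bit depth texels, just to show how quickly it adds up.
cascades, resolution, bytes_per_texel = 4, 4096, 4
shadow_map_bytes = cascades * resolution * resolution * bytes_per_texel
print(f"{shadow_map_bytes / 2**20:.0f} MiB just for the shadow maps")  # 256 MiB

# Ray-traced shadows skip those textures entirely; the cost shifts to the
# acceleration structure (shared with other RT effects) and per-pixel shadow rays.
```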
 
I wouldn't hold my breath for next gen; it could be a whole two years before you have one in your hands. I mean, people have this damn thing called money and they don't know what to do with it, basically. So they stare at the screen until next gen is pumped out at record pace. I never use my cards as long as I think I'm going to. I used my 1080 Ti less for gaming than my 980 Ti, 970, and 670. I used my RTX 2080 more than the last four cards in a shorter time frame. I got a ton of use out of my 550 Ti even though that was the cheapest card of the lot.

The only game I really spent a lot of time on with my 1080 Ti was Kingdom Come: Deliverance; I spent almost 60 hours with that game.

I would absolutely wait for next gen. Anyone eyeing a 3080 Ti, such as myself, already has an idea of what it will cost, so it's not like they're waiting for it to hit the $500 mark before buying.
 
I'm kind of surprised we haven't seen a 2080 Ti Super yet. Despite AMD having absolutely nothing to compete with it, it would be pretty typical of Nvidia to milk the market for all it can, especially if they price it at $1,000. That should clear up their inventory of TU102 and make room for Ampere a few months later.
 
I'm kind of surprised we haven't seen a 2080 Ti Super yet. Despite AMD having absolutely nothing to compete with it, it would be pretty typical of Nvidia to milk the market for all it can, especially if they price it at $1,000. That should clear up their inventory of TU102 and make room for Ampere a few months later.
Would make no sense.
Why devalue an existing product when you can just sell it for $1,300 and muppets will still buy it?
 
Would make no sense.
Why devalue an existing product when you can just sell it for $1,300

This assumes that they have an inventory to sell...

and muppets will still buy it.

And this is unnecessary on a hardware enthusiast forum. There are always buyers for the very best, and it's always priced higher than a linear value plot would dictate.
 
This assumes that they have an inventory to sell...
And this is unnecessary on a hardware enthusiast forum. There are always buyers for the very best, and it's always priced higher than a linear value plot would dictate.
Of course there is inventory to sell; there certainly isn't a shortage or they'd be 'out of stock' everywhere already, wouldn't they?

Sorry to hurt your sensitive feelings; by muppets I am referring to people buying hardware at top dollar when it's basically EOL in production terms. That is a stupid investment. But if you have the money to lose, be my guest.
 
My dream state wants a 3080 Ti with 12GB or more VRAM that outperforms a 2080 Ti by 10 to 15% and is priced around $900. Might not happen but one can dream. :)

That seems possible.

They could probably squeeze 10-15% out of a 2080 Ti Super. Moving to 12GB of 15.5 Gbps RAM would be a ~20% boost in memory bandwidth, and the current 2080 Ti looks a bit low on bandwidth per core. Throw in a few more active units and a clock speed boost, and you should be there.
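
The ~20% figure checks out if you assume the 12GB comes with a full 384-bit bus (that bus width is my assumption, not an announced spec), versus the current 2080 Ti's 352-bit/14 Gbps setup:

```python
def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

current = mem_bandwidth_gbs(352, 14.0)    # 2080 Ti today: 616 GB/s
proposed = mem_bandwidth_gbs(384, 15.5)   # hypothetical 12GB @ 15.5 Gbps: 744 GB/s
print(f"{proposed / current - 1:.0%} more bandwidth")  # ~21%
```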
 
Of course there is inventory to sell; there certainly isn't a shortage or they'd be 'out of stock' everywhere already, wouldn't they?

Sorry to hurt your sensitive feelings; by muppets I am referring to people buying hardware at top dollar when it's basically EOL in production terms. That is a stupid investment. But if you have the money to lose, be my guest.

If anyone looks at buying computer parts as an investment, they are looking at it the wrong way. Just like a car, you are losing money from the start. You are buying high-end computer parts for entertainment; that's exactly how I look at it, and I budget the money to do it high end and look at it as such.
 
Of course there is inventory to sell; there certainly isn't a shortage or they'd be 'out of stock' everywhere already, wouldn't they?

Of course?

Like, they have stacks of dies to put into cards, their orders are keeping ahead of demand, etc.?

Hint: you don't have the answer to this question, but do note that Nvidia has let channel supply dry up before.
 
Sorry to hurt your sensitive feelings

I'd recommend not assuming anyone's feelings.

by muppets I am referring to people buying hardware at top dollar when it's basically EOL in production terms. That is a stupid investment. But if you have the money to lose, be my guest.

You just called everyone on this forum that wants the best performance they can get 'muppets'.

Why are you here, then?
 
If anyone looks at buying computer parts as an investment, they are looking at it the wrong way. Just like a car, you are losing money from the start. You are buying high-end computer parts for entertainment; that's exactly how I look at it, and I budget the money to do it high end and look at it as such.
You can buy a high-end GPU for many reasons, and one could indeed be an investment. Some buy GPUs, as in hundreds of them, to mine with (more in the past than the present), for their business, for rendering, etc. Some could probably argue it is an investment in themselves for having fun. Anyway, investments can take many forms, and a high-end gaming card could be one of them.
 
My guess is it will be 10 to 15% better than the 2000 series. I expect ray tracing performance to see the larger increase, at around 30 to 50%.
 
My guess is it will be 10 to 15% better than the 2000 series. I expect ray tracing performance to see the larger increase, at around 30 to 50%.

Well, that's not going to cut it, especially if they are going to keep charging these insane prices. If they want to charge around $1,200 it had better have around a 20-30% increase in rasterised graphics and close to 100% in ray tracing.
 
Well, that's not going to cut it, especially if they are going to keep charging these insane prices. If they want to charge around $1,200 it had better have around a 20-30% increase in rasterised graphics and close to 100% in ray tracing.

Pricing will depend on how well AMD's upcoming Big Navi does. This may also be Nvidia's last monolithic chip design, which kind of tells you they have hit a wall on what they can do. I just don't see large jumps in performance being very likely with these small process nodes.
 
Pricing will depend on how well AMD's upcoming Big Navi does. This may also be Nvidia's last monolithic chip design, which kind of tells you they have hit a wall on what they can do. I just don't see large jumps in performance being very likely with these small process nodes.

Also, it's again worth remembering that the 1080 Ti is an outlier in how big a performance jump it was compared to the previous-gen Ti and the 1080. A 2080 Ti Super doesn't make much sense when the full-fledged Titan RTX is barely any faster. They would have to improve the architecture to get better gains out of it, and my understanding is that the Super series is mainly a product of better yields, so they can push for more enabled CUDA cores and higher clocks.
 
It's either gonna be $1,500 with a 30% increase over the 2080 Ti,

or it's gonna be $699 with a 500% performance increase over the 2080 Ti and they will call it "U L T R A".
 
It's either gonna be $1,500 with a 30% increase over the 2080 Ti,

or it's gonna be $699 with a 500% performance increase over the 2080 Ti and they will call it "U L T R A".

I know you were only joking, but...
How good would that be if the second part were true? I know it's all just speculation at this stage.
 
Nvidia never disappoints in performance on a new process; the 7nm 3080 and 3080 Ti are going to be insanely fast.

Why would you bring CPUs and Intel into this conversation? :facepalm:

I also have a feeling that Ampere is going to be quite impressive. I think AMD coming back to the high end certainly won't hurt either.
 
Nvidia never disappoints in performance on a new process; the 7nm 3080 and 3080 Ti are going to be insanely fast.

Why would you bring CPUs and Intel into this conversation? :facepalm:

Because there are three big players in PC silicon parts: AMD, Intel and NVidia.

Transitioning to 14nm/16nm, they made impressive performance gains.

But in the transition to 7nm so far, neither AMD nor Intel has made big gains, so I wouldn't expect big gains from NVidia either.

AMD's gains seem to be mostly from architecture (Navi, Zen 2), not from clock speed boosts.

So the evidence is that the 28nm->14nm transition looks like it was MUCH better than the 14nm->7nm transition.

On top of that, you have to factor in that transistor economics are worse this time.

In the 28nm->14nm move, going from the 980 Ti to the 1080 Ti increased the transistor count by 50%. That isn't going to happen this time. I would expect 10-20% more transistors this time. Count ourselves lucky if it's 20%.

This is NOT going to be like the 1080 Ti vs the 980 Ti. That was the last gasp of big transistor count gains.
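
For reference, the public die figures behind that 50% number, with the 10-20% guess applied to TU102 (the projected counts are just that guess, not a leak):

```python
# Published transistor counts for the big gaming dies.
gm200_980ti = 8.0e9    # 980 Ti (28nm)
gp102_1080ti = 12.0e9  # 1080 Ti (16nm class)
tu102_2080ti = 18.6e9  # 2080 Ti (12nm)

print(gp102_1080ti / gm200_980ti - 1)          # 0.5 -> the +50% jump that gen
for growth in (0.10, 0.20):                    # the 10-20% guess for the next big die
    print(f"+{growth:.0%}: {tu102_2080ti * (1 + growth) / 1e9:.1f}B transistors")
```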
 
Because there are three big players in PC silicon parts: AMD, Intel and NVidia.

Transitioning to 14nm/16nm, they made impressive performance gains.

But in the transition to 7nm so far, neither AMD nor Intel has made big gains, so I wouldn't expect big gains from NVidia either.

AMD's gains seem to be mostly from architecture (Navi, Zen 2), not from clock speed boosts.

So the evidence is that the 28nm->14nm transition looks like it was MUCH better than the 14nm->7nm transition.

On top of that, you have to factor in that transistor economics are worse this time.

In the 28nm->14nm move, going from the 980 Ti to the 1080 Ti increased the transistor count by 50%. That isn't going to happen this time. I would expect 10-20% more transistors this time. Count ourselves lucky if it's 20%.

This is NOT going to be like the 1080 Ti vs the 980 Ti. That was the last gasp of big transistor count gains.

Intel hasn't made the transition to 7nm yet. They already have plenty of problems with 10nm.

I do agree that the transition to 7nm may not bring gains as big as the 28nm to 14nm one did. But Nvidia already has a lot of experience with great power/performance ratios, so I do expect them to do better than AMD in that regard.
 