4090 ti specs

It's still amazing to me that AMD and Nvidia can hit such high jumps in performance between gens. I mean, take a look at Intel CPUs, for example, they offer new power features and this and that, but do you even see this type of jump?

Example, compare the 980 to the 4080 and then compare the i7 6700k with whatever i7 they have now. It's not even comparable.
 
[attached image]
 
It's still amazing to me that AMD and Nvidia can hit such high jumps in performance between gens. I mean, take a look at Intel CPUs, for example, they offer new power features and this and that, but do you even see this type of jump?

Example, compare the 980 to the 4080 and then compare the i7 6700k with whatever i7 they have now. It's not even comparable.

It's because they can't.

It's all DLSS smoke and mirrors. If you look at process size as a predictor of perf/watt and the wattage specs of the GPU, the theoretical max increase in going from Samsung's 8N to TSMC's 4N process, while moving from 384 W to 320 W, is going to be 38.5%. But since power hasn't scaled linearly with gate size since the 32nm era, the real max increase is going to be much smaller than that, probably 20-25%.

The 2x claims are some sort of bald-faced lies based on AI DLSS nonsense and misleading choices in benchmarks and baselines.
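For what it's worth, here's a quick back-of-envelope sketch of that reasoning, under the simple model that the generational uplift is the perf-per-watt gain times the change in board power (the perf/watt multipliers below are illustrative assumptions, not official foundry figures):

```python
# Back-of-envelope: generational uplift ~= (perf/watt gain) x (new power / old power).
# The perf/watt multipliers are illustrative assumptions, not foundry specs.

def est_perf_gain(ppw_gain: float, p_old_w: float, p_new_w: float) -> float:
    """Estimated performance multiplier vs. the previous gen."""
    return ppw_gain * (p_new_w / p_old_w)

P_OLD_W, P_NEW_W = 384.0, 320.0  # wattages quoted in the post

for label, ppw in [("ideal node jump", 1.66),         # roughly what the 38.5% figure implies
                   ("realistic, optimistic", 1.50),   # -> ~25%
                   ("realistic, pessimistic", 1.44)]: # -> ~20%
    gain = est_perf_gain(ppw, P_OLD_W, P_NEW_W)
    print(f"{label:<24} perf/watt x{ppw:.2f} -> total x{gain:.2f} ({(gain - 1) * 100:+.0f}%)")
```

Under that model, the 38.5% theoretical ceiling corresponds to roughly a 1.66x perf/watt jump from the node change, and the 20-25% "realistic" range to roughly 1.44-1.5x.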
 
It's because they can't.

It's all DLSS smoke and mirrors. If you look at process size as a predictor of perf/watt and the wattage specs of the GPU, the theoretical max increase in going from Samsung's 8N to TSMC's 4N process, while moving from 384 W to 320 W, is going to be 38.5%. But since power hasn't scaled linearly with gate size since the 32nm era, the real max increase is going to be much smaller than that, probably 20-25%.

The 2x claims are some sort of bald-faced lies based on AI DLSS nonsense and misleading choices in benchmarks and baselines.
I don't know about you, but a 20-25% increase in a single gen is reasonable to me.
 
I don't know about you, but a 20-25% increase in a single gen is reasonable to me.
Yep. If you don't like the jump, you're under no obligation to buy it. Wait for another gen then, Zarathustra[H] ;).

I went from 970 sli to a 3080 in Dec 2020 and couldn't be happier.

Yup, I just posted the pic to keep the traffic here; that site had a bunch of garbage all over it.
Ah gotcha :).
 
It is to me as well. We have had gens when we've gotten less than that.

That number is simply there to point out that their public 2x claims are complete bullshit.
We don't really need to be rescued from marketing. We can read a graph and its footnotes. Some of us also like DLSS and don't find boosts to it to be "bull".
 
It's still amazing to me that AMD and Nvidia can hit such high jumps in performance between gens. I mean, take a look at Intel CPUs, for example, they offer new power features and this and that, but do you even see this type of jump?

Example, compare the 980 to the 4080 and then compare the i7 6700k with whatever i7 they have now. It's not even comparable.
Because graphics is basically infinitely parallel. You can just do more at the same time, which means you can increase performance by simply adding more copies of the hardware that does the work. While you can do that with CPUs, it doesn't scale for everything. Not all problems can be broken down to run in parallel to an arbitrary degree.
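A toy Amdahl's-law calculation illustrates the point (the serial fractions are made-up examples, not measurements): a workload that is essentially 100% parallel keeps scaling as you add units, while even a small serial portion quickly caps the gains for a CPU-style workload.

```python
# Toy Amdahl's-law illustration: speedup from n parallel units when a fraction
# `serial` of the work cannot be parallelized. The fractions are made-up examples.

def amdahl_speedup(serial: float, n_units: int) -> float:
    return 1.0 / (serial + (1.0 - serial) / n_units)

for serial in (0.0, 0.05, 0.25):  # ~graphics, lightly serial, heavily serial
    speedups = ", ".join(f"{n} units: {amdahl_speedup(serial, n):.1f}x" for n in (2, 8, 64))
    print(f"serial fraction {serial:.0%} -> {speedups}")
```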
 
Yep. If you don't like the jump, you're under no obligation to buy it. Wait for another gen then, Zarathustra[H] ;).
Imagine being bothered that Nvidia or AMD isn't literally doubling raw raster performance every 2 years, and may have only managed somewhere between 1.5x and 2x.

Strip away all the brand-specific value-adds like DLSS and RTX, and it's still going to be a monster leap.

Btw, I wouldn't be surprised if the actual 4090 Ti ends up being the 600 W design they scaled the 4090 back from.
 
The new power spec might be problematic for people looking to upgrade current systems.
 
The new power spec might be problematic for people looking to upgrade current systems.

Well, they are comparing the 16GB 4080 to the 3080 Ti, and in that comparison, the 4080 actually has a lower TDP than its predecessor.

I guess it depends on what you consider the equivalent model you are upgrading to.
 
Imagine being bothered that Nvidia or AMD isn't literally doubling raw raster performance every 2 years, and may have only managed somewhere between 1.5x and 2x.

Strip away all the brand-specific value-adds like DLSS and RTX, and it's still going to be a monster leap.

Btw, I wouldn't be surprised if the actual 4090 Ti ends up being the 600 W design they scaled the 4090 back from.

I am bothered by the fact that they are lying to their customers, not by the amount of real performance increase I think we will see.

(though wake me when the real stuff is out and tested, we will see where it lands.)

Nvidia is claiming the 16GB 4080 is 2x+ the performance of the 3080 Ti. It's a bald-faced lie. There isn't a snowball's chance in hell it is accurate without twisting and contorting the numbers until they don't even remotely resemble anything like the truth.

These days 20% gen over gen is great. It is about as much as is physically possible given the difficulty in shrinking nodes, the poor scaling there these days, and the diminishing returns on further arch optimizations.

Had Nvidia come out and demonstrated 20% over last gen, I'd be fine with it.

But if they try to tell me it's 100% when it's actually 20%, then you bet I am going to be pissed, especially if they are charging me for the 100%.

Lie to my face, and you might wind up with a broken nose. (figuratively speaking of course)

Don't piss down my back and tell me it's raining.

If they are going to claim 2x, it had better be 2x, like for like, with all the DLSS fluff and RT disabled, in a true apples to apples comparison.
 
Imagine being bothered that Nvidia or AMD isn't literally doubling raw raster performance every 2 years, and may have only managed somewhere between 1.5x and 2x.
Well, that would be a 50-100% increase over the 3090 Ti, and that isn't happening either.
 
It's still amazing to me that AMD and Nvidia can hit such high jumps in performance between gens. I mean, take a look at Intel CPUs, for example, they offer new power features and this and that, but do you even see this type of jump?
Depends.
The 11600K (Q2 2021) to the 12600K (Q4 2021) was a 40% PassMark jump in MT, and the 12900K was 62% over the 11900K. The 12xxx jump from the 11xxx was a really special one for two reasons: the 11xxx barely moved from the 10xxx, so it is more a jump over the 10xxx, but that jump was also particularly good for Intel as well. They have been feeling the competition at full pace for a while now, and the rumored 13xxx gains are also really good for just a one-year generation gap.
 
Well, that would be a 50-100% increase over the 3090 Ti, and that isn't happening either.
Not sure which one we are talking about exactly, but with the mention of a 3090 Ti and the 4090 Ti specs, I am not sure why you are saying that.

The 4090 has 2.7x the transistors, about 2.3x the pixel rate, and 2.05x the teraflops, and in native rendering with no DLSS it had:
https://wccftech.com/nvidia-geforce...k-2077-dlss-3-cuts-gpu-wattage-by-25-percent/

About 1.66x the average FPS of a 3090 Ti Suprim X boosted to a similar 455 W power usage in Cyberpunk (1440p max settings, Ultra RT + Psycho).
 
Not sure which one we are talking about exactly, but with the mention of a 3090 Ti and the 4090 Ti specs, I am not sure why you are saying that.

The 4090 has 2.7x the transistors, about 2.3x the pixel rate, and 2.05x the teraflops, and in native rendering with no DLSS it had:
https://wccftech.com/nvidia-geforce...k-2077-dlss-3-cuts-gpu-wattage-by-25-percent/

About 1.66x the average FPS of a 3090 Ti Suprim X boosted to a similar 455 W power usage in Cyberpunk (1440p max settings, Ultra RT + Psycho).

That's with DLSS enabled. That's not apples to apples.

Going from Samsung 8N to TSMC 4N, in theory you'd see 50% of the power use if things still scaled the way they did back in 2010, but after 32nm everything went to shit. Power and process node no longer scale anywhere near linearly.

The most you are going to see is 25%.

Stop aping their fake DLSS3 numbers. You are giving them exactly what they want. If DLSS isn't disabled, the benchmark isn't valid. The same should probably go for RT as well.
 
That's with DLSS enabled. That's not apples to apples.
That's very explicitly without:

Cyberpunk 2077 Ultra Quality + Psycho RT (Native 1440p):

  • MSI RTX 3090 TI SUPRIM X (Stock Native 1440p) - 37 FPS / 455W Power / ~75C
  • NVIDIA RTX 4090 FE (Stock Native 1440p) - 60 FPS / 461W Power / ~55C
  • RTX 4090 vs RTX 3090 Ti = +62% Faster
Cyberpunk 2077 Ultra Quality + Psycho RT (DLSS 1440p):

  • MSI RTX 3090 Ti SUPRIM X (DLSS 2 1440p) - 61 FPS / 409W Power / 74C
  • NVIDIA RTX 4090 FE (DLSS 3 1440p) - 170 FPS / 348W Power / ~50C
  • RTX 4090 vs RTX 3090 Ti = +178% Faster

As for the DLSS strangeness, we will see what to make of it; that is more like 2.78x than 1.66x.
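Just sanity-checking those ratios by dividing the quoted FPS figures (nothing more than the numbers above):

```python
# Ratio check on the FPS figures quoted above (Cyberpunk 2077, Psycho RT, 1440p).
fps_3090ti_native, fps_4090_native = 37, 60
fps_3090ti_dlss2, fps_4090_dlss3 = 61, 170

native_ratio = fps_4090_native / fps_3090ti_native
dlss_ratio = fps_4090_dlss3 / fps_3090ti_dlss2
print(f"native:      x{native_ratio:.2f} (+{(native_ratio - 1) * 100:.1f}%)")  # ~1.62x, i.e. the quoted "+62%"
print(f"DLSS 3 vs 2: x{dlss_ratio:.2f} (+{(dlss_ratio - 1) * 100:.1f}%)")      # ~2.79x, i.e. the quoted "+178%"
```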
 
Not sure which one we are talking about exactly, but with the mention of a 3090 Ti and the 4090 Ti specs, I am not sure why you are saying that.

The 4090 has 2.7x the transistors, about 2.3x the pixel rate, and 2.05x the teraflops, and in native rendering with no DLSS it had:
https://wccftech.com/nvidia-geforce...k-2077-dlss-3-cuts-gpu-wattage-by-25-percent/

About 1.66x the average FPS of a 3090 Ti Suprim X boosted to a similar 455 W power usage in Cyberpunk (1440p max settings, Ultra RT + Psycho).
That is in one game and using RT; let's wait and see what the average improvement is when measured by independent reviewers using non-RT titles. If those extra transistors were added to improve performance in RT-enabled games, the improvement in raster-only titles might be less.
 
That is in one game and using RT; let's wait and see what the average improvement is when measured by independent reviewers using non-RT titles. If those extra transistors were added to improve performance in RT-enabled games, the improvement in raster-only titles might be less.
Raster performance improvement in the top card isn't interesting anymore when the 3090 can already run very nearly everything in 4K resolution at 120+ FPS without ray tracing. If I'm spending $1,600 on a video card, then the ray tracing performance is what I'm buying it for. If you're only interested in raster performance, then a 3070/3070 Ti is very likely more than enough card for your use case, especially when they're going for half the price the 4080 12GB will go for at this point.
 
Raster performance improvement in the top card isn't interesting anymore when the 3090 can already run very nearly everything in 4K resolution at 120+
After a check, there are still many titles under 100 fps at 4K ultra details without RT (ultra details can often be ridiculously hard to run), let alone titles yet to be released in the next 2 years.

[Benchmark charts: Far Cry 6 and Borderlands 3 at 3840x2160]


If it is just 25-30% higher than a 3090 Ti instead of 60%, that could still be well enough, especially with a small tweak of settings that gives back a giant amount of fps for little change.

I feel it is close to fair, if not fully fair, that if you want more than a 6950 XT or the upcoming close-to-top RDNA 3 card, it will probably be in order to run the upcoming path-traced level of RT in games, and it is correct to fully include that aspect in how much of a performance jump we see.
 
After a check, there are still many titles under 100 fps at 4K ultra details without RT (ultra details can often be ridiculously hard to run), let alone titles yet to be released in the next 2 years.

[Benchmark charts: Far Cry 6 and Borderlands 3 at 3840x2160]

If it is just 25-30% higher than a 3090 Ti instead of 60%, that could still be well enough, especially with a small tweak of settings that gives back a giant amount of fps for little change.

I feel it is close to fair, if not fully fair, that if you want more than a 6950 XT or the upcoming close-to-top RDNA 3 card, it will probably be in order to run the upcoming path-traced level of RT in games, and it is correct to fully include that aspect in how much of a performance jump we see.

Not sure how they are getting such good framerates in Far Cry 6 on a stock 6900xt when my beastly overclock gets a lot less than that...

Oh, I didn't realize I had AA and DXR stuff as well as FidelityFX CAS enabled. That probably explains it.

[Screenshots of in-game graphics settings]


With these settings on my 6900xt I average about 65fps, but that's with an fps cap set at 75fps to keep some of the heat and noise in check during lighter scenes.

(not sure why the screenshots are so washed out. Due to HDR maybe?)
 
There is no such thing as too much GPU power for 4K max settings. It's amazing that there are some fairly old games that aren't even impressive looking but can barely keep 60 FPS for minimums on a 3080 Ti. And a lot of older games offered MSAA or other demanding anti-aliasing settings that can bring a 3080 Ti to its knees at only 1440p. That said, I do find it funny that it seems to be the best-looking games that actually run the best in most cases. It is the jankiest, most unoptimized games that seem to require throwing the most GPU power at them generation after generation. And by the time you can get those unoptimized games to run well, they look pathetically outdated anyway.
 
Raster performance improvement in the top card isn't interesting anymore when the 3090 can already run very nearly everything in 4K resolution at 120+ FPS without ray tracing. If I'm spending $1,600 on a video card, then the ray tracing performance is what I'm buying it for. If you're only interested in raster performance, then a 3070/3070 Ti is very likely more than enough card for your use case, especially when they're going for half the price the 4080 12GB will go for at this point.

Disagree with that completely.

With nearly everything I have played from the last 5 years, I am struggling to get much above 60 fps lows at 4K Ultra, and every new title that comes out pushes the envelope even more.

Yes, most new titles have RT features, but raster performance is a good indicator of baseline performance, before you start adding all the smoke and mirrors (DLSS, RT, AA, etc.) that the likes of Nvidia try to distract you with.

It's not an either-or proposition; it is a both proposition. Good raster performance is a must. Then you can add the nice-to-have features like DLSS, RT and AA and see how they perform, but if you don't have good baseline raster performance, you don't have a product.

By going all in on DLSS and RT, Nvidia is just muddying the waters and refusing to let people see true apples to apples comparisons, and whenever a company does that, you just know it is because if they don't, it doesn't look so good.

Nvidia is pushing HARD to make RT and DLSS the mainline features and turn the graphics market into their proprietary playground, but the truth is that even a AAA game with tons of RT effects is still going to be 90+% raster, and DLSS (and FSR) is nothing but a last-ditch effort to get usable performance if you don't have enough power, probably towards the end of life of the GPU. If you are planning on using DLSS or FSR scaling as part of getting acceptable framerates from the get-go, that is a major fail. Native resolution or GTFO.

Remember how Nvidia was banning reviewers who posted raster benchmarks at the 20xx launch? I bet they still are. But since then reviews in general have gone to the toilet, allowing them to pull this influencer smoke-and-mirrors bullshit and completely ruin the market, turning it more and more proprietary.
 
That's very explicitly without:

Cyberpunk 2077 Ultra Quality + Psycho RT (Native 1440p):

  • MSI RTX 3090 TI SUPRIM X (Stock Native 1440p) - 37 FPS / 455W Power / ~75C
  • NVIDIA RTX 4090 FE (Stock Native 1440p) - 60 FPS / 461W Power / ~55C
  • RTX 4090 vs RTX 3090 Ti = +62% Faster
Cyberpunk 2077 Ultra Quality + Psycho RT (DLSS 1440p):

  • MSI RTX 3090 Ti SUPRIM X (DLSS 2 1440p) - 61 FPS / 409W Power / 74C
  • NVIDIA RTX 4090 FE (DLSS 3 1440p) - 170 FPS / 348W Power / ~50C
  • RTX 4090 vs RTX 3090 Ti = +178% Faster

As for the DLSS strangeness, we will see what to make of it; that is more like 2.78x than 1.66x.

My bad. I read the headline and then jumped to conclusions.

I remember now though. This is the new extreme RT mode they created just as a tech demo for the new GPUs that intentionally sabotages performance of previous gens so it can highlight the amazing performance increases.

Play it at normal RT modes or RT off, and there is much less improvement.

And for all the talk of RT, it doesn't really do THAT much for graphics improvement. Cyberpunk just looks darker without it. Adjust the HDR settings right and you can barely tell whether RT is on or off. RT is still mostly a gimmick at this point, and Nvidia is pushing this gimmick HARD to try to invent a reason why you should buy and pay extravagant sums for their GPUs instead of the competition.

It's all marketing smoke and mirrors, and the gamer kiddies are eating it up.
 
I think the people who refuse to use something like DLSS and claim native resolution is always better really haven't done their homework. There are games where DLSS actually improves the overall image, such as Death Stranding. That game is a crawly, jaggy mess around buildings at native resolution, and DLSS on Quality greatly improves the overall image. Yes, there will always be drawbacks and cons, but sometimes the pros easily outweigh them and you end up with a better image.
 
I miss the days when this was the goal. Now it's all compromises.

I don't think it ever ceased being the goal. DLSS and FSR are nothing but shortcuts and are best avoided. Especially this new DLSS3 that inserts fake magic frames.

Nothing will ever beat native resolution, and it should be the basic expectation, not using trickery that sabotages IQ.
 
I remember now though. This is the new extreme RT mode they created just as a tech demo for the new GPUs that intentionally sabotages performance of previous gens so it can highlight the amazing performance increases.

Play it at normal RT modes or RT off, and there is much less improvement.
Maybe, but RT Overdrive is not mentioned anywhere, while Psycho RT is (and it requires a yet-to-be-released dev build to access that special new RT feature, I think).

And the DLSS 3 + RT Overdrive marketing talks about 4x the performance, while it is significantly lower here, under 3.0x (but maybe it is only at 4K that it reaches that 4x-5x boost).

Is this something you actually know (source?) or speculation?
 
Nothing will ever beat native resolution, and it should be the basic expectation, not using trickery that sabotages IQ.
It is absolutely possible (it already is in some respects) for a lower resolution + intelligent reconstruction to beat native without any extra work done on it, or at least I really do not understand why that couldn't be the case, considering the learning model uses far better than regular in-game native 4K images to build itself.
 
Maybe, but RT Overdrive is not mentioned anywhere, while Psycho RT is (and it requires a yet-to-be-released dev build to access that special new RT feature, I think).

And the DLSS 3 + RT Overdrive marketing talks about 4x the performance, while it is significantly lower here, under 3.0x (but maybe it is only at 4K that it reaches that 4x-5x boost).

Is this something you actually know (source?) or speculation?

Silicon basics. Nvidia's performance can't violate the basic laws of physics.

Moving from a Samsung 8N process to a TSMC 4N process can AT MOST provide a 50% perf-per-watt increase, and that would only have held back when gate size was real and not marketing, and when gate sizes were larger (the 32nm era) and power actually scaled linearly with gate size.

Now, they'd be lucky if they can get 25% perf per watt from halving the node size, and we don't even know if going from Samsung 8N to TSMC 4N is actually halving the node size, since these are marketing numbers.

Once we know this, a reasonable estimate for the perf-per-watt increase is ~20%, regardless of core count or chip size, and then we just have to look at the published TDPs (presuming they aren't also lying about those).

The 3080 Ti was 384 W; the 16GB 4080 is 320 W. So, +25% performance from the node shrink at the same power level, combined with a 16.67% decrease in power, leaves us at a ~7.11% increase in performance for the 16GB 4080 over the 3080 Ti.

They can massage the arch a little bit and get a little bit more out of it than that, but at this point the arch is pretty mature, so there aren't huge gains to be had there as there once were. This is where the educated guessing comes in, but 10-20% max without DLSS and RT trickery is about the most we can expect realistically without violating the laws of physics.
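Spelling that arithmetic out under the same model (performance taken as perf-per-watt times board power; the 25% perf/watt gain is the assumption stated above, not a measured figure):

```python
# The estimate above, spelled out. Assumption (from the post): ~25% perf/watt gain
# from the node change; board power drops from 384 W (3080 Ti) to 320 W (16GB 4080).

ppw_gain = 1.25
p_old_w, p_new_w = 384.0, 320.0

power_drop = 1.0 - p_new_w / p_old_w
perf_mult = ppw_gain * (p_new_w / p_old_w)

print(f"power drop:     {power_drop * 100:.2f}%")                          # ~16.67%
print(f"est. perf gain: x{perf_mult:.3f} ({(perf_mult - 1) * 100:+.1f}%)")
# -> about +4% with a 25% perf/watt gain; landing at ~7% instead would need roughly
#    a 28-29% perf/watt improvement under this same model.
```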
 
It's still amazing to me that AMD and Nvidia can hit such high jumps in performance between gens. I mean, take a look at Intel CPUs, for example, they offer new power features and this and that, but do you even see this type of jump?

Example, compare the 980 to the 4080 and then compare the i7 6700k with whatever i7 they have now. It's not even comparable.
Apples vs. oranges. Do you really want a motherboard/CPU combo that demands 400 W+ of power pumped somehow to "the CPU", which can now occupy 2 to 3 times the area it did before? The GPU became the "land and power grab". Not sure the world is ready for two, especially when we're talking about the more critical component. Also, there would be tons of issues. Not necessarily a problem for those already running dual-socketed workstations, but maybe a surprise for the rest.

With regard to pure performance, remember there have been huge leaps; they just probably pre-date any interest in gaming GPUs, you know?

That is, I think we might be getting close to having to radically rethink how to make gaming GPUs in order to go much further (we'll see). GPUs might be getting to the same maturity point as CPUs.
 
It is absolutely possible (it already is in some respects) for a lower resolution + intelligent reconstruction to beat native without any extra work done on it, or at least I really do not understand why that couldn't be the case, considering the learning model uses far better than regular in-game native 4K images to build itself.

I've seen DLSS in person. It is easily objectively worse than native resolution. AI is not magic. It fills in the gaps with guesses, and those guesses can be and often are wrong. You may get a smoother image, but they are wrong in other ways.
 
I've seen DLSS in person. It is easily objectively worse than native resolution. AI is not magic. It fills in the gaps with guesses, and those guesses can be and often are wrong. You may get a smoother image, but they are wrong in other ways.
Are you talking about DLSS 3.0 and frame insertion? 2.0 does not do any of that.
 
I've seen DLSS in person. It is easily objectively worse than native resolution. AI is not magic. It fills in the gaps with guesses, and those guesses can be and often are wrong. You may get a smoother image, but they are wrong in other ways.
I guess someone like you would probably consider Digital Foundry to just be a shill for clearly pointing out that using DLSS can result in an overall better image in some cases. Personally, I just take it on a game-by-game basis and don't make ignorant generalizations.
 
I've seen DLSS in person. It is easily objectively worse than native resolution. AI is not magic. It fills in the gaps with guesses, and those guesses can be and often are wrong. You may get a smoother image, but they are wrong in other ways.
Yes, they often are; that does not mean it will still be the case in 2050 (or at least, how does one know that, overall, the better-than-native guesses vs. the worse ones will not average out positive?).
 
Yep. If you don't like the jump, you're under no obligation to buy it. Wait for another gen then
Part of me wonders if some of the poo-pooing isn't simply some guys having a very hard time with the fact that the new gen was named after a lady (Ada Lovelace, widely credited as the first computer programmer).
 