4090 Ti specs

Imagine being bothered that Nvidia or AMD isn't literally doubling raw raster performance every 2 years, and may have only managed somewhere between 1.5x and 2x.

Strip away all the brand-specific value-adds like DLSS and RTX and it's still going to be a monster leap.

Btw, I wouldn't be surprised if the actual 4090 Ti ends up being the 600W design they scaled the 4090 back from.

I actually think the 4090 was probably engineered for 600W as well. In the GTC video Jensen was talking about the power stages and it was something pretty huge, 23 stages IIRC, compared to about half that on the 3xxx series (both from memory). It sounds like it was going to be 600W all along.

Likely they determined that a large percentage of the userbase couldn't immediately run a 600W card, so the decision was to go dual BIOS and put a switch on the card to toggle between a 450W (default) and a 600W setting.

And the good news from all of that is that the card is designed to be a beast, and there will be a crapload of OC headroom.

Just look at how HUGE all of the AIB cards are. Complete monsters: up to 14.5 inches long, standing several inches above the top of the slot bracket, and 3 to 4 slots wide. These are 600W-capable cards, even if they are set to 450W.

That's with DLSS enabled. That's not apples to apples.
... If DLSS isn't disabled, the benchmark isn't valid. The same should probably go for RT as well.

So you think it would be fair to disable HALF of the GPU's cores? (The ones that run the DLSS and ray tracing work.) Maybe that's fine for you, but I want to see what DLSS 3.0 can do. DLSS 3.0 doesn't exist on AMD or on 3xxx or older Nvidia GPUs; that doesn't mean it is "unfair" to show what gains it can provide.

Input latency and image quality with DLSS 3.0 are among the main things I am waiting to hear about in reviews. And I expect it provides on average 3x the performance when enabled, and that information will be in reviews as well. DLSS 3.0 Quality, what does that bring to the table...

It's like 2 cars, one has a turbo, one doesn't. "You can't race me in that, it has a turbo! You gotta disable the turbo, then it's fair..."
lol

Silicon basics. Nvidia's performance can't violate the basic laws of physics.

Moving from a Samsung 8N process to a TSMC 4N process can AT MOST provide a 50% perf-per-watt increase, and that was back when gate size was a real physical measurement and not a marketing number, and when gate sizes were larger (32nm era), so power actually scaled linearly with gate size.

Now they'd be lucky to get a 25% perf-per-watt gain from halving the node size, and we don't even know if going from Samsung 8N to TSMC 4N actually halves the node size, since those are marketing numbers.

Once we know this, we know a reasonable estimate for the perf-per-watt increase is 20%, regardless of core count or chip size, and then we just have to look at the published TDPs (presuming they aren't also lying about those).

The 3080 Ti was 384W, the 16GB 4080 is 320W. So, +25% performance from the node shrink at the same power level, and a 16.67% decrease in power, leaves us at a ~7.11% increase in performance for the 16GB 4080 over the 3080 Ti.

They can massage the arch a little and get a bit more out of it than that, but at this point the architecture is pretty mature, so there aren't the huge gains to be had there that there once were. This is where the educated guessing comes in, but 10-20% max without DLSS and RT trickery is about the most we can realistically expect without violating the laws of physics.
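If anyone wants to poke at that kind of estimate themselves, here is a minimal sketch in Python. The 25% perf-per-watt figure, the TDPs, and the optional ~10% architecture bump are just the assumptions stated above, not measurements, and applying the power ratio linearly like this lands in the same ballpark as, but not exactly on, the ~7% figure:

```python
# Back-of-envelope estimate of gen-over-gen performance, assuming the
# perf-per-watt gain and board power are the only things that change.
# All inputs are the assumptions from the post above, not measurements.

def estimate_gain(perf_per_watt_gain, old_power_w, new_power_w, arch_gain=0.0):
    """Return the estimated performance ratio (new / old).

    perf_per_watt_gain: fractional perf/W improvement from the node (e.g. 0.25)
    old_power_w, new_power_w: board power of the old and new card, in watts
    arch_gain: optional extra fractional gain guessed for architecture tweaks
    """
    perf_per_watt = 1.0 + perf_per_watt_gain
    power_ratio = new_power_w / old_power_w          # scale perf linearly with power
    return perf_per_watt * power_ratio * (1.0 + arch_gain)

# 3080 Ti (384 W) -> 16GB 4080 (320 W), assuming +25% perf/W from the node shrink
print(f"node only:   {estimate_gain(0.25, 384, 320) - 1:+.1%}")
# with a guessed ~10% architectural improvement on top
print(f"node + arch: {estimate_gain(0.25, 384, 320, arch_gain=0.10) - 1:+.1%}")
```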

Now you are just talking out of your ass.
We no longer get the 2x performance every 2 years because Moore's law is dead. The power used (per transistor) is the same (they've hit the minimum power that a silicon transistor can operate at) but the transistor keeps shrinking. Power density is going up.

But all of this isn't exactly linear anymore, and every single process node has to be analyzed independently.

Unless you are a TSMC node process engineer, gonna call bullshit on all of this nonsense.
 
Part of me wonders if some of the poo-pooing isn't simply some guys having a very hard time with the fact the new gen was named after a lady (Linda Lovelace, the lady that invented computers).
Young gamers suddenly "perked up" to the idea of getting the new card. Noting having a "very hard time" indeed.
 
The power used (per transistor) is the same (they've hit the minimum that a silicon transistor can operate at)
That sounds a bit false to me. An H100 GPU has almost 4 times as many transistors as a V100, yet is far from having 3 times the TDP, despite having 5 times more RAM on the card.

The 4090 has around 2.7x the transistor count of the 3090 Ti, with similar power usage.

But considering that the Apple M1 Ultra has around 114 billion transistors, around 20 times the amount of a 7600X, while needing a very reasonable amount of power, maybe I am missing something major here.
 
Part of me wonders if some of the poo-pooing isn't simply some guys having a very hard time with the fact the new gen was named after a lady (Linda Lovelace, the lady that invented computers).

It might be for some, but I've never heard anyone make that argument.

For me, I vaguely remembered the name Lovelace, but I didn't remember that she was a woman, or what her contributions to science were. It just stuck in my mind as vaguely science-related, like all of Nvidia's other architecture names.

For me it is all about the false advertising and marketing manipulation, as well as the move away from deterministic rendering toward a more probabilistic method, which is something I do not want.

I'm kind of opposed to all things AI/Machine learning and all black box models in general.
 
So you think it would be fair to disable HALF of the GPU's cores? (The ones that run the DLSS and ray tracing work.) Maybe that's fine for you, but I want to see what DLSS 3.0 can do. DLSS 3.0 doesn't exist on AMD or on 3xxx or older Nvidia GPUs; that doesn't mean it is "unfair" to show what gains it can provide.

Input latency and image quality with DLSS 3.0 are among the main things I am waiting to hear about in reviews. And I expect it provides on average 3x the performance when enabled, and that information will be in reviews as well. DLSS 3.0 Quality, what does that bring to the table...

It's like 2 cars, one has a turbo, one doesn't. "You can't race me in that, it has a turbo! You gotta disable the turbo, then it's fair..."
lol

As I said in a previous post, it's not an either/or proposition; BOTH matter.

First show me the brute force increases in raster performance, then show me the additional features and the benefits they can offer.

Don't just make the additional features the headline number, as they come with compromises and are not the same as increases in baseline performance, and thus not apples to apples.
 
Now you are just talking out of your ass.
We no longer get the 2x performance every 2 years because Moore's law is dead. The power used (per transistor) is the same (they've hit the minimum power that a silicon transistor can operate at) but the transistor keeps shrinking. Power density is going up.

But all of this isn't exactly linear anymore, and every single process node has to be analyzed independently.

Unless you are a TSMC node process engineer, gonna call bullshit on all of this nonsense.

I think I mentioned the non-linearity in my analysis.

The thing is, the non-linearity only goes in one direction. Back in ~2010 and before it was linear; now you get less than linear.

You are never under any circumstance going to get more than linear. That violates several basic laws of physics.

So, in my calculation, I give the best case assuming linearity and then back off a bit as an educated guess. I clearly state it is an educated guess, but it is a guess I feel pretty confident about, give or take maybe 5 percentage points.
 
I see Nvidia didn't learn anything from all those massive discounts on 3090 and 3090 Ti cards that are still floating around, really pushing the highest-end cards first. I mean, forget about the majority of your sales being closer to the 4060/4070 range, let's get those 2-grand cards out first!
 
It's still amazing to me that AMD and Nvidia can hit such high jumps in performance between gens. I mean, take a look at Intel CPUs, for example, they offer new power features and this and that, but do you even see this type of jump?

For example, compare the 980 to the 4080, and then compare the i7-6700K with whatever i7 they have now. It's not even comparable.

We used to see big CPU jumps all the time. That stopped roughly ten years ago. GPUs as we know them haven't been around as long. At some point, I think GPU development will slow down as well.
 
Usually the "Ti" versions come out at the midway point to totally new card's release. The new card will almost assuredly kick the Ti's ass and you could've been gaming on a new card for a year if you wait for one. Makes more sense to get the normal highest end card at launch or just wait for the completely new model IMO. They're like the S model iPhones. They're only a good gig if you specifically hop on off-schedule.
 
First show me the brute force increases in raster performance, then show me the additional features and the benefits they can offer.
Agreed.
Don't just make the additional features the headline number, as they come with compromises and are not the same as increases in baseline performance, and thus not apples to apples.

New features will always be the headline. It's both what the company is selling and what people want to know.

I think I mentioned the non-linearity in my analysis.

The thing is, the non-linearity only goes in one direction. Back in ~2010 and before it was linear; now you get less than linear.
This really only applies to power usage...
You are never under any circumstance going to get more than linear. That violates several basic laws of physics.

So, in my calculation, I give the best case assuming linearity and then back off a bit as an educated guess. I clearly state it is an educated guess, but it is a guess I feel pretty confident about, give or take maybe 5 percentage points.
But you've grouped too many assumptions together. Claiming a 7% gain is the only possible result... you are really reaching on that one. I expect reviews, even those with DLSS and RT off, will show that 7% figure was in error.

Couple weeks!
 
Part of me wonders if some of the poo-pooing isn't simply some guys having a very hard time with the fact the new gen was named after a lady (Linda Lovelace, the lady that invented computers).
It was Ada Lovelace, and this is the stupidest deflection of Nvidia criticism I've ever seen.

Raster performance improvement in the top card isn't interesting anymore when the 3090 can already run very nearly everything in 4K resolution at 120+ FPS without ray tracing. If I'm spending $1,600 on a video card then the ray tracing performance is what I'm buying it for. If you're only interested in raster performance then a 3070/3070 Ti is very likely more than enough card for your use case, especially when they're going for half the price the 4080 12GB will go for at this point.

Ummm, do you actually use them or just look at marketing slides?

I can state for a fact that my 3090 cannot hit 120+ at 4K in modern titles without IQ compromises. I only run 3440x1440 and I can't hit my monitor's 144Hz without compromise; even my prior monitor's 100Hz could be in question.

HZD: ~85-90 at max settings (no ray tracing)
Remnant: From the Ashes: ~75-95 at max settings
AC Odyssey: ~100 at max settings
Vermintide 2: ~80 at max settings
Guardians of the Galaxy: ~100-110 at max settings (edit: no ray tracing)
Dying Light 2: ~100

Most of the 'demanding' or modern titles are in line with the above.
 
Raster performance improvement in the top card isn't interesting anymore when the 3090 can already run very nearly everything in 4K resolution at 120+ FPS without ray tracing. If I'm spending $1,600 on a video card then the ray tracing performance is what I'm buying it for. If you're only interested in raster performance then a 3070/3070 Ti is very likely more than enough card for your use case, especially when they're going for half the price the 4080 12GB will go for at this point.
I am mostly focused on raster performance and there are games that still cannot do 4K 60 with max settings. I am getting more interested in RT as we get closer to being able to run it natively. I have no interest in DLSS as I consider that a hack.
 
Part of me wonders if some of the poo-pooing isn't simply some guys having a very hard time with the fact the new gen was named after a lady (Linda Lovelace, the lady that invented computers).
OMG You figured it out! We would have gotten away with it if it wasn't for those damn kids.
 
The 4090 is a 50-60% raster gain in reality if you look at their charts. Still respectable gen to gen, I mean, but that is why they concentrated more on DLSS 3 etc. and AI showing you what you should see, lol. The rumors were probably true about the card being designed for 600W to pull that extra 20% or so. Another reason they talked about how you can overclock the heck out of it with all that power available.

I do suspect AMD might take a leap in performance, given how they did with RDNA2 and how they will have over 2x the shaders in Navi 31 plus even higher clocks. So expect it to be a pure beast in raster performance. Excited to see what they do on the ray tracing front; that might still be behind Nvidia, but I suspect a bigger leap there gen to gen.
 
I think these people that refuse to use something like DLSS and claim native resolution is always better really haven't done their homework. There are games where DLSS actually improves the overall image, such as Death Stranding. That game is a crawly, jaggy mess around buildings at native resolution, and DLSS on Quality greatly improves the overall image. Yes, there will always be drawbacks and cons, but sometimes the pros easily outweigh them and you end up with a better image.
And a lot of the time you only notice the difference in still frames; when you are moving around at 120 fps playing the game you aren't going to actually notice any of the DLSS drawbacks. You have to stop and look for them, but you don't have to stop to notice giant lag spikes when your frame rate drops into the 20s.
 
I see Nvidia didn't learn anything from all those massive discounts on 3090 and 3090 Ti cards that are still floating around, really pushing the highest-end cards first. I mean, forget about the majority of your sales being closer to the 4060/4070 range, let's get those 2-grand cards out first!
They are pushing the high prices on the 4000 series to drive customers to the leftover 3000 series. They failed to do this when the 2000 series launched and then got sued by their investors when it came out how many 1000 series cards were still unsold in the wild. Nvidia is working hard not to repeat its previous failures in dealing with overstocked inventory. The sad reality is those 4000 series cards will sell at those prices, if they haven't all sold already. AMD doesn't have anything on the market that touches them at the moment, and until somebody offers an alternative, Nvidia isn't going to lower the price on a product that is flying off the shelves. If anything it gives them an incentive to raise prices, because it's better to charge too much and have to lower the price than it is to charge too little and explain to angry investors why you left money on the table.
Additionally, focusing on high-margin, low-volume parts frees up their silicon to deal with the insane number of H100 pre-orders Nvidia has taken. Seriously, the H100s are all presold well into 2023; the results are in and they are 80% cheaper to run than the A100s, which in a data center environment works out to an ROI of like 2-3 months on replacing the A100s. I am hoping I can pick up a few A100s for the office on the cheap to upgrade my RTX 8000s.
 
It's still amazing to me that AMD and Nvidia can hit such high jumps in performance between gens. I mean, take a look at Intel CPUs, for example, they offer new power features and this and that, but do you even see this type of jump?

For example, compare the 980 to the 4080, and then compare the i7-6700K with whatever i7 they have now. It's not even comparable.
GPUs can just multiply all their tech and the software will eat it up. CPUs can really only add IPC, which is limited, and cores, which have decreasing value depending on the application. AMD could release a Threadripper Zen 4 with 64 cores and it would perform worse than the just-released 7950X on the latest game. CPUs are hampered by the OS and the software they run. Your new GPU will gladly kick your old GPU's ass on your fav game.
(*this is all simplified for stupidity btw*)
 
Now they'd be lucky to get a 25% perf-per-watt gain from halving the node size,
https://www.techspot.com/review/2544-nvidia-geforce-rtx-4090/


At this frame rate, the RTX 4090 consumed just 215 watts, and that means for the same level of performance the 3090 Ti required 93% more power and the 6950 XT 40% more power.


My bad. I read the headline and then jumped to conclusions.

I remember now though. This is the new extreme RT mode they created just as a tech demo for the new GPUs, one that intentionally sabotages performance on previous gens so it can highlight the amazing performance increases.

Play it at normal RT modes or RT off, and there is much less improvement.

It's all marketing smoke and mirrors, and the gamer kiddies are eating it up.
It seems it was pretty much spot on, with maybe the newer AMD CPU making the gap a bit bigger than expected, or a tiny change in settings:
https://static.techspot.com/articles-info/2544/bench/DLSS_1440p-p.webp

Versus the performance claimed in advance:
Cyberpunk 2077 Ultra Quality + Psycho RT (Native 1440p):

  • MSI RTX 3090 TI SUPRIM X (Stock Native 1440p) - 37 FPS / 455W Power / ~75C
  • NVIDIA RTX 4090 FE (Stock Native 1440p) - 60 FPS / 461W Power / ~55C
  • RTX 4090 vs RTX 3090 Ti = +62% Faster
Cyberpunk 2077 Ultra Quality + Psycho RT (DLSS 1440p):

  • MSI RTX 3090 Ti SUPRIM X (DLSS 2 1440p) - 61 FPS / 409W Power / 74C
  • NVIDIA RTX 4090 FE (DLSS 3 1440p) - 170 FPS / 348W Power / ~50C
  • RTX 4090 vs RTX 3090 Ti = +178% Faster

Hardware unboxed
4090 vs 3090 TI, 1440p high quality TAA
RT off: 145 vs 107 +35%
RT Ultra: 86 vs 51 +68% (versus +62%)

At 4K:
RT off: 83 vs 55 +50%
RT Ultra: 45 vs 25 +80%
 
https://www.techspot.com/review/2544-nvidia-geforce-rtx-4090/


At this frame rate, the RTX 4090 consumed just 215 watts, and that means for the same level of performance the 3090 Ti required 93% more power and the 6950 XT 40% more power.



It seems it was pretty much spot on, with maybe the newer AMD CPU making the gap a bit bigger than expected, or a tiny change in settings:
https://static.techspot.com/articles-info/2544/bench/DLSS_1440p-p.webp

Versus the performance claimed in advance:
Cyberpunk 2077 Ultra Quality + Psycho RT (Native 1440p):

  • MSI RTX 3090 TI SUPRIM X (Stock Native 1440p) - 37 FPS / 455W Power / ~75C
  • NVIDIA RTX 4090 FE (Stock Native 1440p) - 60 FPS / 461W Power / ~55C
  • RTX 4090 vs RTX 3090 Ti = +62% Faster
Cyberpunk 2077 Ultra Quality + Psycho RT (DLSS 1440p):

  • MSI RTX 3090 Ti SUPRIM X (DLSS 2 1440p) - 61 FPS / 409W Power / 74C
  • NVIDIA RTX 4090 FE (DLSS 3 1440p) - 170 FPS / 348W Power / ~50C
  • RTX 4090 vs RTX 3090 Ti = +178% Faster

Hardware unboxed
4090 vs 3090 TI, 1440p high quality TAA
RT off: 145 vs 107 +35%
RT Ultra: 86 vs 51 +68% (versus +62%)

At 4K:
RT off: 83 vs 55 +50%
RT Ultra: 45 vs 25 +80%

I'm quite surprised.

Looking back, I took 50% as the theoretical max from the node size, but assumed poor node scaling would knock it down quite a bit, because quite frankly nodes have been scaling very poorly ever since ~2010 or so. So, if we take the power numbers as gospel (384W vs 320W), that puts the theoretical max at 42.8%.

If they actually hit 35% at 1440p, that is pretty damn impressive, as they are getting within a stone's throw of the theoretical max and not seeing anywhere near the poor scaling of previous node changes. Maybe TSMC really hit it out of the park with this node?

The numbers at 4K went up from there, which makes no sense at all. The only way we would expect to see better scaling at 4K than at 1440p would be if the 3090 Ti's 4K numbers were limited by memory bandwidth, but looking at the specs at least, they are identical at ~1008 GB/s.

So, I can't explain that one at all. Some form of new memory compression?

Beats me.

Something is definitely being done very differently though. I kind of expected that to be the case in RT numbers, as that tech is still maturing and getting better quickly every generation, but that +50% raster number at 4k seems like it should be completely impossible.

Every previous gen has, give or take a few percent, matched my expectations based on calculations like these. This one? I don't know how they've done it. Looks like a magic trick.

I'm not afraid to admit I got it wrong, but I am still completely puzzled as to how I got it wrong. These look like magic numbers, not real ones.

As the good old saying goes, when something looks too good to be true, it usually is, but I can't explain this.
 
Per Moore's Law Is Dead, supposedly the 4090 Ti with a 600-watt BIOS was sometimes melting the power delivery cables and itself.

We used to see big CPU jumps all the time. That stopped roughly ten years ago. GPUs as we know them haven't been around as long. At some point, I think GPU development will slow down as well.

I thought it stopped with Sandy Bridge for regular desktop and Sandy Bridge-E for HEDT, those being the last big performance uplift from Intel until Alder Lake, at least in programs that can take advantage of the E-cores and under Win 11 to handle most of the scheduling issues.

GPU generations stalled out for a while, where Raja was seemingly stifling if not undermining AMD, and Nvidia, seeing no need to compete against itself, delivered comparatively modest gains when transitioning to the 7X0, the 9X0, the 10X0, and especially the 20X0 cards. Come buy "mid-range" Turing cards that at launch cost roughly the same as, if not sometimes more than, the prior generation's equivalently performing card cost when new.
 
Aaaand it's (reportedly) gone.

Nvidia RTX 4090 Ti is reportedly cancelled due to melting itself




The reasons behind the cancellation, again according to the same source, is that the RTX 4000 GPU was “tripping breakers, melting power supplies, and sometimes melting itself.”
 
The numbers at 4K went up from there, which makes no sense at all. The only way we would expect to see better scaling at 4K than at 1440p would be if the 3090 Ti's 4K numbers were limited by memory bandwidth, but looking at the specs at least, they are identical at ~1008 GB/s.

So, I can't explain that one at all. Some form of new memory compression?

Beats me.
It could simply be that the more GPU-bound the frame is, the bigger the difference in FPS a faster GPU can make.

Say a game runs at 100 fps; that's 10ms per frame, of which on average 8ms is GPU-bound and 2ms is CPU-bound.

A video card that is twice as powerful would potentially give you back somewhat less than 4ms (since the CPU now has less time between frames to do its thing), but in the best case you go to 6ms (166 fps).

At 4K, when you are at 50 fps, you will still be at around 2ms for the CPU but now 18ms for the GPU; a GPU twice as strong gives you back 9ms, so 11ms per frame, up to 90 fps.

1440p: 100 -> 166 fps, +66%
4K: 50 -> 90 fps, +80%

The same relative gain on the GPU side of the workload can lead to really different average FPS changes.
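A tiny sketch of that frame-time split, for anyone who wants to play with the numbers; the 2ms CPU share and the 2x GPU speedup are just the illustrative assumptions from this post, not measurements:

```python
# Toy model of FPS scaling when only part of the frame time is GPU-bound.
# The CPU share and the 2x GPU speedup are the illustrative assumptions above.

def scaled_fps(fps, cpu_ms, gpu_speedup):
    """Best-case FPS after speeding up only the GPU-bound portion of the frame."""
    frame_ms = 1000.0 / fps
    gpu_ms = frame_ms - cpu_ms                 # whatever isn't CPU time is GPU time
    new_frame_ms = cpu_ms + gpu_ms / gpu_speedup
    return 1000.0 / new_frame_ms

for label, fps in (("1440p", 100), ("4K", 50)):
    new = scaled_fps(fps, cpu_ms=2.0, gpu_speedup=2.0)
    print(f"{label}: {fps} -> {new:.0f} fps ({new / fps - 1:+.0%})")
```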

This one? I don't know how they've done it. Looks like a magic trick.
Maybe it usually lands closer to that kind of rough estimate, but Samsung 8N to TSMC 4N could be quite a different jump than usual.

The 2080 Ti had 18.6 billion transistors, the 3090 had 28.3 billion (on a much smaller die), and the 4090 has 2.7 times the 3090's transistor count.

If we go 980 -> 1080 Ti -> 2080 Ti -> 3090 -> 4090, the transistor counts (in billions) go:

980 -> 1080 Ti -> 2080 Ti -> 3090 -> 4090
5.2 -> 11.8 -> 18.6 -> 28.3 -> 76.3
... -> 2.27x -> 1.58x -> 1.52x -> 2.70x



This generation was never, I think, significantly better than the 980 to 1080 Ti jump (28nm to 16nm, the superb Pascal generation) in terms of improvement from the node alone, and the 1080 Ti had a much larger die and price point, while all of the 4090's gain comes despite very similar memory bandwidth, whereas the 1080 Ti more than doubled the 980 in that regard.
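Those multipliers are just successive ratios of the quoted transistor counts; a quick sketch for anyone who wants to recompute them (the billion-transistor figures are the ones listed above):

```python
# Gen-over-gen transistor multipliers from the counts quoted above (in billions).
counts = {"980": 5.2, "1080 Ti": 11.8, "2080 Ti": 18.6, "3090": 28.3, "4090": 76.3}

cards = list(counts)
for prev, curr in zip(cards, cards[1:]):
    print(f"{prev} -> {curr}: {counts[curr] / counts[prev]:.2f}x")
```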
 
Oh I see what you did there. Linda Lovelace, while certainly a lovely lady, did not in fact invent computers.

Neither did Ada Lovelace. At best she wrote programs for Charles Babbage's "Analytical Engine"; at worst, she transcribed his notes under his guidance.

I guess Nvidia thought Lovelace made a better naming scheme than Babbage, though.
 
It's still amazing to me that AMD and Nvidia can hit such high jumps in performance between gens. I mean, take a look at Intel CPUs, for example, they offer new power features and this and that, but do you even see this type of jump?

For example, compare the 980 to the 4080, and then compare the i7-6700K with whatever i7 they have now. It's not even comparable.
There are two main drivers for this (IMO):
1) The GPU makers own the software stack. By virtue of requiring drivers to operate and having the cards effectively run as standalone computers which interface with the rest of the PC through a driver, the GPU makers are able to fully optimize their hardware architecture. They don't have to get bogged down with 20 year old Windows BS like a CPU does. When a GPU maker wants to implement a major overhaul to its hardware architecture, it can just go ahead and do it. The required software support comes from themselves, so no waiting for Microsoft to get on board with it.
2) GPUs are a lot like ASICs and thus are fully optimized for their use case. When the needs of the use case change, the GPU architectures are adjusted to match that. A CPU that has to run Windows can't really do that. For a few different reasons (such as noted in #1 above), the Windows CPU has to maintain legacy support. This seriously hampers progress. An anecdotal example would be Apple's M1. That CPU is purpose built for running Photoshop on a laptop. This specialization is how it's able to outperform x86 in those tasks while using a fraction of the power. Of course, a big part of the reason Apple is able to get this level of specialization is because they own the software stack.

As long as GPUs continue to be largely standalone compute systems, we will continue to see significantly larger inter-generational gains with them than we see with Windows-hobbled CPUs.
 
Per Moore's Law Is Dead, supposedly the 4090 Ti with a 600-watt BIOS was sometimes melting the power delivery cables and itself.



I thought it stopped with Sandy Bridge for regular desktop and Sandy Bridge-E for HEDT, those being the last big performance uplift from Intel until Alder Lake, at least in programs that can take advantage of the E-cores and under Win 11 to handle most of the scheduling issues.

GPU generations stalled out for a while, where Raja was seemingly stifling if not undermining AMD, and Nvidia, seeing no need to compete against itself, delivered comparatively modest gains when transitioning to the 7X0, the 9X0, the 10X0, and especially the 20X0 cards. Come buy "mid-range" Turing cards that at launch cost roughly the same as, if not sometimes more than, the prior generation's equivalently performing card cost when new.
Sandy Bridge-E was released back in 2011. That was over ten years ago now. I would say that was the last big upgrade until Alder Lake-S. However, if you go back even further we had larger leaps in performance. The Pentium line went from 60MHz all the way to 233MHz. The Pentium II went from 233MHz all the way to 450MHz. The Pentium III went from 400MHz to 1.4GHz. Back in those days I think we saw far more rapid advancement than we have in well over a decade. After the clock speed wars calmed down we had architectural advancements at a fairly rapid pace, and you're right, that pretty much ended with Sandy Bridge and Sandy Bridge-E. Alder Lake-S was the biggest jump we had seen in a while, but we have also had so many minor enhancements since Sandy Bridge that those CPUs are pretty far behind now.
 
It could simply be that the more GPU-bound the frame is, the bigger the difference in FPS a faster GPU can make.

Say a game runs at 100 fps; that's 10ms per frame, of which on average 8ms is GPU-bound and 2ms is CPU-bound.

A video card that is twice as powerful would potentially give you back somewhat less than 4ms (since the CPU now has less time between frames to do its thing), but in the best case you go to 6ms (166 fps).

At 4K, when you are at 50 fps, you will still be at around 2ms for the CPU but now 18ms for the GPU; a GPU twice as strong gives you back 9ms, so 11ms per frame, up to 90 fps.

1440p: 100 -> 166 fps, +66%
4K: 50 -> 90 fps, +80%

The same relative gain on the GPU side of the workload can lead to really different average FPS changes.


Maybe it usually lands closer to that kind of rough estimate, but Samsung 8N to TSMC 4N could be quite a different jump than usual.

The 2080 Ti had 18.6 billion transistors, the 3090 had 28.3 billion (on a much smaller die), and the 4090 has 2.7 times the 3090's transistor count.

If we go 980 -> 1080 Ti -> 2080 Ti -> 3090 -> 4090, the transistor counts (in billions) go:

980 -> 1080 Ti -> 2080 Ti -> 3090 -> 4090
5.2 -> 11.8 -> 18.6 -> 28.3 -> 76.3
... -> 2.27x -> 1.58x -> 1.52x -> 2.70x



This generation was never, I think, significantly better than the 980 to 1080 Ti jump (28nm to 16nm, the superb Pascal generation) in terms of improvement from the node alone, and the 1080 Ti had a much larger die and price point, while all of the 4090's gain comes despite very similar memory bandwidth, whereas the 1080 Ti more than doubled the 980 in that regard.

Yeah, but the transistor count isn't particularly relevant, as we are power limited anyway, so it is already kind of factored into the perf/watt figures mentioned earlier.

Now, one could argue that you get greater efficiency, and thus greater perf/watt, by driving a larger number of transistors less hard than by driving a smaller number of transistors harder, and there is some truth to that (as everyone who has overclocked knows, as you go up it takes more and more power to achieve smaller and smaller gains), but I don't think that can explain the entire difference.
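For the curious, here's a rough toy illustration of that wide-and-slow argument, assuming the textbook dynamic-power relation (P roughly proportional to unit count x V^2 x f) and the common simplification that voltage scales roughly with clock; these scaling assumptions are simplifications for illustration, not anything measured in this thread:

```python
# Toy comparison of "few units clocked high" vs. "many units clocked low"
# at equal throughput, assuming dynamic power P ~ units * V^2 * f and V ~ f.
# These scaling assumptions are textbook simplifications, not measured data.

def relative_power(units, clock):
    voltage = clock                     # crude assumption: V scales with clock
    return units * voltage**2 * clock   # P ~ units * V^2 * f

narrow = relative_power(units=1.0, clock=1.0)   # baseline: 1x units at 1x clock
wide = relative_power(units=2.0, clock=0.5)     # same throughput: 2x units at half clock

print(f"wide design uses {wide / narrow:.0%} of the narrow design's power")
# -> 25%: same work per second for roughly a quarter of the dynamic power
```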
 
Sandy Bridge-E was released back in 2011. That was over ten years ago now. I would say that was the last big upgrade until Alder Lake-S. However, if you go back even further we had larger leaps in performance. The Pentium line went from 60MHz all the way to 233MHz. The Pentium II went from 233MHz all the way to 450MHz. The Pentium III went from 400MHz to 1.4GHz. Back in those days I think we saw far more rapid advancement than we have in well over a decade. After the clock speed wars calmed down we had architectural advancements at a fairly rapid pace, and you're right, that pretty much ended with Sandy Bridge and Sandy Bridge-E. Alder Lake-S was the biggest jump we had seen in a while, but we have also had so many minor enhancements since Sandy Bridge that those CPUs are pretty far behind now.

In MHz "speed" it was a step down from the 486DX2 66 to the first Pentium released not long thereafter. I had a Pentium II 400 desktop and just two years later bought a nearly-dozen pound notebook as a barely portable desktop replacement with a PIII 800.

IPC in Rocket Lake vs Sandy Bridge is around 50-60% better, but nearly all those Sandy non-E chips could overclock to around 4.9-5+GHz. For much of that time, while IPC would incrementally improve with each new chip, overclocked top speed would generally decrease leaving the total performance of an overclocked Sandy Bridge versus whatever successor 4 core chip roughly at par.

Nvidia is finally again rolling out major architectural improvements due to useful die shrinks and proper competition from AMD. Der8auer was going on about the essentially pre-overclocked nature of the 4090 because of how little speed is lost when power targeted down to the low 300 watt range... which might actually be of some importance to Germany right now. Nvidia has been pushing power because AMD's 6000 series performance and expected 7000 performance necessitate it. Raster is still far more important than ray tracing and AMD can actually trade blows quite handily in rasterization. Though with more likely CPU bottlenecks, turning on ray tracing, where Nvidia will almost certainly maintain most of its existing lead, just for the sake of staying under that bottleneck is a bit more attractive.
 
In MHz "speed" it was a step down from the 486DX2 66 to the first Pentium released not long thereafter. I had a Pentium II 400 desktop and just two years later bought a nearly-dozen pound notebook as a barely portable desktop replacement with a PIII 800.

IPC in Rocket Lake vs Sandy Bridge is around 50-60% better, but nearly all those Sandy non-E chips could overclock to around 4.9-5+GHz. For much of that time, while IPC would incrementally improve with each new chip, overclocked top speed would generally decrease leaving the total performance of an overclocked Sandy Bridge versus whatever successor 4 core chip roughly at par.

Nvidia is finally again rolling out major architectural improvements due to useful die shrinks and proper competition from AMD. Der8auer was going on about the essentially pre-overclocked nature of the 4090 because of how little speed is lost when power targeted down to the low 300 watt range... which might actually be of some importance to Germany right now. Nvidia has been pushing power because AMD's 6000 series performance and expected 7000 performance necessitate it. Raster is still far more important than ray tracing and AMD can actually trade blows quite handily in rasterization. Though with more likely CPU bottlenecks, turning on ray tracing, where Nvidia will almost certainly maintain most of its existing lead, just for the sake of staying under that bottleneck is a bit more attractive.
The Pentium 60/66MHz CPUs were a huge step forward in terms of IPC. One of the biggest jumps we've ever seen, in fact.
 
Per Moore's Law Is Dead, supposedly the 4090 Ti with a 600-watt BIOS was sometimes melting the power delivery cables and itself.

Per Moore's Law is Dead, Intel Arc was "CaNceLleD, according to super secret inside source, swear it you guys". And a hundred other points of misinformation he's spewed out but then pretended he never said when they turned out to be BS.

He's an entertainer peddling misinformation and outright made-up shit to get clicks from the gullible, nothing more. But he sprinkles in stuff that's obvious or expected or likely, then takes credit for being Nostradamus when the coin-toss guess happens to fall in his favor: "I was right, you guys, I said the 30-series would have PCIe 4.0." Yeah, no shit.

We're in the age of misinformation; it travels faster than information and grabs more attention. Sadly, others desperate for viewer count are moving in the same direction. JayzTwoBraincells just had a meltdown after being called out, rightly so, for slowly becoming just another MLID reaction channel. These guys are all clowns.
 
Oh I see what you did there. Linda Lovelace, while certainly a lovely lady, did not in fact invent computers.
I was, of course, kidding about Linda Lovelace inventing computers. But then Linda Lovelace arguably "invented" porn - at least the modern age of it - and porn is the de facto engine that runs the entire internet, that resonant hum of superintelligence between atoms that has linked computers together and damned us all. So there may be a connection. Not necessarily a good or uplifting tale that could be shared at a Thanksgiving dinner or as part of any reasonable and responsible bedtime story, but it exists nonetheless.

Neither did Ada Lovelace. At best she wrote programs for Charles Babbage's "Analytical Engine"; at worst, she transcribed his notes under his guidance.

Well let's hold on and maybe stop short of characterizing Ada as having been "just some dumb secretary that Charles taught how to think and how to act and how to write- or, 'write' insofar as she just held a pen in her hand and he moved it across the page for her".
 
Aaaand it's (reportedly) gone.
That's maybe the Titan-level brand, and not the simple Ti version of the 4090, which would be almost certain to happen one day (there seems to be really no need for those 600-700 watts to drive a fully enabled chip and better RAM, looking at people's power tests):
https://www.notebookcheck.net/Nvidi...d-performance-over-the-RTX-4090.661441.0.html

It would simply go from 16384 to 18176 CUDA cores; that would be a lot of cores to never use, if TSMC yields make it possible. There is also a ~15% memory bandwidth gain to be had near term just from better GDDR6X RAM coming up.
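Putting rough numbers on that headroom (the core counts and the ~15% bandwidth figure are the ones quoted above, nothing more):

```python
# Headroom left on the full chip relative to the 4090, using the figures quoted above.
cores_4090 = 16384
cores_full = 18176
extra_cores = cores_full / cores_4090 - 1
extra_bandwidth = 0.15  # rough near-term gain claimed from faster GDDR6X

print(f"extra CUDA cores: {extra_cores:+.1%}")          # ~+10.9%
print(f"extra memory bandwidth: {extra_bandwidth:+.0%}")
```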
 
Per Moore's Law is Dead, Intel Arc was "CaNceLleD, according to super secret inside source, swear it you guys". And a hundred other points of misinformation he's spewed out but then pretended he never said when they turned out to be BS.

He's an entertainer peddling misinformation and outright made-up shit to get clicks from the gullible, nothing more.
Not saying you're wrong, but could you give some notable examples?
 
Not saying you're wrong, but could you give some notable examples?
That would mean someone would have to actually watch a BS artist just to detect all the BS. Once BS is detected you tune out. :) Same reason most people tune out people like Linus and Jay.

The BS spinning is a cottage industry on YouTube. My thought on that one is... he spends an hour a day or so scouring Twitter, and perhaps the odd Asian messaging service (or someone points him there), looking for "leaks". Now some of those things do turn out to be somewhat true... and many turn out to be completely made up. Those are his "sources", unless you actually believe someone on the "inside" is really risking their job to send bits of confidential information to a guy doing YouTube videos from his attic, complete with cheap Amazon LED lighting. lol. When someone like Kyle says something, you say... OK, this guy has had industry contacts going back 30 years and worked for companies like Intel. He also isn't putting out daily videos talking about X or Y sensational thing... like X is dead, or Y is garbage, or Z is going to be 100x faster, etc. etc. MLID gets the odd thing correct by luck... or by quoting an actual leak that everyone looking sees.
 
Maybe so, but some concrete examples would be better. Everything we currently know is in line with his Arc cancellation leak, and FrgMstr himself confirmed some of the leaks talked about in MLID's videos in the last year or two. Those may be rare exceptions, but judging by the harsh dismissal there should be notable false leaks and reports that trash his reputation.
 
Maybe so, but some concrete examples would be better. Everything we currently know is in line with his Arc cancellation leak, and FrgMstr himself confirmed some of the leaks talked about in MLID's videos in the last year or two. Those may be rare exceptions, but judging by the harsh dismissal there should be notable false leaks and reports that trash his reputation.
I didn't say everything he said was BS. Many of us are on here all the time speculating... and many people here are right more than 50% of the time. That doesn't mean they have a little birdy whispering in their ears. The dude in question pretends he has little birdies whispering in his ear, even though it's clear the average experienced hardware geek can make the same type of predictions with about the same if not better results.

I mean, if Google launches hardware... it isn't a stretch to assume it's going to get shuttered before it goes anywhere. If Intel launches a GPU... I mean, come on, they have tried three times at this point. Both previous attempts were aborted either after one product or before they got to a consumer product. Guessing they might kill it is just an educated bet... and it's still a fair bet to say they won't ever release a second gen. I wouldn't take that bet... but if the second gen sucks as hard as the first, I would bet heavily on Intel axing consumer GPUs. (And if I told you someone "high up" at Intel told me so... it wouldn't make me an insider when it happened, just a BS artist.)
 