Imagine being bothered that Nvidia or AMD isn't literally doubling raw raster performance every 2 years, and may have "only" managed somewhere between 1.5x and 2x.
Strip away all the brand-specific value-adds like DLSS and RTX and it's still going to be a monster leap.
Btw I wouldn't be surprised if the actual 4090 Ti ends up being the 600W design they scaled the 4090 back from.
I actually think the 4090 was probably engineered for 600W as well. In the GTC video, Jensen talked about the power stages and it was something pretty huge, 23 stages if I remember right, compared to about half that on 3xxx. It sounds like it was going to be 600W all along.
Likely they determined that a large percentage of the userbase couldn't immediately run a 600W card, so the decision was to go dual BIOS and put a switch on the card to toggle between a 450W (default) and a 600W setting.
And the good news from all of that is that the card is designed to be a beast, and there will be a crapload of OC headroom.
Just look at how HUGE all of the AIB cards are. Complete monsters: up to 14.5 inches long, standing several inches above the slot bracket, and 3 to 4 slots wide. These are 600W-capable cards, even if they ship set to 450W.
That's with DLSS enabled. That's not apples to apples.
... If DLSS isn't disabled, the benchmark isn't valid. The same should probably go for RT as well.
So you think it would be fair to disable HALF of the GPU's cores? (The ones that run the DLSS and ray tracing work.) Maybe that's fine for you, but I want to see what DLSS 3.0 can do. DLSS 3.0 doesn't exist on AMD or on 3xxx or older Nvidia GPUs; that doesn't mean it's "unfair" to show what gains it can provide.
Input latency and image quality with DLSS 3.0 are among the main things I'm waiting to hear about in reviews. I expect it provides on average around 3x the performance when enabled, and that information will be in reviews as well. DLSS 3.0 on the Quality preset, what does that bring to the table...
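For what it's worth, here's a rough latency model of frame generation as I understand it (my own simplification, not anything NVIDIA has published): the generated frame is interpolated between two rendered frames, so displayed FPS roughly doubles, while input latency still tracks the real render rate plus roughly one render-frame-time of hold-back. The `base_latency_ms` figure is a made-up placeholder for the rest of the pipeline.

```python
# Rough back-of-envelope model of frame generation (my own simplification,
# NOT NVIDIA's published numbers): an interpolated frame is inserted between
# two rendered frames, so displayed FPS roughly doubles, but input latency
# still tracks the real render rate, plus the time the newest frame is held
# back for interpolation.

def frame_gen_estimate(render_fps: float, base_latency_ms: float = 20.0):
    """Return (displayed_fps, approx_latency_ms) under the assumptions above."""
    render_frame_time_ms = 1000.0 / render_fps
    displayed_fps = 2.0 * render_fps                             # one generated frame per rendered frame
    approx_latency_ms = base_latency_ms + render_frame_time_ms   # newest frame is delayed one frame
    return displayed_fps, approx_latency_ms

if __name__ == "__main__":
    for fps in (60, 90, 120):
        shown, lat = frame_gen_estimate(fps)
        print(f"render {fps} fps -> shown ~{shown:.0f} fps, latency ~{lat:.1f} ms")
```

The point of the toy model is just that the FPS counter and the feel of the game can diverge, which is exactly what reviews will need to measure.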
It's like 2 cars, one has a turbo, one doesn't. "You can't race me in that, it has a turbo! You gotta disable the turbo, then it's fair..."
Silicon basics. Nvidia's performance can't violate the basic laws of physics.
Moving from Samsung's 8N process to TSMC's 4N process can AT MOST provide a 50% perf-per-watt increase, and even that figure dates from when node names described real gate sizes rather than marketing labels (the 32nm era and larger), when power actually scaled roughly linearly with gate size.
Now they'd be lucky to get a 25% perf-per-watt gain from halving the node size, and we don't even know whether going from Samsung 8N to TSMC 4N actually halves the node size, since these names are marketing numbers.
Given that, a reasonable estimate is a 20-25% perf-per-watt increase regardless of core count or chip size, and then we just have to look at the published TDPs (presuming they aren't also lying about those).
The 3080 Ti was 350W, the 16GB 4080 is 320W. So, +25% perf per watt from the node shrink, scaled by the 320/350 power ratio, leaves us at roughly a 14% increase in performance for the 16GB 4080 over the 3080 Ti.
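Since this is just arithmetic, here's the same back-of-the-envelope estimate as a tiny sketch. The inputs are the assumptions above (a flat perf-per-watt multiplier from the node change and performance scaling linearly with board power), both of which are simplifications rather than how GPUs actually scale:

```python
# Back-of-envelope raster estimate from the argument above. Assumptions:
# performance scales linearly with board power, and the node shrink gives a
# flat perf-per-watt multiplier. Both are rough simplifications.

def estimate_gain(old_tdp_w: float, new_tdp_w: float, perf_per_watt_gain: float) -> float:
    """Relative performance of the new card vs. the old one."""
    return perf_per_watt_gain * (new_tdp_w / old_tdp_w)

# 3080 Ti (350W) -> 4080 16GB (320W), assuming +25% perf/W from Samsung 8N -> TSMC 4N
print(f"{(estimate_gain(350, 320, 1.25) - 1) * 100:.1f}% estimated raster gain")
# -> ~14.3%
```

Swap in your own perf-per-watt guess and TDPs to see how sensitive the conclusion is to those two numbers.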
They can massage the architecture a bit and get a little more out of it than that, but at this point the architecture is pretty mature, so there aren't the huge gains to be had there that there once were. This is where the educated guessing comes in, but 10-20% max without DLSS and RT trickery is about the most we can realistically expect without violating the laws of physics.
Now you are just talking out of your ass.
We no longer get 2x the performance every 2 years because Dennard scaling is dead (the colloquial "Moore's law is dead"). Power per transistor has stopped shrinking, since we're close to the minimum voltage a silicon transistor can reliably switch at, but the transistors themselves keep getting smaller, so power density keeps going up.
But none of this scales neatly anymore, and every single node process has to be analyzed independently.
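To put a toy number on the power-density point (illustrative values only, not real process data): hold power per transistor roughly constant and halve the transistor area each generation, and power per square millimetre climbs generation over generation.

```python
# Crude illustration of the power-density argument (illustrative numbers only,
# not real process data): if power per transistor stays roughly constant while
# transistor area keeps halving, power per mm^2 keeps climbing.

power_per_transistor_w = 1e-7    # arbitrary illustrative value, held constant
area_per_transistor_um2 = 0.1    # arbitrary illustrative starting area

for gen in range(4):
    density_w_per_mm2 = power_per_transistor_w / (area_per_transistor_um2 * 1e-6)
    print(f"gen {gen}: {density_w_per_mm2:.1f} W/mm^2")
    area_per_transistor_um2 /= 2  # area halves each generation; power per transistor does not
```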
Unless you are a TSMC process engineer, I'm gonna call bullshit on all of this nonsense.