NVIDIA CEO Jensen Huang hints at ‘exciting’ next-generation GPU update on Tuesday, September 20th

Looks like if I'm upgrading to 4xxx, I'd be kissing my mini-ATX cases goodbye. The performance of the new cards looks...nice...but for what purpose? So I can get 4K 100+ FPS instead of 4K 95 FPS in Spider-Man Remastered? Back in the day, graphics in software improved at roughly the same rate as GPU horsepower. But now? Software developers have no use for that much GPU power. SLI has been dead/redundant for quite a long time now (and I remember having to convince people it was dead a few years back). We're entering a new ballgame. I don't blame the developers; they're just trying to target and accommodate the common consumer rather than the enthusiast, and the buy-in on a regular GPU has increased an incredible amount over the last 5 years.

I feel the same way. There was a time when my FPS peaked around 30-50 (without VRR), maxed out at whatever was considered high resolution at the time. So the jump to 60 fps the following gen was very noticeable, and then even from 60 to, say, 80-100. Then VRR came along and made anything 75+ great. Now the baseline with a 3090 is 4K 80-100 fps with VRR. So I can't see myself bothering with a 4090 just to get 100-110 fps. It's so minimal in my gaming experience (which I hardly do anymore) that I'm staying put this gen.
 
Igor claims the original TBP was 600W, which is why the cards are so big.

https://www.igorslab.de/en/nvidia-g...ly-comes-from-and-why-the-cards-are-so-giant/
According to my own sources in Taiwan, the yield of the chips produced is so high that not only was the quantity of usable silicon higher than expected, but so was the quality of the chips.

The VF curves should also only extend up to 450 watts, no matter what limit a card is sold with; the rest is then just pure heating.


According to him, the aggressive naming and pricing of the 4080 12 GB could be a provocation aimed at AMD, because:
I don’t presume to be able to really assess AMD’s next generation, but according to all available information, it should hardly be possible to achieve the targeted power consumption AND increase the performance as much as NVIDIA might have managed thanks to TSMC, with the older nodes.

If the yield and quality of the latest TSMC node used here are surprisingly better than expected, big monolithic (or close to monolithic) dies could continue at AMD and NVIDIA for a while.
 
According to my own sources in Taiwan, the yield of the chips produced is so high that not only was the quantity of usable silicon higher than expected, but so was the quality of the chips.

The VF curves should also only extend up to 450 watts, no matter what limit a card is sold with; the rest is then just pure heating.


According to him, the aggressive naming and pricing of the 4080 12 GB could be a provocation aimed at AMD, because:
I don’t presume to be able to really assess AMD’s next generation, but according to all available information, it should hardly be possible to achieve the targeted power consumption AND increase the performance as much as NVIDIA might have managed thanks to TSMC, with the older nodes.

If the yield and quality of the latest TSMC node used here are surprisingly better than expected, big monolithic (or close to monolithic) dies could continue at AMD and NVIDIA for a while.
Nvidia got a huge node improvement moving off of Samsung. Samsung's 8N process was only marginally better than TSMC 12nm, and due to Samsung's manufacturing woes that difference really didn't pan out in the real world. So for node comparison, Nvidia is essentially going from TSMC 12nm to their 4N process, a three-generation jump, and I bet it's surprising the hell out of them because of how bad the Samsung nodes were.
 
Igor claims the original TBP was 600W, which is why the cards are so big.

https://www.igorslab.de/en/nvidia-g...ly-comes-from-and-why-the-cards-are-so-giant/
https://videocardz.com/newz/nvidia-...ng-3-0-ghz-and-616-watts-with-gpu-stress-tool

I am not sure if any of this makes much sense:
According to the screenshots posted on Bilibili, the RTX 4090 can run at 3.0 GHz and 425.6W, or at 2.64 GHz and 615.8W. The 3.0 GHz clock was achieved with the default test called “msi-01”, and the 616W was recorded in a workload called “Furmark-donut”. The latter is much more power hungry... Furthermore, GPUs such as the Founders Edition reportedly have a power limit up to 600W, which may explain...
 
Damn, and I just upgraded to the Lian Li O11D Evo because my previous mid tower didn't allow bottom-to-top fan airflow. My EVGA 3080 Ti FTW3's third fan would spool to max because memory temps hit 90C. That got fixed with the O11D and the GPU mounted horizontally. I'll probably have to go hybrid AIO, as most cards won't fit horizontally, and I think even with a vertical mount the fans would be too close to the window for airflow.

It's a game of how to fit your GPU in the case, power it, and cool it. Not a game I wanted to play, but 120Hz 4K OLED gaming with DLSS 3 is mighty tempting.
 
The case he's using there is the Lancool 205 Mesh, dimensions 415mm x 205mm x 485mm. Max GPU clearance is 350mm.
The 4090 Strix (in the video) is 357.6mm, so he's off by 7.6mm, or 0.3 inches.

This is why I bought an ugly empty box that supports 425mm GPUs, and it cost me a jaw-dropping $60.
Also, the O11D Evo clearance is 426mm, so congratulations, you can buy a 4090. :p
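A fit check like this is easy to script, too. Here's a minimal sketch using only the clearance and card-length figures quoted above (height clearance, which the next reply raises, is ignored here):

```python
# Simple GPU length-fit check. Clearance figures come from the case spec
# sheets mentioned above; card lengths vary slightly by how vendors measure.

CASE_CLEARANCE_MM = {
    "Lancool 205 Mesh": 350.0,
    "Lian Li O11D Evo": 426.0,
}

CARD_LENGTH_MM = {
    "RTX 4090 Strix": 357.6,
}

def check_fit(case: str, card: str) -> None:
    margin = CASE_CLEARANCE_MM[case] - CARD_LENGTH_MM[card]
    verdict = "fits" if margin >= 0 else "does NOT fit"
    print(f"{card} in {case}: {verdict} (margin {margin:+.1f} mm)")

for case_name in CASE_CLEARANCE_MM:
    check_fit(case_name, "RTX 4090 Strix")
# RTX 4090 Strix in Lancool 205 Mesh: does NOT fit (margin -7.6 mm)
# RTX 4090 Strix in Lian Li O11D Evo: fits (margin +68.4 mm)
```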
 
The case he's using there is the Lancool 205 Mesh, dimensions 415mm x 205mm x 485mm. Max GPU clearance is 350mm.
The 4090 Strix (in the video) is 357.6mm, so he's off by 7.6mm, or 0.3 inches.

This is why I bought an ugly empty box that supports 425mm GPUs, and it cost me a jaw-dropping $60.
Also, the O11D Evo clearance is 426mm, so congratulations, you can buy a 4090. :p
Yeah, it's not the length that's the issue, it's the GPU height (motherboard to side window/panel), which is 162mm; the 4090 Strix is around 150mm, so there's no room for the power cable. The FE would probably fit at 137mm high with its recessed power connector, but I want to go AIB. Time to get the Corsair 900D out of mothballs.
 
They say in the future you'll just slide a mini-ITX board into an RTX 5090 as a daughtercard. I am ready for that future.

I mean, look at this damn thing. This man is 6'4" and struggled just to wrap his lips around it.

[image attachment 516544]
Jesus... A couple years ago I was building capable mini-ITX systems that could game... This is starting to get absurd. And I thought my MSI Gaming X 6900 XT was big... It's a monstrous card with a support bracket so it doesn't snap the PCIe slot... That is just STOOPID!

And then there is this... This is F'ing ridiculous...

1200 Watt Power Requirement 4090 (video)
 
Honestly, the power requirements and most of these cards being comically huge have made this the easiest launch of my PC-enthusiast life to just sit back and watch.

Popcorn for days. No interest in actually owning one of these at all.

Just fantastic.
 
Honestly, the power requirements and most of these cards being comically huge have made this the easiest launch of my PC-enthusiast life to just sit back and watch.

Popcorn for days. No interest in actually owning one of these at all.

Just fantastic.

Yeah, the problem becomes yours when you actually want one.
 
Honestly, the power requirements and most of these cards being comically huge have made this the easiest launch of my PC-enthusiast life to just sit back and watch.

Popcorn for days. No interest in actually owning one of these at all.

Just fantastic.
If you're buying one, you honestly need to get either a waterblock or an AIO-based one, I think. These are nuts - even in something like my 1000D, I'd be seriously worried about weight and would be looking for a HUGE support bracket.
 
Aren't A0 samples taped out long before AIBs get the specs? The article makes it seem like it was some sudden and recent development.
Yes, that is the problem with his story. The first Nvidia design specs that went out for the 4090 had 600W as its TBP, and at the time NV was moving ahead with that. Then a decision was made that pushing wattage that high was going to cause huge support problems, both in terms of PSUs and user issues, so power was scaled back. Many of these AIB coolers were built for 600W cards.

Some of these cards will be overclocking monsters. These will also be space heaters and PSU eaters.
 
Yeah, the problem becomes yours when you actually want one.
If you're buying one, you honestly need to get either a waterblock or an AIO-based one, I think. These are nuts - even in something like my 1000D, I'd be seriously worried about weight and would be looking for a HUGE support bracket.
Yeah: big case, good airflow, beefy PSU, and a vertically mounted GPU or a brace, and so on. People will do it for sure. I find even mid towers to be pushing it size-wise, so for me these aren't interesting. To each their own. Obviously I'm OK with not having halo performance; prioritizing space and power efficiency is a different approach.
 
Yes, that is the problem with his story. The first Nvidia design specs that went out for the 4090 had 600W as its TBP, and at the time NV was moving ahead with that. Then a decision was made that pushing wattage that high was going to cause huge support problems, both in terms of PSUs and user issues, so power was scaled back. Many of these AIB coolers were built for 600W cards.

Some of these cards will be overclocking monsters. These will also be space heaters and PSU eaters.
If Igor is right, and if I read correctly and the English is not misleading, there seems to be very little gain in going from 450W to 600W:

Not because they have to, but because they simply can. There are enough reserves, even for overclocking and as a graphical replacement for the expensive gas heating. However, the voltage limits of the VDDC should be reached very quickly, which should make an increase to values above 500 watts superfluous.
The VF curves should also only extend up to 450 watts, no matter what limit a card is sold with; the rest is then just pure heating. I can’t give more details here because unfortunately there are also embargo periods, and I am only covering third-party reports.


We'll see once the reviews are out, but if the performance gain from pushing to 600W is minimal (some mods on the FE seem to let you do it, and I imagine some ROG-edition AIB model could be like that out of the box one day), and if the limitations stated above are not artificial, that could lend weight to the idea that NVIDIA was still hesitating between Samsung "5nm" and TSMC "4nm" as late as last August, and that with a summer 2022 release in mind.

Still, that would be strange to a layman like me. It would imply that the TSMC process got better than expected at some point, so they discovered there was little value in pushing power higher. On the PSU-support side, everything was already well known in 2020-2021; they learned absolutely nothing new in that regard, I would imagine. The only scenario there is:

"We fully know how much of a problem 600W at the xx90 level, or even more so 450W at the xx80 level, would be. We reduce spike demand so PSU requirements don't change much. We learn, via the usual talk around town, that the AMD GPUs will not be nearly as powerful as expected, so we can decide to remove the power headache, because we will still keep the performance crown without it."
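To see why the gain from 450W to 600W could be that small, here is a toy model, built entirely on my own illustrative numbers rather than Igor's data: dynamic power scales roughly as P = k·f·V², and if the VF curve runs into its voltage cap just past the 450W operating point, the extra watts buy almost no extra clock.

```python
# Toy model of the diminishing returns described above: dynamic power
# scales roughly as P = k * f * V^2, and the voltage/frequency (VF)
# curve is capped at some Vmax. Every number below is an illustrative
# assumption, not measured Ada data.

K = 162.0            # W per (GHz * V^2), picked so ~2.5 GHz lands near 450 W
V0, F0 = 1.05, 2.5   # assumed operating point: 1.05 V at 2.5 GHz
SLOPE = 0.20         # assumed V per GHz along the VF curve
VMAX = 1.10          # assumed VDDC voltage cap

def voltage(f_ghz):
    """Voltage the VF curve demands at a given clock, clipped at the cap."""
    return min(V0 + SLOPE * (f_ghz - F0), VMAX)

def power(f_ghz):
    return K * f_ghz * voltage(f_ghz) ** 2

def max_clock(budget_w):
    """Highest clock (in 1 MHz steps) within the budget and under the cap."""
    f = F0
    while power(f + 0.001) <= budget_w and voltage(f + 0.001) < VMAX:
        f += 0.001
    return f

for budget in (450, 600):
    f = max_clock(budget)
    print(f"{budget} W limit -> ~{f:.2f} GHz at {voltage(f):.2f} V, "
          f"~{power(f):.0f} W actually drawn")
# 450 W limit -> ~2.51 GHz at 1.05 V, ~450 W actually drawn
# 600 W limit -> ~2.75 GHz at 1.10 V, ~539 W actually drawn
# A +33% power limit buys ~10% clock, and ~60 W of headroom goes unused:
# past the voltage cap, extra watts are "just pure heating".
```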
 
If Igor is right, and if I read correctly and the English is not misleading, there seems to be very little gain in going from 450W to 600W:

Not because they have to, but because they simply can. There are enough reserves, even for overclocking and as a graphical replacement for the expensive gas heating. However, the voltage limits of the VDDC should be reached very quickly, which should make an increase to values above 500 watts superfluous.
The VF curves should also only extend up to 450 watts, no matter what limit a card is sold with; the rest is then just pure heating. I can’t give more details here because unfortunately there are also embargo periods, and I am only covering third-party reports.


We'll see once the reviews are out, but if the performance gain from pushing to 600W is minimal (some mods on the FE seem to let you do it, and I imagine some ROG-edition AIB model could be like that out of the box one day), and if the limitations stated above are not artificial, that could lend weight to the idea that NVIDIA was still hesitating between Samsung "5nm" and TSMC "4nm" as late as last August, and that with a summer 2022 release in mind.

Still, that would be strange to a layman like me. It would imply that the TSMC process got better than expected at some point, so they discovered there was little value in pushing power higher. On the PSU-support side, everything was already well known in 2020-2021; they learned absolutely nothing new in that regard, I would imagine. The only scenario there is:

"We fully know how much of a problem 600W at the xx90 level, or even more so 450W at the xx80 level, would be. We reduce spike demand so PSU requirements don't change much. We learn, via the usual talk around town, that the AMD GPUs will not be nearly as powerful as expected, so we can decide to remove the power headache, because we will still keep the performance crown without it."
I think Nvidia just recycled a lot of their design elements from the A100 and H100 cards for their consumer ones this round. The server parts will do up to 450 watts on air, but their specs top out at 700W, though for anything above 450W they specify that liquid cooling solutions must be used.
 
4090s appear to be hitting 3 GHz at 600W.

https://videocardz.com/newz/nvidia-...ng-3-0-ghz-and-616-watts-with-gpu-stress-tool

Looking at the 4080 16 GB, there's no direct benchmark comparison, but it seems to be a difference of 20% stock vs 30% OC (above the 3090 Ti).
A lot of headroom on the table this time.

https://videocardz.com/newz/nvidia-geforce-rtx-4080-16gb-reaches-3-0-ghz-in-3dmark-timespy
https://videocardz.com/newz/alleged-nvidia-geforce-rtx-4080-16gb-3dmark-benchmarks-have-been-leaked
 
4090s appear to be hitting 3 GHz at 600W.
It is a bit counterintuitive, but it is hitting 3 GHz at 425.6W in MSI Kombustor; in the benchmark where it managed to go over 600W, it was running at only 2.64 GHz. They are two different benchmarks; the 616W test is a stress test, from what I understand a bit of a Prime95 for GPUs.
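Putting the two leaked readings side by side makes the contrast clear. A quick sketch, using GHz-per-watt as a crude efficiency proxy (real efficiency depends on what each workload actually does):

```python
# Clock-per-watt for the two leaked RTX 4090 readings quoted above.
readings = {
    "msi-01 (MSI Kombustor default)": (3.00, 425.6),  # GHz, watts
    "Furmark-donut (stress test)":    (2.64, 615.8),
}

for name, (ghz, watts) in readings.items():
    print(f"{name}: {ghz:.2f} GHz / {watts:.1f}W = {1000 * ghz / watts:.2f} MHz/W")
# msi-01 (MSI Kombustor default): 3.00 GHz / 425.6W = 7.05 MHz/W
# Furmark-donut (stress test): 2.64 GHz / 615.8W = 4.29 MHz/W
# The power-virus workload pulls ~45% more power at a ~12% lower clock.
```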
 
If those are true (and I can imagine they can be, in that part of the world):

We seem to be looking at a 4080 16 GB performing 45-60% over the 3080 12 GB, both in synthetics and in Tomb Raider. Had it stayed at the usual 4080 price point, it would have been one of the best generational jumps ever, like Pascal achieved.
 
If those are true (and I can imagine they can be, in that part of the world):

We seem to be looking at a 4080 16 GB performing 45-60% over the 3080 12 GB, both in synthetics and in Tomb Raider. Had it stayed at the usual 4080 price point, it would have been one of the best generational jumps ever, like Pascal achieved.
From what I have seen, it looks like the 4080 16GB is going to be about 50% faster overall than the plain 3080. Yeah, that looks OK until you factor in that it will cost over 70% more.
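The value math on those rough figures is straightforward. A quick sketch (the ~1.7x price ratio assumes the $1199 vs. $699 MSRPs):

```python
# Relative perf-per-dollar of the 4080 16GB vs. the plain 3080,
# using the rough figures above: +50% performance at ~70% higher price.
perf_ratio = 1.50          # 4080 16GB performance relative to 3080
price_ratio = 1199 / 699   # MSRP ratio, ~1.72

value = perf_ratio / price_ratio
print(f"Perf per dollar vs. 3080: {value:.2f}x")  # ~0.87x, i.e. ~13% worse
```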
 
These prices are absolutely asinine. Seriously. Oh yes... this is what scalpers are charging and people are buying. People are only buying because that's the only option other than waiting an eternity and hoping to snag one in stock at MSRP. Let's make this the new normal! Great idea. Except scalpers are going to hike the prices even higher now. Jensen is a giant knob. There needs to be a better system in place for getting GPUs to people who aren't botting.
 
I don't see scalping being an issue at all on this launch. There is every indication there will be a massive amount of cards at launch.
 
If scalping is an issue again, it would mean NVIDIA, the AIBs, and resellers have once again set prices too low, which sounds almost impossible. Not at these Ampere-level prices; there is no room to scalp a 4080 12GB or 16GB, barring very surprising reviews.
 
The prices are insane because Nvidia doesn’t want you to buy these cards. They expect most people to pass on them, and they want you to. Nobody has anything on the market that touches them in performance, so they are charging a crapload to make their 3000 series a deal in comparison. They want people to buy as much of the 3000 stock as possible before AMD drops their new lineup.

Though AMD did just tell investors they are expecting a $1.1B drop in sales this year, so I’m not sure how many GPUs they expect to sell either.
 
It's likely that Nvidia wanted to launch the Ada architecture this year as scheduled, despite the current situation, because the competition will be releasing new stuff this year as well. I'm also thinking that Nvidia is going with a top-to-bottom Ada launch to get rid of the excess inventory of RTX 30 GPUs in time before the release of the 4060 and 4070 cards next year, which will compete in the most important battleground for market share (in terms of sales volume for dGPUs). For the time being, it's expected that the high-end Ada SKUs, which are the first to be launched, will sell at high prices and in lower volumes, which is usual for this category/class of SKU and is aimed at the niche market of early adopters and high-end customers.

We just have to wait and see how this all plays out in 2023, when the full stack of Ada cards is out, and what Nvidia will do in reaction to where the competition lands.
 
I don't see scalping being an issue at all on this launch. There is every indication there will be a massive amount of cards at launch.
I think the point is that scalpers already made it an issue; it's nothing to do with the availability of cards. With the last launch, the higher-ups were definitely keeping note of how much people were willing to pay for a graphics card, and on this launch they decided to make the price points reflect that.
 
I think the point is that scalpers already made it an issue; it's nothing to do with the availability of cards. With the last launch, the higher-ups were definitely keeping note of how much people were willing to pay for a graphics card, and on this launch they decided to make the price points reflect that.
Could be true. Businesses do tend to price their product at the optimum price/demand level. If they can sell all they have and/or make the most profit overall at $1600, there's no reason to price lower.
 
Going from 225W to 300W, as cards did in the past, is a 1.33x or 33% increase in power.

Likewise, going from 450W to 600W is a 1.33x or 33% increase in power - somewhat typical of the percentage increase used for an OC in the past.
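A quick check that both jumps are the same relative step:

```python
# Both the old 225W -> 300W bump and the new 450W -> 600W bump are +33%.
for old_w, new_w in ((225, 300), (450, 600)):
    print(f"{old_w}W -> {new_w}W: x{new_w / old_w:.2f} (+{new_w / old_w - 1:.0%})")
# 225W -> 300W: x1.33 (+33%)
# 450W -> 600W: x1.33 (+33%)
```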

For those who OC, this can also be a rather fun time, with a video card designed with a cooler, circuitry, etc. built for 600W. If this is not a great time to sub-zero cool the case with air conditioning to punch above 600W, I don't know what is. The [H]ard in me is starting to itch for some unique, useful OCing, something one can use 24/7. No need for gaming performance or some other BS reason some folks believe in - just sheer pushing of something beyond anything expected, successfully.

I wonder if Nvidia will have a hard limit on frequency, like what AMD does with RDNA 2.
 