NVIDIA CEO Jensen Huang hints at ‘exciting’ next-generation GPU update on Tuesday, September 20th

$900 MSRP for the 4080. I doubt we'll see a 4070 for $500, which is about what I can afford and what I'm willing to pay. If we're looking at $700-780 RTX 4070s, then PC gaming just got a lot harder to get into for younger/poorer people.
Let's be real... the 4080 12GB is a 4080 in name only. It has a 192-bit memory bus. Guaranteed that's either a cut-down 104 chip or a full-fat 106 chip.

The 4080 12GB is a joke.
 
Let's be real... the 4080 12GB is a 4080 in name only. It has a 192-bit memory bus. Guaranteed that's either a cut-down 104 chip or a full-fat 106 chip.

The 4080 12GB is a joke.

I guess Nvidia did some internal math and decided to make the 4070 the 4080 12GB, and bump the price up $400.
 
Anyone else feel like the 3060 Ti and 3070 are going to be gamers' best buys for $500 or less for quite some time?
Along with the 6700/6800 types, probably, depending on what used higher-end models look like on eBay.

Considering a lot of games will be made to run perfectly fine on the Xbox Series X/PS5, those cards could end up being perfectly fine for most games and most people for a long time. Or maybe a bunch of Unreal Engine 5 games will change that perception.
 
To rub salt in the wound, that's their MSRP for a non-existent FE model. AIBs will tack more on.
Yup. Honestly, I'm pretty disgusted at these moves from Nvidia. Not surprised, though, considering the greed. When your CEO is in the mindset of “why are these guys [EVGA and other Nvidia partners] making money when they’re not doing much?”... then has models restructured to elevated price points (the 4080 12GB being technically a 4070)... I'm not at all surprised, just disgusted.
 
I guess Nvidia did some internal math and decided to make the 4070 the 4080 12GB, and bump the price up $400.
I realized a long time ago that sales isn't necessarily about producing the best product or selling at the lowest price, but about creating perception (Apple is, after all, a marketing company that just happens to have roots in tech and god-level supply chain expertise). Why call it a 4070 when they can change one number and create the perception that buyers aren't compromising by settling for a '70?

Of course the current 30-series supply is also part of their release and segmentation calculus, and a 4070 that was both faster *and* cheaper than a 3090 Ti would just cannibalize the shit out of the 3090 Ti and everything else.
 
Well, I'm a little late to the game here, but based on what I'm seeing and reading, the ball is really in AMD's court. Intel doesn't have any skin in this game yet, and AMD is the only party in a position to challenge Nvidia on price or performance, so we can only sit back and watch.
 
Let's be real... the 4080 12GB is a 4080 in name only. It has a 192-bit memory bus. Guaranteed that's either a cut-down 104 chip or a full-fat 106 chip.

The 4080 12GB is a joke.
It's only a joke if enough people don't buy them; then prices would decrease, availability would increase, and it would find equilibrium. But if enough people buy them, then they're just supplying what the market will support, which is generally the goal of a business. /devil's advocate
 
I think the RTX 3060 is going to be the new GTX 1060. Nvidia doesn't seem to be too aware of their market situation if these are the prices they're going with.
I expect they are intentionally pricing this gen high, because why not? They have an absurd amount of 3000-series chips in stock that they need to burn off, so why not put insane markups on the 4000 series to drive people to the 3000s? Then, when those are gone or AMD forces them to reconsider, they can bring prices down. This essentially lets them trickle out the 4000s, since demand will be lower thanks to the higher prices, and stockholders are placated thanks to the profit margins. Nvidia has to work hard right now to avoid another lawsuit like the one they took in 2018 after the last crypto bust, when they had the overstock of 1000-series parts.
 
Funny enough, EVGA supposedly got as far as board samples of the 4090: two FTW3s and an XC3, and had started on a Kingpin 4090.

Everything from EVGA's CEO, when considered as a whole, makes me think he carefully chose his words to do everything BUT burn the bridge, and that he just wants to sit the 40-series out, wait for the market to improve, and continue to re-evaluate.

Calling in JayzTwoBraincells and GN Steve to kick up PR drama is not the behavior of a CEO that's actually decided 100% to never make new GPUs again and slow fade the company. I'm probably wrong, but there were enough mixed messages to make me wonder.
Well, I might be proven wrong, but there were no mixed messages. He was asked point-blank if EVGA was just skipping the 4000 series and coming back the next generation, and he said no more GPUs. If they do come back, he lied.
 
I was planning on putting together a totally new build this fall, but now I'm thinking I'll just push that off until next spring. I figure by that point we'll know more about AMD's V-Cache Zen 4s, PCIe 5.0 PSUs should be more readily available, DDR5 prices should have come down, and the value/scarcity of these cards will be a known thing. Too many unknowns and potentially expensive pitfalls right now.
 
The 4080 12GB also has only 7680 CUDA cores, while the 4080 16GB has 9728.

You are damn right the 4080 12GB is a 4070 for $900.
It's not even a 4070. The last 70-class card to be less than 256-bit was an FX 5700 at 128-bit. Obviously everything was lower in 2003!

In my mind the stack should be:

4090 - 24GB 384-bit
4080 - 20GB 320-bit
4070 - 16GB 256-bit (what the 4080 16GB is)
4060 - 12GB 192-bit (what the 4080 12GB is)

Though I suppose that means they can't sandbag a 4080 Ti in there next year with this approach, but this stack makes way more sense to me.
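
If it helps make the bus-width argument concrete, here's a quick back-of-the-envelope sketch in Python. The 21Gbps GDDR6X speed on every tier is an assumption for illustration (real SKUs run different memory clocks); the point is just how directly bus width scales bandwidth:

```python
# Peak memory bandwidth scales linearly with bus width.
# Sketch assumes 21 Gbps GDDR6X on every tier, which is an assumption --
# actual memory clocks vary by SKU.
GBPS_PER_PIN = 21  # gigabits per second, per pin

def bandwidth_gb_s(bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: pins * Gbps per pin / 8 bits per byte."""
    return bus_width_bits * GBPS_PER_PIN / 8

for label, bus in [("384-bit", 384), ("320-bit", 320),
                   ("256-bit", 256), ("192-bit", 192)]:
    print(f"{label}: {bandwidth_gb_s(bus):.0f} GB/s")
# 384-bit -> 1008 GB/s, 320-bit -> 840, 256-bit -> 672, 192-bit -> 504.
# The "4080 12GB" gets half the bandwidth of the 4090.
```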
 
With the price on the 4080 cards (especially the one that should be a 4070), it looks like Nvidia is more concerned about not undercutting the 3000 series than about selling the new cards, which makes you wonder how many 30xx GPUs they have left at this point. The fact that, based on specs, the halo card appears to be the best value really drives this point home. I suspect we'll see price drops on the 4080s once they clear out that 30xx backlog, but who knows when that will be if that excess is truly driving these prices. I'm sure it will sell out at launch because they always have a limited supply, but it will be interesting to see if the glut of 30xx GPUs just turns into a glut of 40xx GPUs.

I'd love to see AMD come out swinging and undercut them, but I doubt they'll have enough supply to do much of that, and they'll likely adjust their pricing to nearly match Nvidia's. It would be nice if Intel became a legit third player in the market, but I'm not holding my breath on that.
 
It's not even a 4070. The last 70-class card to be less than 256-bit was an FX 5700 at 128-bit. Obviously everything was lower in 2003!

In my mind the stack should be:

4090 - 24GB 384-bit
4080 - 20GB 320-bit
4070 - 16GB 256-bit (what the 4080 16GB is)
4060 - 12GB 192-bit (what the 4080 12GB is)

Though I suppose that means they can't sandbag a 4080 Ti in there next year with this approach, but this stack makes way more sense to me.

Do we have confirmation it's actually only 192 bits wide? The GDDR5X and GDDR6 specs include support for chips in 12Gb sizes (gigabit, i.e. 1.5GB), so 12GB could be done using eight of those on a 256-bit bus instead of 6x2GB chips on 192 bits. IIRC I read somewhere recently that one of the GDDR6 vendors had added that size to their listed offerings.
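
For anyone who wants the arithmetic behind that: each GDDR6/GDDR6X device has a 32-bit interface, so the chip count fixes the bus width. A minimal sketch of the two ways to reach 12GB:

```python
# Each GDDR6/GDDR6X device exposes a 32-bit interface, so chip count
# determines total bus width. Two ways to build a 12GB card:
BITS_PER_CHIP = 32

def memory_config(chips: int, gb_per_chip: float) -> str:
    capacity = chips * gb_per_chip
    bus = chips * BITS_PER_CHIP
    return f"{chips} x {gb_per_chip}GB chips -> {capacity:g}GB on a {bus}-bit bus"

print(memory_config(6, 2.0))  # 16Gb chips: 12GB on a 192-bit bus
print(memory_config(8, 1.5))  # 12Gb chips: 12GB on a 256-bit bus
```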
 
Do we have confirmation it's actually only 192 bits wide? The GDDR5X and GDDR6 specs include support for chips in 12Gb sizes (gigabit, i.e. 1.5GB), so 12GB could be done using eight of those on a 256-bit bus instead of 6x2GB chips on 192 bits. IIRC I read somewhere recently that one of the GDDR6 vendors had added that size to their listed offerings.
It has been confirmed to be 192-bit.
 
Do we have confirmation it's actually only 192 bits wide? The GDDR5X and GDDR6 specs include support for chips in 12Gb sizes (gigabit, i.e. 1.5GB), so 12GB could be done using eight of those on a 256-bit bus instead of 6x2GB chips on 192 bits. IIRC I read somewhere recently that one of the GDDR6 vendors had added that size to their listed offerings.
Does it even matter anymore? I mean, there's no standard for what bus width a given SKU tier should have. If there were, AMD's lineup would look insane.
 
I expect they are intentionally pricing this gen high, because why not? They have an absurd amount of 3000-series chips in stock that they need to burn off, so why not put insane markups on the 4000 series to drive people to the 3000s? Then, when those are gone or AMD forces them to reconsider, they can bring prices down. This essentially lets them trickle out the 4000s, since demand will be lower thanks to the higher prices, and stockholders are placated thanks to the profit margins. Nvidia has to work hard right now to avoid another lawsuit like the one they took in 2018 after the last crypto bust, when they had the overstock of 1000-series parts.

I believe this is how they're doing things. They'll release them at high prices, and the people who have a lot of money and must have the greatest will pay. They'll probably keep prices high for a few months and use the holiday season to sell off the last of the 30-series. Prices on those may drop a bit but will hold in the midrange/low-end market. Early next year Nvidia may drop some prices (but who knows by how much? Probably not much) and bring out the 4070 and 4060 in spring.

I do wonder what the 4070's price and release date will be. I wonder if they're holding it until Q1 2023.
 
I still have a 3070, so DLSS 3.0 being RTX4000 only is a real bummer!
That's not the bummer. Not getting SER (Shader Execution Reordering) on Ampere, if it could somehow be done at the driver level, is the real letdown. DLSS 3.0 is just fancy frame interpolation, which you already get for free on some TVs (if used as a monitor).
 
Do we have confirmation it's actually only 192 bits wide? The GDDR5X and GDDR6 specs include support for chips in 12Gb sizes (gigabit, i.e. 1.5GB), so 12GB could be done using eight of those on a 256-bit bus instead of 6x2GB chips on 192 bits. IIRC I read somewhere recently that one of the GDDR6 vendors had added that size to their listed offerings.
From Nvidia's website:

[Screenshot of Nvidia's spec table, showing a 192-bit memory interface for the RTX 4080 12GB]
 
With the price on the 4080 cards (especially the one that should be a 4070), it looks like Nvidia is more concerned about not undercutting the 3000 series than about selling the new cards, which makes you wonder how many 30xx GPUs they have left at this point. The fact that, based on specs, the halo card appears to be the best value really drives this point home. I suspect we'll see price drops on the 4080s once they clear out that 30xx backlog, but who knows when that will be if that excess is truly driving these prices. I'm sure it will sell out at launch because they always have a limited supply, but it will be interesting to see if the glut of 30xx GPUs just turns into a glut of 40xx GPUs.

I'd love to see AMD come out swinging and undercut them, but I doubt they'll have enough supply to do much of that, and they'll likely adjust their pricing to nearly match Nvidia's. It would be nice if Intel became a legit third player in the market, but I'm not holding my breath on that.
I hear rumors that their oversupply of 3000-series cards dwarfs the oversupply they had with the 1000 series; supposedly, had demand returned to 2018-2019 levels, the oversupply alone could have reasonably fed it. I don't know how true that is, but their pricing strategy seems to corroborate what most people expected they'd have to do as a result of such an oversupply. So while it may be an exaggeration, I'm not sure it's an overly large one.
 
I believe this is how they're doing things. They'll release them at high prices, and the people who have a lot of money and must have the greatest will pay. They'll probably keep prices high for a few months and use the holiday season to sell off the last of the 30-series. Prices on those may drop a bit but will hold in the midrange/low-end market. Early next year Nvidia may drop some prices (but who knows by how much? Probably not much) and bring out the 4070 and 4060 in spring.

I do wonder what the 4070's price and release date will be. I wonder if they're holding it until Q1 2023.
The problem is how much better a 4070 would perform than a 3080 or 3080 Ti, which are flooding the second-hand market since they were the most popular/profitable GPUs for miners to gobble up. You can find new 3080s in the $700s and used ones for half that. Can't blame Nvidia for not wanting to compete against their own offerings at fire-sale prices.
 
My thinking is that Nvidia is still in la-la land, or perhaps they misjudged pricing during the 3000-series release given the market conditions at the time (e.g., crypto and supply). Set the price high initially, then reduce it when RDNA3 comes out or demand changes; that's easier than setting the price too low initially and then raising it on the same exact model. Best to wait the next few months for clarity on real market pricing. I'm sure someone somewhere will track this.

Edit: Nvidia has the option of releasing Ti or Super models to justify price changes later on. That gives them some wiggle room.
 
The problem is how much better a 4070 would perform than a 3080 or 3080 Ti, which are flooding the second-hand market since they were the most popular/profitable GPUs for miners to gobble up. You can find new 3080s in the $700s and used ones for half that. Can't blame Nvidia for not wanting to compete against their own offerings at fire-sale prices.
If the 4080 12GB is the same performance as a 3090, and 3090s can be had for $650-700 used or $800-900 new, AND the 4080 12GB only uses 35 fewer watts while having half the memory of the 3090... it's a no-brainer to get a 3090, or hell, a 3090 Ti.
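
To put that 35W difference in perspective, here's a rough payback sketch. The $200 price gap, $0.15/kWh rate, and four hours of load per day are all assumptions; plug in your own numbers:

```python
# How long the 4080 12GB's 35W savings would take to pay back a price gap
# over a used 3090. All inputs below are illustrative assumptions.
PRICE_GAP_USD = 200    # e.g. ~$899 new 4080 12GB vs ~$700 used 3090
WATTS_SAVED = 35
USD_PER_KWH = 0.15
HOURS_PER_DAY = 4

kwh_per_year = WATTS_SAVED / 1000 * HOURS_PER_DAY * 365
savings_per_year = kwh_per_year * USD_PER_KWH
print(f"~${savings_per_year:.2f} saved per year")                      # ~$7.67
print(f"~{PRICE_GAP_USD / savings_per_year:.0f} years to break even")  # ~26 years
```

At those numbers the efficiency gain is a rounding error next to the price difference.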
 
If the 4080 12GB is the same performance as a 3090, and 3090s can be had for $650-700 used or $800-900 new, AND the 4080 12GB only uses 35 fewer watts while having half the memory of the 3090... it's a no-brainer to get a 3090, or hell, a 3090 Ti.
Exactly their strategy here to clear out that inventory.

Also, the 3080 and 4080 are both 320W? That can't be right, given that graph showing Ada's power draw being quite a bit higher than Ampere's?
 
Exactly their strategy here to clear out that inventory.

Also, the 3080 and 4080 are both 320W? That can't be right, given that graph showing Ada's power draw being quite a bit higher than Ampere's?
They can be. A 4080 12GB could easily operate at 320W, but the 16GB version looks to have a hungrier chip that needs the full 700W.
The names this time around confuse me. I don't like the 4080 having a different chip depending on the memory config; that should warrant different names, 4080 and 4070 respectively. But that's just me being old and not wanting to have to remember more things.
 
Exactly their strategy here to clear out that inventory.

Also, the 3080 and 4080 are both 320W? That can't be right, given that graph showing Ada's power draw being quite a bit higher than Ampere's?
The 3080 was listed as a 350W card on Nvidia's own website. Also, the 4080 16GB is such a cut-down 4090 that, yeah, I can see it being only a 320W card. The 4080 is even less of a value to me because of that price tag.

Bottom line, the only (and I really hate to say it) value card in this release is the 4090. You get what, roughly 70% more CUDA cores and 8GB more memory for $400 more than a 4080 16GB.
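
A minimal sketch of that value math, using the announced MSRPs and published CUDA core counts (a deliberately crude metric that ignores clocks, memory bandwidth, and features):

```python
# CUDA cores per dollar at announced MSRPs -- a crude value metric that
# ignores clocks, memory bandwidth, and everything else.
cards = {
    "RTX 4090":      (16384, 1599),
    "RTX 4080 16GB": (9728, 1199),
    "RTX 4080 12GB": (7680, 899),
}
for name, (cores, msrp_usd) in cards.items():
    print(f"{name}: {cores / msrp_usd:.1f} cores per dollar")
# 4090: 10.2, 4080 16GB: 8.1, 4080 12GB: 8.5 -- the halo card leads the stack.
```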
 
Dunno if it's been posted here already, but the unveiling of DLSS 3.0 was so confusing: it sounded like games with DLSS 3.0 would only work on 40-series GPUs (correct) and that there would be no backwards compatibility with DLSS 2.x on 30- and 20-series cards (incorrect, thankfully).



DLSS 3 consists of 3 technologies – DLSS Frame Generation, DLSS Super Resolution, and NVIDIA Reflex. DLSS Frame Generation uses the RTX 40 Series high-speed Optical Flow Accelerator to calculate the motion flow that is used for the AI network, then executes the network on 4th Generation Tensor Cores. Support for previous GPU architectures would require further innovation in optical flow and AI model optimization. DLSS Super Resolution and NVIDIA Reflex will of course remain supported on prior generation hardware, so a broader set of customers will continue to benefit from new DLSS 3 integrations. We continue to train the AI model for DLSS Super Resolution and will provide updates for all RTX GPUs as our research


DLSS Super Resolution is a key part of DLSS 3, and is under constant research and continues to be honed and improved. DLSS Super Resolution updates will be made available for all RTX GPUs. We are encouraging developers to integrate DLSS 3, which is a combination of DLSS Frame Generation, DLSS Super Resolution, and NVIDIA Reflex. DLSS 3 is a superset of DLSS 2. While DLSS Frame Generation is supported on RTX 40 Series GPUs, all RTX gamers will continue to benefit from DLSS Super Resolution and NVIDIA Reflex features in DLSS 3 integrations.

So yes, for DLSS 3.0 games, if you have a 20- or 30-series card, you just won't get the DLSS 3.0-exclusive frame generation, but you'll still get all the DLSS 2.x features. Whew 😮‍💨
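
Condensing Nvidia's statement into a quick reference (the structure here is mine, for illustration; the support facts come from the quoted statement above):

```python
# DLSS 3 component support by RTX generation, per Nvidia's quoted statement.
# The dict layout is illustrative, not an Nvidia API.
DLSS3_COMPONENTS = {
    "DLSS Frame Generation": {"RTX 40"},                      # needs Ada's Optical Flow Accelerator
    "DLSS Super Resolution": {"RTX 20", "RTX 30", "RTX 40"},  # all RTX GPUs
    "NVIDIA Reflex":         {"RTX 20", "RTX 30", "RTX 40"},  # all RTX GPUs
}

def supported_features(gpu_generation: str) -> list[str]:
    """List the DLSS 3 components available on a given RTX generation."""
    return [name for name, gens in DLSS3_COMPONENTS.items()
            if gpu_generation in gens]

print(supported_features("RTX 30"))  # ['DLSS Super Resolution', 'NVIDIA Reflex']
```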

Edit: why is editing quote breaks after inserting a quote with this forum software so fucking broken?
 
The 3080 was listed as a 350W card on Nvidia's own website. Also, the 4080 16GB is such a cut-down 4090 that, yeah, I can see it being only a 320W card. The 4080 is even less of a value to me because of that price tag.

Bottom line, the only (and I really hate to say it) value card in this release is the 4090. You get what, roughly 70% more CUDA cores and 8GB more memory for $400 more than a 4080 16GB.
Correct. As I said before, 4090 or go AMD.
 
I still have a 3070, so DLSS 3.0 being RTX4000 only is a real bummer!
Yeah, DLSS 3.0 requires some specific new hardware logic. Nvidia has mentioned, though, that many of the improvements will be rolled down to DLSS 2.x, so there's that at least.
 