NVIDIA CEO Jensen Huang hints at ‘exciting’ next-generation GPU update on Tuesday, September 20th

You expect that when "up to" is used in marketing it will not be the average case (AMD's latest presentation was much better about this), but saying that calling these 8K gaming cards was ridiculous, because a game that didn't exist when the 3090 launched doesn't play well with ray tracing and everything at ultra, is a bit of the flip side of the same coin here.

I have seen a different marketing graph, in another video, that showed both the best-case scenario and the regular-case scenario, which would have been much more accurate.

i.e. there are many games at launch where the 4080 12GB will do a bit worse than a 3090 Ti according to NVIDIA, and on old titles without RT on, etc., we should not expect a 2x gain everywhere for the 4090.

This seems a less manipulative marketing bit than just showing the favorable part (especially until we know the value, in terms of subjective gaming experience, of those AI-interpolated frames).
Again it basically is just a marketing pitch. It's like people forget when Turing launched we had little to no actual RTX games to use the tech with.
 
Missing from that is the settings... DLSS PERFORMANCE mode, which is poopoo in terms of IQ.
It will be fun to see the reviews without it, but if you have DLSS frame generation enabled on the 4xxx and you do relative benchmarks against a 3xxx series card, it is good that they not only enabled DLSS on the 3xxx card but used the most poopoo setting, to cheat as little as possible here.
 
Doom Eternal was their poster boy for a reason, easy enough to run:


Battlefield 5, Dirt Rally 2, Death Stranding... I imagine there is a list of titles that were recent enough in 2020 that you could run at reasonable settings with DLSS, close enough to 60fps for VRR to be OK at "8K". The moment you have HDMI 2.1 and enough VRAM, you can play at 8K; it just becomes a question of which settings make it smooth enough.

Boy you really are trying to defend nvidia bad. Do you work for them? Have u ever tried to play at 8k and use dlss quality? Anything else is just pure ass.

Sorry, but your spin is just making our point even more. All bullshit from nvidia marketing.
 
When were they ever the same price? IIRC the 2080 was ~$800 and the Ti ~$1,200. The cheapest Ti I remember was $999, and those were the dual slot/low profile kind that weren't very good, as they throttled a bunch or had to run the fans near 100%.

I stuck with my 3x GTX680 SLI all the way until 2018 when that particular mining boom faded. I had my eye on the 1080Ti for over a year waiting for the price to come down, and it had (Just like the 3090 Ti price is coming down now), but by then the 2080 had also just launched. The 2080 and 1080Ti were absolutely the same price at that point.

 
I stuck with my 3x GTX680 SLI all the way until 2018 when that particular mining boom faded. I had my eye on the 1080Ti for over a year waiting for the price to come down, and it had (Just like the 3090 Ti price is coming down now), but by then the 2080 had also just launched. The 2080 and 1080Ti were absolutely the same price at that point.

Scratch that, I read your post wrong and thought you were saying the 2080 and 2080 Ti were the same price. Yes, the 2080 was priced pretty much exactly at the 1080 Ti and was considered not a great buy at the time because of that.
 
Boy you really are trying to defend nvidia bad. Do you work for them? Have u ever tried to play at 8k and use dlss quality? Anything else is just pure ass.
I am not sure what you are saying, I mean this is true:


You can say it is marketing bullshit, that the 8K experience is good only in a limited number of high-FPS titles (Doom, Death Stranding, etc.) or in games that play well at low FPS where trading detail settings for pixels can make sense, a la Flight Sim. I was just answering the specific question of naming any titles that were recent enough in 2020 that could play at 8K with DLSS on, near 60 fps.
 
Moore's law is dead
Just a joke, but still, the 4090 GPU has around 2.7 times more transistors per mm² after a 2-year interval, which is significantly better than Moore's law (a doubling of density every 2 years), I think. It must have been a while since that last happened.
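
To put a quick number on that, here is a back-of-the-envelope sketch using the commonly reported transistor counts and die sizes for GA102 and AD102 (treat the figures as approximate):

```python
# Rough density comparison: GA102 (RTX 3090, 2020) vs AD102 (RTX 4090, 2022).
# Figures are the commonly reported ones, so treat them as approximate.
ga102 = {"transistors_m": 28_300, "area_mm2": 628}   # RTX 3090
ad102 = {"transistors_m": 76_300, "area_mm2": 608}   # RTX 4090

density_old = ga102["transistors_m"] / ga102["area_mm2"]   # ~45 M transistors / mm^2
density_new = ad102["transistors_m"] / ad102["area_mm2"]   # ~125 M transistors / mm^2

ratio = density_new / density_old   # ~2.8x in roughly 2 years
moore = 2 ** (2 / 2)                # Moore's law pace: 2x per 2 years

print(f"density gain: {ratio:.2f}x vs Moore's law {moore:.1f}x over the same 2 years")
```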
 
Just a joke, but still, the 4090 GPU has around 2.7 times more transistors per mm² after a 2-year interval, which is significantly better than Moore's law (a doubling of density every 2 years), I think. It must have been a while since that last happened.
But they also essentially jumped 3 nodes: Samsung 8nm was only slightly better on paper than the TSMC 12nm process (actual results were a mixed bag), and then there are TSMC 10, 7, 5, and now 4 (a very refined 5nm process). In terms of generational leaps for the manufacturing node, they are making one hell of a jump here, so it's not just a 2-year gap.
 
Jensen said that Moore's law is dead and that this generation's chips are more expensive to manufacture.
https://www.windowscentral.com/hard...is-dead-following-high-priced-rtx-3090-launch

He also said that the gpu numbering system is just a number. So, move on guys, lol

Ok... crystal ball time.

The RTX 3000 cards were cheap because Samsung gave Nvidia a sweetheart deal on their 8nm node. The drawback? Heat and power consumption. However, Nvidia also learned that level of heat and power consumption is OK, so they jacked up performance as far as they can on the RTX 4000 cards. But then, a disaster struck... Nvidia was left with a surplus of high-end RTX 3090 and 3090 Ti parts due to the mining crash, and with those in the market, Nvidia is competing with itself.

The solution? Fire sale on the RTX 3090/Ti cards. Clear out the high-end stock, then jack up the prices on the new "high-end" RTX 4090 and 4080 16/12GB models in order to recover some of the lost revenue. People who were originally looking to buy a brand new high-end RTX 3090 at $1000 or less will now be forced into an inferior product at the 4080 12GB, where the profit margins are STUPENDOUS!!! And then... the 3080 and below all continue to sell at the same MSRP they've always sold at, making Nvidia a ton of money. Problem solved. Screw you, consumer.

Nvidia has an obligation to their shareholders. Nvidia's market cap is currently $100 Billion higher than Intel and AMD.... combined. You don't get to that level by accident.

Mark my words: At 4K, the 4080 12GB will be inferior to the 3090/Ti. There's a possibility that the 4080 16GB will be faster, but if so, it will probably be by 10-20% max, and even then, it won't be a clean sweep victory. The 3090 has a stupid amount of memory bandwidth at its disposal.

Welcome to Turing 2.0. Enjoy the ride.

/crystalball
 
We've been here before.

August 2018 - Gamers Nexus: Nvidia uses a combination of forcing AIB partners to absorb shipments of Pascal, RTX naming, and high pricing of 2000 series to solve Pascal oversupply issue
https://www.reddit.com/r/hardware/comments/9b6ltr/gamers_nexus_nvidia_uses_a_combination_of_forcing/

Read the comments for a good time.
"So they will probably cut prices once they run out of 1000 series inventory. As if we needed another reason not to preorder..."
"People were talking about weird conspiracies around RTX pricing, Nvidia already admitted the massive oversupply issue so it makes perfect sense to not offer a price/performance increase while they are trying to shift the excess."
"Those who want the cards ASAP will end up paying more. The 2080 costs as much as the 1080ti and the 2080ti as much as the Titan X Pascal. Then after a while, when the people who absolutely 0must have the latest now are depleted, as are the more fanatical Nvidia r fans, then they lower the price."
 
However, Nvidia also learned that level of heat and power consumption is OK, so they jacked up performance as far as they can on the RTX 4000 cards.
We will see in the reviews, but the official numbers seem to agree that they aimed at somewhat similar TDPs to the 3000 series:

4090 24: 450 watts
3090 TI: 450 watts

3090 24: 350 watts
3080 TI: 350 watts

4080 16: 320 watts
3080 10: 320 watts

3070 TI: 290 watts
4080 12: 285 watts


2080 TI: 250 watts

Either it is the artificial creation of a gap between the 4090 and the 4080 (to avoid having the 3090 sit close to the 3080 again despite a similar gaming difference, in a generation where the expensive Titan-type card is the big non-gamer option rather than an A6000/Hopper-type part), or they got feedback that the 3080's 320 watts was the limit for regular high-end gamers and OEMs.
 
Mark my words: At 4K, the 4080 12GB will be inferior to the 3090/Ti. There's a possibility that the 4080 16GB will be faster, but if so, it will probably be by 10-20% max, and even then, it won't be a clean sweep victory. The 3090 has a stupid amount of memory bandwidth at its disposal.
According to NVIDIA's own marketing slides that seems to be the case: the 4080 12GB lands under the 3090 Ti at 4K even with DLSS on, when it cannot use the fancy new optical-flow/RT cores. That GPU is around 60% of the size of a 3090. And I do not imagine they cherry-picked titles that made the 4080 12GB look bad.
 
We will see in the reviews, but the official numbers seem to agree that they aimed at somewhat similar TDPs to the 3000 series:

4090 24: 450 watts
3090 TI: 450 watts

3090 24: 350 watts
3080 TI: 350 watts

4080 16: 320 watts
3080 10: 320 watts

3070 TI: 290 watts
4080 12: 285 watts


2080 TI: 250 watts
Honestly, it's a good metric to aim at: at the same power draw you are then at least providing a generational uplift at that segment, and it's the one thing that remains consistent. I mean, if you pull more power and have less performance than the previous gen, you F'ed up; if you pull the same power but have a 15-20% performance improvement with more features, then that's a win. Nvidia is sort of showing here that names are meaningless, so power draw, heat generation, and relative performance are the only measurable quantities you can really look at.
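
A minimal sketch of that perf-per-watt argument; the relative-performance figures below are placeholders for illustration, not benchmark results:

```python
# Toy perf-per-watt comparison at a fixed power budget.
# The relative-performance numbers are placeholders, not measured results.
def perf_per_watt(relative_perf: float, tdp_watts: float) -> float:
    return relative_perf / tdp_watts

old_gen = perf_per_watt(relative_perf=1.00, tdp_watts=320)  # e.g. a 320 W card as the baseline
new_gen = perf_per_watt(relative_perf=1.20, tdp_watts=320)  # hypothetical +20% at the same 320 W

print(f"efficiency uplift: {new_gen / old_gen - 1:.0%}")    # 20% better perf/W at the same power
```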
 
Nvidia response to the 4080 naming (from the Q&A):

The GeForce RTX 4080 16GB and 12GB naming is similar to the naming of two versions of RTX 3080 that we had last generation, and others before that. There is an RTX 4080 configuration with a 16GB frame buffer, and a different configuration with a 12GB frame buffer. One product name, two configurations.

The 4080 12GB is an incredible GPU, with performance exceeding our previous generation flagship, the RTX 3090 Ti and 3x the performance of RTX 3080 Ti with support for DLSS 3, so we believe it’s a great 80-class GPU. We know many gamers may want a premium option so the RTX 4080 16GB comes with more memory and even more performance. The two versions will be clearly identified on packaging, product details, and retail so gamers and creators can easily choose the best GPU for themselves.
 
The GeForce RTX 4080 16GB and 12GB naming is similar to the naming of two versions of RTX 3080 that we had last generation,

That sounds false to me.
3080 12GB: 28,300 million transistors on a 628 mm² GA102 with 8,960 cores.

3080 10GB: 8,704 cores on the same GA102.

The 12GB was released much later; there's some feeling that GDDR6X and the 8nm process yields simply got better over that time.

The 4080 12GB is on an AD104 with 7,680 cores on a 192-bit memory bus, vs the AD103 with 9,728 cores on a 256-bit bus.

Maybe it's more like the 3060 vs 3060 Ti or something else, but I'm not sure how similar it really is to the two versions of the 3080; the two 4080s seem to be different in a significant way here.
 
I like how they only mention the memory difference as if they're the same card besides that.

I'm legitimately angry at their answer. Copy of my reply:

The 3080 10 and 12 GB models were the same GPU, GA102. Even the GTX 1060 3 and 6 GB were both GP106. The two 4080 models are AD103 and AD104, they are not the same GPU. You can't compare the naming schemes in this situation.
So why is the 3080 Ti called the "3080 Ti" but the 4080 16 GB isn't the "4080 Ti"? The difference between the 3080 10/12 GB and the 3080 Ti is much smaller than the two 4080's. It makes no sense.
 
Nvidia really said that the 4080 16/12GB model is the same as what they did with the 3080 12/10 GB model?!?!?! They are so unbelievably arrogant. The 3080/Ti/3090/Ti were ALL GA102, while the 4080 16 and 12GB models might not even be the same GPU die... also, they are 256-bit and 192-bit memory bus... and then you've got the 4090 which has a 384-bit bus.

They are NOT the same thing!!!
 
Inflated pricing aside (which we all knew was coming) - naming has always been completely arbitrary and nonsensical apart from maybe the X2 suffix which meant you’re getting two dies. I don’t understand the outrage here.
 
Inflated pricing aside (which we all knew was coming) - naming has always been completely arbitrary and nonsensical apart from maybe the X2 suffix which meant you’re getting two dies. I don’t understand the outrage here.
We are geeks on a geek forum who can be anal about irrelevant things like naming.

That said, the outrage around the naming is about the perceived use of the name to push through a price that is perceived to be too high.

Had the 4080 12GB "MSRP" been the 3070 Ti MSRP + inflation (around $660-700), the reaction would have been significantly different than at $900, which is more than 3080 pricing + inflation.
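
For what it's worth, the rough math behind those numbers, a sketch using the publicly listed launch MSRPs and an assumed ~12% cumulative inflation (pick your own factor):

```python
# Back-of-the-envelope: what the 4080 12GB "should" cost if priced like past tiers.
# MSRPs are launch list prices; the inflation factor is an assumption.
msrp_3070_ti = 599   # June 2021 launch MSRP
msrp_3080    = 699   # Sept 2020 launch MSRP
inflation    = 1.12  # assumed cumulative inflation since those launches

print(f"3070 Ti + inflation: ~${msrp_3070_ti * inflation:.0f}")  # ~$670
print(f"3080    + inflation: ~${msrp_3080 * inflation:.0f}")     # ~$780, still under $900
```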
 
If naming was irrelevant, Nvidia wouldn't feel the need to rename the 4070 at last minute for a price hike.

They'd simply release a $900 4070 and let the specs do the talking. Why some feel the need to excuse this idk.

Glad I don't have a strong attachment to PC component companies.
 
Nvidia response to the 4080 naming (from the Q&A):
What a disingenuous bunch of bullshit from him. It is only being called a 4080 so they can charge more money and everyone knows that it is truly a 70 class card. And their own fucking slides show it's not even matching the 3090 TI in games without the dlss 3.0 nonsense to factor in. Take out your calculator and that means the 4080 12 GB card is only going to be about 15 to 20% faster than the 3080 in games not using the dlss 3.0 which is the most laughable improvement ever seen in a generation.
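
The calculator version of that argument, using rough placeholder ratios rather than measured numbers: if the 4080 12GB lands at or just under a 3090 Ti in non-DLSS-3 titles, and a 3090 Ti is roughly 20-25% ahead of a 3080 at 4K, you end up somewhere in the 15-20% range.

```python
# Rough chain of relative performance; the ratios here are assumptions, not benchmarks.
perf_3080      = 1.00
perf_3090_ti   = 1.22                  # assume ~22% faster than a 3080 at 4K
perf_4080_12gb = perf_3090_ti * 0.97   # assume it lands just under the 3090 Ti without DLSS 3

gain_over_3080 = perf_4080_12gb / perf_3080 - 1
print(f"4080 12GB vs 3080 (no DLSS 3): ~{gain_over_3080:.0%}")   # ~18%
```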
 
If naming was irrelevant, Nvidia wouldn't feel the need to rename the 4070 at last minute for a price hike.

They'd simply release a $900 4070 and let the specs do the talking. Why some feel the need to excuse this idk.

Glad I don't have a strong attachment to PC component companies.
I’m not attached to any PC component company. In fact I rant about pretty much all of them pretty regularly due to their anti-consumer horseshit.

In this case I just can’t understand why anyone gets mad about the naming. Well I understand why the techtoobers do it because engagement and ad revenue but please if we’re really “geeks” or proper “enthusiasts” here we’d be discussing specs and not some mundane marketing and consumer-oriented shit like naming.
 
Inflated pricing aside (which we all knew was coming) - naming has always been completely arbitrary and nonsensical apart from maybe the X2 suffix which meant you’re getting two dies. I don’t understand the outrage here.

Most computer consumers are not consummate tech enthusiasts. This has been done intentionally to confuse the average consumer into thinking they've bought the same product with just less VRAM, when they are actually getting a much lower-end model that would normally be a different product, especially since, as mentioned by J2C, these differences are typically not listed anywhere on the box at the store.

That is why the naming matters more than anything else in this thread: the average consumer is being hoodwinked, and companies should be forced to list the specs on the box.
 
I’m not attached to a PC component company. In fact I rant about pretty much all of them pretty regularly due to their anti-consumer horseshit.

In this case I just can’t understand why anyone gets mad about the naming. Well I understand why the techtoobers do it because engagement and ad revenue but please if we’re really “geeks” or proper “enthusiasts” here we’d be discussing specs and not some mundane marketing and consumer-oriented shit like naming.
The name change was to get a price hike in. That's pretty anti consumer.
 
The name change was to get a price hike in. That's pretty anti consumer.
I can agree to that. FWIW I’m very amused that a $900 GPU comes with 192b memory bus in $current_year. What an embarrassment.

Also that all their marketing slides are about DLSS performance which is basically just making frames up in a “best guess” manner. Wtf has happened to this hobby/industry like holy shit.
 
Is the average consumer dropping $1000 on a video card and doing zero research but still drawing a line in the sand that $1200 is too much?
 
Is the average consumer dropping $1000 on a video card and doing zero research but still drawing a line in the sand that $1200 is too much?
Who's dropping $1k on a GPU? Your average buyer is nowhere near that, despite Nvidia trying to normalize it.
 
Who's dropping $1k on a GPU? Your average buyer is nowhere near that, despite Nvidia trying to normalize it.

Exactly why I don't think the naming convention matters. The people that are dropping a grand on a GPU are either savvy and doing 5 minutes of research or they have expendable income and aren't going to buy the 12GB 4080 card anyway.
 
Is the average consumer dropping $1000 on a video card and doing zero research but still drawing a line in the sand that $1200 is too much?
Exactly why I don't think the naming convention matters. The people that are dropping a grand on a GPU are either savvy and doing 5 minutes of research or they have expendable income and aren't going to buy the 12GB 4080 card anyway.
That's a pretty anti-consumer stance there.
 
Ok... crystal ball time.

The RTX 3000 cards were cheap because Samsung gave Nvidia a sweetheart deal on their 8nm node. The drawback? Heat and power consumption. However, Nvidia also learned that level of heat and power consumption is OK, so they jacked up performance as far as they can on the RTX 4000 cards. But then, a disaster struck... Nvidia was left with a surplus of high-end RTX 3090 and 3090 Ti parts due to the mining crash, and with those in the market, Nvidia is competing with itself.

The solution? Fire sale on the RTX 3090/Ti cards. Clear out the high-end stock, then jack up the prices on the new "high-end" RTX 4090 and 4080 16/12GB models in order to recover some of the lost revenue. People who were originally looking to buy a brand new high-end RTX 3090 at $1000 or less will now be forced into an inferior product at the 4080 12GB, where the profit margins are STUPENDOUS!!! And then... the 3080 and below all continue to sell at the same MSRP they've always sold at, making Nvidia a ton of money. Problem solved. Screw you, consumer.

Nvidia has an obligation to their shareholders. Nvidia's market cap is currently $100 Billion higher than Intel and AMD.... combined. You don't get to that level by accident.

Mark my words: At 4K, the 4080 12GB will be inferior to the 3090/Ti. There's a possibility that the 4080 16GB will be faster, but if so, it will probably be by 10-20% max, and even then, it won't be a clean sweep victory. The 3090 has a stupid amount of memory bandwidth at its disposal.

Welcome to Turing 2.0. Enjoy the ride.

/crystalball
No thanks. I'll just opt out and keep my 1070. Just because I have money to spend doesn't mean I'm going to be stupid about it and overpay. They can take their robbery prices and stuff them you know where.
 