NVIDIA CEO Jensen Huang hints at ‘exciting’ next-generation GPU update on Tuesday, September 20th

The Hard in me is starting to itch for some unique, useful OCing, something one can use 24/7. No need for gaming performance or some other BS reason some folks believe in; just sheer, successful pushing of something beyond anything expected.
Still, there should be some point in it. Otherwise, you could save time and effort by just setting fire to piles of cash.
 
Someone on Reddit made a 4090 comparison chart:

 
Still, there should be some point in it. Otherwise, you could save time and effort by just setting fire to piles of cash.
Pushing the envelope, going beyond what others believe is possible: in a nutshell, the fun of a competitive drive that gets others to push things too can in itself be a reason for folks to get involved. As for usefulness, even competitive gamers turn down settings anyway, and lower resolutions are more CPU and monitor refresh limited than GPU limited at this point. The one game/simulation scenario where I see a 4090 and beyond as viable is VR with very high resolution, high refresh rate headsets, where folks also play regular games in them as well (something not as common). As for the new tricks of DLSS 3, I definitely have my doubts about the viability of every other frame being AI generated; I don't see how fine-grain or high-frequency textures aren't totally messed up or dramatically changed from frame to frame (hope I am wrong).

On the practical side, turning down settings that cost a lot of FPS while giving no noticeable or only trivial visual improvement can optimize the current generation of cards without needing to spend $1,700 plus. For me, gameplay is always paramount over visuals. Visuals are nice extras, but if a game has the best visuals and the gameplay sucks, it doesn't matter what the FPS is or how good it looks: the game still sucks. At this time there is no practical reason for a 4090 for me. A high-end gaming monitor is a much better choice; unfortunately, most high-end gaming monitors are lacking something significant for the cost at this time. AMD is rumoured to have DisplayPort 2.0 with RDNA 3, and since I normally keep graphics cards 5+ years, that is also a consideration. Speaking of RDNA 3, it seems wise to see what AMD has to offer; last generation, RDNA 2 actually kicked some ass for the $.
 
Someone on Reddit made a 4090 comparison chart:
This is great. Decent pickings for the Lian Li O11D if going with horizontal mounting (I was worried it was limited to the FE). Max height clearance for the O11D Evo is 169mm, so I'm thinking max card height is around 145mm once you leave about 20-25mm for cable clearance (using the PSU adapter for non ATX 3.0 PSUs); quick math below.
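For anyone else sizing a card for this case, here is the back-of-the-envelope version of that math, as a rough Python sketch; the 169 mm and 20-25 mm figures are just my estimates above, not official specs:
Code:
# Rough fit estimate for a horizontally mounted card in an O11D Evo.
# Figures are the estimates from this post, not measured or official numbers.
CASE_CLEARANCE_MM = 169          # O11D Evo max height clearance (my figure above)
CABLE_CLEARANCE_MM = (20, 25)    # assumed room needed for the PSU-adapter cable bend

best_case = CASE_CLEARANCE_MM - min(CABLE_CLEARANCE_MM)   # 149 mm
worst_case = CASE_CLEARANCE_MM - max(CABLE_CLEARANCE_MM)  # 144 mm

print(f"Max card height: roughly {worst_case}-{best_case} mm")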
 
Someone on Reddit made a 4090 comparison chart:


Well, if that is accurate, then the FE is not the only true 3-slot solution. The air-cooled Inno3D cards are longer and taller than the FE (which is probably good, because that should equate to a larger heatsink), yet still maintain the same width as the FE. It will be interesting to see if they sell those for $1600 or tack on a couple hundred extra like some other cards.
 
Those are nothing more than third party sellers trying to take advantage of morons before the launch.

I hope you're right, but I don't want a repeat of 2020.
 

Microcenter has some listed and prices are pretty much as expected: $2K for the Strix and down to MSRP for some.

 

I'm thinking these cards are going to be scarcer than toilet paper was in early 2020, especially with EVGA now gone, which basically sold the most cards.
 
I'm thinking these cards are going to be scarcer than toilet paper was in early 2020, especially with EVGA now gone, which basically sold the most cards.

Highly doubtful. They should be pretty easy to find at the prices they are going to launch at; only the few who have to have the best will buy them. People are not as flush with cash right now.
 
Agreed, I doubt we'll ever see that level of buying again. Once the impulse wave gets theirs, these are far too price prohibitive to sell well.
 
The numbers are impressive...
It's like looking at a supercar though. Sure, the performance is insane... BUT it's still not for you. lol

The 4090 has launched a new midlife-crisis class of GPU. Only instead of the 'Vette that screams "hey, I'm up for another divorce," this midlife buy says "hey, I don't even bother with the comb-over." All the more power to folks who actually buy these, but I'm still going to laugh at them. :)
 
I bet Nvidia will be launching their next GPU series as a full tower case with room for an ATX motherboard and peripherals, so gaming PCs will start with a video card then. It's also gonna require a dedicated 240V power outlet and its own circuit breaker; otherwise, we'll need to upgrade our houses' entire power grid to be able to run their stuff.
 
I was wrong, those are some impressive gains. But that power draw, good lord…. I don’t expect the 3080 class stuff to be as impressive, but I’ll happily be surprised.
 
I was wrong, those are some impressive gains. But that power draw, good lord…. I don’t expect the 3080 class stuff to be as impressive, but I’ll happily be surprised.
Considering the 4090 has 60% more cores compared to the 4080 16GB, I tend to agree.

Unlike the 3090 vs 3080... this isn't going to be even close. The 4090 will be significantly faster at 4K. HOWEVER, at 1440p, they will probably look about the same.
 
Numbers like these make me wonder what could have been if Nvidia hadn't used Samsung for the 3000 series.
The architecture is good, but I really wonder how much of these impressive gains are from the architecture and how much from the dramatically better TSMC process.
 
Numbers like these make me wonder what could have been if Nvidia hadn't used Samsung for the 3000 series.
The architecture is good, but I really wonder how much of these impressive gains are from the architecture and how much from the dramatically better TSMC process.
I don't know, that is a lot more cores vs the 3000 series. I mean, usable RT is impressive... but it takes 40% more tensor cores to do it.
I'm looking forward to reviews of the 4080. The reduction in shaders might not make a massive difference at 1440... and I am sure 4K raster will still be up on the 3090. The reduction in tensor core count to basically 3090 numbers, though... it will be interesting to see if the impressive RT performance carries over. I believe the 4080 tensor count is actually just a bit lower than the 3090's... guess we'll see how much of the RT performance is generational and how much is brute force from the insane 512 tensors on the 4090.
 
I don't know, that is a lot more cores vs the 3000 series. I mean, usable RT is impressive... but it takes 40% more tensor cores to do it.
I'm looking forward to reviews of the 4080. The reduction in shaders might not make a massive difference at 1440... and I am sure 4K raster will still be up on the 3090. The reduction in tensor core count to basically 3090 numbers, though... it will be interesting to see if the impressive RT performance carries over. I believe the 4080 tensor count is actually just a bit lower than the 3090's... guess we'll see how much of the RT performance is generational and how much is brute force from the insane 512 tensors on the 4090.
The 4080 will have about the same RT performance as the 3090.
 
Numbers like these make me wonder what could have been if Nvidia hadn't used Samsung for the 3000 series.
The architecture is good, but I really wonder how much of these impressive gains are from the architecture and how much from the dramatically better TSMC process.
I feel like most of the gains are just the move to TSMC. You couldn't push the transistor count this high on Samsung. I don't care what the Intel folks say or think; imho, TSMC is the premier fab for these sorts of products. That's bad for us. Stuff is going to get a lot more expensive, as Apple just found out.
 
I don't know, that is a lot more cores vs the 3000 series. I mean, usable RT is impressive... but it takes 40% more tensor cores to do it.
I'm looking forward to reviews of the 4080. The reduction in shaders might not make a massive difference at 1440... and I am sure 4K raster will still be up on the 3090. The reduction in tensor core count to basically 3090 numbers, though... it will be interesting to see if the impressive RT performance carries over. I believe the 4080 tensor count is actually just a bit lower than the 3090's... guess we'll see how much of the RT performance is generational and how much is brute force from the insane 512 tensors on the 4090.
It's just that it turned out Samsung 8nm wasn't even as good as TSMC 12/10nm (with a much higher failure rate), let alone 7nm. I get that Samsung had offered Nvidia one hell of a sweet price to move them over from TSMC, and I just wonder what the 3000 series could have been had Nvidia not taken that "deal".
 
I feel like most of the gains are just the move to TSMC. You couldn't push the transistor count this high on Samsung. I don't care what the Intel folks say or think; imho, TSMC is the premier fab for these sorts of products. That's bad for us. Stuff is going to get a lot more expensive, as Apple just found out.
Makes me really wonder what AMD is going to bring to the table. The node jump from TSMC 7nm to TSMC 4nm isn't nearly as big as going from Samsung 8nm to TSMC 4nm; AMD had a distinct node advantage for the last two years, and I wonder how their cards will match up now that they no longer have that advantage.
 
Makes me really wonder what AMD is going to bring to the table. The node jump from TSMC 7nm to TSMC 4nm isn't nearly as big as going from Samsung 8nm to TSMC 4nm; AMD had a distinct node advantage for the last two years, and I wonder how their cards will match up now that they no longer have that advantage.
With RDNA 3 being the first chiplet-based GPU line, I'd be more concerned about first-gen teething issues like microstuttering or weirdness caused by the effective "internal-SLI". Chiplets are a nice cost-reduction multiplier, but it remains to be seen at what performance cost.
 
Makes me really wonder what AMD is going to bring to the table. The node jump from TSMC 7nm to TSMC 4nm isn't nearly as big as going from Samsung 8nm to TSMC 4nm; AMD had a distinct node advantage for the last two years, and I wonder how their cards will match up now that they no longer have that advantage.
It's all about the chiplets this time around.
 
The 4080 will have about the same RT performance as the 3090.
We'll see. If that is true, it means Gen 4 RT cores are no better than Gen 3.
4080: 304 tensors
3090: 328 tensors
The number of hardware units is basically identical... if all the 4080 can do is equal the 3090 in RT, that means basically zero RT uplift this generation outside the one SKU that super-stacks the tensor bits (quick math below).

I suspect there is some generational improvement; it will be telling for RT how much. It needs to be substantial imo, or 4060s will be basically as useless for RT as 3060-class hardware. For RT to take off it needs to be usable in the regular-human price segments.
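Rough math on what "4080 = 3090 in RT" would imply per unit, assuming (naively) that RT throughput scales linearly with unit count at similar clocks; the counts are the ones quoted above:
Code:
# Naive per-unit comparison using the counts quoted above; this ignores clocks,
# cache, and everything else, so treat it as a ballpark only.
TENSOR_4080 = 304   # count quoted above
TENSOR_3090 = 328   # count quoted above

# If the 4080 merely ties the 3090 with slightly fewer units, each unit only
# needs to be about this much faster, i.e. essentially no per-unit uplift.
implied_per_unit_gain = TENSOR_3090 / TENSOR_4080 - 1
print(f"Implied per-unit uplift: {implied_per_unit_gain:.1%}")   # ~7.9%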
 
I'm thinking these cards are going to be scarcer than toilet paper was in early 2020, especially with EVGA now gone, which basically sold the most cards.
Highly doubtful. 3090s were often passed on in the Microcenter queues, and you could often get one during the middle of the day. There's no mining now, and with the economy being the way it is, I bet there is a lot less demand.
 
With RDNA 3 being the first chiplet-based GPU line, I'd be more concerned about first-gen teething issues like microstuttering or weirdness caused by the effective "internal-SLI". Chiplets are a nice cost-reduction multiplier, but it remains to be seen at what performance cost.
It's not really internal SLI as I understand it... I guess we'll get more details soon.

It sounds like, just like Ryzen: the CPU and all the bits are in one hunk of silicon, and it's the supporting things that don't need to be on the latest fab that are offloaded. On the CPU side that means memory controllers; on the GPU side we'll see what they offload.

Chiplets get more exciting when you think about how AMD can better tailor products. They can take the same core raster chiplets and use them for consumer gaming, workstation parts, and AI... potentially, I assume, they can also have more compute-heavy chiplets they could leave out of consumer gaming parts (reducing cost) and double up in data center packages, greatly increasing yields on compute monsters. All speculation right now. I don't think we'll have to worry about SLI-like issues though... we worried about the same stuff with the first Zens.

PS... looking more at what the 4090 is, it's looking more like a data center chip with that insane level of tensor hardware. I believe in past generations Nvidia would never have dropped that chip in a consumer card (at least one that isn't an official Titan). I suspect they believe AMD is going to have some god-level RT bits in chiplet form or something. And yes, I'll come back to this post and laugh at myself in a few weeks if AMD's cards are underwhelming. lol
 
It's not really internal SLI as I understand it... I guess we'll get more details soon.

It sounds like, just like Ryzen: the CPU and all the bits are in one hunk of silicon, and it's the supporting things that don't need to be on the latest fab that are offloaded. On the CPU side that means memory controllers; on the GPU side we'll see what they offload.

Chiplets get more exciting when you think about how AMD can better tailor products. They can take the same core raster chiplets and use them for consumer gaming, workstation parts, and AI... potentially, I assume, they can also have more compute-heavy chiplets they could leave out of consumer gaming parts (reducing cost) and double up in data center packages, greatly increasing yields on compute monsters. All speculation right now. I don't think we'll have to worry about SLI-like issues though... we worried about the same stuff with the first Zens.
First stop, CPU chiplets. Second stop, GPU chiplets. Third stop, hopefully, CPU and GPU chiplets together on a 100 fps, 1080p-capable APU. The first company to get there will do very well.
 
With RDNA 3 being the first chiplet-based GPU line, I'd be more concerned about first-gen teething issues like microstuttering or weirdness caused by the effective "internal-SLI". Chiplets are a nice cost-reduction multiplier, but it remains to be seen at what performance cost.
That's what I am afraid of, sort of... TSMC and Apple developed a really good interposer for the M1 Max dies that more or less seamlessly links the two GPUs in the M1 Ultra, and that seems to work well (~95% scaling; rough math at the end of this post). It all comes down to how well AMD manages to present and manage the resources internally, so drivers and firmware...
But RDNA 2 managed to perform more or less evenly with the Ampere architecture while having a two-generation lead in the manufacturing node; now that they are on an even footing, is RDNA 3 going to be that much of a leap over RDNA 2?
The benchmarks from the Instinct series would indicate that there isn't that huge of a leap there, but the MI250 is on 6nm, not 4nm, even though it is their first chiplet design for a GPU accelerator.
I just want to see what AMD puts out so we can view the 40xx parts with some degree of context, because currently the 4090 exists in a vacuum; it's an impressive leap over the 30xx parts, but how does it fare overall?
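And a trivial sketch of what ~95% scaling would mean, treating it naively as a fraction of ideal linear scaling; the figure is the one quoted above for Apple's interposer, not anything measured on RDNA 3:
Code:
# Assumption: "~95% scaling" means 95% of ideal linear scaling across dies.
SCALING_EFFICIENCY = 0.95   # figure quoted above for Apple's interposer
N_DIES = 2

effective = N_DIES * SCALING_EFFICIENCY
print(f"Two dies perform like ~{effective:.2f}x one die (ideal would be {N_DIES}x)")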
 
We'll see. If that is true, it means Gen 4 RT cores are no better than Gen 3.
4080: 304 tensors
3090: 328 tensors
The number of hardware units is basically identical... if all the 4080 can do is equal the 3090 in RT, that means basically zero RT uplift this generation outside the one SKU that super-stacks the tensor bits.

I suspect there is some generational improvement; it will be telling for RT how much. It needs to be substantial imo, or 4060s will be basically as useless for RT as 3060-class hardware. For RT to take off it needs to be usable in the regular-human price segments.
They did the same thing with Ampere. The tensor cores were improved, but only the 3080 and 3090 included enough of them to do better than Turing.
 
It's all about the chiplets this time around.
Not sure if having a chiplet for the IO and cache will be that big of a deal; at least according to the leaks and rumors, it is far from the GPU chiplet design that was rumored around 2020.

It is still a monolithic compute die with cache/IO chiplets around it; possibly a nice price saving from using older nodes for the cache part, especially if yields would have been bad otherwise, but I doubt it will be all about them. On the other hand, I would not worry about that design causing many SLI-like issues.
 