Will you jump on 40xx Super Video cards or would you rather wait for 50xx series to come out?

That is 2 chips connected together, maybe à la M1 Ultra or some other technique to combine 2 dies together (2x104B). Look how much bigger it is:

[attachment: B100 vs H100 die size comparison]

H100 to B100 is more in the 80B -> 104B range per die. It is on a revised TSMC 4NP node vs their previous special TSMC 4N node, not a new TSMC 3nm-branded one. The giant gains shown come more from architecture, software and networking changes; for tasks that do not take advantage of those, the gains are more normal ones. I am not sure if they will use something like that and make giant chips for the gaming segment (we can assume not; if they do it, it will be 2 small dies together to keep yields really high, not some monster 1200mm² (2x600) gaming chip).

I am not sure how much of a clue that gives, but if we assume Nvidia goes with that node for the desktop parts, this could be the worst node update in a while: Ampere gained 82% in density and Lovelace was a ridiculous 278%, while going from TSMC 4 to TSMC 4 this time around could be 30%.
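For what it's worth, a quick back-of-envelope using the published transistor counts and die sizes (which are themselves estimates, as pointed out below) reproduces those jumps:

```python
# Back-of-envelope density comparison from published transistor counts
# and die sizes (all figures are public estimates, not measurements).
dies = {
    "TU102 (Turing, TSMC 12nm)":   (18.6e9, 754.0),  # transistors, mm^2
    "GA102 (Ampere, Samsung 8nm)": (28.3e9, 628.0),
    "AD102 (Lovelace, TSMC 4N)":   (76.3e9, 608.5),
}

names = list(dies)
for prev, cur in zip(names, names[1:]):
    tr_p, area_p = dies[prev]
    tr_c, area_c = dies[cur]
    ratio = (tr_c / area_c) / (tr_p / area_p)
    print(f"{cur}: {tr_c / area_c / 1e6:.1f} MTr/mm^2, {ratio:.2f}x vs {prev}")
```

That prints 1.83x for GA102 over TU102 (the 82% gain) and 2.78x for AD102 over GA102, which suggests the "278%" reads as a 2.78x ratio (a 178% gain) rather than a 278% gain.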

It could be a massive architecture change too; they have the R&D budget and have had a long time to work on it, since mid 2020 at the very least.


Lots of good points!
 
LOL at what actually ended up happening.
The most wrong part was RDNA 3 too (it was also wrong that they pushed AD102 above 450W). But there are 2 different kinds of being wrong in that line of work:

Having a bad source (or making things up) vs. having a good source that ends up being wrong.

At that level of complexity it starts to look a bit like alchemy. We only have transistor count estimates because we do not actually know how many there are; actual people at Nvidia could have been surprised by how well Lovelace on TSMC 4N ended up working in the power envelope and die size they chose (the plan for a 600-watt 4090 could have been true until quite late), and AMD's chiplet dies could have worked much better in simulation than in real life.

AMD/Nvidia/Intel being wrong is different from a leaker having a bad source, and leakers are not in a position to be better than their sources.
 

Look I still believe that the 5090 will be a monster, but not because I trust MLID. This is Nvidia we are talking about, and they take no prisoners. The idea that they would purposely gimp themselves with slower products so their competition can catch up is absolutely laughable. I just take whatever MLID claims then dial it back a few notches as my expectation, so for the 5090 probably something around 45-50% faster rather than what he's claiming.
 
Sure, he could be right, but he could also be wrong. From his Lovelace leak he was both right and wrong, so yeah, I guess anything's possible. AD102 is indeed about 65-70% faster than Ampere, so that's spot on, but read the second point and you'll probably LOL at what actually ended up happening.


[attachment: the Lovelace leak screenshot being discussed]
RDNA 3 has some hardware flaws or other issues that showed up when they pumped up the clock speeds, so it runs quite a bit slower than they expected. The 7900 XTX was supposed to run at 3GHz or more, which would put it around 4090 territory in raster (10-15% more performance than it has at the release clock speeds).
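A rough sanity check on that 10-15% figure, assuming the shipped card's typical game clock sits around 2.5GHz (my assumption) and that games only realize part of a clock increase:

```python
# Rough sketch: what a 3.0 GHz 7900 XTX might gain over the shipped
# ~2.5 GHz game clock (the 2.5 GHz figure and the scaling factors
# below are assumptions, not measured data).
shipped_clock = 2.5  # GHz, typical game clock as shipped (assumption)
target_clock = 3.0   # GHz, the rumoured design target

clock_ratio = target_clock / shipped_clock  # 1.20x raw clock uplift

# Games rarely scale 1:1 with core clock (memory bandwidth, CPU limits).
for scaling in (1.0, 0.75, 0.5):  # fraction of the clock gain realized
    perf_gain = (clock_ratio - 1) * scaling
    print(f"scaling {scaling:.2f}: ~{perf_gain:.0%} faster")
```

At a 0.5-0.75 realization factor, the 20% clock bump lands right in that 10-15% window.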
 
I've heard this before, but I question its validity. My Nitro+ hits 3GHz+ stock and I used to run a 4090 on this rig... performance is nowhere close.

[image: 7950x3d_7900xtx.png - benchmark screenshot]
 
Some games run better on Nvidia while others run better on AMD, as long as RT is off. Frostbite engine games and Ubisoft games typically run better on AMD, while Northlight engine, Ego engine and some others run better on Nvidia. AMD's weaknesses are RT performance, where they are a generation behind Nvidia, and the lack of native low-latency support in games. The performance gap depends on the engine the game is running on.
 
Of course, but my point is this magic 3GHz speed doesn't make it compete with the RTX 4090.
 

Yeah, no matter how you spin it, Navi 31 is nowhere near fast enough to make a 4090 have to juice its power draw to get "close" to its raster performance, LMAO. And now the question is, why didn't MLID find out about this before the launch and "leak" it just like everything else he leaked? If he had been the only one to say that Navi 31 actually has some serious design flaws and will not come close to matching a 4090, while everyone else said Navi 31 was a 4090 killer, he could have gained some serious reputation. Yet he just paraded the same false info everyone else did. So all this talk about how he does have insider information, but his insider information is actually bad so he's not to blame, just... lol.
 
We could also look at it the other way around - people (including NVIDIA) thought RDNA3 was going to be a beast, so they dialed up the 4090 higher than they probably would have otherwise to make sure they came out on top. So I think all the rumors were correct, but no one knew that AMD was having issues with RDNA3...
 

I find that hard to believe. Surely they would have been conducting internal testing, and yet somehow nobody at AMD knew that Navi 31 wasn't as fast as it was projected to be until the product actually came out?
 
I think they actually found the artifact issue just a few months before launch and had to dial down the clocks quite a lot. Basically, they found it so late that they couldn't go back and fix it, and just had to release it. That is also most likely the reason you only saw RDNA3-focused driver releases for a long time: they were trying to fix it in drivers.
 
I think one other aspect here is that Nvidia did not know that the final product would:

1) Be that efficient
2) Not scale well:

[image: jcgmt0ua68t91.jpg - performance vs. power scaling chart]

Returns diminish hard after 360 watts, making pushing it to 550w (600w for the fancy editions, or your OC) not worth much (see the sketch after this list).

3) Have issues with the new 600 watt connector
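A minimal sketch of that diminishing-returns point, assuming the common rule of thumb that power grows roughly with the cube of clock in the upper voltage range (a simplification on my part, not measured 4090 data):

```python
# Toy model of diminishing returns: near the top of the voltage/clock
# curve, power tends to grow ~cubically with clock, so performance
# grows only ~with the cube root of power (rule of thumb, not data).
base_power = 360.0  # watts, where returns start falling off hard

for power in (450.0, 550.0, 600.0):
    perf_gain = (power / base_power) ** (1 / 3) - 1
    print(f"{power:.0f}W: +{power - base_power:.0f}W for ~{perf_gain:.0%} more performance")
```

So roughly +90W for ~8% and +240W for ~19% even in this optimistic toy model; published 4090 power-limit tests suggest the real curve is flatter still.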

As for AMD, they did not know:

1) That chiplets for logic would not work
2) That chiplets even just for IO would be quite the issue
3) The actual performance of the final product once you factor in 1 and 2, and how well they kept it secret until the end... who knows (they called what ended up being a simple ~380mm², 256-bit competitor the 7900 XTX... maybe they had faith until the very end).

But AMD has yet to launch a second gaming GPU built like Navi 31, and there is not even a rumoured product that will use that approach (RDNA 4 is heavily rumoured to be a monolithic die).

The power delivery system, the cooling solutions, etc. seem to show that Nvidia wanted, at some point, to at least keep the option open to crank it. But maybe it is not just that AMD did not deliver; it is also that Lovelace does not do much with the extra power, so it is not worth the trouble.

why didn't MLID find out about this before the launch
Think about it: among the people who would know, whom can you trust and validate, and who would leak it, and to do what?
 