NVIDIA RTX 50 “GB202” Gaming GPU reportedly features the same TSMC 4NP process as B100

Pretty cool



Source: https://videocardz.com/newz/nvidia-...ly-features-the-same-tsmc-4np-process-as-b100
 
Not sure why folks are excited about the 512 memory bus, my 390X had it in 2015. The Fury X had 1 Tb/s HBM and it was a failure at 4K.
 
It's not the bus width but the bandwidth.

A 4070 with a 192-bit bus has more bandwidth than a 512-bit bus had in 2015...

What a 512-bit bus does tell you, compared to a previous gen with a 384-bit bus, is that a bandwidth increase that is at least proportional is likely (512/384 ≈ 1.33, so roughly a 33% minimum increase), and it probably even exceeds that guesstimate.
And if they are also going from GDDR6X to GDDR7, there is likely even more of an increase.
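As a rough sketch of that arithmetic: peak bandwidth is just bus width times per-pin data rate divided by 8. The per-pin rates below are illustrative assumptions, not confirmed RTX 50 specs.

Code:
# Peak GB/s = pins * (Gb/s per pin) / 8 bits per byte.
# The per-pin data rates here are illustrative assumptions, not confirmed specs.
def peak_bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin):
    return bus_width_bits * data_rate_gbps_per_pin / 8

print(peak_bandwidth_gb_s(384, 21.0))  # 4090-style 384-bit GDDR6X @ 21 Gb/s -> 1008.0 GB/s
print(peak_bandwidth_gb_s(512, 28.0))  # hypothetical 512-bit GDDR7 @ 28 Gb/s -> 1792.0 GB/s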
 
Not sure why folks are excited about the 512 memory bus, my 390X had it in 2015. The Fury X had 1 Tb/s HBM and it was a failure at 4K.
Well, the 4090 is only 384-bit?

Keep in mind that memory bus bits here are better thought of as lanes or pins. So this will be interesting, because GDDR7 has a far more robust signalling scheme and more flexible arrangement options, which allows it to be matched better to the use case, in addition to the faster transfer rates and lower latency per pin.
 
The amount of "may" speculation here makes it very much "yes, sure, maybe" type of talk (especially the multi-die approach; if the B100 is only dual-die, it's hard to imagine the 102 will be).

I think what has people excited about 512-bit on the 5090 is the implication for the rest of the stack and what it would mean for bandwidth and VRAM quantity: if the 5080 is 384-bit and we assume it's not a 12GB card, is it a 24GB card, and so on down to the 5060...

If the 5080 is 256-bit again... that deflates the enthusiasm a bit for the crowd that isn't in the $2,000 USD GPU market.
 
I don't buy that the 5090 is going to be a 512-bit card whatsoever and if it is, I'd expect $2500+. That's just not Nvidia's MO with consumer grade graphics. The last time they released anything north of a 384-bit bus on GeForce products (for non-HBM cards) was all the way back in 2008-2009 with the 200-series which had 448-bit and 512-bit cards.

Also what's with everyone in here saying 384-bit was too low for the 4090? Are you the same people that thought 192-bit and 128-bit were perfectly fine for the 70 and 60 cards this past gen?
 
I don't buy that the 5090 is going to be a 512-bit card whatsoever and if it is, I'd expect $2500+. That's just not Nvidia's MO with consumer grade graphics. The last time they released anything north of a 384-bit bus on GeForce products (for non-HBM cards) was all the way back in 2008-2009 with the 200-series which had 448-bit and 512-bit cards.

Also what's with everyone in here saying 384-bit was too low for the 4090? Are you the same people that thought 192-bit and 128-bit were perfectly fine for the 70 and 60 cards this past gen?
I don't think it's unreasonable to think that GDDR7 is all of the speed-up it needs in terms of memory bandwidth, but we'll see. I could totally see them selling the card for $2499 and it selling out TBH. I won't be buying one but it'll at least be an interesting piece of tech.
 
I could totally see them selling the card for $2499 and it selling out TBH
If memory bandwidth continues to be a very important element of ML compute workloads, a 512-bit GDDR7 5090... if the environment is anything like it is now, it's hard to imagine how it would not sell out.
 
If memory bandwidth continues to be a very important element of ML compute workloads, a 512-bit GDDR7 5090... if the environment is anything like it is now, it's hard to imagine how it would not sell out.
Yeah there's no world in which people will only use them for gaming, small ML companies will buy these in droves. You're better off as a consumer waiting for the actual consumer grade cards at this point.
 
I don't buy that the 5090 is going to be a 512-bit card whatsoever and if it is, I'd expect $2500+. That's just not Nvidia's MO with consumer grade graphics. The last time they released anything north of a 384-bit bus on GeForce products (for non-HBM cards) was all the way back in 2008-2009 with the 200-series which had 448-bit and 512-bit cards.

Also what's with everyone in here saying 384-bit was too low for the 4090? Are you the same people that thought 192-bit and 128-bit were perfectly fine for the 70 and 60 cards this past gen?
$2500 for a company like Nvidia? That's like a down payment at this point. Especially when you consider that the 4090 was their best selling card. Also what's wrong with having 512-bit? More is always better.
 
what's wrong with having 512-bit? More is always better.
It costs more in every way, obviously. More always does.

the 4090 was their best selling card.
Source? One would have guessed an entry-level laptop part.

On that point, here is the Lovelace order on the Steam hardware survey:
https://store.steampowered.com/hwsurvey/videocard/

4060 laptop
4070
4060
4060 ti
4070 ti
4090
4080
4050 laptop

That's around what I would have guessed. Obviously there could be a higher percentage of 4090s not going into gaming rigs, but I imagine there was a good stretch of time when they limited how many AD102 dies were "wasted" on "low value" 4090s, instead sending them into L40/RTX 6000 family products as much as they could and selling them at such ridiculous prices.
 
The 4090 is BY FAR the most expensive card I've owned, real crazy.
But anything less than a 4090 for me at this point is peasant territory.
Like, if you're not playing Cyberpunk 2077 at 4K ultra at 70-90 fps, are you even playing it? (Kidding, but you get my point.)
So anyway, for 30-40% more performance I'd totally consider upgrading to a 5090 (well, not for $2,500; don't give Nvidia ideas).
 
I'll be honest, I'm not even going to be the least bit shocked if this is a $2500 card. After the 4090 launched at $1600 and the card had line-ups of people waiting to buy it, I immediately assumed the 5090 is going to launch at $1800-$2000, so if someone said $2500 to me, I'd be like yeah, sure, why not. Gamers told Nvidia what they're willing to spend, and they'll happily charge them that. That's exactly what I'd do if I were them. It's not as if AMD's got anything that will challenge this, so have at it.

In any case, yeah, early rumours, so I don't believe anything until I see it. I wouldn't be shocked if this is accurate, since the top-tier card is the only one in the stack that Nvidia doesn't gimp on in terms of things like memory. Launching a "12GB 4080" with a 192-bit memory bus was an absolute joke, but then they called it a 4070 Ti and knocked off $100 and suddenly their customer base and many people on this forum immediately defended the honour of Jensen by insisting 12GB was totally enough, so I also wouldn't be shocked to see that happen again either, with a "super" mid-cycle refresh that includes the VRAM we were previously told we actually didn't need.
 
I don't buy that the 5090 is going to be a 512-bit card whatsoever and if it is, I'd expect $2500+. That's just not Nvidia's MO with consumer grade graphics. The last time they released anything north of a 384-bit bus on GeForce products (for non-HBM cards) was all the way back in 2008-2009 with the 200-series which had 448-bit and 512-bit cards.

Also what's with everyone in here saying 384-bit was too low for the 4090? Are you the same people that thought 192-bit and 128-bit were perfectly fine for the 70 and 60 cards this past gen?
The bandwidth per pin of GDDR7 is double that of GDDR6X. The bus width doesn't need to be north of 384-bit for a significant increase in bandwidth. They're still on 16Gb memory chips for GDDR7, meaning 24GB cards are still going to be 384-bit. The only way you'll see 512-bit is if they make a 32GB card, or use 8Gb memory chips on 16GB cards.
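As a quick sketch of that chip-count math: each GDDR chip is a 32-bit channel, and capacity is chips times per-chip density. The 512-bit configurations below are hypothetical, just following the same arithmetic.

Code:
# Each GDDR chip provides a 32-bit channel; capacity = chips * density (Gbit) / 8.
# The 512-bit configurations are hypothetical, not announced products.
def memory_config(num_chips, chip_density_gbit):
    return num_chips * 32, num_chips * chip_density_gbit / 8

print(memory_config(12, 16))  # (384, 24.0) -> 24GB on a 384-bit bus, 4090-style
print(memory_config(16, 16))  # (512, 32.0) -> 512-bit with 16Gb chips means a 32GB card
print(memory_config(16, 8))   # (512, 16.0) -> or a 16GB card if 8Gb chips were used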
 
I would not use the word "believe", but I think some things are likely to be accurate when they need to be planned well in advance and told to a lot of people because of procurement: stuff like GDDR7 being used, TSMC being the foundry, etc. As we get closer, the power and cooling requirements too, since AIBs need to be in the loop at some point.

Things like price, how the stack of SKUs ends up looking, VRAM quantity, and the performance uptick are stuff that can easily be changed at any time and that Nvidia themselves are not certain of yet (I mean, the 4080 could have been a cut-down AD102 that performed 70% higher than a 3080 had they wanted or needed it to, and that's something they could have decided almost right before the official announcement). A lot of rumors are about things that are not even decided yet.
 
The 4090 is BY FAR the most expensive card I've owned, real crazy.
But anything less than a 4090 for me at this point is peasant territory.
Like, if you're not playing Cyberpunk 2077 at 4K ultra at 70-90 fps, are you even playing it? (Kidding, but you get my point.)
So anyway, for 30-40% more performance I'd totally consider upgrading to a 5090 (well, not for $2,500; don't give Nvidia ideas).
The 4090 is effectively cheaper for most people than the 3090 was. A big part of the 3090's life cycle was during the tariff, and only EVGA was below $2k during the tariff period. I paid $2,200 for my Asus Strix. My MSI Suprim Liquid X was only $1,750. I'm not even factoring in inflation. I would need the 5090 to be 60%+ faster than a 4090 to upgrade again. The 4090 is such a good card; for most games I'm near or above 4K120 with DLSS.
 
The 4090 is effectively cheaper for most people than the 3090 was.
Depends on electricity prices and average NiceHash returns too; I imagine that for many people, Ampere cards ended up among the cheapest GPUs ever, even when bought at ridiculously high price points.
 
I would need the 5090 to be 60%+ faster than a 4090 to upgrade again. The 4090 is such a good card; for most games I'm near or above 4K120 with DLSS.
Yeah, but like us 3090 users you're still thinking your card will be properly supported, like I did when I bought mine. Nvidia will screw you over somehow, just like they did us. Give you the new features like path tracing and DLSS 3? No, why would we let you have those when you didn't buy a new 40x0 card? Why? Obviously because it would run slower on your year-old card. It doesn't matter that it could run, just slower; we can't have you benchmarking it and showing that it isn't unplayable and tanking our 40x0 card sales, so none of these new features for you.
 
Not sure why folks are excited about the 512 memory bus, my 390X had it in 2015. The Fury X had 1 Tb/s HBM and it was a failure at 4K.
Radeon HD 2900 series had it too

R600 w/ 512-bit Memory interface in 2007, bro
 
The 4090 is effectively cheaper for most people than the 3090 was. A big part of the 3090's life cycle was during the tariff, and only EVGA was below $2k during the tariff period. I paid $2,200 for my Asus Strix. My MSI Suprim Liquid X was only $1,750. I'm not even factoring in inflation. I would need the 5090 to be 60%+ faster than a 4090 to upgrade again. The 4090 is such a good card; for most games I'm near or above 4K120 with DLSS.
But what if you could have a steady 120 with DLAA instead?
Buy now!
 
It costs more in every way, obviously. More always does.
The card will likely cost way too much, just like the 4090. How much do you think it costs Nvidia to make a 4090? The manufacturing cost is likely under $300.
Source? One would have guessed an entry-level laptop part.
I'm going by the amount of times it's sold out. The price of the 4090 has been creeping up, which means Nvidia doesn't have a problem moving product. We're talking about a $1,600 RTX 4090 that now sells for over $2k. The RTX 5090 won't be cheap. A lot of the 4090s are probably not used for gaming either. It's not like the 4060, where Nvidia has to drop the price just to get people interested.
 
The card will likely cost way too much, just like the 4090. How much do you think it costs Nvidia to make a 4090? The manufacturing cost is likely under $300.
The die or a complete card? Complete-card rumours were around $800-900, I think, with the fancy cooling solution. There can be a hit on the margin, but if the competition needs to make a 512-bit GDDR7 GPU to have a chance to compete... without an established market to sell the very best dies really high, and a market to still sell the missed dies at a good price... prices will stay high, because that competition will not exist.


I'm going by the amount of times it's sold out.
That does not tell you how many are bought, how many are in stores, or how many are made; it does not tell you anything at all really, only that in-store stock is zero and that they have no problem selling what is made.

We're talking about a $1,600 RTX 4090 that now sells for over $2k.
Yes, but the L40, using the same chip, was selling for what, $10k? Big incentive not to let any die good enough to be in one of those end up in a mere $2,000 4090, as long as the L40s were selling.
 
The card will likely cost way too much, just like the 4090. How much do you think it costs Nvidia to make a 4090? The manufacturing cost is likely under $300.
A 300mm wafer has a surface area of roughly 70,000 sq mm; given that the 4090 die is 609 mm², the absolute best Nvidia can get, assuming 100% yield, is 114 chips from that wafer. Given the current TSMC cost of ~$7,000 for the wafer and another ~$20k to process it, Nvidia is looking at roughly $236 for just the chip, and then there are the packaging and validation costs on that chip before we even get to the rest of the board and its components.

$300 doesn’t even get a finished chip out the door let alone the remainder of the board components.
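For what it's worth, here is that back-of-envelope math as a sketch, using the assumed prices above (not published TSMC figures):

Code:
# Die-cost estimate using the assumed prices from the post above.
WAFER_AREA_MM2 = 70_000        # ~300mm wafer (pi * 150^2 is roughly 70,686 mm^2)
DIE_AREA_MM2 = 609             # AD102 (RTX 4090)
WAFER_COST_USD = 7_000         # assumed raw wafer price
PROCESSING_COST_USD = 20_000   # assumed processing cost

max_dies = WAFER_AREA_MM2 // DIE_AREA_MM2                    # 114, at 100% yield with no edge loss
cost_per_die = (WAFER_COST_USD + PROCESSING_COST_USD) / max_dies
print(max_dies, round(cost_per_die))                         # 114 dies, ~$237 each before packaging and test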
 
A 300mm wafer has a surface area of roughly 70,000 sq mm; given that the 4090 die is 609 mm², the absolute best Nvidia can get, assuming 100% yield, is 114 chips from that wafer. Given the current TSMC cost of ~$7,000 for the wafer and another ~$20k to process it, Nvidia is looking at roughly $236 for just the chip, and then there are the packaging and validation costs on that chip before we even get to the rest of the board and its components.

$300 doesn’t even get a finished chip out the door let alone the remainder of the board components.
Yeah, 4090s are an incredibly good deal; you're looking at ~$500 for a yielded, tested, and packaged chip plus margins, then ~$100 of VRAM, plus PMICs, cooler, board, testing, packaging, and distribution on the AIB side... AIB margins are not so good at $1,500.
 
Radeon HD 2900 series had it too

R600 w/ 512-bit Memory interface in 2007, bro
Ah yes, the 512-bit token ring bus, and AMD's first attempt at a high-end GPU after they purchased ATI.
Things did not get better until the next generation.
 
Ah yes, the 512-bit token ring bus, and AMD's first attempt at a high-end GPU after they purchased ATI.
Things did not get better until the next generation.
Unfortunate waste of the 512-bit memory interface with that broken AA resolve

Unbelievable blunder

https://www.anandtech.com/show/2231/11


Though I’m looking forward to this next generation from nvidia with 512-bit again

Bigger numbers is better
 
Radeon HD 2900 series had it too

R600 w/ 512-bit Memory interface in 2007, bro
The R600 used sixteen 2Gb memory chips to reach 4GB total. Each GDDR channel is 32 bits wide, so 16 * 32 = 512 bits. Bus width isn't some magical arbitrary number that some people seem to think it is. The total bus width is simply a function of how many memory chips are being used. It's why the 4060 and 4060 Ti are "only" 128-bit (4x 16Gb chips * 32 bits per chip = 128 bits).
A 300mm wafer has a surface area of roughly 70,000 sq mm; given that the 4090 die is 609 mm², the absolute best Nvidia can get, assuming 100% yield, is 114 chips from that wafer. Given the current TSMC cost of ~$7,000 for the wafer and another ~$20k to process it, Nvidia is looking at roughly $236 for just the chip, and then there are the packaging and validation costs on that chip before we even get to the rest of the board and its components.

$300 doesn’t even get a finished chip out the door let alone the remainder of the board components.
Don't forget that R&D is factored into the final price, as well. Bill of materials is typically the smallest piece of the actual cost of the product to the manufacturer, which too many people seem to not understand.
 
The fact that TSMC N3 seems to have been a miss for Nvidia-TSMC for the new generation (and it shows: where there was no actual new architecture or software change, performance per mm and per watt do not seem superb) was quite played down in the presentation...

Could we end up seeing the same on the gaming side... (if not, and we ever enter a world where they are on different nodes, volume could be quite something for both products).
 
The fact that TSMC N3 seems to have been a miss for Nvidia-TSMC for the new generation (and it shows: where there was no actual new architecture or software change, performance per mm and per watt do not seem superb) was quite played down...

Could we end up seeing the same on the gaming side...
TSMC N3 is a miss for a lot of things; logic-heavy chips get extraordinarily expensive there, with few if any performance gains and a higher-than-average (for TSMC) failure rate.
Apple is still taking most of that node, and delays at other facilities plus delays to the rollout of N2 mean N3 is still heavily constrained.

Nvidia is already concerned that TSMC N4 won't be able to meet demand; no way in hell could N3 meet it given TSMC's existing manufacturing and packaging facilities.
 
The R600 used sixteen 2Gb memory chips to reach 4GB total.
Are you sure about that?
No GPU in 2007 had 4GB of VRAM, and the HD 2900 (R600) normally had 512MB on it.

In fact, one of the earliest AMD Radeon GPUs that I remember even having 4GB of VRAM was the Fury X, and that was in 2015.
 
4870 & 4850 launched in '08, had 2GB. None with 4 though, except maybe the dual radeons, by technicality.
 
4870 & 4850 launched in '08, had 2GB. None with 4 though, except maybe the dual radeons, by technicality.
Are you sure about that?
No GPU in 2007 had 4GB of VRAM, and the HD 2900 (R600) normally had 512MB on it.

In fact, one of the earliest AMD Radeon GPUs that I remember even having 4GB of VRAM was the Fury X, and that was in 2015.
Even 3GB in 2011+ seemed rare

780 Ti I think had 3GB in 2013
 
Don't forget that R&D is factored into the final price, as well. Bill of materials is typically the smallest piece of the actual cost of the product to the manufacturer, which too many people seem to not understand.
Yeah, but I don't have numbers for the R&D budgets associated with that silicon, and if I can show that TSMC alone physically costs more than $300, then that is sufficient.

Besides, R&D costs and how they relate to a specific product line are some sort of nebulous accounting voodoo that is well beyond me.
It's there and it exists, but I can at best guess at it and just hope I don't anger the accounting spirits that haunt these forums, who would tear it to shreds.
 
Are you sure about that?
No GPU in 2007 had 4GB of VRAM, and the HD 2900 (R600) normally had 512MB on it.

In fact, one of the earliest AMD Radeon GPUs that I remember even having 4GB of VRAM was the Fury X, and that was in 2015.
Yeah, I was thinking about Hawaii XT when I typed that. The 2900 XT still had sixteen memory chips. 256 Mb GDDR3, in that case.
 
Even 3GB in 2011+ seemed rare

780 Ti I think had 3GB in 2013
There was a 3GB version of the GTX 580. The next wasn't until the 3GB GTX 780 and GTX 780 Ti in 2013.

On the AMD side, the HD 7970 and 7950 both had 3GB in 2012.
 