
5080 Reviews

That said... it does paint a picture. So let's look at the similar die classes. In this case, we'd be talking about the x04 and, since Ada, x03 dies.
If this is not the same node, I am not sure how instructive that would be. Depending on what this exercise is trying to do, using transistor count would be tricky, as logic density usually increases much faster than I/O or cache.
 
If this is not the same node, I am not sure how instructive that would be. Depending on what this exercise is trying to do, using transistor count would be tricky, as logic density usually increases much faster than I/O or cache.
People sure are trying real hard in this thread to just avoid saying the obvious.

Nvidia really did put out some impressive stuff on the software side. The new transformer models are in my opinion the most impressive evolution of DLSS.

On the other hand, the hardware outside the 5090 is very much phoned in. You're telling me they couldn't do a ~500mm² die for the 5080 and, without question, at least match or beat the 4090 with it? They could have. They chose not to. "Oh, it costs more." Sure... and I expect we would have seen that. On the other hand, the 4080 Super got cheaper than the 4080, and with the 50-series reusing the same custom 4N node as the 40-series, it's now a matured node, so I'd expect yields, and therefore the cost factor, to be better. Still, I'd expect a larger-die 5080 to be a bit more expensive. But if it unquestionably met or beat the 4090, I think it would have been OK.

As it is currently, I don't see how this is anything but phoned in for the 5080 and below on the hardware side. It's not like much really changed in the architecture as far as gaming is concerned anyway. Tensor got an upgrade, and INT32 work can now run through the entire cluster of CUDA cores in an SM per clock cycle instead of down its own path. That doesn't help games much, since gaming leans more on FP32. In that sense, Blackwell is not much different from Ada, so yes, they could have done better by just spec'ing each model better, to make them true next-gen replacements for their outgoing 40-series predecessors. It isn't because it's "impossible".
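As a rough sanity check on the "it costs more" argument, here is a back-of-the-envelope die cost sketch. The wafer price, the defect density, and the ~378mm² GB203-class die size below are all assumptions for illustration, not Nvidia's real figures, and the classic Poisson yield model is a simplification (it ignores edge loss and scribe lines, so it underestimates cost):

```python
import math

def die_cost(die_area_mm2, wafer_cost_usd=17000, defect_density_per_cm2=0.07):
    """Rough cost per good die on a 300 mm wafer using a Poisson yield model.

    All inputs are illustrative assumptions, not official figures.
    """
    wafer_area_mm2 = math.pi * (300 / 2) ** 2           # ~70,685 mm^2
    dies_per_wafer = wafer_area_mm2 / die_area_mm2      # naive packing, no edge loss
    defect_density_mm2 = defect_density_per_cm2 / 100.0
    yield_fraction = math.exp(-defect_density_mm2 * die_area_mm2)
    good_dies = dies_per_wafer * yield_fraction
    return wafer_cost_usd / good_dies

# Compare a ~378 mm^2 GB203-sized die to a hypothetical ~500 mm^2 5080 die.
print(round(die_cost(378), 2))
print(round(die_cost(500), 2))
```

Under these assumptions the bigger die comes out meaningfully more expensive per good chip, but not wildly so, which is roughly the "a bit more expensive" trade-off described above.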

 
It's worth noting that TSMC's 10FF (10nm) node has better yields and higher density than Samsung 8nm does even now.
Samsung 8nm was still nearly double the density of TSMC 12nm, but TSMC 12nm was less dense than Intel 14nm. In fact, Intel 14nm sits fairly happily in the middle between TSMC 12 and Samsung 8, though it wasn't as dense as TSMC 10.

So what I am reinforcing is the names still don't matter.

The more you know!
 
You're telling me they couldn't do a ~500mm² die for the 5080 and, without question, at least match or beat the 4090 with it?
Sure, and make a ~400mm² one called a 5070 to fit the laptop line, calling it a 5090 Mobile. How is that any less phoning it in? Is there anything impressive about die size (especially in the context of a conversation about easy gains)?

People are not impressed by the 5090's performance from 30% more cores, 33% more power, and 70% more bandwidth either. Nor were people that impressed by the much bigger and much more expensive 2060...

There is nothing impressive, or less phoned in, about a bigger die, imo, short of some new tech that manages to make bigger dies cheaper. That would be like finding it impressive that the 5090 is faster than the 5080. A bigger, more power-hungry, and more expensive 5080 would have been less phoned in to some, and worse than what they actually did to others...
 
Sure, and make a ~400mm² one called a 5070 to fit the laptop line, calling it a 5090 Mobile. How is that any less phoning it in? Is there anything impressive about die size (especially in the context of a conversation about easy gains)?

People are not impressed by the 5090's performance from 30% more cores, 33% more power, and 70% more bandwidth either. Nor were people that impressed by the much bigger and much more expensive 2060...

There is nothing impressive, or less phoned in, about a bigger die, imo, short of some new tech that manages to make bigger dies cheaper. That would be like finding it impressive that the 5090 is faster than the 5080. A bigger, more power-hungry, and more expensive 5080 would have been less phoned in to some, and worse than what they actually did to others...
Yeah, I am not saying either situation is ideal, but I think if I'm Nvidia and am reusing the same node, I want my next-gen 80-class card to at least have a meaningful gen-over-gen performance increase and eclipse the last-gen flagship. That'll come at the sacrifice of something: power, and a larger die, for sure.

I'm just pushing back on the folks saying it's "impossible" or that the "easy gains" are done. What are they talking about? 20-series to 30-series was good gains. 30-series to 40-series was great gains. The 50-series didn't get the advantage of a die shrink, OK... but then Nvidia also chose not to do the one thing that would have guaranteed it still delivered a (maybe smaller) generational improvement compared to the last two. That was a design choice.
 
Yeah, I am not saying either situation is ideal, but I think if I'm Nvidia and am reusing the same node, I want my next-gen 80-class card to at least have a meaningful gen-over-gen performance increase and eclipse the last-gen flagship. That'll come at the sacrifice of something: power, and a larger die, for sure.
Yes, that's a good plan imo, if they think they can make enough of them at a good price. The bigger the die, the fewer cards can hit the market. They would then need a much bigger 5070 to replace the 4080/4090 Mobile (or change the laptop line to fit that bigger die, which might be a waste power-wise), and fewer 5070s would be available...

30 series to 40 series was great gains.
If we compare the change in core count and frequency, I am not sure it was special.

The 4070 had the same core count as the 3070; its boost clock was 43% higher, and its performance gain was around 24% at 1440p and 28% at 1080p.

Did they make significant gains in how the transistors do matrix and vector operations, or in how shaders work, or did they just have a GPU that ran faster?

Did it run faster because of some significant design change that made higher clocks possible, or mostly because TSMC's node on a smaller die clocks a lot higher than Samsung's?

You could be right, but it seems to me that often, even before a GPU launches, people look at the core count and frequency and guess the performance really well. To take a recent example, the PS5 Pro: we had the frequency, the core count, the die size, the performance... yet we debated whether the cores were RDNA 2, 3, 3.5, or 4, because of how little that seems to change over time. Lovelace's feat was more the cache making up for the lower memory bandwidth, their version of RDNA 2's Infinity Cache; but in terms of what a given group of transistors does for raster, I'm not so sure.

I imagine I am just underestimating how hard it is to go up in frequency and all the work that goes into it; changing node is probably not magic like that, and Blackwell clocking much faster is maybe not just TSMC 4N getting better... but if it is a lot of work, it goes into the not-an-easy-gain bucket. Since the 90s, the amount of money that has gone into optimizing one very specific step (transforming 3D world vertex positions into the current camera view, projecting them onto a 2D screen, and interpolating a bunch of values for the pixels between those vertices) has been quite something. John Carmack wrote a blog post almost 10 years ago about just how ridiculous that had become, one of the most efficient and pushed-to-the-limit processes in human history (in the context of why raytracing would have a hard time replacing it at the time).
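As a hypothetical illustration of that naive core-count-times-frequency guess, here is the 3070-to-4070 example from above in code. The 1725 MHz figure is the 3070's official boost clock, and the 24% figure is the 1440p gain quoted in this post; the model itself is deliberately naive:

```python
def naive_scaling_estimate(base_cores, base_clock_mhz, new_cores, new_clock_mhz):
    """Predict relative performance assuming it scales linearly with
    shader count and clock speed (a deliberately naive model)."""
    return (new_cores * new_clock_mhz) / (base_cores * base_clock_mhz)

# 3070 -> 4070: same core count (5888), boost clock ~43% higher.
predicted = naive_scaling_estimate(5888, 1725, 5888, 1725 * 1.43)
observed_1440p = 1.24   # ~24% gain at 1440p, per the post

print(round(predicted, 2))                     # 1.43 (ideal scaling)
print(round(observed_1440p / predicted, 2))    # ~0.87 of the ideal uplift
```

The gap between the 1.43x ideal and the ~1.24x observed is exactly why core count and clocks alone let people "guess the performance really well" only approximately.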
 
Yes, that's a good plan imo, if they think they can make enough of them at a good price. The bigger the die, the fewer cards can hit the market. They would then need a much bigger 5070 to replace the 4080/4090 Mobile (or change the laptop line to fit that bigger die, which might be a waste power-wise), and fewer 5070s would be available...


If we compare the change in core count and frequency, I am not sure it was special.

The 4070 had the same core count as the 3070; its boost clock was 43% higher, and its performance gain was around 24% at 1440p and 28% at 1080p.

Did they make significant gains in how the transistors do matrix and vector operations, or in how shaders work, or did they just have a GPU that ran faster?

Did it run faster because of some significant design change that made higher clocks possible, or mostly because TSMC's node on a smaller die clocks a lot higher than Samsung's?

You could be right, but it seems to me that often, even before a GPU launches, people look at the core count and frequency and guess the performance really well. To take a recent example, the PS5 Pro: we had the frequency, the core count, the die size, the performance... yet we debated whether the cores were RDNA 2, 3, 3.5, or 4, because of how little that seems to change over time. Lovelace's feat was more the cache making up for the lower memory bandwidth, their version of RDNA 2's Infinity Cache; but in terms of what a given group of transistors does for raster, I'm not so sure.

I imagine I am just underestimating how hard it is to go up in frequency and all the work that goes into it; changing node is probably not magic like that, and Blackwell clocking much faster is maybe not just TSMC 4N getting better... but if it is a lot of work, it goes into the not-an-easy-gain bucket. Since the 90s, the amount of money that has gone into optimizing one very specific step (transforming 3D world vertex positions into the current camera view, projecting them onto a 2D screen, and interpolating a bunch of values for the pixels between those vertices) has been quite something. John Carmack wrote a blog post almost 10 years ago about just how ridiculous that had become, one of the most efficient and pushed-to-the-limit processes in human history (in the context of why raytracing would have a hard time replacing it at the time).
Yeah, sorry, I was thinking more of 2080 Ti to 3090 to 4090. Fair points about the lower end of the stack, like 3070 to 4070.

I'm just old man yells at cloud about graphics these days. I was hoping to upgrade my 3080 Ti this gen and I really feel underwhelmed enough to just not even bother.

Now if they do a 24GB 5080 Ti, maybe I'll change my mind.
 
Yeah, sorry, I was thinking more of 2080 Ti to 3090 to 4090. Fair points about the lower end of the stack, like 3070 to 4070.

I'm just old man yells at cloud about graphics these days. I was hoping to upgrade my 3080 Ti this gen and I really feel underwhelmed enough to just not even bother.

Now if they do a 24GB 5080 Ti, maybe I'll change my mind.
A 4070S is equal to, and in some cases faster than, a 3080 Ti, with more features. I wouldn't really class the 4070 tier as the lower end of the stack. I'd class the 4060 (non-Ti) and below as the lower end of the stack.
 
A 4070S is equal to, and in some cases faster than, a 3080 Ti, with more features. I wouldn't really class the 4070 tier as the lower end of the stack. I'd class the 4060 (non-Ti) and below as the lower end of the stack.
Same problem I have now at 4K: 12GB.

16GB is probably fine for a while, but I'd prefer to have more so I don't potentially have the same problems I have now with 12GB over the next 5+ years.

Which is exactly why they won't do that.

AMD never misses an opportunity to miss an opportunity.
I'm cautiously optimistic now after the Nvidia paper launch. AMD in the past has usually been quick to rush out too, but they didn't do that this time. At this point, if they can actually launch in volume with cards that can rival the 5070 and 5070 Ti in raster, with an RT uplift to roughly 4070 Ti level (per current rumors), and deliver a solid product in FSR4, then the only thing left is to make sure they price it to woo the mainstream market. None of this Nvidia-price-minus-$50 crap. That's a losing strategy.

Lot of ifs there. They can absolutely mess it up like usual.
 
Nvidia pushed their launch up a lot, and it still feels rushed. AMD may be just as quick to rush theirs out; that's not something we can really judge. Without a feel for how ready the drivers, game updates, and worldwide volume are, a date does not tell us much.

Considering the current situation, it would not surprise me (and would be very good for them) if AMD is rushing things as much as they can; there are no 7900 XTs or XTXs left to be bought right now.

If they did not come up with a way to produce giant volume in the current environment, $50 under Nvidia MSRP will still sell out; even Intel's cards sold out...
 
Sad that the 4080 and 5080 are now considered the GPUs that "no one wants"... the 80 cards used to be considered high end... now it's the 90 series or bust.
 
Sad that the 4080 and 5080 are now considered the GPUs that "no one wants"... the 80 cards used to be considered high end... now it's the 90 series or bust.
I mean, the 80s have been using the mid-range dies of old ever since the 600-series Kepler GTX 680. Only the 780 and 3080 since then have used the bigger flagship dies (albeit heavily cut down).

The difference this gen is just how far cut down the 5080 is from the flagship (5090) for an 80-class card: literally almost exactly half of the 5090 in shaders. If I look at previous gens, the 80 card is much closer to the flagship than the 5080 is to the 5090.

The 3090 Ti had 10752 shaders (the same as the 5080, oddly enough). What was about half of a 3090 Ti? The 3070, at 5888 shaders, was a little more than half; the 3060 Ti was a little less. The 3080, at 8704, was much closer to the flagship.

How about the 20-series? The 2080 Ti had 4352 shaders. What was about half of that? The 2070, at 2304 shaders, was again a little more than half.

How about one more? The 1080 Ti had 3584 shaders. The 1070 is again a bit more than half at 1920 shaders, while the 1060 6GB, at 1280, is closer to a third.

So we have never before seen the 80 card cut down as far from the flagship as the 5080 is from the 5090.
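To put those ratios in one place: the shader counts below are the ones quoted above, plus commonly reported figures for the other cards (the 1080, 2080, 4080/4090, and 5090 counts come from public spec listings, so treat them as approximate):

```python
# 80-class card vs. the flagship of its generation (shader counts).
pairs = {
    "1080 vs 1080 Ti": (2560, 3584),
    "2080 vs 2080 Ti": (2944, 4352),
    "3080 vs 3090 Ti": (8704, 10752),
    "4080 vs 4090":    (9728, 16384),
    "5080 vs 5090":    (10752, 21760),
}

for name, (card, flagship) in pairs.items():
    print(f"{name}: {card / flagship:.0%} of the flagship")
```

Every prior 80-class card in this list lands between roughly 59% and 81% of its flagship; the 5080 is the first to fall below half.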
 
Nvidia pushed their launch up a lot, and it still feels rushed. AMD may be just as quick to rush theirs out; that's not something we can really judge. Without a feel for how ready the drivers, game updates, and worldwide volume are, a date does not tell us much.

Considering the current situation, it would not surprise me (and would be very good for them) if AMD is rushing things as much as they can; there are no 7900 XTs or XTXs left to be bought right now.

If they did not come up with a way to produce giant volume in the current environment, $50 under Nvidia MSRP will still sell out; even Intel's cards sold out...
It is my understanding that all retail outlets have the AMD cards on shelves in their warehouses and are pissed that AMD delayed the launch. Since the cards are already manufactured, I can only assume that AMD either had to deal with accounting realities or with a driver issue, trying to squeeze as much performance out of the new series as possible. Rumors are that the card was to retail for $899; however, once AMD caught wind of the 5070 price point, there was panic.
 
I can’t help but think there was supposed to be a chip between GB202 and GB203 that Nvidia just said nope to, scrapped, and then moved everything else up the stack.
Or they decided to do the same thing AMD is doing: focus on the mid-range, on a mainstream GPU that they can easily price-drop.
The 5080 resupply won’t land until after the launch of the AMD 9000 series, so when it does get here they can just change the price accordingly. Nothing in the rules says Nvidia needs to keep that MSRP, and because so few of them made it to retailers, they don’t need to do any of the rebate-and-refund stuff AMD is supposedly stuck doing.

Come April, the fact that it was a paper launch likely won’t mean squat if things are priced appropriately.
 
Explains the $999 price. It's a slightly better 4080 Super.

The fact that it doesn't even reach 4090-level performance is really poor for a new generation, and there's no excuse for it. Nvidia chose to make GB203 and the overall specs what they are. This was deliberate.
I mean, I honestly wonder if Nvidia should can the 5090, seeing as it is no longer positioned as a gaming card; it's for the AI bros... It would be interesting to see Nvidia make a line of high-VRAM "AI optimized" cards for the AI bros, then make the 5080 the $1000, slightly cut-down "AI card" as the gaming card.
 
I mean, I honestly wonder if Nvidia should can the 5090, seeing as it is no longer positioned as a gaming card; it's for the AI bros... It would be interesting to see Nvidia make a line of high-VRAM "AI optimized" cards for the AI bros, then make the 5080 the $1000, slightly cut-down "AI card" as the gaming card.
This.
 
I can’t help but think there was supposed to be a chip between GB202 and GB203 that Nvidia just said nope to, scrapped, and then moved everything else up the stack.
Or they decided to do the same thing AMD is doing: focus on the mid-range, on a mainstream GPU that they can easily price-drop.
The 5080 resupply won’t land until after the launch of the AMD 9000 series, so when it does get here they can just change the price accordingly. Nothing in the rules says Nvidia needs to keep that MSRP, and because so few of them made it to retailers, they don’t need to do any of the rebate-and-refund stuff AMD is supposedly stuck doing.

Come April, the fact that it was a paper launch likely won’t mean squat if things are priced appropriately.
I highly doubt Nvidia drops the pricing on the 5080, even if the 9070 is close in performance and $400 cheaper.
Nvidia is done giving gamers deals.
If AMD starts getting any real market share back, a 5080 Super will be along at the same price point with 10% more performance... and a 6x frame-gen mode. :) lol
 
I mean, I honestly wonder if Nvidia should can the 5090, seeing as it is no longer positioned as a gaming card; it's for the AI bros... It would be interesting to see Nvidia make a line of high-VRAM "AI optimized" cards for the AI bros, then make the 5080 the $1000, slightly cut-down "AI card" as the gaming card.
They have been calling it a prosumer card; they didn’t really even call it a gaming card.

The push to keep it a 2-slot card I see as reinforcing this, since keeping it 2 slots is very important if you are using multiple GPUs in a workstation. Also notice they haven’t even announced their workstation lineup yet.
 
I highly doubt Nvidia drops the pricing on the 5080, even if the 9070 is close in performance and $400 cheaper.
Nvidia is done giving gamers deals.
If AMD starts getting any real market share back, a 5080 Super will be along at the same price point with 10% more performance... and a 6x frame-gen mode. :) lol
Nvidia’s gaming division still makes more than AMD’s does. They aren’t just going to roll over on a multi-billion-dollar market as a gift to their competitors.
 
Nvidia’s gaming division still makes more than AMD’s does. They aren’t just going to roll over on a multi-billion-dollar market as a gift to their competitors.
You assume AMD is going to snap up market share; nothing says that is realistic quite yet.
Gaming aside, Nvidia has real issues. Enough dumb gamers will still buy Nvidia no matter what.
On the other hand, if they don't get real Blackwell parts shipping, they are going to have to renegotiate more contracts over to Rubin parts instead. If Rubin isn't on track, they are going to have major issues.
As massive as Nvidia is, they are in the middle of a massive datacenter screw-up. If their follow-up doesn't fix all that, as well as make new money, they are actually in a pretty bad position. I feel they would rather keep trickling out 5000-series stock so it never sits on a shelf, and keep saying "we sell them as fast as we make them," than admit to investors, who mostly pay no attention to gaming, that AMD got a one-up on them. I mean, the entire reason they shipped the 5000 series before stock was ready was to be able to say Blackwell is shipping in some form. The AI investors see "oh good, Blackwell is moving" when the 5000-series gaming parts are announced; they aren't paying attention to what is really going on four months on from that. NV is selling what they can make; they aren't watching Steam user numbers or anything the way gamers do.
 
Rumors are that the card was to retail for $899
From what I saw there wasn't any value to that rumor and it wasn't very widespread. IIRC it was a Belgian (?) retailer listing the price, which included VAT, making it 20% or so higher than a US price would be, and it was an AIB model, etc., etc. That is to say, a rumor that shouldn't be taken seriously, once you do a bit of thinking about it.
 
From what I saw there wasn't any value to that rumor and it wasn't very widespread. IIRC it was a Belgian (?) retailer listing the price, which included VAT, making it 20% or so higher than a US price would be, and it was an AIB model, etc., etc. That is to say, a rumor that shouldn't be taken seriously, once you do a bit of thinking about it.
It was also limited to the top-shelf customized cards, like the Red Devil or whatever that brand is.
 
From what I saw there wasn't any value to that rumor and it wasn't very widespread. IIRC it was a Belgian (?) retailer listing the price, which included VAT, making it 20% or so higher than a US price would be, and it was an AIB model, etc., etc. That is to say, a rumor that shouldn't be taken seriously, once you do a bit of thinking about it.
Yes, but the price was also way higher than $900 (both relative to the 7900 XT and in absolute terms); it was a fancy calculation people made taking all of that into account, not a straight conversion to USD.

He says the 7900 XT ASRock Phantom goes for 1650 BGN (Bulgaria) while the 9070 XT is over 2000 BGN; those are $877 USD and over $1,060. The figure also put it at 500 BGN more than the 7900 XT ($265), and the 9070 at around 1800 BGN ($950 USD). People took those $1,150/$950 USD numbers, tried to work back from them to what it would look like for the US market, and speculated $900 (which was wrong).
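For reference, the conversion described above can be written out. The exchange rate below is just the rate implied by this post's own figures (877/1650, roughly 0.53 USD per BGN), and the 20% is Bulgarian VAT, which local retail listings include:

```python
USD_PER_BGN = 877 / 1650   # ~0.53, the rate implied by the post's figures

def bgn_listing_to_usd(bgn, strip_vat=False):
    """Convert a Bulgarian retail price to USD; optionally strip the 20%
    VAT to approximate a US-style pre-tax price."""
    usd = bgn * USD_PER_BGN
    return usd / 1.20 if strip_vat else usd

print(round(bgn_listing_to_usd(2000)))                   # ~1063 USD, VAT included
print(round(bgn_listing_to_usd(2000, strip_vat=True)))   # ~886 USD pre-tax
```

Stripping VAT from the 2000 BGN listing is roughly where the ~$900 US speculation came from.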

.. It would be interesting to see Nvidia make a line of high-VRAM "AI optimized" cards for the AI bros,
What would make them different from the 48GB VRAM version of the 4090 of the past (and the upcoming 64-96GB VRAM versions of the 5090)?
https://www.techpowerup.com/gpu-specs/l40.c3959
https://www.techpowerup.com/gpu-specs/rtx-6000-ada-generation.c3933
 
You assume AMD is going to snap up market share; nothing says that is realistic quite yet.
Gaming aside, Nvidia has real issues. Enough dumb gamers will still buy Nvidia no matter what.
On the other hand, if they don't get real Blackwell parts shipping, they are going to have to renegotiate more contracts over to Rubin parts instead. If Rubin isn't on track, they are going to have major issues.
As massive as Nvidia is, they are in the middle of a massive datacenter screw-up. If their follow-up doesn't fix all that, as well as make new money, they are actually in a pretty bad position. I feel they would rather keep trickling out 5000-series stock so it never sits on a shelf, and keep saying "we sell them as fast as we make them," than admit to investors, who mostly pay no attention to gaming, that AMD got a one-up on them. I mean, the entire reason they shipped the 5000 series before stock was ready was to be able to say Blackwell is shipping in some form. The AI investors see "oh good, Blackwell is moving" when the 5000-series gaming parts are announced; they aren't paying attention to what is really going on four months on from that. NV is selling what they can make; they aren't watching Steam user numbers or anything the way gamers do.
While you’re not wrong, you’re missing a few things.
When Trump announced tariffs, Walmart, Costco, and many others made huge orders to stockpile before they kicked in. Nvidia has an exemption until May 31, 2025, so they aren’t in the same hurry; no need to compete for cargo space on the busy boats before the tariffs when the boats will be emptier and cheaper after. I still think that once they knew they were looking at a paper launch, they jacked up the 5080 price knowing it would still clear out.
Now, once the second shipment arrives, they can be aggressive on the price if they need to, as they don’t have to worry about rebates down the supply chain. The number of people who bought the card at or above MSRP is so small it’s barely a blip.
 
Now, once the second shipment arrives, they can be aggressive on the price if they need to, as they don’t have to worry about rebates down the supply chain. The number of people who bought the card at or above MSRP is so small it’s barely a blip.
Screwing over the early adopter on price would normally be a bad idea.
 
Screwing over the early adopter on price would normally be a bad idea.
Yeah, but if there are only 50 of them… besides, it’s going to be 3 months later; that’s like a quarter of its life cycle, because at that point we are less than 8 months from seeing leaks of the Super variants and 10 months from the Supers being ordered.

This wasn’t the normal early-adopter window; this was paid beta-key early access before the rival launched.
 
With the current market, being aggressive on the price could just mean selling a lot of $750 and $1000 MSRP SKUs, and no one would feel all that screwed by it, because they bought the fancy, more expensive one; it would feel "normal"... and I'm not sure how much control they have in that regard anyway.
 
While you’re not wrong, you’re missing a few things.
When Trump announced tariffs, Walmart, Costco, and many others made huge orders to stockpile before they kicked in. Nvidia has an exemption until May 31, 2025, so they aren’t in the same hurry; no need to compete for cargo space on the busy boats before the tariffs when the boats will be emptier and cheaper after. I still think that once they knew they were looking at a paper launch, they jacked up the 5080 price knowing it would still clear out.
Now, once the second shipment arrives, they can be aggressive on the price if they need to, as they don’t have to worry about rebates down the supply chain. The number of people who bought the card at or above MSRP is so small it’s barely a blip.
Time will tell. So far the second wave of stock is a myth. I hope it comes; I don't really want to see gamers getting ripped off for months on end.
 
Yeah, but if there are only 50 of them
We know that Microcenter got 293, and I think it's safe to assume that all of them have already sold. We can SWAG that Best Buy got at least that many and that they sold out too.

Not the same as ripping off a million 5060 buyers, but still, it doesn't seem smart in general.
 
We know that Microcenter got 293, and I think it's safe to assume that all of them have already sold. We can SWAG that Best Buy got at least that many and that they sold out too.

Not the same as ripping off a million 5060 buyers, but still, it doesn't seem smart in general.
But if Product A launches, then a month later Product B launches at a lower price but similar value, and a month after that Product A lowers its price to equal or greater value, were the people who purchased Product A before the competing product was available ripped off?

The only current competitor to the 5080 is the 4080 Super, so until something else enters the market the price is actually pretty good by comparison; looking at my options here, the 5080s sold for less than most 4080 Supers currently sell for.
 
But if Product A launches, then a month later Product B launches at a lower price but similar value, and a month after that Product A lowers its price to equal or greater value, were the people who purchased Product A before the competing product was available ripped off?
I'm going to suggest that, given the vendor could've released Product A on day one at the lower price, there is a strong possibility the answer is "yes." Also, we started out talking about the 5080, not the 5090, so "50 buyers" was a pretty ridiculous number on your part. But even at 50, one has to ask: how many people getting ripped off is OK?
 
I'm going to suggest that, given the vendor could've released Product A on day one at the lower price, there is a strong possibility the answer is "yes." Also, we started out talking about the 5080, not the 5090, so "50 buyers" was a pretty ridiculous number on your part. But even at 50, one has to ask: how many people getting ripped off is OK?
Given that the 5080 is cheaper than the 4080 Super, I’d argue they weren’t ripped off at all, given what’s available on the market.
 