Join us on November 3rd as we unveil AMD RDNA™ 3 to the world!

I have to believe that the 7900 XTX was designed to compete with the best Nvidia had to offer, but when it was being designed, AMD did not anticipate that Nvidia would push its halo card to 450+ watts of power usage. Therefore, AMD had to regroup and re-price their product to compete. That could explain why the 7900 XTX has the same MSRP the 6900 XT did, despite using a much wider bus and a bigger core.

Pretty sure Nvidia just broke out the champagne glasses at HQ; they won.
You win with sales of your products.
 
Jesus, Steve is obsessively anal about the power adapter and size dig.
You are familiar with his oeuvre, right? This is the guy who spent 15 minutes ranting about how horrible the Corsair iCue 220T's thermals were going to be because of the front-panel design, only to later briefly mumble through the chart showing it wasn't actually bad at all.

I like Steve's videos, but he does have a couple of bees in his bonnet.
 
It's half the size and half the transistor count; transistors still count for something.
The problem with comparing the two in terms of gaming is transistor budgets spent on irrelevant things. Well, irrelevant depending on who you're talking to, anyway. How many transistors on Nvidia's card are for tensor cores that will sit unpowered 95% of the time in a gamer's system? AMD isn't doing ray tracing by adding tensor cores. I get it... according to what AMD said, Nvidia probably still has the better RT performer. Still, with the number of games with RT well under 100... it's a feature that is probably irrelevant to most gamers.
 
Yeah, so they put this out now, at this price point, force Nvidia to lower their prices, and still have the option to go big with a refresh, when there's no surplus stock left eating into their market share.

TSMC's max die size on 4nm is around 850 mm², so Nvidia can't go bigger than that. AMD is at roughly 300 mm² (and on 5nm).

If these were car engines, Nvidia's got a six-cylinder making 450/450 that gets a combined 19 MPG. AMD just rolled out a three-cylinder making 350/350 that gets 28. What happens when AMD decides to throw some displacement into it?
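
To put numbers on that analogy once reviews land, here's a minimal perf-per-watt sketch. The 450 W and 355 W board-power figures are the announced specs; the relative performance numbers are pure placeholders, not benchmarks.

```python
# Back-of-envelope "MPG" comparison for GPUs: performance per watt.
# Board powers are the announced specs; perf numbers are placeholders.
NVIDIA_POWER_W = 450   # RTX 4090 total board power
AMD_POWER_W = 355      # RX 7900 XTX total board power

nvidia_perf = 1.00     # 4090 normalized to 1.00
amd_perf = 0.85        # hypothetical placeholder; swap in review data

nvidia_mpg = nvidia_perf / NVIDIA_POWER_W
amd_mpg = amd_perf / AMD_POWER_W
print(f"AMD perf/W relative to Nvidia: {amd_mpg / nvidia_mpg:.2f}x")
```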

And right now, the only real market for new cards is mobile. Focusing on efficiency, whether it was planned around the crypto collapse or just a conservative approach to this gen, puts AMD in a stronger position than I think a lot of people realize.
The problem AMD has with "throwing displacement into it" is that, like the previous generation, AMD seems to have focused heavily on rasterization performance and less on ray tracing. Thus, the die size and power consumption are lower. At this point, it's a balancing act: ray tracing is expensive from the perspective of transistor budget and power budget. Nvidia threw a TON of transistor budget at RT over the last three generations, and it shows. AMD seems more focused on rasterization, and it shows.

We will see what the benchmarks look like.
 
Raytracing performance is still going to suck, if that matters to you; from what little they showed in their slides, it seems bad. Could still be a compelling product. We'll see.
 
Yeah, so they put this out now, at this price point, force Nvidia to lower their prices, and still have the option to go big with a refresh, when there's no surplus stock left eating into their market share.

TSMC's max die size on 4nm is around 850 mm², so Nvidia can't go bigger than that. AMD is at roughly 300 mm² (and on 5nm).

If these were car engines, Nvidia's got a six-cylinder making 450/450 that gets a combined 19 MPG. AMD just rolled out a three-cylinder making 350/350 that gets 28. What happens when AMD decides to throw some displacement into it?

And right now, the only real market for new cards is mobile. Focusing on efficiency, whether it was planned around the crypto collapse or just a conservative approach to this gen, puts AMD in a stronger position than I think a lot of people realize.
I think the refresh could be the interesting thing here.

AMD has a new wrinkle to add to a refresh... one that goes beyond the faster memory and bumped clocks of days past. They can now refresh by updating the memory controller dies and doubling the cache.

I don't wanna sound like a cheerleader. Sounds like this gen they will probably best the 4080 and be just shy of the 4090. For the six-month refresh, though... I'm not sure what more Nvidia could really do with the 4090. I mean, we know the full chip has a couple more compute units, but I don't think there will be a die shrink for it. So they have perhaps another 10-15% bump, tops, with a Ti. Whereas AMD has a TON of room to push a 7950.
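
For what it's worth, the public die configurations back up that 10-15% ceiling: the full AD102 die has 144 SMs while the 4090 ships with 128 enabled. A quick check, ignoring any clock or memory changes:

```python
# Unit-count headroom for a hypothetical 4090 Ti, from public AD102 specs.
full_ad102_sms = 144   # SMs on the full AD102 die
rtx_4090_sms = 128     # SMs enabled on the RTX 4090
print(f"Max SM-count uplift: {full_ad102_sms / rtx_4090_sms - 1:.1%}")  # 12.5%
```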

I don't know; it's frustrating seeing both Nvidia and AMD playing games trying to unload the last overstocked gen.
 
Raytracing performance is still going to suck, if that matters to you; from what little they showed in their slides, it seems bad. Could still be a compelling product. We'll see.
Down the stack, RT becomes less and less important because those cards can't do it fast enough. Raster will continue to be the most important metric for the near future, I believe.
 
How many years have to pass before we can put that non-issue to rest?
Only a non-issue if you believe ARC is "good" as is, or if you believe it's DOA. Which side are you taking? Years??? I have a feeling you're thinking about something else.
 
You are familiar with his oeuvre, right? This is the guy who spent 15 minutes ranting about how horrible the Corsair iCue 220T's thermals were going to be because of the front-panel design, only to later briefly mumble through the chart showing it wasn't actually bad at all.

I like Steve's videos, but he does have a couple of bees in his bonnet.
He ranted about the design more than the cooling impact, but it was still annoying, since it was just a design choice, not a mistake, as he put it.
 
I am not so certain about RDNA 3 having better efficiency at all; a 350 W (80% power limit) 4090 does this:

[4090 performance chart]

Virtually what a 450 W 4090 does. And for laptops, what the 4090 delivers when locked to 50-80% power gives some idea.
 
I am not so certain about RDNA 3 having better efficiency at all; a 350 W (80% power limit) 4090 does this:

[4090 performance chart]

Virtually what a 450 W 4090 does. And for laptops, what the 4090 delivers when locked to 50-80% power gives some idea.
Who knows? We'll just have to wait and see how much people can play with the power envelope of a 7900 XT.
 
He ranted about the design more than the cooling impact, but it was still annoying, since it was just a design choice, not a mistake, as he put it.
My memory is that his complaint was that the design itself would necessarily cripple the cooling, but I'm not majorly disagreeing with you.

I wound up buying that case this spring and it works pretty well with a 12600K and 6800--no thermal throttling.
 
Yeah, so they put this out now, at this price point, force Nvidia to lower their prices, and still have the option to go big with a refresh, when there's no surplus stock left eating into their market share.

TSMC's max die size on 4nm is around 850 mm², so Nvidia can't go bigger than that. AMD is at roughly 300 mm² (and on 5nm).

If these were car engines, Nvidia's got a six-cylinder making 450/450 that gets a combined 19 MPG. AMD just rolled out a three-cylinder making 350/350 that gets 28. What happens when AMD decides to throw some displacement into it?

And right now, the only real market for new cards is mobile. Focusing on efficiency, whether it was planned around the crypto collapse or just a conservative approach to this gen, puts AMD in a stronger position than I think a lot of people realize.
AMD can and they can't. I need to go digging through the TSMC announcements from this year, but I want to say their max size for the high-speed 5 TB/s+ interconnects was in the mid-800 mm² range for a complete package, and the current RDNA 3 chips already come in at a package size of 522 mm² plus the spacing between the individual chiplets. TSMC can go much larger than that ~800 mm² size, out to 1700 mm², but only if they drop the speeds significantly, down to 2.7 TB/s. So they can go bigger, sure, but not substantially. TSMC has been teasing their new 2500 mm² high-speed interposer for a while, but at last check it was delayed due to various issues.

Either way, I am tempted by these cards and will probably be seriously considering one by April.
 
I am not so certain about RDNA 3 having better efficiency at all; a 350 W (80% power limit) 4090 does this:

[4090 performance chart]

Virtually what a 450 W 4090 does. And for laptops, what the 4090 delivers when locked to 50-80% power gives some idea.

The most surprising thing about this slide was that they found 8 games with raytracing to test in.
 
AMD has a new wrinkle to add to a refresh... one that goes beyond the faster memory and bumped clocks of days past. They can now refresh by updating the memory controller dies and doubling the cache.

Yep. AMD has tons of options. Bigger die can mean more raster or just more RT. Or both. They can open up their power envelope. They can add cache. They can add memory. And they're a node behind Nvidia.

AMD *opted* to be a generation behind Nvidia with RDNA3, and they're still cost-competitive, and they're in a preferable form factor.

FSR isn't in a position to woo people away from DLSS, and they're sitting on a pile of old stock. They're dueling with their left hand here.

AMD can and they can't. I need to go digging through the TSMC announcements from this year, but I want to say their max size for the high-speed 5 TB/s+ interconnects was in the mid-800 mm² range for a complete package, and the current RDNA 3 chips already come in at a package size of 522 mm² plus the spacing between the individual chiplets.

I'll take your word for it. But the total package size isn't all that relevant since it's spread across seven chips, six of which cost something like three bucks and change. Sure, their total package costs are higher than some people believe, but their yields are tremendous.

Nvidia has two options: slightly bigger, and future nodes. They're betting their bottom dollar on DLSS and other features, because AMD has serious hardware option advantages.

Also they don't have a reputation for starting fires, heh.
 
Only 100 bucks difference between the cards seems so odd.

And so much for the BS rumors about super-high clocks or the 2.5x perf-increase nonsense that went on for months earlier this year. These people and their "sources"...
 
Yep. AMD has tons of options. Bigger die can mean more raster or just more RT. Or both. They can open up their power envelope. They can add cache. They can add memory. And they're a node behind Nvidia.

AMD *opted* to be a generation behind Nvidia with RDNA3, and they're still cost-competitive, and they're in a preferable form factor.

FSR isn't in a position to woo people away from DLSS, and they're sitting on a pile of old stock. They're dueling with their left hand here.



I'll take your word for it. But the total package size isn't all that relevant since it's spread across seven chips, six of which cost something like three bucks and change. Sure, their total package costs are higher than some people believe, but their yields are tremendous.

Nvidia has two options: slightly bigger, and future nodes. They're betting their bottom dollar on DLSS and other features, because AMD has serious hardware option advantages.

Also they don't have a reputation for starting fires, heh.
It's not the size of the individual chips but, in this case, the layer they connect into; the 5.3 TB/s interposer layer has a max size that can't currently be exceeded without dropping the speeds, but that does leave AMD enough room to introduce a new, larger GCD.
I mean, the MI250X is also a chiplet design and clocks in at a massive total size of 790 mm², but that interconnect is only operating at 2.7 TB/s, so little better than half the speed of the interconnect used for the 7900 parts.
 
The most surprising thing about this slide was that they found 8 games with raytracing to test in.
I have 10 games installed on my PC that have ray tracing right now.
  1. Battlefield V
  2. Cyberpunk 2077
  3. Doom Eternal
  4. Dying Light 2
  5. Marvel's Spider-Man Remastered
  6. Quake II RTX
  7. Resident Evil 2
  8. Resident Evil 3
  9. Resident Evil 7
  10. Resident Evil Village
 
Yeah, seems like AIBs are not getting enough volume from Nvidia and are shifting the coolers to RDNA3. Could have some interesting OC results.
Probably looking at full-load GPU temps in the mid-50s with those coolers...
 
I have 10 games installed on my PC that have ray tracing right now.
  1. Battlefield V
  2. Cyberpunk 2077
  3. Doom Eternal
  4. Dying Light 2
  5. Marvel's Spider-Man Remastered
  6. Quake II RTX
  7. Resident Evil 2
  8. Resident Evil 3
  9. Resident Evil 7
  10. Resident Evil Village
I highly recommend adding Gotham Knights to your list in the near future.
 
It's not the size of the individual chips but, in this case, the layer they connect into; the 5.3 TB/s interposer layer has a max size that can't currently be exceeded without dropping the speeds, but that does leave AMD enough room to introduce a new, larger GCD.

Ah, got it. Still, that leaves them, what, and I'm being realistic here, 150-200 extra mm² they can tap? If they're doing this with 300 for the GCD, what would 400 or 450 look like? Especially if they dedicate 25 percent of the transistors to raster and the rest to RT? That would probably bring power up to or over 400 W, still less than Nvidia, and possibly beat it on both raster and RT, just without DLSS.

All without changing nodes?

The only way the refresh isn't lit is if they're already fast-tracking RDNA4.

ETA: their naming scheme leads me to believe they are planning on a refresh FWIW.
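
A toy scaling sketch for the 400-450 mm² question above. The 0.8 exponent is just an assumption standing in for "performance scales sub-linearly with area"; nothing here comes from AMD.

```python
# Hypothetical GCD area scaling; the 0.8 exponent is an assumed rule of thumb.
current_gcd_mm2 = 300
for bigger_gcd_mm2 in (400, 450):
    scale = (bigger_gcd_mm2 / current_gcd_mm2) ** 0.8
    print(f"{bigger_gcd_mm2} mm2 GCD -> ~{scale:.2f}x current perf (toy model)")
```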
 
I am not so certain about RDNA 3 having better efficiency at all; a 350 W (80% power limit) 4090 does this:

[4090 performance chart]

Virtually what a 450 W 4090 does. And for laptops, what the 4090 delivers when locked to 50-80% power gives some idea.
We don't know where these RDNA3 cards are on the efficiency curve.
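
To illustrate why curve position matters, here's a toy model (not measured data): near the top of the voltage/frequency curve, power grows roughly with the cube of clock speed, so performance falls off only as the cube root of power.

```python
# Toy efficiency curve: perf ~ power^(1/3) near the top of the V/f curve.
# Illustrative only; anchored to a 450 W card at 100% performance.
def toy_perf(power_w: float, ref_w: float = 450.0) -> float:
    return 100.0 * (power_w / ref_w) ** (1 / 3)

for watts in (250, 350, 450):
    print(f"{watts} W -> ~{toy_perf(watts):.0f}% of stock perf (toy model)")
```

That rough shape is at least consistent with the 350 W 4090 anecdote above.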
 
Ah, got it. Still, that leaves them, what, and I'm being realistic here, 150-200 extra mm² they can tap? If they're doing this with 300 for the GCD, what would 400 or 450 look like? Especially if they dedicate 25 percent of the transistors to raster and the rest to RT? That would probably bring power up to or over 400 W, still less than Nvidia, and possibly beat it on both raster and RT, just without DLSS.

All without changing nodes?

The only way the refresh isn't lit is if they're already fast-tracking RDNA4.

ETA: their naming scheme leads me to believe they are planning on a refresh FWIW.
Exactly, there is room to grow.
Looking into it, TSMC constructs their interposers using the same lithography as their chips; it is just a very simple chip to make, so their individual interposers still have the same 858 mm² reticle limit their chips have.
TSMC partnered up with Broadcom, and they found ways to seamlessly stitch those together; that's how they get their 1700 mm² and 3400 mm² interposers, but they use that almost exclusively for their big stuff, as, sweet Jesus, it looks expensive.
But still, even if they released a 7950 XTX STFUATMM edition with a GCD clocking in at 450 mm², that would be a dope card and up there for the top spot. Their current choice really makes good use of wafer sizes, though, and because the chips are small these cards should be abundant as F... I eagerly await what early 2023 has to offer for these.
 
It would be nice for those of us gearing up to upgrade if AMD goes for market share this gen. I am still on a 1080 Ti and expect to buy something over the next six months. This launch looks promising.
 
I'm very disappointed that this launch gave us basically zero useful performance metrics to go on. No third-party benchmarks, obviously, but in this case not even AMD's exaggerated marketing benchmarks... Really, the only thing this tells us is that the cards aren't going to be particularly competitive beyond perhaps a performance-per-dollar standpoint. Why don't they have something like a 7950 XTX with 3x 8-pin connectors and a larger heatsink? If Crossfire had continued into the DX12 era, two of these cards would have been a pretty nice setup too, using only 4x 8-pin connectors total.

I've been trying, unsuccessfully, to buy a 4090 ever since launch. I wondered if maybe that was a blessing in disguise and that the 7900 XTX would kick some serious ass... but nope: back to trying to get a 4090.
 
I'm very disappointed that this launch gave us basically zero useful performance metrics to go on. No third-party benchmarks, obviously, but in this case not even AMD's exaggerated marketing benchmarks... Really, the only thing this tells us is that the cards aren't going to be particularly competitive beyond perhaps a performance-per-dollar standpoint. Why don't they have something like a 7950 XTX with 3x 8-pin connectors and a larger heatsink? If Crossfire had continued into the DX12 era, two of these cards would have been a pretty nice setup too, using only 4x 8-pin connectors total.

I've been trying, unsuccessfully, to buy a 4090 ever since launch. I wondered if maybe that was a blessing in disguise and that the 7900 XTX would kick some serious ass... but nope: back to trying to get a 4090.
Take 6950 XT performance in your favorite games, add ~60%, and see if that's worth ~$1100 to you. If RT or CUDA are not important to you, I really don't see a reason to consider a 4090. If they are, I don't see a reason to consider any AMD card.
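
That back-of-envelope math, as a runnable sketch. The 6950 XT FPS values below are placeholders (plug in your own benchmarks), and the ~60% uplift is just the rough figure suggested above, not a verified number.

```python
# Estimate 7900 XTX performance: your 6950 XT numbers plus ~60% (unverified).
my_6950xt_fps = {"Cyberpunk 2077": 62, "Doom Eternal": 165}  # placeholders
UPLIFT = 1.60

for game, fps in my_6950xt_fps.items():
    print(f"{game}: ~{fps * UPLIFT:.0f} FPS estimated on 7900 XTX")
```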
 
I'm very disappointed that this launch gave us basically zero useful performance metrics to go on. No third-party benchmarks, obviously, but in this case not even AMD's exaggerated marketing benchmarks... Really, the only thing this tells us is that the cards aren't going to be particularly competitive beyond perhaps a performance-per-dollar standpoint. Why don't they have something like a 7950 XTX with 3x 8-pin connectors and a larger heatsink? If Crossfire had continued into the DX12 era, two of these cards would have been a pretty nice setup too, using only 4x 8-pin connectors total.

I've been trying, unsuccessfully, to buy a 4090 ever since launch. I wondered if maybe that was a blessing in disguise and that the 7900 XTX would kick some serious ass... but nope: back to trying to get a 4090.
Doesn't mean the AIBs won't be producing them. And yeah, the lack of perf metrics is concerning, although it could just mean it won't beat a 4090; no point in telling the world your new card is in second place.
 
I'm very disappointed that this launch gave us basically zero useful performance metrics to go on. No third-party benchmarks, obviously, but in this case not even AMD's exaggerated marketing benchmarks...
I mean, what would be the point? Even if they put up any numbers like this, everyone would just accuse them of cherry-picking, "optimizations", fudging numbers, and so on. It's pointless, and AMD knows it.
 
I'm very disappointed that this launch gave us basically zero useful performance metrics to go on. No third-party benchmarks, obviously, but in this case not even AMD's exaggerated marketing benchmarks... Really, the only thing this tells us is that the cards aren't going to be particularly competitive beyond perhaps a performance-per-dollar standpoint. Why don't they have something like a 7950 XTX with 3x 8-pin connectors and a larger heatsink? If Crossfire had continued into the DX12 era, two of these cards would have been a pretty nice setup too, using only 4x 8-pin connectors total.

I've been trying, unsuccessfully, to buy a 4090 ever since launch. I wondered if maybe that was a blessing in disguise and that the 7900 XTX would kick some serious ass... but nope: back to trying to get a 4090.
Consumer systems do not have the kind of memory bandwidth needed to make Crossfire/SLI work at the resolutions you would want them for; your home PC just can't move that much data fast enough to be useful at 4K and above. If we still lived in a world of 1080p, then it could be a thing, but we don't, so it isn't.

AMD is playing the smart game here: they are releasing a card that competes well against the 4080 16GB, and another that will go toe to toe with anything the 3000 series has to offer, leaving their existing RDNA lineup to fill in the gaps below for the time being.
Everybody is focusing on the 7900 XTX because it is AMD's top card, but it's the XT that has my interest, because it's dangerous. It's not trying to stand up against the 4000 series; it's aimed squarely at the 3000s, and that is the card Nvidia has to be afraid of right now. It is going to be cheap (for AMD), and its simplicity could make it extremely abundant.
Third-party verification aside, looking at AMD's slides it should be an easy alternative to a 3090 (maybe not a 3090 Ti), but for a much nicer price. And if you had to choose between anything 3000-series and the 7900 XT, unless you had to go Nvidia for some specific non-gaming reason, I think AMD wins that fight with room to spare.
 
I mean, what would be the point? Even if they put up any numbers like this, everyone would just accuse them of cherry-picking, "optimizations", fudging numbers, and so on. It's pointless, and AMD knows it.
Yeah, I'll take no real numbers over lies...
Still, they did pull out the BS "look, 70 FPS at 8K"... and the 300 FPS engine limit. Now, a 300 FPS engine limit in some games may well be very true... and perhaps, when it comes to the average game (and not the "break your GPU because Nvidia paid us HairWorks money even though it looks meh" kind), we are getting to the point of stupidity. Esports titles stopped needing more horsepower long ago... and the games people really, really play don't either.

AMD did talk about some interesting upcoming driver tech... really, AMD's PR team does sort of suck. They have some cool upcoming stuff that gets back-burnered; they need to put some fancy names on it and get Su to talk about it like it's the second coming, the way the jacket man does. Instead, people are going to take away how uncomfortable they all looked, how they didn't provide even internal bench numbers... and, worse, that they talked about frame generation. AHHH, that entire idea needs to be laughed at, not copied.
 
Consumer systems do not have the kind of memory bandwidth needed to make Crossfire/SLI work at the resolutions you would want them for; your home PC just can't move that much data fast enough to be useful at 4K and above. If we still lived in a world of 1080p, then it could be a thing, but we don't, so it isn't.

AMD is playing the smart game here: they are releasing a card that competes well against the 4080 16GB, and another that will go toe to toe with anything the 3000 series has to offer, leaving their existing RDNA lineup to fill in the gaps below for the time being.
Everybody is focusing on the 7900 XTX because it is AMD's top card, but it's the XT that has my interest, because it's dangerous. It's not trying to stand up against the 4000 series; it's aimed squarely at the 3000s, and that is the card Nvidia has to be afraid of right now. It is going to be cheap (for AMD), and its simplicity could make it extremely abundant.
Third-party verification aside, looking at AMD's slides it should be an easy alternative to a 3090 (maybe not a 3090 Ti), but for a much nicer price. And if you had to choose between anything 3000-series and the 7900 XT, unless you had to go Nvidia for some specific non-gaming reason, I think AMD wins that fight with room to spare.
Scarier for Nvidia: AMD keeps dropping prices on their 6000s and will run through their smaller overstock much quicker. When that happens, a 7800 XT, or worse a 7700 XT, could hit the market while the supply chain is still sitting on a ton of 3080s/3070s.
 
Consumer systems do not have the kind of memory bandwidth needed to make Crossfire/SLI work at the resolutions you would want them for; your home PC just can't move that much data fast enough to be useful at 4K and above. If we still lived in a world of 1080p, then it could be a thing, but we don't, so it isn't.

I've never heard of "memory bandwidth" being the primary limitation preventing multi-GPU from working in the current environment. The only thing that killed multi-GPU was Nvidia/AMD passing the buck to the game developers (DX12 intended for multi-GPU to be built directly into each game instead of being done at the driver level, as was the case with DX11 and prior), and those game developers declined the offer. And I know that most people over the years focused on SLI/Crossfire being done with two high-end cards, but it also opened the door to using two mid-range cards or two older cards as another way to get high-end performance, which would have been especially useful in the current environment. I bet Nvidia wouldn't have nearly as much of an issue clearing out that old 3000-series inventory if people who already owned one could simply add a second card as an easy upgrade.

Yeah, I'll take no real numbers over lies...

Agreed; obviously any numbers given directly by the company itself need to be taken with a grain of salt, but there is usually at least some basis in fact there.
 
AMD didn't show performance compared to Nvidia products.
AMD didn't show performance compared to their own products.

Performance is THE. ONE. THING. that matters. They didn't even give us a hint.

This comes across as AMD being INCREDIBLY embarrassed about what they have. They priced it low, but didn't give us any reason to believe it's a good deal. They made it small, but didn't let us know how much we're giving up for that size. They talked about features, but not what those features are offering.


I was honestly NOT expecting AMD to be so embarrassed. This is giving me VEGA launch vibes. This is giving me Raja vibes.

None of those things are good.
 