Join us on November 3rd as we unveil AMD RDNA™ 3 to the world!

AMD didn't show performance compared to Nvidia products.
AMD didn't show performance compared to their own products.

Performance is THE. ONE. THING. that matters. They didn't even give us a hint.

This comes across as AMD being INCREDIBLY embarrassed about what they have. They priced it low, but didn't give us any reason to believe it's a good deal. They made it small, but didn't let us know how much we're giving up for that size. They talked about features, but not what those features actually deliver.


I was honestly NOT expecting AMD to be so embarrassed. This is giving me VEGA launch vibes. This is giving me Raja vibes.

None of those things are good.
Eh, lots of people did not trust Nvidia's slides when they compared them to their own products. My guess is people would not trust AMD's either. Wait for reviews, as always.
 
I've never heard of "memory bandwidth" being the primary limitation preventing multi-GPU from working in the current environment. The only thing that killed multi-GPU was Nvidia/AMD passing the buck to the game developers (DX12 intended multi-GPU support to be built directly into each game instead of being handled at the driver level, as was the case with DX11 and prior), and those game developers declined the offer. And I know that most people over the years focused on SLI/CrossFire being done with two high-end cards, but it also opened the door to using two mid-range cards or two older cards as another way to get high-end performance, which would have been especially useful in the current environment. I bet Nvidia wouldn't have nearly as much of an issue clearing out that old 3000-series inventory if people who already owned one could simply add a second card as an easy upgrade.



Agreed, obviously any numbers given directly by the company itself need to be taken with a grain of salt, but there is usually at least some basis in fact there.
PCIe 5 is still not fast enough to coordinate two or more cards and system memory when dealing with 4K assets. Additionally, you need a bare minimum of four memory channels; otherwise you are going to be in a constant state where either the CPU or one of the GPUs is waiting for access to system RAM.
So until we see consumer systems with dual PCIe 5 slots, four or more dedicated memory channels, and GPUs with NVLink (or whatever AMD would call their equivalent), it isn't feasible, and a seamless method for presenting those GPUs as a single device isn't going to happen.
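To put rough numbers on that bandwidth mismatch, here's a back-of-the-envelope sketch; the figures are approximate public specs chosen purely for illustration (PCIe 5.0 x16, dual-channel DDR5-6000, and 20 Gbps GDDR6 on a 384-bit bus), not measurements:

```python
# Rough bandwidth comparison to illustrate why inter-card traffic is the problem.
# All figures are approximate, one-directional, and ignore overhead beyond line encoding.

def pcie5_bandwidth_gbs(lanes: int = 16) -> float:
    """PCIe 5.0: 32 GT/s per lane with 128b/130b encoding, in GB/s per direction."""
    return lanes * 32.0 * (128 / 130) / 8

def ddr5_bandwidth_gbs(mt_per_s: int = 6000, channels: int = 2) -> float:
    """System RAM: 8 bytes per transfer per channel, in GB/s."""
    return mt_per_s * 8 * channels / 1000

def gddr6_bandwidth_gbs(gbps_per_pin: float = 20.0, bus_width_bits: int = 384) -> float:
    """On-card VRAM, in GB/s."""
    return gbps_per_pin * bus_width_bits / 8

print(f"PCIe 5.0 x16 (one direction): ~{pcie5_bandwidth_gbs():.0f} GB/s")
print(f"Dual-channel DDR5-6000:       ~{ddr5_bandwidth_gbs():.0f} GB/s")
print(f"20 Gbps GDDR6, 384-bit bus:   ~{gddr6_bandwidth_gbs():.0f} GB/s")
```

Even ignoring latency, a second card reaching over PCIe or through system RAM sees roughly a tenth or less of the bandwidth it gets from its own VRAM, which is why splitting a frame's working set across two cards is so hard to hide.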
DX12 made it so you can implement multi-GPU in any project you want, but the fun thing with low-level coding is that somebody needs to build that functionality into an engine, then test and maintain it, and that is a lot of work. It may easily add a few million dollars to the cost of launching a title, and there is no way it would result in millions' worth of additional sales, because it isn't going to bring in any additional users. The only party it directly benefits is whoever sold the GPUs, not the developer who supports them, so there is just no reason for a developer to invest those sorts of resources; any features that would make it worthwhile are going to be hung up on other things long before they get to the GPU.
Nvidia and AMD did previously do it in the drivers: they added thousands of lines of extra code and tons of complexity to maintain per-game profiles that they then had to keep updated for years to come, all for the sake of selling two cards where they may have only sold one before. But the cost of maintaining that eventually hit the point where they spent more time and money on SLI profiles than they made from selling the additional cards. It's a dead-end solution for the consumer space.
 
AMD didn't show performance compared to Nvidia products.
AMD didn't show performance compared to their own products.

Performance is THE. ONE. THING. that matters. They didn't even give us a hint.

This comes across as AMD being INCREDIBLY embarrassed about what they have. They priced it low, but didn't give us any reason to believe it's a good deal. They made it small, but didn't let us know how much we're giving up for that size. They talked about features, but not what those features actually deliver.


I was honestly NOT expecting AMD to be so embarrassed. This is giving me VEGA launch vibes. This is giving me Raja vibes.

None of those things are good.
[attached screenshot]
 
Nvidia has huge teams of software and hardware engineers pioneering research in fields that AMD doesn't even bother with, and that trickle-down is what helps keep them in the lead.
And you forgot the bought lock-in, the walled-garden effect that CUDA has on data science, even in "open source" projects.

This is still an issue in late 2022:
https://towardsdatascience.com/on-t...g-outside-of-cudas-walled-garden-d88c8bbb4342
Nikolay Dimolarov said:
Open source code that targets only a proprietary target is not exactly open open source. We can do better!
 
And you forgot the bought lock-in, the walled-garden effect that CUDA has on data science, even in "open source" projects.

This is still an issue in late 2022:
https://towardsdatascience.com/on-t...g-outside-of-cudas-walled-garden-d88c8bbb4342
CUDA is a disgustingly huge platform. People like to simplify it and say it's just a language like OpenCL, but it's not: it is a unified language that lives inside a massive development environment, and to replicate its functionality you would need to work in three or four different languages across two or three different development platforms. Replacing CUDA would be an economic undertaking I doubt many are willing to attempt.
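For a concrete sense of what that lock-in looks like in everyday code, here is a minimal sketch (assuming PyTorch purely as an example of such an "open source" project; the pick_device helper is my own illustration, not anyone's actual code):

```python
# Many open-source data-science scripts hard-code CUDA, e.g. `model.cuda()`,
# which simply fails on anything without Nvidia's stack.
# A slightly more portable pattern falls back to other backends when CUDA is absent.
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():              # Nvidia GPUs (ROCm builds also surface here as "cuda")
        return torch.device("cuda")
    if torch.backends.mps.is_available():      # Apple Silicon (recent PyTorch versions)
        return torch.device("mps")
    return torch.device("cpu")                 # last-resort fallback

device = pick_device()
x = torch.randn(4, 4, device=device)
print(device, x.sum().item())
```

Even this only papers over the problem: the moment a project depends on a CUDA-only library (custom CUDA kernels, cuDNN-specific ops, and so on), the fallback path stops existing.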
 
Eh, lots of people did not trust Nvidia's slides when they compared them to their own products. My guess is people would not trust AMD's either. Wait for reviews, as always.
"I don't trust first-party performance numbers"

- Not investors.

These presentations are just as much for investors as for potential consumers.

When AMD isn't willing to talk about their performance versus their old products (i.e., 6900 XT vs. 7900 XT), it's because they have something to hide from their investors.

When you have a huge dick, you let people see it. When you keep the towel on, you have something to hide.
 
I wish I could remember where I saw folks claiming that the prices would pretty much match Nvidia's, so I could see them eat their hats. The pricing is absolutely incredible for the cards' performance level, plus the 6000 series is still out there and on sale for far less. :)
 
I wish I could remember where I saw folks claiming that the prices would pretty much match Nvidia's, so I could see them eat their hats. The pricing is absolutely incredible for the cards' performance level, plus the 6000 series is still out there and on sale for far less. :)

So, what are the performance levels you are talking about? How much faster is this versus... anything? I'm not talking about "efficiency at given power" or "performance at 250 W" or "power at given performance".

Product versus product: what are the performance numbers you are talking about?
 
Plus, I bet the driver teams are pulling five tens right up until the launch.
 
Eh, lots of people did not trust Nvidia's slides when they compared them to their own products. My guess is people would not trust AMD's either. Wait for reviews, as always.
To say "people would've been skeptical anyway" is kind of a cop out though. If you have full faith and conviction in the strength of your product then the skeptics don't matter and you certainly don't revolve your business decisions around them.

If nothing else, marketing benchmarks demonstrate some accountability. When Nvidia showed them, even if some slagged them off as cherry-picked, idealized synthetic scenarios, they were still a point of reference that third-party reviewers could compare against to judge whether they really were bullshit marketing exaggerations - or not.

Simplest explanation is that with six weeks until launch, AMD calculated it was better not to commit to any hard numbers in case the driver and firmware teams find some extra performance in the meantime (and that's not to imply performance is lacking, but the earliest numbers are usually also the ones that become the permanent record, even if they improve later).
 
To say "people would've been skeptical anyway" is kind of a cop out though. If you have full faith and conviction in the strength of your product then the skeptics don't matter and you certainly don't revolve your business decisions around them.

If nothing else, benchmarks demonstrate some accountability. When Nvidia showed them, even if some slagged them off as cherry-picked, idealized synthetic scenarios, they were still a point of reference that third-party reviewers could compare against to judge whether they really were bullshit marketing exaggerations - or not.

Simplest explanation is that with six weeks until launch, AMD's cost-benefit analysis concluded it was better not to commit to any hard numbers in case the driver and firmware teams can find some extra performance somewhere.
That is a good point...

It was probably a combination of all the things we have talked about.
They probably are in second place vs. a 4090...
Probably would destroy a 4080, which IMO is very likely why Nvidia isn't selling them yet.
You also have a point... with a little over a month to go on a brand-new arch, perhaps they can find another 5-10% in the drivers by the time reviewers get their cards.
 
We don't know where these RDNA3 cards are on the efficiency curve.
True, and the fact that they spoke so much about optimising that aspect could be all talk, but why would a mix of TSMC 5 and 6 with the added interconnect end up more efficient than TSMC 4?

It's possible, but all signs seem to point in the other direction. What could make it possible would be something like last generation, with the RAM being way more power-hungry.
 
Eh, lots of people did not trust Nvidia's slides when they compared them to their own products. My guess is people would not trust AMD's either. Wait for reviews, as always.
They did show the usual percentages over some product and some FPS numbers; my only issue is some numbers being unclear about the FSR setting used:
[AMD RDNA 3 Tech Day press deck, slide 43]

It could be that AMD ray tracing has become good enough to join the "4K with RT on at 60 fps, if upscaling is on" group, or it could be FSR Performance being used. I cannot find endnote RX-832; looking at the 44 fps a 6950 XT was getting with FSR Performance at 1440p, I imagine it is not Quality (https://www.thefpsreview.com/2022/07/11/sapphire-nitro-amd-radeon-rx-6950-xt-pure-review/5/).
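For a sense of why the FSR mode matters so much, here is a quick sketch of the internal render resolution hiding behind a "4K" figure, assuming the standard FSR 2 per-axis scale factors (Quality 1.5x, Balanced 1.7x, Performance 2.0x, Ultra Performance 3.0x):

```python
# Internal render resolution behind a "4K + FSR" number, per FSR 2 quality mode.
FSR2_SCALE = {"Quality": 1.5, "Balanced": 1.7, "Performance": 2.0, "Ultra Performance": 3.0}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    scale = FSR2_SCALE[mode]
    return round(out_w / scale), round(out_h / scale)

for mode in FSR2_SCALE:
    w, h = internal_resolution(3840, 2160, mode)
    print(f"4K output, FSR {mode:<17} -> renders at ~{w}x{h}")
```

Quality renders at roughly 2560x1440 internally while Performance drops to roughly 1920x1080, so the same "4K with FSR" fps figure can represent very different GPU workloads depending on which mode the endnote actually used.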
 
True, and the fact that they spoke so much about optimising that aspect could be all talk, but why would a mix of TSMC 5 and 6 with the added interconnect end up more efficient than TSMC 4?
Split clocks and architecture optimization could bring enough gains.

Only 100 bucks difference between the cards seems so odd.
Agreed, the price difference seems too small compared to the specs difference.
 
Simplest explanation is that with six weeks until launch, AMD calculated it was better not to commit to any hard numbers in case the driver and firmware teams find some extra performance in the meantime (and that's not to imply performance is lacking, but the earliest numbers are usually also the ones that become the permanent record, even if they improve later).
I think it's much simpler than that. Both of these cards are aimed at the 4080, not the 4090. The 4080 isn't launched yet, so there are no numbers for AMD to show.
 
Split clocks and architecture optimization could bring enough gains.
That, and GDDR6 vs. GDDR6X, I imagine, and maybe the giant die of the 4090 lets it be much more efficient than the smaller one will be when it is pacing itself, but I do not see why that would be significant.
 
I'm not sure where they got them, but FPS Review has some fps numbers for six games, including a couple with ray tracing; there's no comparison to other cards, and it just says it's the XTX version at 4K max settings without any further details (FSR?). I really wish they would both stop pushing DLSS/FSR performance numbers and stick with native resolutions.

I'll wait for reviews to judge performance, but at least the price didn't scare me away, and I like that I might even be able to run the XTX without replacing my power supply or measuring my case to see if it will fit (at least for the reference card).

I'm still not convinced that RT is worth caring about much currently, but based on the numbers they did share, it would have been nice to see a bigger bump there to push it closer to being worthwhile, if nothing else.
 
I'm not sure where they got them but.....I really wish they would both stop pushing DLSS/FSR performance numbers and stick with native resolutions.

I'll wait for reviews to judge performance, but at least the price didn't scare me away, and I like that I might even be able to run the XTX without replacing my power supply or measuring my case to see if it will fit (at least for the reference card).

I'm still not convinced that RT is worth caring about much currently
they can google existing results and do math?!
and ummm:
[attached image]

i agree but might need a larger case or reconfig...
i agree.
 
they can google existing results and do math?!
That seems more than close enough to go with the $600-cheaper option; I cannot see the $1,200 4080 16 GB competing. (We will have to wait for full third-party reviews; AMD could be cherry-picking its best case with that small a sample of games.)
 
they can google existing results and do math?!
and ummm:
[attached image]
i agree but might need a larger case or reconfig...
i agree.
They were exact fps numbers for several games listed in their coverage of the announcement; I doubt FPSR just made them up and threw them in there without mentioning it.

Edit: I found the same numbers on AMD's site here. It had a little more info about the hardware config but still just says 4K max settings.
 
They were exact fps numbers for several games listed in their coverage of the announcement; I doubt FPSR just made them up and threw them in there without mentioning it.
I didn't watch it end to end, just skipped through bits, but based on what I saw and the pics that have been posted, I've only seen fps numbers for RT; the raw 4K figures were given as 1.x multipliers. That wouldn't be making them up; they could have used their own existing test numbers. The numbers posted on their site don't appear, IMO, to be part of the preceding quotation.
Edit: but I'm sure one of the FPS crew lurking around here may correct me.
 
I didn't watch it end to end, just skipped through bits, but based on what I saw and the pics that have been posted, I've only seen fps numbers for RT; the raw 4K figures were given as 1.x multipliers. That wouldn't be making them up; they could have used their own existing test numbers. The numbers posted on their site don't appear, IMO, to be part of the preceding quotation.
See my edit. It's on AMD's website, and I would assume it went out with the press packets as well.
 
but they could have googled and found the page
Or they could have read the online press release and followed the "learn more" link like I did; regardless, that solves where it came from. I'm still not sure why they wouldn't include that in the presentation, though, especially since the graphic almost looks like it was made to be a slide.
 
They did show the usual percentages over some product and some FPS numbers; my only issue is some numbers being unclear about the FSR setting used:

It could be that AMD ray tracing has become good enough to join the "4K with RT on at 60 fps, if upscaling is on" group, or it could be FSR Performance being used. I cannot find endnote RX-832; looking at the 44 fps a 6950 XT was getting with FSR Performance at 1440p, I imagine it is not Quality (https://www.thefpsreview.com/2022/07/11/sapphire-nitro-amd-radeon-rx-6950-xt-pure-review/5/).

Things are going to get interesting from here.
This is my 4090 on the Cyberpunk benchmark - 4K, DLSS Quality (highest DLSS), RT Ultra, all other settings maxed.

76.5 FPS
Now the question is... what level of FSR did AMD use on their run?

[Screenshot: Cyberpunk 2077 benchmark result, 4K, RT Ultra, DLSS Quality]


The other wrinkle is that there is a level of RT above Ultra in Cyberpunk... "Psycho", which the 4090 handles easily using DLSS Quality. I wonder if the 7900 XTX is capable of this.
 
It's $500 less with no stats besides "1.7x faster". It means nothing without reviews. Nothing.
It's actually $600 less. You could build a PC for $600 and still get a 7900 XTX for the same total price as an RTX 4090.

Not saying it's worth it in the long run until we have third-party benchmarks. Makes me wonder when reviews go out.
 
It's actually $600 less. You could build a PC for $600 and still get a 7900 XTX for the same total price as an RTX 4090.

Not saying it's worth it in the long run until we have third-party benchmarks. Makes me wonder when reviews go out.
I see you have mentioned this before as well. Sure, you can source ALL the other components for a computer for $600, but it will fall victim to the same thing the 4090 did, and that's CPU bottlenecks. You aren't building a system that can push either card (if we are to believe the 1.7x) for $600.

Edit: I get it. You guys are FROTHING at the mouth for an AMD card and just hoping and praying it's as fast as, or faster than, Nvidia. You want it so bad! Unfortunately, I don't see it happening, and AMD intentionally not even showing its hand before reviewers isn't a "smooth move, because who cares about FPS anyway"; it's a silly move that stinks of low performance numbers.
 
It's actually $600 less. You could build a PC for $600 and still get a 7900 XTX for the same total price as an RTX 4090.

Not saying it's worth it in the long run until we have third-party benchmarks. Makes me wonder when reviews go out.
Wondering how badly the AIB cards will fare as far as MSRP creep goes.
 