This Is BAD, AMD Basically Lies About CPU Performance

What AMD shows is the difference between ReBAR (which is universal) and Smart Access Memory (which is proprietary).
SAM is better for GPUs, hands down,
Is it by much at 1080p too?


View: https://youtu.be/FM-mDf0U38k?t=722

Gain from enabling SAM/ReBAR at 1080p with a 6800 XT:

5950X: +12%
12900K: +13%

On a 6600 XT:

View: https://youtu.be/FM-mDf0U38k?t=743

+2% on the 5950X, +1% on the 12900K (the 10900K sees the biggest boost at +8%). Far Cry 6 gives the best boost to the 12900K again:
View: https://youtu.be/FM-mDf0U38k?t=779 (same for the Tom Clancy title right after)

Maybe it has changed since, or the 6600 is quite different, but of all the games tested two years ago on a 6800 XT and a 6600, on 12900K vs 5950X systems, it seemed to be a bit of a coin toss which platform gained the most from enabling SAM/ReBAR, and there was never a clear, big gap.
 
False advertising, the nerve of these tech overlords! Hell we get lied to from the President all the way down the ladder, nothing to see here, move along.
 

It varies heavily at 1080p, but SAM is generally better than ReBAR because of some GPU-specific optimizations involved. It doesn't need to be much better, but as those slides showed it was ~5%-ish, and that is what they were showing in their graphs.
 
as those slides showed it was ~5%-ish
Which slide are we talking about? (I'm not sure they make it clear how much bigger the SAM boost would be than the ReBAR boost; it is a bit all over the place, and in some games ReBAR seems better.)

A 5900XT is not really close to a 13700K when it comes to gaming, being 25%-30% slower; they needed a lot to make it work, and going for the DDR4 13700K helps, but still.
 
What AMD shows is the difference between ReBAR (which is universal) and Smart Access Memory (which is proprietary).
SAM is better for GPUs, hands down; AMD tailor-fits the solution to the problem, and it's good, which does give an advantage to sticking with an all-AMD system in the mid to low range.
But this is a highly misleading result at best, because they are not showing CPU results, they are showing platform results. The CPU is not the limiting factor in those tests, and AMD knows it!
I've seen people claim AMD has some secret sauce in SAM. But I have seen no official claim of that from AMD, let alone good data from a reputable site/channel.

The best I have seen is a video Hardware Unboxed did a couple of years ago, with RDNA 2 GPUs used on Zen 2, Zen 3, and whatever Intel generation was relevant at that moment. And the benefit from ReBAR on Intel vs. SAM on AMD was basically a wash.
And actually, Zen 2 at that time seemed like it hadn't been paid as much attention for whatever BIOS/microcode optimizations there might be.


**looks like those videos were posted already ;)

It varies heavily at 1080p, but SAM is generally better than ReBAR because of some GPU-specific optimizations involved.
Show us some info and data on that, please.
 
I've seen people claim AMD has some secret sauce in SAM. But I have seen no official claim of that from AMD, let alone good data from a reputable site/channel.

The best I have seen is a video Hardware Unboxed did a couple of years ago, with RDNA 2 GPUs used on Zen 2, Zen 3, and whatever Intel generation was relevant at that moment. And the benefit from ReBAR on Intel vs. SAM on AMD was basically a wash.
And actually, Zen 2 at that time seemed like it hadn't been paid as much attention for whatever BIOS/microcode optimizations there might be.


**looks like those videos were posted already ;)
You can't really test it, but ReBAR is an old tech; it's been around for years and it's very generic. SAM is a set of GPU-specific functions built on top of ReBAR to improve latency and reduce return times on system RAM.

So when you use an AMD GPU in an AMD system it gets the bonuses, but put it in an Intel system and they go away. They aren't huge bonuses by any stretch, but they exist.
 
You can't really test it, but ReBAR is an old tech; it's been around for years and it's very generic. SAM is a set of GPU-specific functions built on top of ReBAR to improve latency and reduce return times on system RAM.

So when you use an AMD GPU in an AMD system it gets the bonuses, but put it in an Intel system and they go away. They aren't huge bonuses by any stretch, but they exist.
Uh huh, and please show us some kind of reputable source for that.

I have seen AMD say that developers can chunk engine data to better fit ReBAR/SAM.

You can do driver-side customizations with ReBAR, which Nvidia is often doing (and which you can take some manual control of with Nvidia Profile Inspector).

But I haven't seen AMD broadly claim that SAM is much more than ReBAR with a brand name on it. I mean, sure, they are probably doing work in their GPU drivers, BIOS, and chipset drivers to make it work as well as possible. But I haven't seen any data showing that they are doing a relatively better job of it than anyone else, or that SAM is actually technically different from ReBAR.
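For what it's worth, the part of ReBAR/SAM that software can actually see is just how much VRAM is exposed as CPU-mappable. A rough way to check that yourself is to ask Vulkan for the memory heaps. This is only a minimal sketch under my own assumptions (the bare-bones instance setup and the 256 MiB cutoff, the classic fixed BAR size, are my assumptions, not anything AMD documents):

Code:
// Minimal sketch (assumption: a Vulkan-capable driver is installed).
// With resizable BAR / SAM active, the driver reports a DEVICE_LOCAL + HOST_VISIBLE
// memory type backed by (nearly) the whole VRAM heap instead of the classic 256 MiB
// window. The 256 MiB threshold below is a heuristic, not an official flag.
#include <vulkan/vulkan.h>
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);
        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(gpu, &mem);

        VkDeviceSize cpuVisibleVram = 0;  // largest CPU-mappable VRAM heap found
        for (uint32_t i = 0; i < mem.memoryTypeCount; ++i) {
            const VkMemoryType& t = mem.memoryTypes[i];
            const bool deviceLocal = (t.propertyFlags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0;
            const bool hostVisible = (t.propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0;
            if (deviceLocal && hostVisible)
                cpuVisibleVram = std::max(cpuVisibleVram, mem.memoryHeaps[t.heapIndex].size);
        }
        printf("%s: CPU-visible VRAM heap = %llu MiB (%s)\n", props.deviceName,
               (unsigned long long)(cpuVisibleVram >> 20),
               cpuVisibleVram > (256ull << 20) ? "ReBAR/SAM likely on" : "likely off");
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}

That only tells you whether the big CPU-visible window is exposed, not whether SAM layers anything extra on top of it; from the application side the two look identical, which is sort of my point.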
 
I think the reason why the response has been so blasé to this is partially because it's also so nonsensical. Why in the world would they do this for a (very minor) refresh of an older-gen part? It just doesn't make any sense. Under any scrutiny it would be an easily discovered ruse. There would be no long-term gains... only detriments. Like, who even cares enough about an older-gen part at this point that you would need to mischaracterize its performance to this degree?

To be fair, they stated their test bench conditions at least. There's not some stupid hidden refrigerator cooling their processor. It's just that they made a very cherry-picked test bench for the testing (and on top of that took any "this is within testing deviations" sort of result as a "lead"). It's still, of course, absolutely idiotic and pointless.
 
I think the reason why the response has been so blasé to this is partially because it's also so nonsensical. Why in the world would they do this for a (very minor) refresh of an older-gen part? It just doesn't make any sense. Under any scrutiny it would be an easily discovered ruse. There would be no long-term gains... only detriments. Like, who even cares enough about an older-gen part at this point that you would need to mischaracterize its performance to this degree?
Like it's been said, it's not like every company doesn't do this. The reason AMD did this is that they didn't think the tech press would care about a Zen 3 CPU in 2024.
To be fair, they stated their test bench conditions at least. There's not some stupid hidden refrigerator cooling their processor. It's just that they made a very cherry-picked test bench for the testing (and on top of that took any "this is within testing deviations" sort of result as a "lead"). It's still, of course, absolutely idiotic and pointless.
This is why nobody should ever go by manufacturer benchmarks, because it's very easy to skew them in their favor.
 
What AMD shows is the difference between ReBAR (which is universal) and Smart Access Memory (which is proprietary).
SAM is better for GPUs, hands down; AMD tailor-fits the solution to the problem, and it's good, which does give an advantage to sticking with an all-AMD system in the mid to low range.
I thought they were testing with the GPU the graph showed (a 5900XT), but they actually used another GPU (a 6600 XT, see the fine print)?
But this is a highly misleading result at best, because they are not showing CPU results, they are showing platform results. The CPU is not the limiting factor in those tests, and AMD knows it!
Well, if it was the CPU they were trying to compare, it's sorta valid. But at the same time it's only relevant for AMD GPU users, if I understand what you are saying.

So, pointless numbers if you use an Nvidia GPU, correct?
They might not be lying, but they are pulling their truth out of a technicality, and it's a "feels bad" all around.
Pretty standard marketing BS. But yeah, it's another level of cherry-picking.

I suspect they are desperate.. see those last quarter GPU sales numbers...

I think it's fine, as long as they are pointing out that it requires AMD CPU+AMD GPU.

Nvidia includes DLSS performance numbers in their marketing, but we all know those require Nvidia GPUs.
 
I thought they were testing with the GPU the graph showed (a 5900XT), but they actually used another GPU (a 6600 XT, see the fine print)?
5900XT is the name of the new CPU (not a 5900 XT GPU; if those really exist, I had never heard about them before now).
 
WTF, they are giving CPUs and GPUs the same damn naming scheme...

Are you sure? Because that is exactly what the HU guys were saying in the first few minutes of their video. It seems hard to believe they would make that kind of mistake.
 
WTF, they are giving CPUs and GPUs the same damn naming scheme...

Are you sure? Because that is exactly what the HU guys were saying in the first few minutes of their video. It seems hard to believe they would make that kind of mistake.
They seem to clearly say they used a 6600 XT GPU to make the 13700K vs 5900XT CPU comparison; the 5900XT GPU does not even exist to create that confusion in the first place (that I know of), outside of some 20-year-old Nvidia GPUs.
 
Wonder if they hired that JM-AMD guy back or whatever his screen name was. Remember him back in the Bulldozer days? He was on here and a couple other forums before it launched talking about how great it was gonna be....then when it dropped he was never seen again.
 
…The RX 5900 XT was their 2020 flagship.
It's not even that old.
Wasn't it a 6900 XT? And after that a 6950 XT? Didn't RDNA 1 stop at the RX 5700 XT tier:
https://www.techpowerup.com/gpu-specs/?architecture=RDNA+1.0&sort=generation

Cannot find it anywhere
https://www.newegg.ca/p/pl?d=rx+5900+xt
https://www.google.com/search?q=rx 5900xt++site:techpowerup.com

When I google it, the only things I find are some engineering samples with that code name (or early-2000s Nvidia GPUs, called FX 5900 XT, not RX). I could be completely out of touch, but that sounds a bit alien to me.

That AMD never implied they tested their new 5900 XT CPU with an RX 5900 XT GPU, and that no one in the press got mixed up because that GPU does not exist in the first place, seems to me 100% factual.
 
So they used a high-end, 4-year-old engineering sample? Was the 5900XT a CPU?
Some BS going on...
 
So they used a high-end, 4-year-old engineering sample? Was the 5900XT a CPU?
Some BS going on...
No, the RX 5900 XT was a GPU only released as an engineering sample. So technically there wasn't a GPU in the AMD lineup to share the 5900XT name.

They used an RX 6600 as the GPU in the testing to level the playing field.
 
OK, I think I got it sorted out.
The 5900XT is a new old Zen 3 CPU (old architecture; Zen 4 is current and Zen 5 is coming soon™).
They used a bottom-end GPU for CPU testing. This was the issue, because if you really want to test CPU performance, you eliminate the GPU as a performance inhibitor as much as possible. So you use a 4090 running at 1080p, not a bottom-end AMD 6600 GPU. The results do show off the SAM feature, which requires an AMD CPU + AMD GPU, but these are not applicable performance expectations for 90% of the market for the CPU. Pretty misleading.
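To put that GPU-bottleneck point in rough numbers, here is only a toy model; the caps below are made-up placeholders, not anyone's benchmark results:

Code:
// Toy model (made-up numbers): a frame takes as long as the slower of the CPU's and
// GPU's work for it, so the reported FPS is capped by whichever one is the bottleneck.
#include <algorithm>
#include <cstdio>

double cappedFps(double cpuFps, double gpuFps) {
    // frame time = max(CPU frame time, GPU frame time)
    return 1.0 / std::max(1.0 / cpuFps, 1.0 / gpuFps);
}

int main() {
    const double weakGpu = 60.0, strongGpu = 300.0;  // hypothetical GPU-only caps
    const double oldCpu = 140.0, newCpu = 190.0;     // hypothetical CPU-only caps
    printf("RX 6600-class GPU:  %.0f vs %.0f fps\n",
           cappedFps(oldCpu, weakGpu), cappedFps(newCpu, weakGpu));     // 60 vs 60: gap hidden
    printf("RTX 4090-class GPU: %.0f vs %.0f fps\n",
           cappedFps(oldCpu, strongGpu), cappedFps(newCpu, strongGpu)); // 140 vs 190: gap shows
    return 0;
}

With the weak GPU both CPUs land on the same number, which is exactly why a 6600-class card hides whatever difference there is between the processors.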
It would definitely be misleading if they did not point out what they are showing. The HU guys didn't catch it because it is a two-generations-old architecture and they weren't interested in any of that news.

AMD did list the test machine specs, so ultimately it's not as big of a deal as I initially thought.
 
Haven't we seen this all before, for all of the market players... Including AMD?
There are levels to it; recent AMD releases had been quite good, which creates a bit of the surprise and reaction here. (For an Apple GPU graph during a presentation... who knows what they are doing.)

Take almost exactly the same CPU, launched in 2020:
[chart: 2020 Ryzen 5000 vs Core i9-10900K gaming benchmarks]


Margin-of-error faster or slower than a 10900K in most games, a big deal in some high-fps titles: it was spot on and a fair representation.

Now they rebrand that CPU and it is suddenly faster at gaming than a 13700K, a newer product a full tier faster than a 10900K?

This one, we can suspect, did not have the same level of care put into it. It was not meant for the general public (who do not care about those CPUs) but for OEMs that will do their own research, and for CPUs where people already know, to within the 2% margin of error, exactly what they will do and do not need benchmarks to start with anyway.

Had they done something like that for their upcoming Zen 5 CPU launch, that would have been a real big deal.
 
Take almost exactly the same CPU, launched in 2020:
[chart: 2020 Ryzen 5000 vs Core i9-10900K gaming benchmarks]

Margin-of-error faster or slower than a 10900K in most games, a big deal in some high-fps titles: it was spot on and a fair representation.

Now they rebrand that CPU and it is suddenly faster at gaming than a 13700K, a newer product a full tier faster than a 10900K?
Perhaps the difference has to do with the performance Intel has lost in the meantime due to security mitigations (Downfall, Spectre, etc.)?
 
Perhaps the difference has to do with the performance Intel has lost in the meantime due to security mitigations (Downfall, Spectre, etc.)?

Probably just cherry-picking titles for which, at that resolution and detail settings, with 3200 DDR4 RAM and an RX 6600, a 13700K was never faster. In the past they would have used 3600 MHz DDR4 with a 2080 Ti, and more games, to let the CPU shine. They tested on a GPU inferior to the 2019 mid-range RX 5700 XT/2060 Super.

Depending on settings, at 1080p an RX 6600 in Cyberpunk will do low-50s fps; it is possible that the CPU does not matter that much past a decent 5000-series Ryzen.
 
and outright lied on some tests entirely.
That is hard to tell? (If they do not specify the game settings, do you lie...?) I think it is mostly, if not only, the first part: completely GPU-bound, where any CPU above a 5600X becomes a test-noise coin-flip situation.
 
HU's testing was with a 5800X, not a 5800XT. They expect the performance to be within a few percent, so he extrapolates. The XT versions of those Zen 3s are announced but unreleased CPUs. But they are two generations old, and the benchmarks were skewed in such a way that they showed them beating newer-gen Intels that HU's own testing shows to be 13% or more faster.

It's all low-end CPU stuff too: old Zen 3s and i5s.

The TL;DR: it's a new low in misleading charts/performance claims.
 
The screenshot at https://youtu.be/MkKQ4VCDL7M?t=720 is funny; that's what I thought AMD did.

HU didn't test with DDR4 3200 for the 5800X/T and the 13700K, like AMD did.

It seems like there could be a performance anomaly there, where the 5900XT is as good or faster in gaming (in some titles) when the 13700K is also paired with DDR4 3200 and a weak GPU. And AMD is pushing that anomaly to confuse people into thinking their two-generations-older CPU is actually a performance match (or better).

If AMD's claimed 13600K and 13700K performance is repeatable in Intel's labs, it's likely Intel will tweak their microcode for 13th/14th gen to perform better with DDR4.
 
It seems like there could be a performance anomaly there, where the 5800XT is as good or faster in gaming (in some titles), when also paired with a weak GPU.
Not sure if it is an anomaly; in many games with a weak enough GPU, a 5700X will be as fast as a 7700X or a 14900K. That is just standard and normal.

HU didn't test with DDR4 3200 for the 5800X/T and the 13700K, like AMD did.

Which shows how much it is about the game and GPU settings and nothing to do with the CPU; they could have made the graph with a 14900K using 7200 DDR5 and still have a badly binned, underclocked 5950X (which is the new 5900XT, if I'm not mistaken) being neck and neck.
 
Not sure if it is an anomaly; in many games with a weak enough GPU, a 5700X will be as fast as a 7700X or a 14900K. That is just standard and normal.



Which shows how much it is about the game and GPU settings and nothing to do with the CPU; they could have made the graph with a 14900K using 7200 DDR5 and still have a badly binned, underclocked 5950X (which is the new 5900XT, if I'm not mistaken) being neck and neck.
The point is, HU's tests didn't show any game where the 5800X/T and 5900X/T were faster while using the RX 6600 GPU. But they didn't test with DDR4 3200 like AMD did.

When using DDR4 3200 for both Zen 3 and Intel 13th gen, while also using an RX 6600, there may be a performance anomaly where AMD's two-generations-older CPU actually performs as well, and sometimes slightly faster, in gaming.

HU is usually pretty thorough about details. I'm not sure why they didn't test with the same RAM speed AMD stated that they used.
 
The point is, HU's tests didn't show any game where the 5800X/T and 5900X/T were faster while using the RX 6600 GPU. But they didn't test with DDR4 3200 like AMD did.
AMD only claimed they were faster in one game, Cyberpunk, and considering the performance was still 100% GPU-bound (with a 6600) even with a 12100 or Ryzen 2600 (at 9:03), it is hard to imagine that a DDR4 13600K or 13700K would have been slower than those two.

It could be more about the game settings AMD chose to use than about DDR4 being used or not, to be able to find that performance gap in Zen 3's favor.
 
Making a point that a marketing slide is off when all of us say "Wait until independent reviews" is stupid. What marketing slide is accurate? These are bargain bin SKUs too which they didn't even promote. Seems more like "oh look what we found (that won't matter to anyone)"
 
Making a point that a marketing slide is off when all of us say "Wait until independent reviews" is stupid.
Those marketing slides were from an event with none of us in it, and never meant to be seen by us I would imagine; it is for system builders. Can they wait until independent reviews to make decisions?

Had they done this in marketing aimed at us, for products that interest us, I think people would be more on board with the outrage.
 
Those marketing slides were from an event with none of us in it, and never meant to be seen by us I would imagine; it is for system builders. Can they wait until independent reviews to make decisions?

Had they done this in marketing aimed at us, for products that interest us, I think people would be more on board with the outrage.
HP, Dell, etc. all test the chips before buying them. They don't go by marketing slides; that's for the press. If AMD wasn't really promoting the SKUs then it's really a nothingburger. They are replacement SKUs. No one ever really focuses on those, including AMD.
 
HP, Dell, etc. all test the chips before buying them.
Yes, buying, but were any decisions made or time committed before being able to test them, and would they take the money and time to test them? What's the point of that event at all then? (Serious question.) It is not like the press cares much about OEM-only CPUs people cannot buy; maybe it is for investors, if every OEM that exists really can afford not to make any plans before getting their hands on them.
 
it is hard to imagine that a... 13600K or 13700K would have been slower than those two.
Exactly.

But, AMD posted the numbers. These numbers:

[AMD Computex press-deck slides: claimed gaming performance vs the 13600K/13700K]


So they are either exploiting a DDR4- and/or GPU-limited performance anomaly in 13th gen or... I dunno what. It's weird, and I would have liked to see someone repeat the test with DDR4 3200.
And as HU said in one of their coverage videos, AMD would have been better off not saying anything about the performance of the new CPUs.

12% faster in Cyberpunk is huge.
4% faster in Naraka is still notable, as a Zen 3 shouldn't be faster at all. And 4% is more than a trivial difference, especially considering the limited GPU used.


TheFPSReview had some words about it. Watching it now:

View: https://www.youtube.com/watch?v=M1p8WFhiwHc
 
...I dunno what.
Could it be some game setting, say 720p, extremely low detail, with some crowd-density setting, and poof, you create a little gap for the AMD platform for some reason? Or the other way around, where the 6600 struggles down in the low 20s and then "12% faster" becomes a margin of error: one at 20 fps, the other at 22-23 fps. Usually 5% or less sounds like margin-of-error territory, but the lower the fps, the less a percentage means.
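Back-of-envelope with those same made-up numbers (20 fps and a 12% claim, not AMD's actual data), just to show how small the gap is in absolute terms:

Code:
// Converts a percentage "lead" at a low, GPU-bound frame rate into absolute
// frames per second and frame times. 20 fps and +12% are made-up example values.
#include <cstdio>

int main() {
    const double baseFps = 20.0;               // hypothetical slower result
    const double fasterFps = baseFps * 1.12;   // a claimed "12% faster"
    printf("%.1f fps vs %.1f fps -> only +%.1f fps\n", baseFps, fasterFps, fasterFps - baseFps);
    printf("frame time: %.1f ms vs %.1f ms\n", 1000.0 / baseFps, 1000.0 / fasterFps);
    return 0;
}

That works out to roughly 2.4 fps (50 ms vs ~44.6 ms per frame), which at that level is the kind of gap run-to-run variance on a GPU-bound scene can easily cover.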
 