AMD’s RX 6000 GPUs to Boost Perf With Ryzen 5000 CPUs via Smart Memory Access

DPI

Nitpick Police
Joined
Apr 20, 2013
Messages
11,062
So by your logic, any benchmark using DLSS doesn't count because it is game specific and is most definitely vendor specific.
DLSS benches would be the definition of vendor-specific, but they really only count for determining the difference between DLSS on and off; not for a presentation that tries to mislead people into thinking the DLSS-enabled numbers are what could reasonably be expected across the board for most games.

In other words, if the Ampere presentation consisted of a roundup of only DLSS-enabled games, with big "Ampere vs Navi" charts and only a small "*DLSS enabled results" line of disclaimer text at the bottom of the slides, that would be disingenuous as hell and they'd be getting rightly mocked for it.
 
Last edited:

chameleoneel

2[H]4U
Joined
Aug 15, 2005
Messages
3,711
But it would be disingenuous as hell if an Nvidia presentation for a new gen launch consisted of benchmark charts showing a roundup of only DLSS capable games, then enabling DLSS and comparing those numbers to stock AMD cards, with only a *DLSS enabled disclaimer text at the bottom of the slides.
Yeah, but AMD clearly lists Smart Access Memory as part of their normalized system configuration specs, just like any review site would do. This is no attempt to pull a fast one on us.

I also don't think DLSS is a good analog for comparison, as the GPU is not rendering the same internal resolution workload, and the resulting image quality varies widely depending upon the game, among the few games which support it.
Smart Access Memory has no effect on image quality, and it does not change the internal rendering resolution load of the game.
 

cybereality

Supreme [H]ardness
Joined
Mar 22, 2008
Messages
6,441
Fact is, AMD designed hardware that increases performance, from what I understand, in all games. Not just a couple sponsored titles.

Of course they are going to show benchmarks with all their features enabled, and it was clearly shown on the slide what they were doing, with more details in the footnote. It was not deceptive at all.
 

Marees

Gawd
Joined
Sep 28, 2018
Messages
825
Interesting question




I think we really deserve to know what the "real" technical requirements are for making this work.

AMD has indicated that this will require a 5000 series (Zen3) CPU and a 500 series chipset motherboard in order for this to work. Why exactly can't it work on Zen2 and/or 400 series motherboards? Why? Does it REQUIRE PCIe 4.0? Because apparently Nvidia's solution doesn't.

Nvidia has indicated that they can make this work with their cards on Intel OR AMD CPUs, and even with PCIe 3.0. But only on Ampere of course... Why not Turing, etc?

I want to know what is being done (or not done) due to technical limitations as opposed to artificial product segmentation purposes.
 

rinaldo00

[H]ard|Gawd
Joined
Mar 9, 2005
Messages
1,539
"I want to know what is being done (or not done) due to technical limitations as opposed to artificial product segmentation purposes."

That is like asking Krusty Burger for their secret sauce recipe, good luck.

Spoiler: Put the mayonnaise in the sun.
 

Marees

Gawd
Joined
Sep 28, 2018
Messages
825
AMD SAM review



"Our launch-day coverage of the Radeon RX 6800 series includes: Radeon RX 6800 XT Review, Radeon RX 6800 Review, AMD Smart Access Memory Review.

By now, if you've read our full Radeon RX 6800 XT review, you'll know that AMD has gained solid ground in performance, and the days of intense Radeon vs. GeForce competition are back. The RX 6800 XT averages just 2% behind the RTX 3080 at 1080p, just 1% behind at 1440p, but 6% behind at 4K—as tested on a machine powered by a Ryzen 9 5900X; these gaps are different in our main review, which used an Intel processor.

Smart Access Memory (SAM) is enabled by toggling a switch in the UEFI setup program of a compatible motherboard—if you've satisfied the requirement of a Ryzen 5000 series processor and an AMD 500-series chipset motherboard. With SAM enabled, we see the averages change "dramatically" (in the context of the competition), with the RX 6800 XT now being 2% faster across all three resolutions. This helps the RX 6800 XT match the RTX 3080 at 1080p, while beating it by 1% at 1440p and being just 4% slower at 4K UHD—imagine these gains without even touching other features such as Radeon Boost or Rage Mode!

It's important to understand that SAM doesn't work with all game engines. In "Borderlands 3" and "Divinity," for example, SAM negatively impacts performance. It seems that SAM comes with a small CPU overhead, which causes a loss in FPS in games that are CPU bound, because the CPU loses time dealing with SAM—time it can't spend on frame rendering. In certain other games, there's zero impact (e.g., DOOM Eternal). In certain games, though, such as "Gears 5," there are significant frame-rate gains with SAM enabled, which help tilt the averages in favor of SAM being enabled.

Overall, SAM isn't snake oil per se; it offers tangible performance gains that are surprisingly large, considering nobody cared about the resizable BAR feature for years. We only wish that AMD hadn't restricted it to only its latest platform and latest processor. If AMD's excuse is "we want to maximize PCIe Gen 4 bandwidth utilization; Gen 3 would pose bottlenecks," then our retort would be: what about Ryzen 3000 "Matisse"? What about Intel's "Rocket Lake," which comes out next year? The restriction to Ryzen 5000 seems arbitrary.

NVIDIA has already announced that they will add a similar feature to their GeForce graphics cards, probably with wider platform support. I'm sure this will lead AMD to open the feature up to more chipsets and hardware combinations. Enabling the feature requires you to boot in UEFI mode. If you've been resisting UEFI like I have, preferring CSM and MBR, then the time to switch has come; the performance gains finally justify it.

If you're one of the lucky few who flew to the horn of Africa, joined a pirate gang, hijacked a Maersk superheavy, broke into the right container, and pulled out a Ryzen 5000 processor, then Smart Access Memory is a cool feature to have (no, don't do that)."


https://www.techpowerup.com/review/amd-radeon-sam-smart-access-memory-performance/
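The review's point about mixed per-game results tilting the overall averages can be illustrated with a quick sketch. The game labels and scaling figures below are invented for illustration only (loosely patterned on the behaviors the review describes), not TechPowerUp's actual data:

```python
# Illustrative only: how a few big per-game wins can outweigh small losses
# when averaging SAM on-vs-off scaling. All figures are hypothetical.
from statistics import geometric_mean

# Relative FPS with SAM on vs. off (1.00 = no change); invented values.
sam_scaling = {
    "game_a": 1.11,  # large gain (a "Gears 5"-style case)
    "game_b": 1.00,  # zero impact (a "DOOM Eternal"-style case)
    "game_c": 0.98,  # small CPU-overhead loss (a "Borderlands 3"-style case)
    "game_d": 1.03,  # modest gain
}

# A geometric mean is the usual way to average relative performance ratios.
avg = geometric_mean(sam_scaling.values())
print(f"average SAM scaling: {avg:.3f}")  # a small net gain despite one regression
```

This is why a feature can regress a couple of titles and still show a positive average across a benchmark suite.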
 

Marees

Gawd
Joined
Sep 28, 2018
Messages
825
According to a recent PCWorld interview, AMD has people on the Ryzen team working to get SAM working on Nvidia GPUs, and people on the Radeon team working with Intel to get the feature functional with Intel CPUs and chipsets.

According to AMD, there’s some work required to support the feature appropriately, implying we may not see it enabled immediately on Intel and Nvidia platforms.

https://www.extremetech.com/gaming/...ccess-memory-support-to-intel-nvidia-hardware

It’ll be interesting to see what kind of performance gains other platforms and hardware pick up from enabling this capability — Intel might benefit more than AMD (or vice-versa) and AMD GPUs might benefit more than Nvidia cards or the reverse.
 

chameleoneel

2[H]4U
Joined
Aug 15, 2005
Messages
3,711
According to a recent PCWorld interview, AMD has people on the Ryzen team working to get SAM working on Nvidia GPUs, and people on the Radeon team working with Intel to get the feature functional with Intel CPUs and chipsets.

According to AMD, there’s some work required to support the feature appropriately, implying we may not see it enabled immediately on Intel and Nvidia platforms.

https://www.extremetech.com/gaming/...ccess-memory-support-to-intel-nvidia-hardware

It’ll be interesting to see what kind of performance gains other platforms and hardware pick up from enabling this capability — Intel might benefit more than AMD (or vice-versa) and AMD GPUs might benefit more than Nvidia cards or the reverse.
Yeah, I keep hearing YouTubers and reading articles saying that "Nvidia will implement it soon, so AMD having it no longer matters," and that is not what Nvidia said at all, and not what I expect, either.

I think it's definitely going to take a fair amount of time for them to validate it, and it will probably be an incremental rollout, much like what they are doing with FreeSync. With each driver, they add a few more monitors as validated for FreeSync compatibility. I think resizable BAR will be something like that with Nvidia. Not to mention it's likely going to require BIOS updates from board brands as well.

Likewise, that's why AMD only has it available for Zen 3 and 500-series boards right now: because it takes time to get it all working.
 

Ebernanut

[H]ard|Gawd
Joined
Dec 15, 2010
Messages
1,348
It's interesting that there's a push to make this cross platform with both Nvidia and Intel after the way they introduced it but I'm all for advancing tech as a whole rather than segmenting features by brand.

It would have been nice if the article addressed whether older AM4 boards might eventually get support, but it sounds like it should work with PCIe 3.0, so I'm optimistic. That would be nice, since I'm currently hoping to pick up a 6800 XT soon and leaning towards picking up a 5900X and dropping it into my X470 board, assuming it gets a good BIOS update for Zen 3 (BIOS updates have been hit or miss on this board).
 

MavericK

Zero Cool
Joined
Sep 2, 2004
Messages
30,216
It's interesting that there's a push to make this cross platform with both Nvidia and Intel after the way they introduced it but I'm all for advancing tech as a whole rather than segmenting features by brand.
Trying to take some of the wind out of AMD's sails (read: sales), I'm guessing.
 

Ebernanut

[H]ard|Gawd
Joined
Dec 15, 2010
Messages
1,348
Trying to take some of the wind out of AMD's sails (read: sales), I'm guessing.
I'm sure that's why Nvidia announced their own version but I meant more in relation to AMD now actively trying to get it working with Ryzen/Nvidia and Radeon/Intel. I'm also sure this is a direct result of Nvidia's announcement but I do think it's interesting how quickly they pivoted away from promoting it as an ecosystem feature to one where they're just ahead of the curve.
 

MavericK

Zero Cool
Joined
Sep 2, 2004
Messages
30,216
I'm sure that's why Nvidia announced their own version but I meant more in relation to AMD now actively trying to get it working with Ryzen/Nvidia and Radeon/Intel. I'm also sure this is a direct result of Nvidia's announcement but I do think it's interesting how quickly they pivoted away from promoting it as an ecosystem feature to one where they're just ahead of the curve.
I mean it makes total sense, of course they want to sell you a GPU AND a CPU, so how they presented it at first was that you needed both. The interesting thing is that they apparently didn't anticipate that any other companies would point out that it's not a proprietary feature?
 

Marees

Gawd
Joined
Sep 28, 2018
Messages
825
The frame buffer window is what moves data from system RAM to the VRAM. Right now it is 256 MB. To fill up 8 GB of VRAM would take 32 writes through the 256 MB window.

This tech increases the size of that window. It's really going to be most helpful on systems with smaller amounts of VRAM, or in use cases where there is a need to make many writes to the VRAM.

I suspect this will have a bigger impact on low to midrange graphics. On systems with very low amounts of system RAM, say 4 GB or less, it could be a hindrance. It would depend on what the new size of the window is. If it goes to 512 MB, it takes half as many writes to get data into VRAM, but you have 0.5 GB of system memory tied up for this function...

This window size has surely had increases over the years; as both system RAM and VRAM capacities have increased, it makes sense that the window size should increase as well. 384 MB or 512 MB would seem logical. This is a tradeoff between the speed of moving data into VRAM and consuming system RAM to make the window. AMD is making it so they can use a larger window before it becomes an officially increased specification, but it seems highly likely that this window size will be reviewed and increased. Users have mentioned x86; it seems plausible that this window size is part of the x86 specification. To increase it across the board, Intel and AMD would likely have to implement the change, Microsoft would need some DLL update, and the GPU drivers and possibly the hardware design would need to take it into account as well. AMD can make most of these changes themselves on the CPU/GPU designs that they make. It's a logical step and will be helpful for larger textures (4K and beyond); GPUs are still getting there when it comes to performance at 4K. It doesn't hurt to make the move to a larger window now.

Seems like a change that the rest of the vendors will implement at some point.

For the high end cards, I really doubt it will help much. A good game engine already makes optimized use of the 256 MB window. This is the same datapath (chokepoint) that is "faster" with PCIe 4.0. Yet the reviews comparing PCIe 3.0 vs. PCIe 4.0 performance showed little to no change: about 1%, and in some cases slower. That is a 2x increase in speed, which increased write speed to the window and to the GPU in general, and the impact was negligible. To see the same bandwidth improvement, the window would need to grow to 512 MB minimum; maybe they are going farther, to 1 GB. But taking less time to fill the VRAM, which is already a well-optimized pipe, is a small increment in overall performance. Much of the time the VRAM doesn't need large transfers anyway. A larger window will help when a lot of data needs to be moved and will be equal when less data needs to be moved.

Don't get your hopes up for 10% faster GPUs from this single change. 10% faster data transfer seems more likely, but that translates into (very) small FPS or level-loading improvements.
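The copy-count arithmetic in the quoted post can be sketched in a few lines. This is a back-of-the-envelope illustration of the aperture-window idea only, not AMD's actual driver mechanism; the 256 MB figure is from the post, the other sizes are hypothetical:

```python
# Rough sketch: how many aperture-sized windows it takes to cover all of VRAM.
# Figures per the post: a legacy 256 MB BAR window vs. 8 GB of VRAM.

def windows_needed(vram_mb: int, aperture_mb: int) -> int:
    """Number of aperture-sized windows needed to cover the whole VRAM."""
    return -(-vram_mb // aperture_mb)  # ceiling division

# Legacy 256 MB window: 8 GB / 256 MB = 32 windows, as the post says.
print(windows_needed(8 * 1024, 256))       # 32

# Doubling the window to a hypothetical 512 MB halves the count...
print(windows_needed(8 * 1024, 512))       # 16

# ...and resizable BAR can expose the whole 8 GB as a single window.
print(windows_needed(8 * 1024, 8 * 1024))  # 1
```

Fewer, larger windows mean fewer remapping round-trips when streaming data into VRAM, which is the mechanism the post describes.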

it looks like it's actually the lower resolutions, 1080p and 1440p, where the technology seems to be most effective

Borderlands 3 is a game where the RX 6800 XT already does well, and it was highlighted in AMD's reviewer's guide as being a title that really benefits from Smart Access Memory - so let's give it a try.

We see very little difference between our Intel and AMD test systems at 4K, but turn on Smart Access Memory and the AMD platform gets out to a four per cent lead - not bad! This lead lengthens to eight per cent at 1440p, then again to a mighty 12 per cent at 1080p.

That's a really respectable return, pushing the game at its highest settings from one that can max out a 144Hz monitor to one that can do the same with a 165Hz monitor.

Note however that our lowest one per cent scores actually worsen slightly with SAM engaged, something that persisted after several retests, so that's something to keep an eye on.

https://www.eurogamer.net/articles/digitalfoundry-2020-amd-radeon-rx-6800-and-6800-xt-review?page=6

So there you have it - some big gains in Borderlands 3 and non-RT Control, but unfortunately we don't see significant improvements in the other titles we tested that could shift the RX 6800 XT into closer contention with the RTX 3080.

And contrary to our expectations, it looks like it's actually the lower resolutions, 1080p and 1440p, where the technology seems to be most effective, although the extra horsepower is needed more at 4K.
 

GoodBoy

[H]ard|Gawd
Joined
Nov 29, 2004
Messages
1,885
I did theorize that with higher end cards with faster VRAM, this change would have less impact; they are already very fast when using the window as it is. If it is more effective at lower resolutions only, it is a bit less exciting. It must be helping CPU-bound games by using less CPU time for the transfers; in GPU-bound scenarios the CPU can spend a little extra time to make more frequent VRAM writes, and it has little or no impact on gameplay. I suspect games that make less than optimal use of the GPU, and do more with the CPU, will see a bigger benefit from this.

There are some games that take a hit to performance when a larger window is used; those games are blacklisted in the drivers, and the feature is disabled in those cases.

All in all, less exciting than one would hope for. A few percent of free speed increase in cases where it helps is always good, of course. It just sucks that it isn't a more widespread, overall, guaranteed speed bump.
 