AMD’s RX 6000 GPUs to Boost Perf With Ryzen 5000 CPUs via Smart Memory Access

Marees

As a basic explainer (we’ll learn more details at an upcoming Tech Day), AMD says that the CPU and GPU are usually constrained to a 256MB ‘aperture’ for data transfers. That limits game developers and requires frequent trips between the CPU and main memory if the data set exceeds that size, causing inefficiencies and capping performance. Smart Access Memory removes that limitation, thus boosting performance due to faster data transfer speeds between the CPU and GPU.
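For readers who want to see where that 256MB "aperture" shows up in practice, here is a minimal, hypothetical sketch (my own illustration against the public Vulkan API, not AMD's implementation) that enumerates a GPU's memory heaps. On most discrete cards without resizable BAR, the only memory type that is both DEVICE_LOCAL and HOST_VISIBLE sits in a heap of roughly 256 MB, which is the window the CPU can write into directly.

```cpp
// Hypothetical sketch: list the GPU memory heaps the CPU can write directly.
// On many discrete GPUs without resizable BAR / Smart Access Memory, the only
// DEVICE_LOCAL + HOST_VISIBLE type lives in a ~256 MB heap (the "aperture").
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(gpu, &mem);
        for (uint32_t i = 0; i < mem.memoryTypeCount; ++i) {
            VkMemoryPropertyFlags f = mem.memoryTypes[i].propertyFlags;
            bool deviceLocal = (f & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0;
            bool hostVisible = (f & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0;
            if (deviceLocal && hostVisible) {
                // Typically ~256 MB without resizable BAR; close to the full
                // VRAM size when the larger BAR window is enabled.
                VkDeviceSize heapSize =
                    mem.memoryHeaps[mem.memoryTypes[i].heapIndex].size;
                std::printf("CPU-writable VRAM heap: %llu MB\n",
                            (unsigned long long)(heapSize / (1024 * 1024)));
            }
        }
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

With Smart Access Memory enabled on a supported Ryzen 5000 platform, the same query should report a heap close to the card's full 16GB, which is the whole point of the feature.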

AMD says that game developers will have to optimize for the Smart Access Memory feature, which means it could take six to twelve months before we see games optimized for the new tech. The company does expect to benefit from some of the shared performance tuning efforts between PC and the new consoles, namely the Sony PS5 and Microsoft Xbox Series X.


https://www.tomshardware.com/news/a...-with-ryzen-5000-cpus-via-smart-memory-access

Complete specs for Rx 6000 series below
👇

Official Spec List:

Powerhouse Performance, Vivid Visuals & Incredible Gaming Experiences​

AMD Radeon™ RX 6000 Series graphics cards support high-bandwidth PCIe® 4.0 technology and feature 16GB of GDDR6 memory to power the most demanding 4K workloads today and in the future. Key features and capabilities include:

Powerhouse Performance

  • AMD Infinity Cache – A high-performance, last-level data cache suitable for 4K and 1440p gaming with the highest level of detail enabled. 128 MB of on-die cache dramatically reduces latency and power consumption, delivering higher overall gaming performance than traditional architectural designs.
  • AMD Smart Access Memory – An exclusive feature of systems with AMD Ryzen™ 5000 Series processors, AMD B550 and X570 motherboards, and Radeon™ RX 6000 Series graphics cards. It gives AMD Ryzen™ processors greater access to the high-speed GDDR6 graphics memory, accelerating CPU processing and providing up to a 13-percent performance increase on an AMD Radeon™ RX 6800 XT graphics card in Forza Horizon™ 4 at 4K when combined with the new Rage Mode one-click overclocking setting.
  • Built for Standard Chassis – With a length of 267mm, two standard 8-pin power connectors, and a design that works with existing enthusiast-class 650W-750W power supplies, gamers can easily upgrade existing PCs, from large to small form factors, without additional cost.
True to Life, High-Fidelity Visuals

  • DirectX® 12 Ultimate Support – Provides a powerful blend of raytracing, compute, and rasterized effects, such as DirectX® Raytracing (DXR) and Variable Rate Shading, to elevate games to a new level of realism.
  • DirectX® Raytracing (DXR) – Adding a high-performance, fixed-function Ray Accelerator engine to each compute unit, AMD RDNA™ 2-based graphics cards are optimized to deliver real-time lighting, shadow and reflection realism with DXR. When paired with AMD FidelityFX, which enables hybrid rendering, developers can combine rasterized and ray-traced effects to ensure an optimal combination of image quality and performance.
  • AMD FidelityFX – An open-source toolkit for game developers available on AMD GPUOpen. It features a collection of lighting, shadow and reflection effects that make it easier for developers to add high-quality post-process effects that make games look beautiful while offering the optimal balance of visual fidelity and performance.
  • Variable Rate Shading (VRS) – Dynamically reduces the shading rate for different areas of a frame that do not require a high level of visual detail, delivering higher levels of overall performance with little to no perceptible change in image quality.
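As a rough illustration of how a game opts into VRS, here is a minimal, hypothetical sketch against the public Direct3D 12 API (not AMD-specific code): the renderer checks the reported shading-rate tier, then lowers the per-draw rate for a pass that doesn't need full detail.

```cpp
// Hypothetical sketch: opt a low-detail pass into coarser shading with D3D12 VRS.
#include <windows.h>
#include <d3d12.h>

void DrawLowDetailPass(ID3D12Device* device, ID3D12GraphicsCommandList5* cmdList) {
    // Query whether the GPU supports per-draw (Tier 1+) variable rate shading.
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                              &options6, sizeof(options6))) &&
        options6.VariableShadingRateTier != D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED) {
        // Shade this pass at one result per 2x2 pixel block; passing nullptr
        // keeps the default passthrough combiners, so the base rate is used as-is.
        cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    }

    // ... record draw calls for the low-detail geometry here ...

    // Restore full-rate shading for subsequent passes.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
}
```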
Elevated Gaming Experience

  • Microsoft® DirectStorage Support – Future support for the DirectStorage API enables lightning-fast load times and high-quality textures by eliminating storage API-related bottlenecks and limiting CPU involvement.
  • Radeon Software Performance Tuning Presets – Simple one-click presets in Radeon™ Software help gamers easily extract the most from their graphics card. The presets include the new Rage Mode stable overclocking setting, which takes advantage of extra available headroom to deliver higher gaming performance.
  • Radeon™ Anti-Lag – Significantly decreases input-to-display response times and offers a competitive edge in gameplay.

AMD Radeon™ RX 6000 Series Product Family​

Model | Compute Units | GDDR6 | Game Clock (MHz) | Boost Clock (MHz) | Memory Interface | Infinity Cache
AMD Radeon™ RX 6900 XT | 80 | 16GB | 2015 | Up to 2250 | 256-bit | 128 MB
AMD Radeon™ RX 6800 XT | 72 | 16GB | 2015 | Up to 2250 | 256-bit | 128 MB
AMD Radeon™ RX 6800 | 60 | 16GB | 1815 | Up to 2105 | 256-bit | 128 MB
 
At the moment, AMD have seen performance gains of between 5% and 11% in titles they’ve tested internally – and, as we saw during AMD’s Big Navi event, up to 13% when you combine it with the RX 6000’s one-click overclocking feature called Rage Mode. That might not sound like a whole lot in practice, but AMD believe that once game developers are able to start testing it for themselves, we’ll see even bigger gains start to emerge over time.



https://www.rockpapershotgun.com/20...ccess-memory-be-the-secret-sauce-of-big-navi/
 

Attachments

  • AMD-Smart-Access-Memory-performance-1212x682.jpg


Ever since AMD bought ATI, gamers have asked if there was an intrinsic benefit to running an AMD GPU alongside an AMD CPU. Apart from some of the HSA features baked into previous-generation AMD APUs and a brief period of dual graphics support, the answer was always “No.”

From 2011-2017, AMD simply wasn’t competitive enough in gaming for the company to invest in that kind of luxury concept.

AMD’s RX 6000 GPUs will be the first cards that can specifically take advantage of platform-level features inside the 500-series chipset.


https://www.extremetech.com/gaming/...-rx-6000-series-is-optimized-to-battle-ampere

http://disq.us/p/2ctpmn1
 
My head hurts, the pcie lanes connected to the graphics card go straight to the processor, as does one of the nvme slots. It has nothing to do with the south bridge chipset
 
My head hurts, the pcie lanes connected to the graphics card go straight to the processor, as does one of the nvme slots. It has nothing to do with the south bridge chipset
Same here, hopefully the RDNA2 whitepaper clears it up. Obviously the traffic between the CPU and GPU still goes over PCIe.

If I had to guess it has something to do with buffer swaps between VRAM and system memory. When a CPU needs to read or write to VRAM it first has to copy the data into system memory. With direct access the CPU can read/write directly to VRAM over PCIe.

Hopefully AMD shares more info on how and why this improves fps as typically these transfers are happening in parallel with other work on the GPU anyway.
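To make the guess above concrete, here is a hedged sketch (hypothetical helper functions, Vulkan-style, and assuming HOST_COHERENT memory) of the two upload paths a renderer might choose between: the traditional staging copy through system memory, versus writing straight into a mapped VRAM allocation when a large DEVICE_LOCAL + HOST_VISIBLE heap is available.

```cpp
// Hypothetical sketch of the two upload paths discussed above (Vulkan-style).
// Assumes the buffers and their bound memory were created elsewhere, and that
// the mapped memory is HOST_COHERENT (otherwise vkFlushMappedMemoryRanges is needed).
#include <vulkan/vulkan.h>
#include <cstring>

// Path 1: classic staging copy. The CPU writes into host-visible system memory,
// then the GPU copy engine moves the data into device-local VRAM.
void UploadViaStaging(VkDevice device, VkCommandBuffer cmd,
                      VkDeviceMemory stagingMemory, VkBuffer stagingBuffer,
                      VkBuffer vramBuffer, const void* src, VkDeviceSize size) {
    void* mapped = nullptr;
    vkMapMemory(device, stagingMemory, 0, size, 0, &mapped);
    std::memcpy(mapped, src, size);
    vkUnmapMemory(device, stagingMemory);

    VkBufferCopy region{0, 0, size};
    vkCmdCopyBuffer(cmd, stagingBuffer, vramBuffer, 1, &region);  // the extra hop
}

// Path 2: direct write. If the allocation came from a memory type that is both
// DEVICE_LOCAL and HOST_VISIBLE (the window Smart Access Memory / resizable BAR
// widens), the CPU can fill VRAM without the intermediate copy.
void UploadDirect(VkDevice device, VkDeviceMemory vramMemory,
                  const void* src, VkDeviceSize size) {
    void* mapped = nullptr;
    vkMapMemory(device, vramMemory, 0, size, 0, &mapped);
    std::memcpy(mapped, src, size);  // writes travel over PCIe into VRAM
    vkUnmapMemory(device, vramMemory);
}
```

Whether skipping that extra hop translates into higher frame rates will depend on how often a given engine streams data mid-frame, which is presumably why AMD says developers still need to tune for it.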
 
My head hurts, the pcie lanes connected to the graphics card go straight to the processor, as does one of the nvme slots. It has nothing to do with the south bridge chipset
I'm guessing X570 only, because it has more lanes than the budget chipsets. But people with configurations that max or nearly max out the lanes might run into a situation of choosing between Smart Access Memory and something else, like an extra NVMe drive, whereas their next motherboard platform will have SAM built in as a feature they don't have to sacrifice anything else to use. The consoles have/have had exclusive buses for this stuff, after all.
 
I'm guessing X570 only, because it has more lanes than the budget chipsets. But people with configurations that max or nearly max out the lanes might run into a situation of choosing between Smart Access Memory and something else, like an extra NVMe drive, whereas their next motherboard platform will have SAM built in as a feature they don't have to sacrifice anything else to use. The consoles have/have had exclusive buses for this stuff, after all.
B550 should work too; it has the same 20 direct PCIe 4.0 lanes to the CPU as X570, only the chipset lanes are PCIe 3.0 instead of 4.0.
 
I think I understand what the name means, but in practice, how does this improve performance? And how do I explain this to my 5-year-old so they can help me talk the wife into another upgrade?
"Its for school work" always worked for me back in the day....now, if your wife is in I.T....well...that shit ain't gonna fly but, you know, be *crafty*....
 
I would really love some clarification on this. I've read very conflicting information about whether this will result in a performance increase for all games, or if it's something where you will ONLY see a performance increase if a game is specifically coded to take advantage of it.

Those slides which show multiple games with a performance increase seem to suggest the former, because I find it hard to believe that 5 different game studios would have already integrated this code into their games not just before the cards are released, but before the cards were even announced.

Or maybe it's a combination, where there is some native performance improvement across the board, with further improvement to be had if a developer codes for it. I guess we will see.

I'm particularly eager because I've already decided that my next upgrade will be when I put a Zen 3 CPU into my existing x570 board. Then some time next year, I will be sitting here with a Ryzen 5000 series CPU and x570 board, trying to decide on a videocard. That would put me in a position to take advantage of this if I went with an AMD videocard, and it could easily be the deciding factor. Until then, I'm quite content to stick with my 2080 RTX and wait for the dust to settle so we can see some real numbers.
 
Those slides which show multiple games with a performance increase seem to suggest the former...Or maybe it's a combination
IIRC they straight-up said it just works in the announcement video, but it can be optimized...lemme go back...

 
I kinda expected this to eventually happen. Here's hoping Nvidia will make a desktop ARM chip that goes well with Nvidia GPUs.
 
I kinda expected this to eventually happen. Here's hoping Nvidia will make a desktop ARM chip that goes well with Nvidia GPUs.
Except we'd see meltdowns about "evil Ngreedia proprietary lock-in shutting out AMD" if the roles were reversed and Nvidia GPUs happened to perform better with a particular CPU - either their own, or if, let's say, they'd partnered with Intel for Ampere. Heads would be exploding.

AMD should be celebrated for innovating and finding a way to leverage synergy between CPU/GPU, but the tribalism is pretty toxic nowadays.
 
I kinda expected this to eventually happen. Here's hoping Nvidia will make a desktop ARM chip that goes well with Nvidia GPUs.
Why would anyone want a desktop ARM chip, though? There has to be demand to create such a product, and without the software support I don't think that will happen.
 
Why would anyone want a desktop ARM chip, though? There has to be demand to create such a product, and without the software support I don't think that will happen.

The software support is already mostly here. ARM CPUs are ubiquitous in phones and tablets, in addition to Chromebooks, and soon Apple computers. For those who already use those devices to do most of their computing, the question is quickly becoming "Why would I want an x86 desktop?". ARM is not starting from scratch here the way it would have been 10+ years ago. Google is using their Android app ecosystem to make ARM Chromebooks viable, and Apple will be using its iOS app ecosystem to make ARM Apple laptops and desktops viable. Windows support for ARM is improving constantly, and the near future will be one where you can run x86 or ARM software on your Windows computer, regardless of whether you have an x86 or ARM CPU, with translation/emulation done transparently by the OS as necessary. ARM CPUs are cheaper, and people love cheap computers, so market share will explode until x86 becomes a small minority (it already is if you include phones, etc). There will of course be a penalty for running x86 software on an ARM CPU, but once ARM market share explodes, desktop and eventually console games will start to be released natively for ARM and it will instead be x86 CPUs having to suffer the emulation/translation penalty, until x86 finally becomes a tiny niche. The writing is already on the wall, unfortunately. I love x86 so I hope I'm wrong.
 
Nvidia will work out how to do this with Intel, and then AMD will let Nvidia do it on AMD hardware. Give it time.
 
The software support is already mostly here. ARM CPUs are ubiquitous in phones and tablets, in addition to Chromebooks, and soon Apple computers. For those who already use those devices to do most of their computing, the question is quickly becoming "Why would I want an x86 desktop?". ARM is not starting from scratch here the way it would have been 10+ years ago. Google is using their Android app ecosystem to make ARM Chromebooks viable, and Apple will be using its iOS app ecosystem to make ARM Apple laptops and desktops viable. Windows support for ARM is improving constantly, and the near future will be one where you can run x86 or ARM software on your Windows computer, regardless of whether you have an x86 or ARM CPU, with translation/emulation done transparently by the OS as necessary. ARM CPUs are cheaper, and people love cheap computers, so market share will explode until x86 becomes a small minority (it already is if you include phones, etc). There will of course be a penalty for running x86 software on an ARM CPU, but once ARM market share explodes, desktop and eventually console games will start to be released natively for ARM and it will instead be x86 CPUs having to suffer the emulation/translation penalty, until x86 finally becomes a tiny niche. The writing is already on the wall, unfortunately. I love x86 so I hope I'm wrong.
That is what Intel thought as well when it made the Itanium, and well, let's just say that didn't work out. Part of the issue is that the compatibility emulation you need is always far slower. We shall see if Apple once again returns to the x86 world after doing their own thing again. Without a software ecosystem that wants to embrace that switch, it just won't happen. Based on what I have seen, programmers seem to have little interest in porting tons of code to run on a new architecture; they only want to tweak their code instead.
 
The whole "the writing has been on the wall for the end of x86" thing is something people have said since the days of the movie Hackers and the PowerPC architecture. And we see how that writing stuck.

Anyone trying to kill x86 only seems to be able to write with magic markers.
 
Except we'd see meltdowns about "evil Ngreedia proprietary lock-in shutting out AMD" if the roles were reversed and Nvidia GPUs happened to perform better with a particular CPU - either their own, or if, let's say, they'd partnered with Intel for Ampere. Heads would be exploding.

AMD should be celebrated for innovating and finding a way to leverage synergy between CPU/GPU, but the tribalism is pretty toxic nowadays.

Nvidia is going to have to try harder to compete, maybe bring prices down because this is another advantage for AMD. Hopefully this shows some real world gains. Even if you only like Nvidia, competition is good.

I'm still waiting for real benchmarks but it sounds like AMD is doing better than I expected.
 
Nvidia is going to have to try harder to compete, maybe bring prices down because this is another advantage for AMD. Hopefully this shows some real world gains. Even if you only like Nvidia, competition is good.

I'm still waiting for real benchmarks but it sounds like AMD is doing better than I expected.
Hell, being on time is a big step for AMD. I am glad we do not have to wait a year for this. With Nvidia being a shit show with supplies, AMD will capitalize on this opportunity.
 
The whole "the writing has been on the wall for the end of x86" thing is something people have said since the days of the movie Hackers and the PowerPC architecture. And we see how that writing stuck.

Anyone trying to kill x86 only seems to be able to write with magic markers.
Because CISC has a lot of advantages over RISC.
 
That is what Intel thought as well when it made the Itanium, and well, let's just say that didn't work out.
I worked on Itanic for nearly 2 years. ... #ifdef IA64
It was twice the price and half as fast as the F50.
HP gave up its entire line of PA-RISC for Itanic and was rewarded with...oh yeah, they aren't around anymore.
 
Except we'd see meltdowns about "evil Ngreedia proprietary lock-in shutting out AMD" if the roles were reversed and Nvidia GPUs happened to perform better with a particular CPU - either their own, or if, let's say, they'd partnered with Intel for Ampere. Heads would be exploding.

AMD should be celebrated for innovating and finding a way to leverage synergy between CPU/GPU, but the tribalism is pretty toxic nowadays.

Fanboyism is the dumbest thing in tech. Unless you’re a significant shareholder or your dad/mom is Jensen Huang/Lisa Su, why get so emotional about it?
 
Why would anyone want a desktop ARM chip, though? There has to be demand to create such a product, and without the software support I don't think that will happen.
Cheaper, lower power, possibly faster. There's no good reason to own one right now unless you like running x86 applications through emulation, but I could see Nvidia using their leverage to get apps ported to ARM.
 
The whole "the writing has been on the wall for the end of x86" thing is something people have said since the days of the movie Hackers and the PowerPC architecture. And we see how that writing stuck.

Anyone trying to kill x86 only seems to be able to write with magic markers.

x86 is already the minority architecture compared to ARM if you include mobile devices, and the fact is that mobile devices are computers. There are already many who don't even own a traditional computer anymore, and are quite content simply using their mobile devices. No one will have to wait for software to be re-written as 99% of everything people would need is already on an app store and already compatible with ARM. The line between an app on an app store and a traditional program becomes more blurry every day.

No one is claiming that x86 will "die". CPU market share will play out the way Browser market share did, with x86 being like Firefox and ARM being like Chrome. At one point Firefox had the most market share, now it has 4%. It still works, people still use it, nothing died or went away, but Chrome won the war. It will be the same with ARM.
 
I'm curious if Smart Access Memory is beneficial across the board, or if most of the benefit is concentrated at lower resolutions. If it provides as much benefit at 4K as 1080p, it will look that much more tempting.
 
You guys think the perf will still be competitive even without the smart memory thing?

The 6800 XT beat the 3080 without smart memory & power limit boost (Rage "overclock")

The 6900 XT, otoh, needed both to beat the 3090
 
x86 is already the minority architecture compared to ARM if you include mobile devices, and the fact is that mobile devices are computers. There are already many who don't even own a traditional computer anymore, and are quite content simply using their mobile devices. No one will have to wait for software to be re-written as 99% of everything people would need is already on an app store and already compatible with ARM. The line between an app on an app store and a traditional program becomes more blurry every day.

No one is claiming that x86 will "die". CPU market share will play out the way Browser market share did, with x86 being like Firefox and ARM being like Chrome. At one point Firefox had the most market share, now it has 4%. It still works, people still use it, nothing died or went away, but Chrome won the war. It will be the same with ARM.
The programs on mobile aren't compatible with ARM; they are compatible with Android, which is an OS designed to be a wrapper sitting on top of a version of Linux. You can install Android on x86, and even in Windows, and run the apps just fine.
 
I would really love some clarification on this. I've read very conflicting information about whether this will result in a performance increase for all games, or if it's something where you will ONLY see a performance increase if a game is specifically coded to take advantage of it.

Those slides which show multiple games with a performance increase seem to suggest the former, because I find it hard to believe that 5 different game studios would have already integrated this code into their games not just before the cards are released, but before the cards were even announced.

Or maybe it's a combination, where there is some native performance improvement across the board, with further improvement to be had if a developer codes for it. I guess we will see.

I'm particularly eager because I've already decided that my next upgrade will be when I put a Zen 3 CPU into my existing x570 board. Then some time next year, I will be sitting here with a Ryzen 5000 series CPU and x570 board, trying to decide on a videocard. That would put me in a position to take advantage of this if I went with an AMD videocard, and it could easily be the deciding factor. Until then, I'm quite content to stick with my 2080 RTX and wait for the dust to settle so we can see some real numbers.
In follow-up questions from Gamers Nexus, AMD clarified that games need to be tweaked to work with the tech and that they had worked with a few studios to demo it, though they didn't mention whether those tweaks would launch outside the canned demos.
 
The 6800 XT beat the 3080 without smart memory & power limit boost (Rage "overclock")

The 6900 XT, otoh, needed both to beat the 3090
6900XT looks to be a beautiful gaming card, the 3090 is a beast for a workstation. I am so salty that they pinned the 3090 as a “gaming” card.
 
6900XT looks to be a beautiful gaming card, the 3090 is a beast for a workstation. I am so salty that they pinned the 3090 as a “gaming” card.
3090 certainly feels like a product for the whales, I have to agree there.
 
Except we'd see meltdowns about "evil Ngreedia proprietary lock-in shutting out AMD" if the roles were reversed and Nvidia GPUs happened to perform better with a particular CPU - either their own, or if, let's say, they'd partnered with Intel for Ampere. Heads would be exploding.

AMD should be celebrated for innovating and finding a way to leverage synergy between CPU/GPU, but the tribalism is pretty toxic nowadays.
I’m all for the integration; Apple does it, Intel does it for their servers, and Nvidia does it with their monitors. I’m curious to see how AMD can make it happen, but I will welcome it with open arms should they pull it off. It depends heavily on third-party support, but if they can work with the developers and get that tech smoothly integrated into the development environments, then power to them.
 
It's not like SMA is equivalent to Apple locking out developers because they feel like it. CPU and GPU hardware and microcode is proprietary. SMA might not even be applicable to Intel CPUs. Anyways, it gives more FPS and I'm all for it. Who isn't? Of Intel, Nv and AMD we all know who has been the most "Open" in developing standards. I'll give you a hint - NOT intel and NOT Nv.
 
Huh? It's just a binned gaming card with lots of RAM. Nvidia doesn't have workstation parts out this gen so far.
Yeah, but their Studio drivers are recognized by all the software as a workstation part, so it works in Hyper-V, Citrix, Blender, Adobe, and Chief Architect (software I use), and it dominates the RTX 8000 even with half the memory, at 1/4 the price.
 
Boy, it would surely suck to learn, after spending all that cash on workstation parts, that they only cost extra money because Nvidia gimped the gaming parts in software.

But that's no reason not to save money and buy 3090s.
 
Boy, it would surely suck to learn, after spending all that cash on workstation parts, that they only cost extra money because Nvidia gimped the gaming parts in software.
Companies do it all the time; I would not be surprised at all if it's a software block on capable hardware.
 
Boy, it would surely suck to learn, after spending all that cash on workstation parts, that they only cost extra money because Nvidia gimped the gaming parts in software.

But that's no reason not to save money and buy 3090s.
Not quite; there are huge support benefits to going with the actual workstation parts if you need the qualified drivers, and for a good number of tasks the Quadros vastly outperform even the 3090 despite being an older architecture. But if you don't need that precision or support, or your software isn't in the category where the Quadros work better, then you can get away with the "gaming" parts if you are willing to put a little work into it. I mean, I have a dual-Xeon box running 4 Titans that runs a lab for 30 kids doing Adobe work. I had to do some tweaking to get the software to recognize everything, but it cost a fraction of the Tesla cards I should be running, and they are in high school; they don't need the precision or accuracy that the qualified drivers would have provided.
 