AMD’s RX 6000 GPUs to Boost Perf With Ryzen 5000 CPUs via Smart Memory Access

This just makes it seem like Nvidia would be better off making their software more hardware-agnostic. There's way more money in licensing than in hardware, if you're not singling out hardware and jacking up its pricing.

So the way it stands, they have hardware that costs, let's say, $4K, and gets free software, but only sells one unit for every 100 general-purpose GPUs. If they sold the same software for $100 and only hit 4 percent of the market, they'd break even, and not have to cover any hardware. That's straight profit, even before optimization.

Now Nvidia can still produce optimized hardware for that software, and if they're smart, they can let AMD do it, too. They can sell expensive parts that do it better, and so can AMD. But only Nvidia will make money on the software, cutting AMD out entirely. Sure, it's possible that AMD will invest literally decades into the same stack, but that's extra-extra long-term losses, and it's not even guaranteed if Nvidia maintains a software advantage.

Just like how it makes sense for AMD to open up smart memory to Nvidia parts, it makes sense for Nvidia to open up software to AMD.

It's just a matter of whether Jensen will kowtow to pragmatism.

Su and AMD would. They'd sell plenty of CPUs to Nvidia owners.
 
The 6800 XT beat the 3080 without smart memory & power limit boost (Rage "overclock")

The 6900 XT, otoh, needed both to beat the 3090.
If the 6800 XT can beat the 3080, then AMD has won. But we have to consider ray-tracing performance, where I'm certain Nvidia still has the lead, so AMD may not be better than a 3080. The 3090 is a joke of a graphics card: double the price for an 8% increase. Only a fool would buy one of those.
 
Except we'd see meltdowns about "evil Ngreedia proprietary lock-in shutting out AMD" if the roles were reversed and Nvidia GPUs happened to perform better with a particular CPU - either their own, or if, let's say, they'd partnered with Intel for Ampere. Heads would be exploding.

AMD should be celebrated for innovating and finding a way to leverage synergy between CPU/GPU, but the tribalism is pretty toxic nowadays.

But it's on the same platform from the same company, AMD CPU + AMD GPU, lmao. Nvidia has been doing it for years with their technologies (CUDA, ray tracing, G-Sync, etc.), but suddenly AMD does something only for them (as far as we know) and it's bad?
 
Just like how it makes sense for AMD to open up smart memory to Nvidia parts, it makes sense for Nvidia to open up software to AMD.
Does it? Remember, Nvidia is now a competing CPU vendor, i.e. ARM. Why would AMD want to share its CPU secrets with Nvidia? The opposite angle: would Nvidia allow AMD access to any of their GPU knowledge to incorporate SMA? Certainly not.
Proprietary G-Sync ring a bell? Then AMD made the open standard FreeSync... a typical scenario and microcosm of each company's philosophy.
 
But it's on the same platform from the same company, AMD CPU + AMD GPU, lmao. Nvidia has been doing it for years with their technologies (CUDA, ray tracing, G-Sync, etc.), but suddenly AMD does something only for them (as far as we know) and it's bad?
That, and the fact that Nvidia doesn't make CPUs, so they don't have the ability to do what AMD can do by making both CPUs and GPUs. I see no fault in what AMD is doing. Anyone who does is a hypocrite, especially if they currently own or have ever owned an Nvidia product.
 
So they can sell more CPUs.
By giving away proprietary secrets? Sounds like a Chinese marketing scheme (market access by disclosing trade secrets). Look where that got everyone. There is absolutely no incentive or logical reason for AMD to give away SMA technology. As was stated above, this is pure AMD end to end, CPU and GPU. Apple does it, Nvidia does it, Intel does it. Everyone does it. Are the iOS App Store, CUDA, or Optane open?
 
Does it? Remember, Nvidia is now a competing CPU vendor, i.e. ARM. Why would AMD want to share its CPU secrets with Nvidia? The opposite angle: would Nvidia allow AMD access to any of their GPU knowledge to incorporate SMA? Certainly not.
Proprietary G-Sync ring a bell? Then AMD made the open standard FreeSync... a typical scenario and microcosm of each company's philosophy.
VESA's Adaptive-Sync was added to the DisplayPort and HDMI standards back in 2010; AMD basically repackaged it with a few minor tweaks to rebrand it as FreeSync 5 years later and stabilized it in 2017.
FreeSync also still doesn't have any methods for dealing with frame collision or pixel overdrive, which is where the proprietary Nvidia hardware comes in.
 
VESA's Adaptive-Sync was added to the DisplayPort and HDMI standards back in 2010; AMD basically repackaged it with a few minor tweaks to rebrand it as FreeSync 5 years later and stabilized it in 2017.
And then Nvidia started certifying FreeSync monitors to work with Nvidia cards in January 2019.

"Nvidia, however, has steadfastly refused to play ball. It has kept G-Sync in the ecosystem as a halo brand, despite the fact that the capability it once used specialized hardware to provide can now be baked into display timing controllers without additional costs."

Both technologies produce the same result; that contest is a wash at this point.

So here is AMD dragging Nvidia kicking and screaming to an open standard.
 
And then Nvidia started certifying FreeSync monitors to work with Nvidia cards in January 2019.

"Nvidia, however, has steadfastly refused to play ball. It has kept G-Sync in the ecosystem as a halo brand, despite the fact that the capability it once used specialized hardware to provide can now be baked into display timing controllers without additional costs."

Both technologies produce the same result; that contest is a wash at this point.

So here is AMD dragging Nvidia kicking and screaming to an open standard.
Hardly dragging; your own linked article shows point for point that Nvidia's G-Sync provides better-looking, more consistent results, just at a higher price point. FreeSync gives the same results, just with a less consistent feature set and a mixed bag of compatibility.

While I do think that the G-Sync licensing should be cheaper, their checklist for getting certification is thorough.
AMD's FreeSync is something I wish they took more control of and enforced better QC on with regard to what does and doesn't make the cut.
 
I know we don't have specifics, but logically, the upgraded 5000-series CPU I/O die and 6000-series GPU can negotiate point-to-point PCIe between all parties, allowing things like GPU direct decompression to video memory followed by direct access by the CPU over Infinity Fabric.
 
I know we don't have specifics, but logically, the upgraded 5000-series CPU I/O die and 6000-series GPU can negotiate point-to-point PCIe between all parties, allowing things like GPU direct decompression to video memory followed by direct access by the CPU over Infinity Fabric.
There are some pretty good white papers on the tech for the Xbox. It would stand to reason that it's the same technology used in both, as the Xbox doesn't have separate system RAM, so the GPU memory is partitioned into VRAM, OS RAM, and game RAM - 10/2.5/3.5 GB of the full 16 GB. This leverages DirectStorage and Smart Access, and I hope it gets wider support going forward. The APIs for both are being integrated into DX12U, so it is something that Intel and Nvidia could later integrate into their systems; AMD just gets exclusivity until they do so.

I'm wondering how long it's going to take for similar APIs to get added to Vulkan; 4K is still hard on these cards, so every percent of performance helps, and I do like what Vulkan can offer.
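For what it's worth, if SAM is essentially resizable BAR (that's my assumption, not something AMD has spelled out), the way it tends to surface in Vulkan is as a device-local memory type that is also host-visible, covering most or all of VRAM instead of the usual 256MB window. A rough sketch of how an app could check for that with plain Vulkan queries, nothing AMD-specific:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>

// Rough check: does the GPU expose a memory type that is both DEVICE_LOCAL
// (actual VRAM) and HOST_VISIBLE (CPU-mappable)? With a legacy 256MB BAR that
// heap is small; with resizable BAR / SAM it can span most of the card's VRAM.
int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 1;
    VkPhysicalDevice gpu = VK_NULL_HANDLE;
    vkEnumeratePhysicalDevices(instance, &count, &gpu);   // just grab the first GPU
    if (count == 0) { vkDestroyInstance(instance, nullptr); return 1; }

    VkPhysicalDeviceMemoryProperties mem;
    vkGetPhysicalDeviceMemoryProperties(gpu, &mem);

    const VkMemoryPropertyFlags wanted =
        VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    for (uint32_t i = 0; i < mem.memoryTypeCount; ++i) {
        if ((mem.memoryTypes[i].propertyFlags & wanted) == wanted) {
            VkDeviceSize heap = mem.memoryHeaps[mem.memoryTypes[i].heapIndex].size;
            printf("CPU-visible VRAM heap: %llu MiB\n", (unsigned long long)(heap >> 20));
        }
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

If that heap reports something close to the card's full VRAM rather than ~256 MiB, the platform is already exposing the wide aperture, whatever the marketing name ends up being.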
 
Hardly dragging; your own linked article shows point for point that Nvidia's G-Sync provides better-looking, more consistent results, just at a higher price point. FreeSync gives the same results, just with a less consistent feature set and a mixed bag of compatibility.
And the conclusion is there's no discernible difference between the two.
The numbers are meaningless if you can't tell the difference. Paying a premium for G-Sync is really an exercise in pointless extravagance.
 
And the conclusion is there's no discernible difference between the two.
The numbers are meaningless if you can't tell the difference. Paying a premium for G-Sync is really an exercise in pointless extravagance.

I absolutely agree! And people seem to forget about FreeSync Premium Pro, which is a big difference compared to vanilla FreeSync.
 
I absolutely agree! And people seem to forget about FreeSync Premium Pro, which is a big difference compared to vanilla FreeSync.
But AMD enforces standards on Premium Pro and does in-house certifications similar to what Nvidia does for G-Sync. I don't know if AMD charges for that certification, though, but yes, FreeSync Premium Pro does a much better job of competing against G-Sync than vanilla does.
 
And the conclusion is there's no discernible difference between the two.
The numbers are meaningless if you can't tell the difference. Paying a premium for G-Sync is really an exercise in pointless extravagance.
Actually, in typical Tom's Hardware click-bait fashion, their conclusion is:
“Ultimately, the more you spend, the better gaming monitor you’ll get. These days, when it comes to displays, you do get what you pay for.”

Which is an utter cop-out of a conclusion between the two.

But regardless, if you look at what a monitor must support to meet the G-Sync Certified requirements and then compare that to what it takes to meet FreeSync, you will see a fundamental difference between them. If you want a proper comparison between the two techs, you need to compare G-Sync against FreeSync Premium Pro, and at that point the cost differences between them are negligible. And basically all the monitors on the Premium Pro list are also on the G-Sync Compatible list. So instead of thinking that AMD dragged Nvidia down, it's a better analogy to say that Nvidia forced AMD up.
 
But AMD enforces standards on Premium Pro and does in-house certifications similar to what Nvidia does for G-Sync. I don't know if AMD charges for that certification, though, but yes, FreeSync Premium Pro does a much better job of competing against G-Sync than vanilla does.
I think there is a fee for Prem Pro that ends up embedded in the price of the display.

Also, FreeSync Premium Pro has Low Framerate Compensation just like G-Sync. To me Premium Pro is the better solution because it ends up costing less and doesn't lock you into one proprietary ecosystem. I own one G-Sync 240 Hz TN panel and a Samsung Odyssey G7 Premium Pro panel. I'm never buying G-Sync again, simply because I'm not locking myself into a single company.
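For anyone wondering what LFC actually does: when the game drops below the panel's minimum VRR refresh, the driver just shows each frame multiple times so the effective refresh lands back inside the panel's range. A toy sketch of the idea, with a made-up 48-144 Hz window (the exact numbers and heuristics are panel- and driver-specific, this is only the concept):

```cpp
#include <cmath>
#include <cstdio>

// Toy model of Low Framerate Compensation (LFC): if the game's frame rate
// falls below the panel's minimum VRR refresh, repeat each frame N times so
// the refresh rate actually sent to the panel stays inside its VRR window.
static int lfcMultiplier(double fps, double panelMin, double panelMax) {
    if (fps >= panelMin) return 1;                   // already inside the window
    int n = (int)std::ceil(panelMin / fps);          // smallest repeat count that clears the minimum
    return (fps * n <= panelMax) ? n : 1;            // bail out if it would overshoot the maximum
}

int main() {
    const double panelMin = 48.0, panelMax = 144.0;  // hypothetical VRR range
    const double rates[] = {25.0, 40.0, 60.0, 120.0};
    for (double fps : rates) {
        int n = lfcMultiplier(fps, panelMin, panelMax);
        printf("%6.1f fps -> each frame shown %dx, panel runs at %.1f Hz\n",
               fps, n, fps * n);
    }
    return 0;
}
```

Both ecosystems do some version of this frame multiplication; the main practical requirement is a VRR window wide enough (roughly max at least 2x the min) for the repeated rate to fit.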
 
I think there is a fee for Prem Pro that ends up embedded in the price of the display.

Also, FreeSync Premium Pro has Low Framerate Compensation just like G-Sync. To me Premium Pro is the better solution because it ends up costing less and doesn't lock you into one proprietary ecosystem. I own one G-Sync 240 Hz TN panel and a Samsung Odyssey G7 Premium Pro panel. I'm never buying G-Sync again, simply because I'm not locking myself into a single company.
Can I like this twice? I'm in a similar boat, but my G-Sync monitor died in a lightning strike, and the Dell S3220DGF I replaced it with is beautiful - also larger, and about half the price - but sadly doesn't make the compatibility chart. I can force it, but I get issues and it just isn't enjoyable. Hoping to snag a 6900 XT one day, and doubly hoping that when AMD announces the 5970X it also gets the smart memory and other fun features. Just because I need the system for work first doesn't mean I don't want to also game on it.
 
Can I like this twice? I'm in a similar boat, but my G-Sync monitor died in a lightning strike, and the Dell S3220DGF I replaced it with is beautiful - also larger, and about half the price - but sadly doesn't make the compatibility chart. I can force it, but I get issues and it just isn't enjoyable. Hoping to snag a 6900 XT one day, and doubly hoping that when AMD announces the 5970X it also gets the smart memory and other fun features. Just because I need the system for work first doesn't mean I don't want to also game on it.

Even though we don't know yet, I'm bummed the TRX40 platform doesn't have direct memory access, as far as we can assume. I'm almost tempted to sell my 3960X and get a 5800X until the 5960X comes out.
 
By giving away proprietary secrets?
Even if there is some special sauce involved -- which I doubt -- these kinds of secrets don't run very deep. Once you show people what you're doing and tell developers how to implement and take advantage of the system, they will figure out how it works on their own.

I wouldn't be surprised to find out that it's something that's already being implemented in other cases, like DirectX projections or console architectures.

To use a metaphor, AMD invented the CPU-GPU sandwich. People are going to try it and figure out that the special sauce is right on the chipset.
 
I think there is a fee for Prem Pro that ends up embedded in the price of the display.

Also, FreeSync Premium Pro has Low Framerate Compensation just like G-Sync. To me Premium Pro is the better solution because it ends up costing less and doesn't lock you into one proprietary ecosystem. I own one G-Sync 240 Hz TN panel and a Samsung Odyssey G7 Premium Pro panel. I'm never buying G-Sync again, simply because I'm not locking myself into a single company.
I actually got a G-Sync Compatible / FreeSync Premium Pro monitor to replace my G-Sync one for this reason. If the reviews even remotely hold up to the AMD presentation, I'm pretty certain I'll be picking up a 6900 XT... if I can get my hands on one, that is.
 
Even if there is some special sauce involved -- which I doubt -- these kinds of secrets don't run very deep. Once you show people what you're doing and tell developers how to implement and take advantage of the system, they will figure out how it works on their own.

I wouldn't be surprised to find out that it's something that's already being implemented in other cases, like DirectX projections or console architectures.

To use a metaphor, AMD invented the CPU-GPU sandwich. People are going to try it and figure out that the special sauce is right on the chipset.
I know, it's so easy that there are exactly two or three major CPU and discrete GPU manufacturers in the world. Your metaphor doesn't fly any more than Intel or Nvidia sharing anything.
 
You guys think the perf will still be competitive even without the smart memory thing?
What? It's not even done yet, and all the leaked benchmarks thus far have been without it... why would it not be? Heck, if you look at the leaked benches, most were with an Intel CPU... so why would this change anything from what we've seen thus far? That said, it will be interesting to see real reviews and see what kind of performance difference you may or may not get and what kinds of games it works better with.

Also, the only slide that had "+Smart Access Memory" listed was the 6800's; it wasn't listed for the 6800 XT or 6900 XT, so I'm not even sure whether it was enabled for those benchmarks. We won't really know the full story until it's released, but honestly I don't see anything that says it won't compete well with or without a Zen 3 chip.
 
What? It's not even done yet, and all the leaked benchmarks thus far have been without it... why would it not be? Heck, if you look at the leaked benches, most were with an Intel CPU... so why would this change anything from what we've seen thus far? That said, it will be interesting to see real reviews and see what kind of performance difference you may or may not get and what kinds of games it works better with.

Also, the only slide that had "+Smart Access Memory" listed was the 6800's; it wasn't listed for the 6800 XT or 6900 XT, so I'm not even sure whether it was enabled for those benchmarks. We won't really know the full story until it's released, but honestly I don't see anything that says it won't compete well with or without a Zen 3 chip.
The 6900 XT slide had Rage Mode and Smart Access on. It's probably going to be such a small difference overall that it would really only matter for benchmarks.
 
They should trade it for ray tracing and G-Sync, and make all of them open source like AMD does.

To be fair, AMD is making things open source because they don’t have the market share to ram standards down people’s throats. If they had the 80% share that Nvidia and Intel did, they wouldn’t be as open. They need things to be open right now because they can’t convince people to make stuff tailored for them. Nvidia and Intel can, AMD can’t.
 
I hope this tech can be made to work with Ryzen 3000 procs down the road, if at all possible (I've no clue).
 
The 6900 XT slide had Rage Mode and Smart Access on. It's probably going to be such a small difference overall that it would really only matter for benchmarks.
Won't be a small difference. Based on everything I've read, I think they fudged the last 10-25% of the performance chart vs. the stock 3090. The apples-to-apples benches will be interesting once Rage Mode OC, SmartMem, the Zen 3 CPU, and game-specific coding for SmartMem aren't factored in. If Nvidia showed an FPS chart of only DLSS games vs. stock AMD, people would be losing their minds.

That said, I'd still lean toward a 6900 XT purchase since performance-per-dollar will still favor it. A $500 savings is very compelling, even though the 6900 XT won't be purchasable at that price for 4-6 months.
 
Yeah, it was late and I messed up my timelines; turns out vector displays introduced it for their CRTs a long time ago, in 1963, and it was in 2010 when the work with raster displays got to the point where the stuff could start being integrated. So yeah, official releases in 2014 and 2017 for those interfaces. Late-night reading comprehension is not my strongest suit.
 
Won't be a small difference. Based on everything I've read, I think they fudged the last 10-25% of the performance chart vs. the stock 3090. The apples-to-apples benches will be interesting once Rage Mode OC, SmartMem, the Zen 3 CPU, and game-specific coding for SmartMem aren't factored in. If Nvidia showed an FPS chart of only DLSS games vs. stock AMD, people would be losing their minds.

That said, I'd still lean toward a 6900 XT purchase since performance-per-dollar will still favor it. A $500 savings is very compelling, even though the 6900 XT won't be purchasable at that price for 4-6 months.
SmartMem and Rage Mode account for about an 8% boost overall, with Rage Mode being about 2% of that. I expect the drivers will be improved by the December release, and no one will even notice they included it in the benchmarks when the reviews start coming out.
 
I'm more interested in what SMA will do for bridgeless multi GPU.
Multi-GPU is dead unless they can incorporate multiple chips on the same PCB, which requires vastly higher interconnect speeds than are currently available.

Also, I lean heavily Nvidia - but anyone criticizing AMD for an AMD-only tech *while* being OK with Nvidia is out of their mind. Nvidia takes any chance it can to produce a proprietary tech (AMD is not much different, they just market it better). I hate proprietary techs. For widespread adoption by game devs, it helps a ton if it's universal.

SMA is fine if “it just works” all the time by default. Otherwise I don’t factor it into any decisions.
 
The frame buffer is what moves data from system RAM to the VRAM. Right now it is 256MB. Filling up 8GB of VRAM would take 32 writes through the 256MB frame buffer.

This tech increases the size of the frame buffer. It's really going to be most helpful on systems with smaller amounts of VRAM, or in use cases where there is a need to make many writes to the VRAM.

I suspect this will have a bigger impact on low- to mid-range graphics. On systems with very low amounts of system RAM, say 4GB or less, it could be a hindrance. It would depend on what the new size of the frame buffer will be. If it goes to 512MB, it takes half as many writes to get data into VRAM, but you have 0.5GB of system memory tied up for this function...

This buffer size has surely had increases over the years; as both system RAM and VRAM capacities have increased, it makes sense that the buffer size should increase as well. 384MB or 512MB seems logical. This is a tradeoff between the speed of moving data into VRAM and the system RAM consumed to make the buffer. AMD is making it so they can use a larger buffer before it becomes an officially increased specification, but it seems highly likely that this buffer size will be reviewed and increased. Users have mentioned x86; it seems plausible that this buffer size is part of the x86 specification. To increase it across the board, Intel and AMD would likely have to implement the change, Microsoft would need some DLL update, and the GPU drivers and possibly hardware designs would need to take it into account as well. AMD can do most of these changes themselves on the CPU/GPU designs that they make. It's a logical step and will be helpful for larger textures (4K and beyond); GPUs are still getting there when it comes to performance at 4K. It doesn't hurt to make the move to a larger buffer now.

Seems like a change that the rest of the vendors will implement at some point.

For the high-end cards, I really doubt it will help much. A good game engine already makes optimized use of the 256MB buffer size. This is the same datapath (chokepoint) that is "faster" with PCIe 4.0. Yet the reviews comparing PCIe 3.0 vs. PCIe 4.0 performance showed little to no change: about 1%, and in some cases slower. That was a 2x increase in speed, which raised write speed to the frame buffer and to the GPU in general, and the impact was negligible. To see the same bandwidth improvement, the frame buffer would need to grow to 512MB minimum; maybe they are going further, to 1GB. But taking less time to fill the VRAM, which is already a well-optimized pipe, is a small increment in overall performance. Much of the time the VRAM doesn't need large transfers anyway. A larger buffer will help when a lot of data needs to be moved and will be equal when less data needs to be moved.

Don't get your hopes up for 10% faster GPUs from this single change. 10% faster data transfer seems more likely, but that translates into (very) small FPS or level-loading improvements.
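To put rough numbers on the aperture math above (assuming the 256MB figure really is the PCIe BAR window and that SAM simply widens it toward the full VRAM size, which is my reading, not a confirmed spec):

```cpp
#include <cstdio>

// Back-of-the-envelope: how many aperture-sized windows does the CPU need to
// step through to touch a given amount of VRAM? Assumes the quoted 256MB
// figure is the PCIe BAR aperture and that a SAM-style resizable BAR widens it.
int main() {
    const long long MiB = 1024LL * 1024;
    const long long vram = 16 * 1024 * MiB;                 // e.g. a 16GB card like the 6800 XT
    const long long apertures[] = {256 * MiB, 512 * MiB,    // legacy BAR sizes discussed above
                                   1024 * MiB, vram};       // ...and a full-VRAM BAR (SAM)
    for (long long bar : apertures) {
        long long windows = (vram + bar - 1) / bar;         // ceiling division
        printf("%5lld MB aperture -> %3lld windows to cover %lld GB of VRAM\n",
               bar / MiB, windows, vram / (1024 * MiB));
    }
    return 0;
}
```

With a 16GB card you go from 64 aperture-sized windows down to 1, which is why any gains should show up mostly in transfer-heavy cases rather than as a blanket FPS uplift.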
 
Except we'd see meltdowns about "evil Ngreedia proprietary lock-in shutting out AMD" if the roles were reversed and Nvidia GPUs happened to perform better with a particular CPU - either their own, or if, let's say, they'd partnered with Intel for Ampere. Heads would be exploding.

AMD should be celebrated for innovating and finding a way to leverage synergy between CPU/GPU, but the tribalism is pretty toxic nowadays.


The difference is AMD is known for licensing their tech to competitors. So Nvidia could license this from AMD and have it on their cards too.
 
AMD was known for licensing when they needed the cash. The last thing they licensed was GPU tech to Intel a couple of years ago. I don't see as much licensing, if any, in the future. Standards such as FreeSync - yes.
 