NVIDIA Confirms Ampere Will Get Its Own Smart Access Memory (SAM) Tech - Works on Both Intel & AMD

erek

@GamersNexus


From NVIDIA, re:SAM: “The capability for resizable BAR is part of the PCI Express spec. NVIDIA hardware supports this functionality and will enable it on Ampere GPUs through future software updates. We have it working internally and are seeing similar performance results."

Hard to fit in a tweet, but basically, they're working on enabling the same feature as AMD Smart Access Memory (AMD GPU+CPU=Perf uplift) on both Intel and AMD. No ETA yet. Doesn't look like it'll be ready before RX 6000 launch, but we'll keep an eye on development.


https://wccftech.com/nvidia-confirms-ampere-geforce-rtx-30-gpus-to-get-its-own-smart-access-memory-sam-tech/
 
Nvidia is in trouble and AMD is going crazy. I was just at my local retailer, and they've been selling 95% AMD CPUs and 5% Intel for the last year.

That's exactly what happened in 1999-2005.

AMD was dominating individual sales, but individual sales were a tiny fraction of the market. Since Intel prevented them from making inroads with the OEMs, they essentially never gained enough market share to remain relevant.
 
That's exactly what happened in 1999-2005.

AMD was dominating individual sales, but individual sales were a tiny fraction of the market. Since Intel prevented them from making inroads with the OEMs, they essentially never gained enough market share to remain relevant.

except to my knowledge intel is losing share on the server/commercial side as well due to multiple flubs.
 
Big Navi launch slides had this enabled. If NVIDIA gets the same bump, the situation gets muddier. Really glad to read this. The prospect of getting punished for "only" having Zen 3/B550 instead of the complete set with Big Navi was really pissing me off.
 
Intel's been selling you 3% for the last ten years...
Lol yeah.
Also, AMD mentioned that NO ONE has optimized for SAM yet, and they are still seeing gains...
I wonder if the gains will be even higher once devs optimize for it in games.
 
People love jerking off to SAM and it's what, +3%? I won't complain but....

 
I think we really deserve to know what the "real" technical requirements are for making this work.

AMD has indicated that this will require a 5000 series (Zen 3) CPU and a 500 series chipset motherboard in order for it to work. Why exactly can't it work on Zen 2 and/or 400 series motherboards? Does it REQUIRE PCIe 4.0? Because apparently Nvidia's solution doesn't.

Nvidia has indicated that they can make this work with their cards on Intel OR AMD CPUs, and even with PCIe 3.0. But only on Ampere of course... Why not Turing, etc?

I want to know what is being done (or not done) due to technical limitations as opposed to artificial product segmentation purposes.
 
It may have limited benefit on PCIe 3.0 even if it works. Could it still be viable with Nvidia's compression tech?
 
OK Nvidia, but why did it take AMD offering this feature for you to turn it on?

And there are other questions, like: will Nvidia's solution require BIOS updates for motherboards? Is AMD's BIOS toggle there for compatibility issues, or is a BIOS update needed for the feature to work at all?
 
People love jerking off to SAM and it's what, +3%? I won't complain but....
IIRC, the typical range shown within AMD’s presentation is a 2-6% uplift in performance. But there are instances of 10%+, one of which I think was Forza Horizon 4. I won’t complain about free additional performance.

Interesting that with Nvidia GPUs it’ll work with both Intel and AMD CPUs. As AMD indicated you’ll need a 5000 series AMD CPU and a 500 series motherboard for it to work with their RX 6000 cards. More restrictive than what Nvidia will apparently require.
 
I think it is a situation where they are bringing out a new feature on limited hardware while the feature is being developed and finalized.

That's not unusual or sneaky.

They just chose to do testing on the newest, shiniest hardware. Where should they have started, with Socket 940 Opterons and PCIe 2.0 motherboards?
 
I think we really deserve to know what the "real" technical requirements are for making this work.

AMD has indicated that this will require a 5000 series (Zen 3) CPU and a 500 series chipset motherboard in order for it to work. Why exactly can't it work on Zen 2 and/or 400 series motherboards? Does it REQUIRE PCIe 4.0? Because apparently Nvidia's solution doesn't.

Nvidia has indicated that they can make this work with their cards on Intel OR AMD CPUs, and even with PCIe 3.0. But only on Ampere of course... Why not Turing, etc?

I want to know what is being done (or not done) due to technical limitations as opposed to artificial product segmentation purposes.
Perhaps we can reexamine that if Nvidia actually gets it working. :)
I am going to assume it's going to require extra PCIe lanes, which not every chipset is awash in (and those lanes will, I assume, have to be on the CPU for max uplift, and perhaps to work at all). It's also likely it would work in theory with PCIe 3.0 but show zero uplift, as those lanes are just plain slower. This might be the best case for PCIe 4.0 yet.
 
IIRC, the typical range shown within AMD’s presentation is a 2-6% uplift in performance. But there are instances of 10%+, one of which I think was Forza Horizon 4. I won’t complain about free additional performance.

Interesting that with Nvidia GPUs it’ll work with both Intel and AMD CPUs. As AMD indicated you’ll need a 5000 series AMD CPU and a 500 series motherboard for it to work with their RX 6000 cards. More restrictive than what Nvidia will apparently require.

Wasn’t that Rage Mode or whatever, which was SAM + OC?
 
Maybe, just maybe, AMD will make this work for all CPUs on 500 series chipsets with 6000 series cards just to have a checkmark and keep up with nVidia.
 
Why? Nvidia has nothing to do with Intel. They even tweeted at AMD when new Threadripper or Epyc parts came out.

Presumably because Nvidia might want to spitefully try to isolate AMD, just like how some TWIMTBP games had code that made them run much worse than expected on non-Nvidia GPUs. Getting paid under the table by Intel could also be a reason, as Intel is going to be rather desperate to retain mind share in the gamer market.

Still, I'd wager Nvidia's feature will work with AMD CPUs just because of the probable market realignment for gamers and to try to show themselves as the "good guys" against AMD for once by having the feature be CPU agnostic instead of locked to a single manufacturer... that might be a first for Nvidia!
 
I think it is a situation where they are bringing out a new feature on limited hardware while the feature is being developed and finalized.

That's not unusual or sneaky.

How can you claim that something is "sneaky" or not when you are basing that assumption on nothing more than your own speculation?

They just chose to do testing on the newest, shiniest hardware.

To be clear, you have absolutely no idea exactly why either AMD or Nvidia is choosing to focus on certain hardware. Feel free to prove me wrong.

Where should they have started, with Socket 940 Opterons and PCIe 2.0 motherboards?

A good place to start would be for them to tell us about the technical aspects of this feature instead of dangling a marketing carrot.
 
Presumably because Nvidia might want to spitefully try to isolate AMD, just like how some TWIMTBP games had code that made them run much worse than expected on non-Nvidia GPUs. Getting paid under the table by Intel could also be a reason, as Intel is going to be rather desperate to retain mind share in the gamer market.

Still, I'd wager Nvidia's feature will work with AMD CPUs just because of the probable market realignment for gamers and to try to show themselves as the "good guys" against AMD for once by having the feature be CPU agnostic instead of locked to a single manufacturer... that might be a first for Nvidia!
That still makes zero sense. Why would Nvidia limit the customers they can sell to?
 
Lol yeah.
Also, AMD mentioned that NO ONE has optimized for SAM yet, and they are still seeing gains...
I wonder if the gains will be even higher once devs optimize for it in games.

That NVIDIA plans to support resizable BAR as well bodes well. Dev support is waaaay more likely if both AMD and NVIDIA support it.

Would love to see what devs can do when GPUs load SSD data directly with DirectStorage and the CPU has direct access to GPU memory. Wonder if there will be synergistic effects from combining the two direct GPU memory access technologies.
 
I still want to know if SAM is more beneficial at lower resolutions. If it's just as effective at 1440p and 4k that's a major selling point, but AMD didn't really mention anything about that.
 
That still makes zero sense. Why would Nvidia limit the customers they can sell to?
To sell more Nvidia cards instead of AMD ones, I guess.
They might have fewer total sales but have the potential for a greater market share than AMD.
 
How can you claim that something is "sneaky" or not when you are basing that assumption on nothing more than your own speculation?


To be clear, you have absolutely no idea exactly why either AMD or Nvidia is choosing to focus on certain hardware. Feel free to prove me wrong.


A good place to start would be for them to tell us about the technical aspects of this feature instead of dangling a marketing carrot.
Of course it is my opinion. Why would you think otherwise?

But it is the usual way development goes. Believe it or not.
 
Do you need PCIe 4.0 for this to work?
No. Resizable BAR has been part of the PCI Express spec for years, well before PCIe 4.0, so it was designed to work on PCIe 3.0. Would it work better on 4.0? Yes, but even at 3.0 speeds the increase brings you to a point where you're more likely to be GPU bound for most stuff anyway. So where PCIe 3.0 might get you a 2-3% performance increase instead of 4.0's 4-5%, it would be one of those differences that shows up in benchmarks but probably isn't something the average person is going to notice.
 
May have limited benefit on pci3 even if it works. Could be still viable with nvidia's compression tech?
It has nothing to do with compression; it's just the size of the buffer (the BAR window) for moving data from system RAM to video card VRAM, that's it.

This buffer has increased in size more than once since the advent of PC gaming. I think around 2002 it was 32MB. Right now it's 256MB, and future increases were inevitable.

AMD only implementing it on certain chipsets could be a technical limitation in the older chipsets, or it could be an imposed selling point. AMD limiting something on older products?!??!!1 Say it isn't so, I mean AMD has a halo and a harpsichord...
I still want to know if SAM is more beneficial at lower resolutions. If it's just as effective at 1440p and 4k that's a major selling point, but AMD didn't really mention anything about that.
Probably not. Likely why there hasn't been any push or need to increase it prior to 4k gaming becoming a real choice.
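For anyone curious, on Linux you can see those BAR window sizes yourself: each PCI device's sysfs `resource` file lists the start and end address of every BAR, one region per line, and the size is just end minus start plus one. A minimal sketch, using made-up sample data in place of a real GPU's `/sys/bus/pci/devices/<addr>/resource` file:

```python
# Sketch: computing PCI BAR sizes from the sysfs "resource" file format.
# Each line is "start end flags" in hex; unpopulated BARs read as all zeros.
# SAMPLE_RESOURCE below is invented for illustration, not from real hardware.

SAMPLE_RESOURCE = """\
0x00000000f6000000 0x00000000f6ffffff 0x0000000000040200
0x00000000e0000000 0x00000000efffffff 0x000000000014220c
0x0000000000000000 0x0000000000000000 0x0000000000000000
"""

def bar_sizes(resource_text):
    """Return the size in bytes of each populated BAR region."""
    sizes = []
    for line in resource_text.splitlines():
        start, end, _flags = (int(tok, 16) for tok in line.split())
        if end > start:  # skip empty (all-zero) regions
            sizes.append(end - start + 1)
    return sizes

sizes_mb = [s // (1024 * 1024) for s in bar_sizes(SAMPLE_RESOURCE)]
print(sizes_mb)  # the second region here is a 256MB VRAM aperture
```

With resizable BAR enabled, that 256MB aperture grows to cover the whole of VRAM, which is exactly what SAM exposes.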
 
It has nothing to do with compression, it's just the buffer size for moving data from system RAM to video card VRAM, that's it.

This buffer has increased in size more than once since the advent of pc gaming. I think around 2002 it was 32Mb. Right now it's 256Mb, and future increases were inevitable.

AMD only implementing it on certain chipsets could be a technical limitation in the older chipsets, or it could be an imposed selling point. AMD limiting something on older products?!??!!1 Say it isn't so, I mean AMD has a Halo and harpsicord...

Probably not. Likely why there hasn't been any push or need to increase it prior to 4k gaming becoming a real choice.

My guess is that doing this involves some manual configuration change by users, OS updates, and graphics drivers, but AMD is able to make it work off the shelf with 5000 series CPUs + 500 series boards + 6000 series GPUs, without any change by users.

It would be interesting to know more about Nvidia's implementation. That should give some answers on how this works.
 
It makes sense that increasing the buffer could require support from the chipset, the CPU, the GPU's memory management, the OS/DirectX, and maybe even tuning in the game engine to take full advantage. So AMD implementing it in concert across their own products would be easy for them to do. And if it is already in the PCIe spec, chances are the OS support is already in place.

I can't find any information regarding how it has happened over the years, but I am sure all of the players agree to make the necessary adjustments over time, and it gets implemented. I actually think the OS and GPU memory management are the two biggest pieces. If support for increases is already built into the PCIe 3.0 spec, it really shouldn't be difficult to implement now, even on older motherboards and GPUs, if they have sufficient memory bandwidth.
I think the larger they make the buffer, the more likely it becomes that a memory transfer could hurt performance if it took too long. Choosing the size of the buffer is about balance and tradeoffs. But large system memory amounts (room for the buffer), fast high-bandwidth VRAM, and fast PCIe make it start to make sense to try increasing the buffer size again.
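A toy cost model can illustrate that tradeoff: moving data through a small CPU-visible window costs one remap per window-full on top of the raw transfer time, while a window covering all of VRAM pays that cost once. Every number here (bandwidth, remap cost) is an invented assumption, not a measurement:

```python
# Toy model, not real driver behavior: total upload time is raw transfer
# time plus a fixed per-remap overhead for each window-full of data moved.
import math

def upload_time(total_bytes, window_bytes, bw_bytes_per_s, remap_cost_s):
    remaps = math.ceil(total_bytes / window_bytes)
    return total_bytes / bw_bytes_per_s + remaps * remap_cost_s

GB = 1024**3
MB = 1024**2
bw = 14 * GB     # assumed effective PCIe 3.0 x16 bandwidth
remap = 50e-6    # assumed 50 microseconds per window remap

small = upload_time(8 * GB, 256 * MB, bw, remap)  # 32 remaps for 8GB
large = upload_time(8 * GB, 8 * GB, bw, remap)    # whole VRAM mapped: 1
print(small > large)  # True: fewer remaps means less overhead
```

Under these assumptions the saving is small in absolute terms, which lines up with the modest 2-6% uplifts AMD showed: the window size matters mostly when transfers are frequent.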
 
Scott Herkelman (AMD VP) confirms they won’t block Nvidia from implementing resizable BAR on Ryzen systems, during an interview on The Full Nerd podcast. Apparently, work is already underway.



Relevant section begins around the 35 min mark.
 