AMD’s RX 6000 GPUs to Boost Perf With Ryzen 5000 CPUs via Smart Memory Access

The only way AMD doesn't make this work with Nvidia is if Microsoft implements something like this in DirectX or some other stack and Nvidia holds out.

AMD wants to sell their CPUs and motherboards to Nvidia users, too. They're not going to cut off their nose to spite their face on this, at least, not in the long run.
 
You keep it in house as an incentive for customers to buy both your CPU and GPU together. Win for AMD. Lose for Nvidia and Intel. Maybe they should give it away so AMD doesn't hurt Nvidia's and Intel's feelings and all the crybabies that can't stand the fact AMD came up with a great IN HOUSE technology that's exclusive to them.

Nvidia and Intel both have CPUs and GPUs. Let them figure it out for themselves.
 
You keep it in house as an incentive for customers to buy both your CPU and GPU together. Win for AMD. Lose for Nvidia and Intel. Maybe they should give it away so AMD doesn't hurt Nvidia's and Intel's feelings.

Lol, nobody is saying give it away. Licensing is far from a no-profit solution. And it won't be cheap. If you read about it, AMD is doing this hand in hand with Microsoft.
 
Lol, nobody is saying give it away. Licensing is far from a no-profit solution. And it won't be cheap. If you read about it, AMD is doing this hand in hand with Microsoft.
Microsoft is not a direct competitor.
 
Yeah, but if it's part of DirectX then it's a solution available to everyone. If they implement it.
It's not worth the trade-off for an exclusive lock on proprietary GPU/CPU tech that gives an advantage. The amount of crying in this thread proves that.
 
It's not worth the trade-off for an exclusive lock on proprietary GPU/CPU tech that gives an advantage. The amount of crying in this thread proves that.
Time shall tell. AMD is fairly new to being in a leader position. If reviews show they are actually in that position, then it's incumbent on them to profit from it in whatever way they, as a company culture, feel is best. We all know they can snub others if they want. It will be interesting to see if they do.
 
Time shall tell. AMD is fairly new to being in a leader position. If reviews show they are actually in that position, then it's incumbent on them to profit from it in whatever way they, as a company culture, feel is best. We all know they can snub others if they want. It will be interesting to see if they do.
AMD has a duty to shareholders to maximize profit. If you feel keeping their own technology for their own shareholders' commercial gains is "snubbing others" - so be it. Business is not about feelings and the kumbaya of sharing everything for the sake of sharing - or even for a few dollars. It's about making money. Lots of it. I thought that was obvious. Pretty expensive licensing would be in order to negate a CPU-GPU purchase lock. Exorbitantly so. Besides, shareholders want to see market share increases, not necessarily a few bucks in licensing.
 
If they licensed it, I think the advantage it would give them in selling CPUs and motherboards would be bigger than any disadvantage in the GPU market. I think Nvidia is more likely to be against it than AMD is, partly due to pride and not wanting to promote anything from AMD, but also because it would give AMD some leverage over them unless they turned it into more of an open standard.

I also don't see this as any sort of killer feature; it's more of a bonus bump in performance if you happen to meet all the requirements. I plan on picking up a Ryzen 5000 and Radeon 6000, and depending on the boost it might nudge me towards replacing my MB too, but only because other factors are in favor of that as well. I really don't think it's going to be a big enough advantage for me to base a decision solely on that (MB or GPU).

I seem to recall AMD doing something like this with one of their Phenom chipsets, but the hardware wasn't good enough that anyone cared. This time it looks like they have good enough performance on both fronts that an all-AMD system is already attractive. It's great that we have some actual competition right now, because it makes discussions like this much more interesting.
 
If they licensed it, I think the advantage it would give them in selling CPUs and motherboards would be bigger than any disadvantage in the GPU market.
How? What's the reasoning? If you think a full-brand CPU/GPU/MB sale is less valuable to a company than a small licensing fee and losing that, your thinking is flawed.

I think Nvidia is more likely to be against it than AMD is, partially due to pride and not wanting to promote anything from AMD but also because it would give AMD some leverage over them unless they turned it into more of an open standard.
So who cares what Nvidia thinks? AMD is leveraging a CPU/GPU/MB with this versus... just a GPU from Nvidia. Why exactly help your direct competitor Nvidia and sell fewer GPUs? Let me be more clear.

LICENSING THIS WON'T COVER THE COST OF A LOST GPU AND/OR CPU SALE

I also don't see this as any sort of killer feature; it's more of a bonus bump in performance if you happen to meet all the requirements. I plan on picking up a Ryzen 5000 and Radeon 6000, and depending on the boost it might nudge me towards replacing my MB too, but only because other factors are in favor of that as well. I really don't think it's going to be a big enough advantage for me to base a decision solely on that (MB or GPU).
I don't either, which is why it perplexes me why anyone would argue the point to license this.
 
Who knows, perhaps Intel will add it to their eventual Xe cards. /s (only half /s)

Also, Nvidia does have eyes on being a CPU competitor as well. It's just that their play is going to involve an ISA swap. Think I'm kidding... frankly, I'm still trying to figure out how their ARM purchase is going to affect gamers, or if it ever will.

At this point I just don't think we know enough about smart memory to determine exactly how it can be implemented by Intel or Nvidia... does it need GPU and CPU support, or do they both just have to support an open/DX standard? At some point AMD is going to talk about their tech more. Then we can figure out if it's a platform thing that everyone will have in a year or two, or if it's going to be an AMD-only thing forever.
 
How? What's the reasoning? If you think a full-brand CPU/GPU/MB sale is less valuable to a company than a small licensing fee and losing that, your thinking is flawed.


So who cares what Nvidia thinks? AMD is leveraging a CPU/GPU/MB with this versus... just a GPU from Nvidia. Why exactly help your direct competitor Nvidia and sell fewer GPUs? Let me be more clear.

LICENSING THIS WON'T COVER THE COST OF A LOST GPU AND/OR CPU SALE


I don't either, which is why it perplexes me why anyone would argue the point to license this.
I see the main advantage as marketing leverage: if Nvidia GPUs can take advantage of it, then it helps sell their new CPUs and MBs. I think that outweighs the lost advantage in GPU sales, but I could be completely wrong and maybe even the opposite is true; either way, I think it could help them more if they can increase adoption. The licensing fees don't really factor in as much, I don't think, but they would basically be pure profit.

The main reason I don't think it would ever happen is that Nvidia wouldn't want AMD to have any leverage over them, no matter how slight, and it wouldn't be to AMD's benefit to make it open source.

Edit: I should add that the reason I think it's more of an advantage for selling MBs/CPUs is that people tend to keep them through a couple of cards, so if that feature works with both brands then it still helps them if they go green on their next GPU.
 
I see the main advantage as marketing leverage: if Nvidia GPUs can take advantage of it, then it helps sell their new CPUs and MBs. I think that outweighs the lost advantage in GPU sales, but I could be completely wrong and maybe even the opposite is true; either way, I think it could help them more if they can increase adoption. The licensing fees don't really factor in as much, I don't think, but they would basically be pure profit.

The main reason I don't think it would ever happen is that Nvidia wouldn't want AMD to have any leverage over them, no matter how slight, and it wouldn't be to AMD's benefit to make it open source.
Marketing leverage? AMD now has GPUs on par with Nvidia. That's marketing leverage - combining that with an AMD CPU and MB, that's real money. Your logic escapes me.
 
Marketing leverage? AMD now has GPUs on par with Nvidia. That's marketing leverage - combining that with an AMD CPU and MB, that's real money. Your logic escapes me.
Marketing leverage for the reason I edited in (not sure if you saw that), and since they can compete on both fronts right now, increased adoption of it helps them more (IMO).

It's all academic anyway, because I really don't think Nvidia (or Intel) would want to adopt this.
 
Who knows, perhaps Intel will add it to their eventual Xe cards. /s (only half /s)

Also, Nvidia does have eyes on being a CPU competitor as well. It's just that their play is going to involve an ISA swap. Think I'm kidding... frankly, I'm still trying to figure out how their ARM purchase is going to affect gamers, or if it ever will.

At this point I just don't think we know enough about smart memory to determine exactly how it can be implemented by Intel or Nvidia... does it need GPU and CPU support, or do they both just have to support an open/DX standard? At some point AMD is going to talk about their tech more. Then we can figure out if it's a platform thing that everyone will have in a year or two, or if it's going to be an AMD-only thing forever.

It is the amount of GPU memory that the CPU can access at a single time. Typically the VRAM accessible to the CPU is a small window; SMA gives the CPU access to all of the VRAM at once. Also required are a 500-series MB, a 6000-series GPU, and a 5000-series CPU. This would indicate that this technology is deeply baked into the AMD SMA solution.
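To make the "small window" point concrete, here's a toy Python sketch - my own illustration, not AMD's actual implementation, and it assumes SMA works like a resizable-BAR-style feature, which AMD hasn't fully detailed. It just counts how many times a classic 256 MiB aperture would have to be remapped to touch a card's whole VRAM, versus a full-size window where everything is CPU-visible at once.

```python
# Toy model of the aperture idea (illustrative only).
# Traditionally the CPU sees VRAM through a small movable window
# (commonly 256 MiB); touching more memory means remapping the window.
# With a full-size window, all of VRAM is CPU-visible at once.

WINDOW_MIB = 256  # classic small aperture size

def remaps_needed(total_mib: int, window_mib: int = WINDOW_MIB) -> int:
    """How many window positions are needed to touch total_mib of VRAM."""
    return -(-total_mib // window_mib)  # ceiling division

vram_mib = 16 * 1024  # a 16 GB card, e.g. the 6800 XT
print(remaps_needed(vram_mib))            # 64 window moves the old way
print(remaps_needed(vram_mib, vram_mib))  # 1 with a full-size window
```

The interesting cost isn't the count itself but the driver round-trip each remap implies, which would plausibly explain why the gains are real but modest.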
 
I don't think licensing is the right call. Once people build Zen 3 computers, now they have a good reason to buy an AMD GPU.

Not only that, people tend to keep their CPU/Mobo for several years (or longer), meaning they are effectively locked into AMD GPUs for multiple generations (similar to how G-Sync locks people in).

So I doubt AMD wants to give up this competitive advantage. That said, once they explain how it works, Intel/Nvidia have smart people that could implement their own version, should they desire.
 
It is the amount of GPU memory that the CPU can access at a single time. Typically the VRAM accessible to the CPU is a small window; SMA gives the CPU access to all of the VRAM at once. Also required are a 500-series MB, a 6000-series GPU, and a 5000-series CPU. This would indicate that this technology is deeply baked into the AMD SMA solution.
Or it's an open standard that they are simply first to implement. We don't know if it's proprietary AMD tech... or if it was developed with MS (and perhaps Sony) to be a standard. (That both MS and Sony have some version of it leads me to believe it's more an AMD tech, but who knows right now for sure.)

But yes, I agree with you at this point... BIOS settings tend to lead us to believe it's very likely AMD tech, and ya, why share? MS and Sony are happy to have it... and if it's exclusive to AMD on the PC platform, they won't be complaining.

It will be interesting to see Intel's reaction more than Nvidia's, imo. Nvidia doesn't really have any control over AMD or Intel CPU features. If Intel comes up with their own version, are they going to open it to Nvidia... as NV GPU sales at least have an equal chance of going Intel? Lol, I was being sarcastic when I said Intel could add it to Xe, but ya... they clearly need to rethink Xe for desktop if they're not planning to give up on it. Perhaps it's not too late to build something into Alder Lake.
 
Or it's an open standard that they are simply first to implement. We don't know if it's proprietary AMD tech... or if it was developed with MS (and perhaps Sony) to be a standard. (That both MS and Sony have some version of it leads me to believe it's more an AMD tech, but who knows right now for sure.)

But yes, I agree with you at this point... BIOS settings tend to lead us to believe it's very likely AMD tech, and ya, why share? MS and Sony are happy to have it... and if it's exclusive to AMD on the PC platform, they won't be complaining.

It will be interesting to see Intel's reaction more than Nvidia's, imo. Nvidia doesn't really have any control over AMD or Intel CPU features. If Intel comes up with their own version, are they going to open it to Nvidia... as NV GPU sales at least have an equal chance of going Intel? Lol, I was being sarcastic when I said Intel could add it to Xe, but ya... they clearly need to rethink Xe for desktop if they're not planning to give up on it. Perhaps it's not too late to build something into Alder Lake.
AMD has been doing this kind of stuff for a while now. The PS4 had simultaneous access to the shared RAM and a dedicated bus for CPU/GPU communication. But with the PS4, AMD took existing parts and added such customizations. And unfortunately, the PS4 did not use a CPU based on a relevant desktop part, so we never saw any of that stuff in their desktop chips.

This time, R&D for the consoles coincided with R&D for AMD's next PC CPUs and graphics. And Sony and MS were willing to go bigger this time as well (sell hardware at a loss). So AMD was able to bring some of the console-requested features to their next generation of desktop CPUs and GPUs, because that's what is also in the consoles. The separation this time is very slim. And apparently they didn't need to also release a new mobo chipset. PCI-E 4.0 probably has enough bandwidth that they didn't need to release a new chipset with an extra bus. Or maybe you just lose a PCI-E lane in actuality and they just haven't mentioned it yet...
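On the bandwidth point, the rough numbers are easy to sanity-check. A quick sketch of the standard PCIe math (nothing AMD-specific here; it ignores protocol overhead, so real-world throughput is a bit lower):

```python
# Back-of-envelope PCIe bandwidth per direction.
# PCIe 4.0 signals at 16 GT/s per lane with 128b/130b line encoding.
def pcie_bandwidth_gbps(lanes: int, gt_per_s: float = 16.0) -> float:
    """Approximate usable GB/s per direction (excludes protocol overhead)."""
    return lanes * gt_per_s * (128 / 130) / 8  # bits to bytes

print(round(pcie_bandwidth_gbps(16), 1))  # x16 slot: ~31.5 GB/s
print(round(pcie_bandwidth_gbps(8), 2))   # x8 link: ~15.75 GB/s
```

~31.5 GB/s each way on a 4.0 x16 slot is indeed a lot of headroom compared to a 3.0 slot, which supports the "no new chipset needed" guess above.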
 
Or it's an open standard that they are simply first to implement. We don't know if it's proprietary AMD tech... or if it was developed with MS (and perhaps Sony) to be a standard. (That both MS and Sony have some version of it leads me to believe it's more an AMD tech, but who knows right now for sure.)

But yes, I agree with you at this point... BIOS settings tend to lead us to believe it's very likely AMD tech, and ya, why share? MS and Sony are happy to have it... and if it's exclusive to AMD on the PC platform, they won't be complaining.

It will be interesting to see Intel's reaction more than Nvidia's, imo. Nvidia doesn't really have any control over AMD or Intel CPU features. If Intel comes up with their own version, are they going to open it to Nvidia... as NV GPU sales at least have an equal chance of going Intel? Lol, I was being sarcastic when I said Intel could add it to Xe, but ya... they clearly need to rethink Xe for desktop if they're not planning to give up on it. Perhaps it's not too late to build something into Alder Lake.
I can't find anything about it being an open standard, but I have managed to find out more about how it works. It seems that the AMD "Infinity Cache" is a shared resource that the CPU can access directly, and when paired with the direct access API in DX12U, it lets the entire system access it directly without technically violating the PCIe interface standards. I don't fully understand it, so if my interpretation is incorrect, I welcome corrections.
 
Marketing leverage? AMD now has GPUs on par with Nvidia. That's marketing leverage - combining that with an AMD CPU and MB, that's real money. Your logic escapes me.
On par* with a big asterisk. We still haven't seen apples to apples benches.
 
On par* with a big asterisk. We still haven't seen apples to apples benches.
Even with a decent amount of marketing fuckery, it is safe to say that they are damned close, even if they are 5-10% behind. If AMD can have decent availability and keep their prices in check, then it is a massive win. If parity, or in some edge cases superiority, is reached by using their Rage Mode and Smart Memory Access and other platform advantages, then they have managed to launch a compelling product stack, which is a massive win.

Granted, there are a lot of ifs in there, but it puts AMD in a strong position regardless.
 
Even with a decent amount of marketing fuckery, it is safe to say that they are damned close, even if they are 5-10% behind. If AMD can have decent availability and keep their prices in check, then it is a massive win. If parity, or in some edge cases superiority, is reached by using their Rage Mode and Smart Memory Access and other platform advantages, then they have managed to launch a compelling product stack, which is a massive win.
Absolutely. Availability and pricing won't be what people are hoping, though. Insta sold out and then on eBay.

But Nvidia isn't doing me any favors, as I won't pay above MSRP for a 3080, thus I will buy whatever I can get soonest at MSRP - 3080 or 6800 XT.
 
Why would anyone want a desktop ARM chip, though? There has to be demand to create such a product, and without the software support I don't think that will happen.
Because ARM laptops (and possibly desktops) may turn out to be materially better than x86 ones.
Industry veteran Jean-Louis Gassée pointed it out a while back: https://mondaynote.com/apple-silicon-the-passing-of-wintel-79a5ef66ad2b
That is what Intel thought as well when it made the Itanium, and, well, let's just say that didn't work out. Part of the issue is that emulating the compatibility you need is always far slower.
Digital's FX!32 more than 20 years ago already achieved 40% of native speed by using binary translation of x86 to Alpha code. Modern implementations are around 60% speed. And you need it mostly for legacy software that is no longer actively maintained. The most performance-critical application on most people's computers is the browser nowadays.

Of course if you are a gamer and want to run traditional AAA games, then that does not apply to you.
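Those percentages translate directly into how much faster the new chip has to be to break even on legacy code. A quick sketch of the arithmetic, using the figures from the post above:

```python
# Break-even math for binary translation: if translated code runs at
# some fraction of native speed, the host CPU must be at least
# 1/fraction faster than the old CPU to match it on legacy software.
def required_native_speedup(translation_efficiency: float) -> float:
    """Factor by which the host must outpace the old CPU to break even."""
    return 1.0 / translation_efficiency

print(round(required_native_speedup(0.60), 2))  # modern ~60%: ~1.67x needed
print(round(required_native_speedup(0.40), 1))  # FX!32-era 40%: 2.5x needed
```

So at 60% translation efficiency an ARM chip "only" needs to be about two-thirds faster than the x86 part it replaces to hold its own on legacy software, which is a much lower bar than in the FX!32 days.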
The whole "the writing is on the wall for x86" thing is something people have said since the days of the movie Hackers and the PowerPC architecture. And we see how that writing stuck.

Anyone trying to kill x86 only seems to be able to write with magic markers.
There is one key difference that was not true back then, but is true today: We are 10 years past Peak PC. The mass market is now mobile, almost entirely ARM, which is where most hardware and software development resources are focused now, while the PC market is in terminal decline. (But don't worry, the PC won't go away for many years. PC-like form factors are however going to increasingly use repurposed mobile silicon.)
[Chart omitted: PC shipments past their peak vs. mobile. Source: Asymco]
DP is better than HDMI (I think it is), so why do premo TVs never have the connector?
Most of the big TV companies are part of the HDMI LLC patent pool, so they gain revenue from every HDMI device sold. They have no interest in establishing DisplayPort as a contender to HDMI among home entertainment products.
 
Is this strictly a hardware benefit, or is it dependent on the driver? I'm wondering if the same performance increases come when using Linux. If so, are the proprietary drivers a requirement?
 
So which is it? See the benches or nope? You change like the wind on consecutive posts within 5 mins.
Which is what? Both statements are true, although the second is my opinion based on deduction: 1) we haven't seen independent benches, 2) AMD's presentation slides showed cherry-picked best-case scenarios with all the AMD optimizations and boosts enabled, versus a stock Nvidia card - and they were essentially neck and neck. This gives me the sense AMD is not going to "beat" Nvidia in "raw power" if all of that was needed just to essentially tie.

My definition of "raw power" is performance that could reasonably be expected as typical for any, or at least most, games without a specific accompanying CPU, a specific overclock, or game-specific software optimizations. Back in 2014, AMD brought us Mantle support (the precursor to Vulkan) in a number of titles, and it gave an FPS boost in those games with certain AMD cards - Battlefield 4, for example. The boost was cool, but that didn't mean AMD was suddenly beating Nvidia in "raw power" in all games or even most games. This situation is somewhat reminiscent, but how much so we'll have to wait for independent benches to determine.

That said, even without the benefit of more benchmarks, it seems clear AMD will handily beat Nvidia this round for what to me is the most important metric of all: performance-per-dollar, at least at MSRP. No contest.
 
We'll see what the reviews say. I would guess only a 2-5% uplift in performance at best.

Hell of a marketing pitch though.
 
Which is what? Both statements are true, although the second is my opinion based on deduction: 1) we haven't seen independent benches, 2) AMD's presentation slides showed cherry-picked best-case scenarios with all the AMD optimizations and boosts enabled, versus a stock Nvidia card - and they were essentially neck and neck. This gives me the sense AMD is not going to "beat" Nvidia in "raw power" if all of that was needed just to essentially tie.

My definition of "raw power" is performance that could reasonably be expected as typical for any, or at least most, games without a specific accompanying CPU, a specific overclock, or game-specific software optimizations. Back in 2014, AMD brought us Mantle support (the precursor to Vulkan) in a number of titles, and it gave an FPS boost in those games with certain AMD cards - Battlefield 4, for example. The boost was cool, but that didn't mean AMD was suddenly beating Nvidia in "raw power" in all games or even most games. This situation is somewhat reminiscent, but how much so we'll have to wait for independent benches to determine.

That said, AMD looks like it will still handily beat Nvidia this round for what I consider the most important metric of all: performance-per-dollar. No contest.
Does this mean that you don't count any games that say NVIDIA in the splash screen, since they were optimized specifically for Nvidia cards? I do understand not counting the AMD 5000 + 6000 XT-specific memory optimizations, but really a lot of other things are near impossible to factor out. Look for example at the 3080/3090, where "boost frequency" is set to some low number and most cards boost well over it... but not all of them do. It's hard to tell if you're really comparing an Nvidia 3080 BASE or FE, or a card that clocks higher than standard. One thing is for sure: AMD has at least come out swinging for the top end, as opposed to what we saw last cycle, so more competition at the high end will hopefully be a win for us.
 
Does this mean that you don't count any games that say NVIDIA in the splash screen, since they were optimized specifically for Nvidia cards? I do understand not counting the AMD 5000 + 6000 XT-specific memory optimizations, but really a lot of other things are near impossible to factor out. Look for example at the 3080/3090, where "boost frequency" is set to some low number and most cards boost well over it... but not all of them do. It's hard to tell if you're really comparing an Nvidia 3080 BASE or FE, or a card that clocks higher than standard.
Hell yes, Nvidia misleads and embellishes. The slides in their Ampere presentation come to mind as vague and ambiguous, with the "3080 is 2X the performance" slide making it seem like they meant 2X the 2080 Ti, when they were talking about 2X the 2080; the list goes on.

One thing is for sure: AMD has at least come out swinging for the top end, as opposed to what we saw last cycle, so more competition at the high end will hopefully be a win for us.
Absolutely, a strong AMD lifts the entire space, and benefits even entrenched "I wouldn't buy AMD even if I'd get more power for the same money" Nvidia zealots, whether they appreciate or even realize it or not.
 
Which is what? Both statements are true, although the second is my opinion based on deduction: 1) we haven't seen independent benches, 2) AMD's presentation slides showed cherry-picked best-case scenarios with all the AMD optimizations and boosts enabled, versus a stock Nvidia card - and they were essentially neck and neck. This gives me the sense AMD is not going to "beat" Nvidia in "raw power" if all of that was needed just to essentially tie.

My definition of "raw power" is performance that could reasonably be expected as typical for any, or at least most, games without a specific accompanying CPU, a specific overclock, or game-specific software optimizations. Back in 2014, AMD brought us Mantle support (the precursor to Vulkan) in a number of titles, and it gave an FPS boost in those games with certain AMD cards - Battlefield 4, for example. The boost was cool, but that didn't mean AMD was suddenly beating Nvidia in "raw power" in all games or even most games. This situation is somewhat reminiscent, but how much so we'll have to wait for independent benches to determine.

That said, even without the benefit of more benchmarks, it seems clear AMD will handily beat Nvidia this round for what to me is the most important metric of all: performance-per-dollar, at least at MSRP. No contest.
So by your logic, any benchmark using DLSS doesn't count because it is game specific and is most definitely vendor specific.
 