Why does the Ryzen 7 1800X perform so poorly in games?

From the Ryzen: Strictly Technical thread @ Anand:

[two embedded videos]
The first video clearly shows that there is a performance penalty when threads are on different CCXs. The second video shows that the Windows scheduler just assigns threads "randomly", which is why we are seeing such varied results from game benchmarks.

I'm 100% sure this doesn't happen on Intel 6-8 core processors, because they don't have this CCX interconnect bandwidth limitation: they are one monolithic die. It's no wonder we are seeing quite big gains with faster RAM on Ryzen, because this "Infinity Fabric" (HyperTransport with a fancy name) that communicates between the CCXs is tied to RAM speed.

The real question is whether it can be fixed so that the Windows scheduler tries to keep threads that depend on each other on the same CCX. Maybe just make it behave like NUMA.
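
For what it's worth, Windows already exposes the information a NUMA-style policy would need. A minimal sketch (plain Win32 in C; that each distinct level-3 mask it prints corresponds to one CCX on Ryzen is my assumption):

```c
/* Sketch: list which logical processors share an L3 slice.
 * On Ryzen each L3 belongs to one CCX, so each distinct mask
 * printed here should (my assumption) be one CCX. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    DWORD len = 0;

    /* First call just reports the required buffer size. */
    GetLogicalProcessorInformationEx(RelationCache, NULL, &len);
    char *buf = malloc(len);
    if (!buf || !GetLogicalProcessorInformationEx(
            RelationCache, (PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX)buf, &len)) {
        fprintf(stderr, "topology query failed: %lu\n", GetLastError());
        return 1;
    }

    /* Records are variable-sized; walk them via the Size field. */
    for (char *p = buf; p < buf + len;) {
        PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX info =
            (PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX)p;
        if (info->Cache.Level == 3)
            printf("L3 shared by logical CPU mask 0x%llx\n",
                   (unsigned long long)info->Cache.GroupMask.Mask);
        p += info->Size;
    }
    free(buf);
    return 0;
}
```

A CCX-aware scheduler, or a game that wants to place its own threads, would start from exactly this grouping.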

That defeats the point of an 8-core CPU if I need to keep a process locked to 4 logical cores to ensure I don't get communication issues across the board. If this is true, the 6-core Ryzen is in a world of hurt, as that would be 3 cores per CCX, and games generally scale well to 4 cores. I'm a bit doubtful that is the issue though, because it doesn't seem to affect other applications outside of games that will spread across all 8 cores.
 
I see people are still fighting. It looks like Ryzen isn't Sandy Bridge.
 
That defeats the point of an 8-core CPU if I need to keep a process locked to 4 logical cores to ensure I don't get communication issues across the board. If this is true, the 6-core Ryzen is in a world of hurt, as that would be 3 cores per CCX, and games generally scale well to 4 cores.

Right, but on the other hand the top 6C Ryzen seems to have more L3 cache per core, if the rumors are correct:

4MB L2 + 16MB L3 for 1800X

3MB L2 + 16MB L3 for 1600X

How this fewer-cores-more-cache combination affects performance will depend on the requirements of each individual title, but I'd guess the 1600X would play games ~7% worse than the 1800X overall.
 
That defeats the point of an 8-core CPU if I need to keep a process locked to 4 logical cores to ensure I don't get communication issues across the board. If this is true, the 6-core Ryzen is in a world of hurt, as that would be 3 cores per CCX, and games generally scale well to 4 cores. I'm a bit doubtful that is the issue though, because it doesn't seem to affect other applications outside of games that will spread across all 8 cores.

I don't think it defeats the purpose, but rather illustrates the design goals. It was clearly designed for render-farm type parallelism where you have multiple cores working on essentially the same data sets.
When you have cores running very independent tasks (and data), it doesn't work as well.

This is why I'm skeptical of "just optimize for Ryzen" as a solution. Asking devs to manually set up threads to improve cache locality seems like a hell of a long shot. We're still trying to just get them to use more cores at all.
 
That defeats the point of an 8-core CPU if I need to keep a process locked to 4 logical cores to ensure I don't get communication issues across the board. If this is true, the 6-core Ryzen is in a world of hurt, as that would be 3 cores per CCX, and games generally scale well to 4 cores. I'm a bit doubtful that is the issue though, because it doesn't seem to affect other applications outside of games that will spread across all 8 cores.

But there's a big difference between games (and some applications) and all the other multi-threaded applications like Cinebench, Blender, etc. Their threads are not dependent on each other the way game threads are. Cinebench doesn't need the data from each of the 16 chunks it's rendering; when one of them is done, the next is assigned to the free logical core. Game threads are highly dependent on each other, and even the API comes into play. What if we have a situation where the main game-engine thread is on one CCX, with lots of secondary engine threads on the second CCX and the API thread also on the second CCX?

I have a feeling that the 4-core Ryzen is going to be quite a bit faster in games than the 8-core at the same clock speed, if it uses a single CCX and not two CCXs with two cores disabled each. Also, those 6-core Ryzens might actually be the worst of the lineup for games.

Somebody really needs to test games with only one CCX activated vs both activated to see what kind of difference it makes.
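
You can approximate that test today without BIOS switches: launch the game pinned to one CCX's logical processors, e.g. `start /affinity FF game.exe` from a cmd prompt, or set the mask on a running process. A rough sketch of the latter (assuming, as on a stock 1800X with SMT on, that logical CPUs 0-7 are CCX0; verify the mapping on your own box first):

```c
/* Rough sketch: confine an already-running game to CCX0 by PID.
 * The 0xFF mask (logical CPUs 0-7 = first CCX) is an assumption
 * for a stock 1800X with SMT enabled. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }

    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION,
                              FALSE, (DWORD)atoi(argv[1]));
    if (!proc) { fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError()); return 1; }

    DWORD_PTR ccx0 = 0xFF;  /* logical CPUs 0-7: four cores + SMT siblings */
    if (!SetProcessAffinityMask(proc, ccx0)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        CloseHandle(proc);
        return 1;
    }
    printf("PID %s confined to CCX0 (mask 0xFF)\n", argv[1]);
    CloseHandle(proc);
    return 0;
}
```

Benchmark the same game with and without the mask and you'd have a fair proxy for the one-CCX vs. two-CCX comparison.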
 
That defeats the point of an 8-core CPU if I need to keep a process locked to 4 logical cores to ensure I don't get communication issues across the board. If this is true, the 6-core Ryzen is in a world of hurt, as that would be 3 cores per CCX, and games generally scale well to 4 cores. I'm a bit doubtful that is the issue though, because it doesn't seem to affect other applications outside of games that will spread across all 8 cores.

I think this is where a NUMA scheduler fix would fall into place. One application's threads would stay on one CCX unless they need to spill over, or unless the program is NUMA-aware and properly handles its own scheduling to minimize this delay. This wouldn't be a silver bullet, but it would be the best of both worlds. Programs that have limited threads and need clock speed and throughput from the cores they do use would get a boost from being contained within a CCX. Programs needing more cores would naturally spill over and, while not perfect, would still benefit from those cores compared to a 7700. This would also make sure that whatever capture, streaming, and other background tasks are running aren't causing game threads to flop back and forth.
 
I have a feeling that the 4-core Ryzen is going to be quite a bit faster in games than the 8-core at the same clock speed, if it uses a single CCX and not two CCXs with two cores disabled each. Also, those 6-core Ryzens might actually be the worst of the lineup for games.

Ask yourself what is more probable: to have four defective cores in the same CCX or to have two defective cores in one CCX and two defective cores in the other CCX.
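
For a rough sense of the odds, here's a toy model, assuming a salvage die with exactly four bad cores scattered uniformly at random over the eight:

$$P(4{+}0) : P(2{+}2) = \frac{2}{\binom{8}{4}} : \frac{\binom{4}{2}\binom{4}{2}}{\binom{8}{4}} = 2 : 36 = 1 : 18$$

So under that naive model a 2+2 salvage part is about eighteen times likelier than a clean 4+0 one.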
 
Ask yourself what is more probable: to have four defective cores in the same CCX or to have two defective cores in one CCX and two defective cores in the other CCX.

There is no need to even ask; that's why I put the "if" there.
 
Ask yourself what is more probable: to have four defective cores in the same CCX or to have two defective cores in one CCX and two defective cores in the other CCX.

There is no need to even ask; that's why I put the "if" there.

Or it turns into a silicon library where some parts are two-CCX solutions and others are one CCX, with any number of combinations: 3 cores on one CCX, 1 on another. It just does not seem like a design meant to scale down lower than 8 cores.
 
Or it turns into a silicon library where some parts are two-CCX solutions and others are one CCX, with any number of combinations: 3 cores on one CCX, 1 on another. It just does not seem like a design meant to scale down lower than 8 cores.
I would say you're wrong there. A.) Over half the lineup is 4C parts. B.) The APUs are all going to be 4C parts. AMD wouldn't do that if they couldn't scale down to 1 CCX. That said, I am sure there will be some cases, especially with the R5s, of AMD disabling a CCX (if production is great) or disabling 2 cores per CCX (I really doubt AMD will sell a CPU with unbalanced CCXs). But as it stands, the only CPU I think we can guarantee is a disabled-core variation is the 1600X. The 1400 and 1500 could very well have multiple cores disabled. It wouldn't surprise me with an early Q2 release instead of 2H like Raven Ridge and the R3s.
 
Or it turns into a silicon library where some parts are two-CCX solutions and others are one CCX, with any number of combinations: 3 cores on one CCX, 1 on another. It just does not seem like a design meant to scale down lower than 8 cores.

The existence of different engineering samples, and of measurable performance differences between the 4+0, 3+1 and 2+2 configurations, was confirmed a while ago. But the latest rumor is that the 3+1 chips won't make it to the commercial stage. It remains to be confirmed whether all the quads will be 2+2 or whether some models will be 4+0.
 
Or it turns into a silicon library where some parts are two-CCX solutions and others are one CCX, with any number of combinations: 3 cores on one CCX, 1 on another. It just does not seem like a design meant to scale down lower than 8 cores.


Well, if that is the case, I expect the 2+2 or 3+1 parts to be slotted as SKUs under the 4+0 ones.
 
You don't need to tell me stuff I know and have mentioned before. In post #687 I gave a BF1 bench showing an FX-8370 on par with an R7-1800X at 4K. In the same post I demonstrated how the parity disappears when the GPU bottleneck is removed, and I have explained in multiple other posts what this means for RyZen.

Before launch we had official claims, demos, and leaks promising us that AMD would be a beast at gaming. At launch, an overwhelming consensus was reached among reviews:

Ars: "an excellent workstation CPU, but it doesn't game as hard as we hoped."

PCWorld: "multithreaded monster with one glaring weakness"

PCGamer: "plenty of power, but underwhelming gaming performance"

Extremetech: "amazing workstation chip with a 1080p gaming Achilles heel"

...

Instead of admitting that "Ryzen Doesn't Quite Live Up To The Hype", a vocal group of people in forums rejected the conclusions of the reviewers, made unfounded claims of bias, started submitting links to alternative 'reviews' and YouTube videos (from people who pretend that 99% load on a GPU is not a bottleneck), started posting lame excuses about (non-existent) bugs affecting RyZen performance, and accompanied those excuses with promises about how RyZen would start to game once the "bugs" were fixed. First a "BIOS update" was going to fix everything. Then it changed to a "scheduler patch" fixing everything. Now the last-minute attempt coming from overclocker forums is that faster memory fixes gaming performance... Their 'proof' consists of GB3 scores and ARMA III scores for RyZen using 3400 memory. What they don't mention is that non-RyZen chips also see huge performance gains in the same benches when using faster memory.

Meanwhile the Ryzen issue has been identified (even AMD agrees). It is a fundamental design flaw, and it will not be solved until Zen+, if it is ever solved.

I said before launch that Ryzen has a latency problem and that it was going to affect games. People in forums and on Twitter attacked and insulted me.

Reviews confirmed the existence of the problem and how it is affecting games. The same people attacked and insulted the reviewers.

Reviewers have continued analyzing this issue. PCPer has confirmed no scheduler will solve this. If you check the comment section of their article, they are receiving insults and attacks: "You're literally showing to everyone around the world your true face, an ugly mug of a completely biased and undoubtedly bribed Intel shekel mongler."

Anyone who doesn't say what they want to hear is attacked and/or insulted.
 
My, my, you sure do talk a lot and say absolutely nothing. You keep deflecting from the real subject as to what any of these reviews mean in real-world applications. Again, I am not saying that 720p has no merit, but rather that you need CONTEXT (for the love of all that is holy, LEARN that word; it will go a long way toward making your posts more appealing to the masses) for what they do in fact mean.

How about speaking to the owners, who so far have yet to complain about the performance? Kyle from this site, who a great deal of the members here claim "pulls no punches", speaks to it in one of his posts about a VR game he was playing and how it had no adverse effects on his gameplay. You have not once spoken to real-world results with these 720p benches. You just keep parroting this doom-and-gloom scenario that thus far has affected few to none in real-world gameplay.

And sorry, but no matter the review or subject matter, there will always be those that scream and yell and occasionally issue vague, empty threats of death. All of which has not a damn thing to do with how this CPU games. You seem quite fearful that some part of the populace might actually desire the Ryzen CPUs over the beloved 7700K, and that the 720p game benches might be the only solace you have. But beating a rather putrid and dead horse here will gain you nothing.

I don't really care what Intel does as far as performance, so the 7700K, 6900K, or Intel's next series hold little to no interest for me. However, looking at the leap in performance over the FX/construction-core CPUs, even with whatever performance these issues have cost, it is quite astounding and should be applauded. So they didn't beat Intel! Was it necessary? No. But they did get within arm's reach, which by all accounts is impressive and welcome.

Now maybe I missed it (reading your posts is like fingernails across a chalkboard), but I haven't seen you talk about the SMT versus HT results. You see, when one wants others to believe his analysis is sincere, one must give the pros and cons of the particular subject matter. In your case, ALL I have seen is the constant parroting of the CCX/memory/scheduler/whatever issue as if it were the only finding, or the totality of the reviews, i.e. only the negatives. So do you have any opinions and findings of your own in regards to this?
 
My, my, you sure do talk a lot and say absolutely nothing. You keep deflecting from the real subject as to what any of these reviews mean in real-world applications. Again, I am not saying that 720p has no merit, but rather that you need CONTEXT (for the love of all that is holy, LEARN that word; it will go a long way toward making your posts more appealing to the masses) for what they do in fact mean.

How about speaking to the owners, who so far have yet to complain about the performance? Kyle from this site, who a great deal of the members here claim "pulls no punches", speaks to it in one of his posts about a VR game he was playing and how it had no adverse effects on his gameplay. You have not once spoken to real-world results with these 720p benches. You just keep parroting this doom-and-gloom scenario that thus far has affected few to none in real-world gameplay.

And sorry, but no matter the review or subject matter, there will always be those that scream and yell and occasionally issue vague, empty threats of death. All of which has not a damn thing to do with how this CPU games. You seem quite fearful that some part of the populace might actually desire the Ryzen CPUs over the beloved 7700K, and that the 720p game benches might be the only solace you have. But beating a rather putrid and dead horse here will gain you nothing.

I don't really care what Intel does as far as performance, so the 7700K, 6900K, or Intel's next series hold little to no interest for me. However, looking at the leap in performance over the FX/construction-core CPUs, even with whatever performance these issues have cost, it is quite astounding and should be applauded. So they didn't beat Intel! Was it necessary? No. But they did get within arm's reach, which by all accounts is impressive and welcome.

Now maybe I missed it (reading your posts is like fingernails across a chalkboard), but I haven't seen you talk about the SMT versus HT results. You see, when one wants others to believe his analysis is sincere, one must give the pros and cons of the particular subject matter. In your case, ALL I have seen is the constant parroting of the CCX/memory/scheduler/whatever issue as if it were the only finding, or the totality of the reviews, i.e. only the negatives. So do you have any opinions and findings of your own in regards to this?
Who is testing at 720p? I keep seeing this pop up and have only seen a small handful of 720p tests. Also, this: "I don't really care what Intel does as far as performance, so the 7700K, 6900K, or Intel's next series hold little to no interest for me." Why don't you care? Is this blind allegiance to AMD just because, or is there an actual objective reason that makes sense? I have always been of the mind to go with the best performance I can afford, whether that be AMD or Intel. This allegiance to corporations is mind-boggling. Consumerism is becoming a religion where everyone pays tribute to a particular corporation, damn the facts, and becomes defensive and tribalistic, protecting the reputation of said brands.
 
And sorry, but no matter the review or subject matter, there will always be those that scream and yell and occasionally issue vague, empty threats of death. All of which has not a damn thing to do with how this CPU games. You seem quite fearful that some part of the populace might actually desire the Ryzen CPUs over the beloved 7700K, and that the 720p game benches might be the only solace you have. But beating a rather putrid and dead horse here will gain you nothing.

Oh, so you are OK with people who behave that way? What would you do if someone came to your home and threatened to kill you, man? Just say, oh, there are people out there like him, let it be?

I don't care who that person is or what their viewpoints are; they should be in a psych ward or locked up. Can't be understanding of people like that, they have fuckin mental problems lol.

I don't really care what Intel does as far as performance, so the 7700K, 6900K, or Intel's next series hold little to no interest for me. However, looking at the leap in performance over the FX/construction-core CPUs, even with whatever performance these issues have cost, it is quite astounding and should be applauded. So they didn't beat Intel! Was it necessary? No. But they did get within arm's reach, which by all accounts is impressive and welcome.

Now that I agree with, but it still doesn't matter: AMD will not gain market share in this manner. They will get better margins, and that will help. Everything is in Intel's hands; there is nothing AMD can do about it unless Intel keeps going with their stagnant CPU upgrade cycles.
Now maybe I missed it (reading your posts is like fingernails across a chalkboard), but I haven't seen you talk about the SMT versus HT results. You see, when one wants others to believe his analysis is sincere, one must give the pros and cons of the particular subject matter. In your case, ALL I have seen is the constant parroting of the CCX/memory/scheduler/whatever issue as if it were the only finding, or the totality of the reviews, i.e. only the negatives. So do you have any opinions and findings of your own in regards to this?

SMT gains right now only show up in rendering and encoding; 2D and 3D work in all other respects is on Intel's side. So if it's rendering, that kinda limits Ryzen, doesn't it, because all the other 3D work done before rendering is better on Intel. That leaves encoding, which streaming is part of, kinda limiting the target audience, don't you think?
 
Who is testing at 720p? I keep seeing this pop up and have only seen a small handful of 720p tests. Also, this: "I don't really care what Intel does as far as performance, so the 7700K, 6900K, or Intel's next series hold little to no interest for me." Why don't you care? Is this blind allegiance to AMD just because, or is there an actual objective reason that makes sense? I have always been of the mind to go with the best performance I can afford, whether that be AMD or Intel. This allegiance to corporations is mind-boggling. Consumerism is becoming a religion where everyone pays tribute to a particular corporation, damn the facts, and becomes defensive and tribalistic, protecting the reputation of said brands.
You see, here is the difference: I grew up as computers did. Intel was kind of lacking in merit and ethics, as we all know now after the litany of lawsuits over the years. I refuse to ever support such actions or the companies that take them. A PERSONAL choice. I have never advised others based on my own personal decisions. Many a time I have advised the purchase of Intel over the AMD equivalent (in nominal terms, loosely) if they had no preference. Even now I would say get what you want. You want the 7700K, get it. You want the 1700, get it. Will either of those be worse than the other for MOST of the populace? No. That is the real-world advice, and that is what I speak to. As such, you don't see me posting in Intel or Nvidia forums; I may have 2-3 posts max in Nvidia and maybe 1 in Intel, only to offer information, never to bash or dump on the products.
 
You see, here is the difference: I grew up as computers did. Intel was kind of lacking in merit and ethics, as we all know now after the litany of lawsuits over the years. I refuse to ever support such actions or the companies that take them. A PERSONAL choice. I have never advised others based on my own personal decisions. Many a time I have advised the purchase of Intel over the AMD equivalent (in nominal terms, loosely) if they had no preference. Even now I would say get what you want. You want the 7700K, get it. You want the 1700, get it. Will either of those be worse than the other for MOST of the populace? No. That is the real-world advice, and that is what I speak to. As such, you don't see me posting in Intel or Nvidia forums; I may have 2-3 posts max in Nvidia and maybe 1 in Intel, only to offer information, never to bash or dump on the products.


So you were never around for the 8088s and 8086s, then? When AMD was in the same position they are in now, irrelevant? Or when AMD started making clones for Intel, when Intel gave them the opportunity?

Don't mix business and ethics in with consumers, because business is business. AMD set up preorders and locked reviewers down with an NDA on purpose. Don't tell me AMD didn't know about their CCX problems, or about all the other platform issues via their board manufacturers. They did these things on PURPOSE so pre-order purchasers wouldn't know of the issues until after.

Companies are companies; they want our money however they can get it. They don't give a shit; if they can get away with something, they will.

Do you think there will be a lawsuit about the CCX issues? I guarantee you there will be, and this lawsuit will have merit unlike the BD one.
 
You see, here is the difference: I grew up as computers did. Intel was kind of lacking in merit and ethics, as we all know now after the litany of lawsuits over the years. I refuse to ever support such actions or the companies that take them. A PERSONAL choice. I have never advised others based on my own personal decisions. Many a time I have advised the purchase of Intel over the AMD equivalent (in nominal terms, loosely) if they had no preference. Even now I would say get what you want. You want the 7700K, get it. You want the 1700, get it. Will either of those be worse than the other for MOST of the populace? No. That is the real-world advice, and that is what I speak to. As such, you don't see me posting in Intel or Nvidia forums; I may have 2-3 posts max in Nvidia and maybe 1 in Intel, only to offer information, never to bash or dump on the products.
I am not jumping around AMD forums bashing AMD every chance I get either. This particular thread has to do with AMD and gaming, and as that is mostly what I do with my personal PC at home, it piqued my interest: I was looking at possibly upgrading to AMD, but it looks like, from an architectural perspective, the chip is designed poorly for gaming. Now, I did not say it was a bad CPU, but for the purpose of gaming it has issues that aren't likely to get fixed for quite some time. As this is the topic of the thread, I don't see a problem posting here.
 
Oh, so you are OK with people who behave that way? What would you do if someone came to your home and threatened to kill you, man? Just say, oh, there are people out there like him, let it be?

I don't care who that person is or what their viewpoints are; they should be in a psych ward or locked up. Can't be understanding of people like that, they have fuckin mental problems lol.



Now that I agree with, but it still doesn't matter: AMD will not gain market share in this manner. They will get better margins, and that will help. Everything is in Intel's hands; there is nothing AMD can do about it unless Intel keeps going with their stagnant CPU upgrade cycles.


SMT gains right now only show up in rendering and encoding; 2D and 3D work in all other respects is on Intel's side. So if it's rendering, that kinda limits Ryzen, doesn't it, because all the other 3D work done before rendering is better on Intel. That leaves encoding, which streaming is part of, kinda limiting the target audience, don't you think?
Not what I was saying. I believe such actions should warrant harsh disciplinary action, but to act as if this is the only time this has happened to a reviewer is short-sighted and grossly misleading. Besides, in his case he was just deflecting the issue because he has no real response outside of lies.

Also, another issue I have with how you guys are posting is the finality of the moment. In other words, each of you acts as if nothing will get better, no amount of code nor optimizations. Granted, I like to look at it like that, because if you are OK with what you've got, then any advancements are just extra goodness. That is why I generally don't say "wait till..." or "in a month or 12...", because obviously saying so means you aren't content with the current performance. Although in this case, as far as memory goes, I think we do have to take the wait-and-see approach with the motherboard manufacturers. It is quite common for ALL motherboards to get numerous BIOS updates, with the lion's share of them being for RAM speed and stability.
 
Not what I was saying. I believe such actions should warrant harsh disciplinary action, but to act as if this is the only time this has happened to a reviewer is short-sighted and grossly misleading. Besides, in his case he was just deflecting the issue because he has no real response outside of lies.

Also, another issue I have with how you guys are posting is the finality of the moment. In other words, each of you acts as if nothing will get better, no amount of code nor optimizations. Granted, I like to look at it like that, because if you are OK with what you've got, then any advancements are just extra goodness. That is why I generally don't say "wait till..." or "in a month or 12...", because obviously saying so means you aren't content with the current performance. Although in this case, as far as memory goes, I think we do have to take the wait-and-see approach with the motherboard manufacturers. It is quite common for ALL motherboards to get numerous BIOS updates, with the lion's share of them being for RAM speed and stability.

He ain't lying; he is showing his side of the story, and when people give reasons and excuses for AMD, those people use that to harangue review sites.

Hey, I have stated the memory issues and SMT issues will be resolved, but that isn't everything; that is like 10% of the 25% gap. So we're still left with a 15% difference, maybe 10%, which can't be resolved. This doesn't bode well for the lower-end Ryzens either.

At the end of it all, they are still behind. I was hoping to see SMT's advantage over HT, plus more cores, let Ryzen perform like Skylake; that is not going to happen in most cases. That is not good for AMD, period, because even Intel's crappy 10-15% IPC improvements will put them back in their place in under a year!

So the pricing and all the pro-consumer side of things won't really matter; now we have to wait for Zen+.
 
So you were never around for the 8088s and 8086s, then? When AMD was in the same position they are in now, irrelevant? Or when AMD started making clones for Intel, when Intel gave them the opportunity?

Don't mix business and ethics in with consumers, because business is business. AMD set up preorders and locked reviewers down with an NDA on purpose. Don't tell me AMD didn't know about their CCX problems, or about all the other platform issues via their board manufacturers. They did these things on PURPOSE so pre-order purchasers wouldn't know of the issues until after.

Companies are companies; they want our money however they can get it. They don't give a shit; if they can get away with something, they will.

Do you think there will be a lawsuit about the CCX issues? I guarantee you there will be, and this lawsuit will have merit unlike the BD one.
I don't see any merit in this one either. It would have to be based on some inability to do what they said it could. Not least of which, you would have to wait to see what the final outcome of these fixes is. I can't see how one could sue on the basis of engineering, so long as it isn't causing deaths. It would take one steep assumption to even make a case, much less a good one.

And trust me, I am old enough that I was there with the first computers, so I know about AMD's start and their climb to a CPU company.

As far as preorders, they aren't unusual in today's markets. There were a number of people who bought Titan Xps before the NDA lifted on reviews, so sight unseen. Not exactly the same setup, but the same principle. Also, where does this CCX issue really flatten performance so much that it is as catastrophic as you seem to allude? If the assumptions are correct, the design is server-based and was scaled down to desktop, hence the issue, and in all likelihood an unavoidable one at that.
 
I don't see any merit in this one either. It would have to be based on some inability to do what they said it could. Not least of which, you would have to wait to see what the final outcome of these fixes is. I can't see how one could sue on the basis of engineering, so long as it isn't causing deaths. It would take one steep assumption to even make a case, much less a good one.

And trust me, I am old enough that I was there with the first computers, so I know about AMD's start and their climb to a CPU company.

As far as preorders, they aren't unusual in today's markets. There were a number of people who bought Titan Xps before the NDA lifted on reviews, so sight unseen. Not exactly the same setup, but the same principle. Also, where does this CCX issue really flatten performance so much that it is as catastrophic as you seem to allude? If the assumptions are correct, the design is server-based and was scaled down to desktop, hence the issue, and in all likelihood an unavoidable one at that.


It's not unusual to hide a massive problem like the CCX problem, one that is UNFIXABLE?

Are you kidding me?

That is BS, dude. You know what it is: another excuse.

Why don't you try not to rationalize what AMD did, and instead tell us exactly what they did.

If they get sued for this, they will probably lose, because the 8 core chip is not and cannot function like an 8 core chip in games.
 
If they get sued for this, they will probably lose, because the 8 core chip is not and cannot function like an 8 core chip in games.
Eh, there was more merit to the BD lawsuit. The 8-core chip does function like an 8-core chip in games, i.e. faster than a 6- or 4-core chip.



What is funny is that Ryzen has some serious issues in 3DMark's combined test, in spite of having decent graphics and physics results. At this point it looks like the fabric simply can't handle too much stuff that well, leading to all the issues, exacerbated by the Win10 scheduler being non-CCX-aware (the Linux scheduler is).
 
You see, here is the difference: I grew up as computers did. Intel was kind of lacking in merit and ethics, as we all know now after the litany of lawsuits over the years. I refuse to ever support such actions or the companies that take them. A PERSONAL choice. I have never advised others based on my own personal decisions. Many a time I have advised the purchase of Intel over the AMD equivalent (in nominal terms, loosely) if they had no preference. Even now I would say get what you want. You want the 7700K, get it. You want the 1700, get it. Will either of those be worse than the other for MOST of the populace? No. That is the real-world advice, and that is what I speak to. As such, you don't see me posting in Intel or Nvidia forums; I may have 2-3 posts max in Nvidia and maybe 1 in Intel, only to offer information, never to bash or dump on the products.

I am with you on that. I decided with my 2 systems (Work/Game and Browse/Game/Portable) to go Intel, because for me the gap was too wide to make the call to get an AMD setup for either of them. But I grew up as computers became a commodity rather than a business choice. I got a Celeron A 366 @ 550 as my first system at 17 years old. I jumped onto the Thunderbird and have bought AMD primarily since. Part of that was performance. But starting back with the Athlon Slot A release, I saw first hand, and read about, all the things Intel did unethically to avoid dealing with competition. The original Athlon release only had like 3 boards, 2 of them from manufacturers so small they weren't worried about competition. Asus, though, offered the best in the K7M (my dad had that board). But they couldn't sell it branded at first, and it was just a white-box special. Intel announced "troubles with chipset production and shortages" just as the Athlon was coming out. Basically, it was a thinly veiled threat that if board makers built Athlon boards, they would see their allotments decreased. Asus used the time with the unbranded white-box sales to ramp up VIA- and ALi-based boards for Intel CPUs, to put the pressure back on Intel: they were only going to hurt their own chipset sales if they kept it up. Eventually everyone started doing the same. The list just grew from that point on, between threats of undelivered CPUs, massive rebates for loyalty, hell, even refusing to negotiate with Nvidia on the chipset license, knowing that Nvidia, unable to sell Intel chipsets, would be forced to pull out of the chipset business, killing AMD's best chipset partner.

I am not blind to performance differences. I can see where the strengths and weaknesses lie. I also choose to vote with my wallet when I can.
 
Eh, there was more merit to the BD lawsuit. The 8-core chip does function like an 8-core chip in games, i.e. faster than a 6- or 4-core chip.

What is funny is that Ryzen has some serious issues in 3DMark's combined test, in spite of having decent graphics and physics results. At this point it looks like the fabric simply can't handle too much stuff that well, leading to all the issues, exacerbated by the Win10 scheduler being non-CCX-aware (the Linux scheduler is).


Well yeah, I see your point, but still: with BD it was well known what its problems were from the first review right up until it was available. With Ryzen we have up to 1 million people who bought the platform and chips without knowing.......

I don't think it's the fabric in this case; the L3 cache is not part of the fabric, is it? Also, Linux gaming mirrors Windows 10 gaming when it comes to the scheduling portion, just not as severely, which leaves the question of why it is happening on multiple OSes where it shouldn't be.

Back to the L3 and fabric: the fabric is only there to communicate with memory if need be, and this is where the memory frequencies and timings come into play. So, breaking it down to the individual components, the CPU shouldn't be held back by using its L3 cache across the two CCX modules. It just shouldn't be happening.
 
Also, Linux gaming mirrors Windows 10 gaming when it comes to the scheduling portion, just not as severely, which leaves the question of why it is happening on multiple OSes where it shouldn't be.
That's why I am convinced it is not scheduling. Sure, it may account for like 5-10% of it, but that clearly does not cover the distance.
I don't think it's the fabric in this case; the L3 cache is not part of the fabric, is it?
Within one CCX it works just fine. It all falls apart when it accesses either a different CCX (involves fabric), memory (involves fabric) or I/O (involves fabric).
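
That split is easy to see with a toy ping-pong probe: pin two threads, bounce a flag between them, and time the hops. A rough sketch (C11 plus Win32; that logical CPUs 0 and 8 land on different CCXs is my assumption for a stock 1800X with SMT on, so verify the mapping on your own box first):

```c
/* Ping-pong latency probe (sketch). Run once with both CPUs on the same
 * CCX (e.g. 0 and 2) and once across CCXs (e.g. 0 and 8, assumed here);
 * the gap between the two per-hop times is the fabric penalty. */
#include <windows.h>
#include <stdatomic.h>
#include <stdio.h>

#define ROUNDS 1000000

static atomic_int ball = 0;

typedef struct { int cpu; int token; } arg_t;

static DWORD WINAPI player(LPVOID p)
{
    arg_t *a = (arg_t *)p;
    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)1 << a->cpu);
    for (int i = 0; i < ROUNDS; i++) {
        /* Spin until the flag holds our token, then hand it back. */
        while (atomic_load_explicit(&ball, memory_order_acquire) != a->token)
            ;
        atomic_store_explicit(&ball, 1 - a->token, memory_order_release);
    }
    return 0;
}

int main(void)
{
    arg_t a = { 0, 0 }, b = { 8, 1 };   /* try {0,0} and {2,1} for same-CCX */
    LARGE_INTEGER f, t0, t1;
    QueryPerformanceFrequency(&f);

    HANDLE h[2];
    QueryPerformanceCounter(&t0);
    h[0] = CreateThread(NULL, 0, player, &a, 0, NULL);
    h[1] = CreateThread(NULL, 0, player, &b, 0, NULL);
    WaitForMultipleObjects(2, h, TRUE, INFINITE);
    QueryPerformanceCounter(&t1);

    /* 2*ROUNDS one-way hops in total */
    double ns = (t1.QuadPart - t0.QuadPart) * 1e9 / f.QuadPart / (2.0 * ROUNDS);
    printf("~%.0f ns per one-way hop\n", ns);
    return 0;
}
```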
 
If they get sued for this, they will probably lose, because the 8 core chip is not and cannot function like an 8 core chip in games.

Do you really, really, really believe this? That's almost insulting. Why, because there is some frame loss when threads are swapped between CCXs? That's pretty close to idiotic. What you are seeing is what happens when 4 or fewer threads get spread across two CCXs. But what will happen when 5 or more major threads are being processed (you know, when actual 8-core gaming happens)? It will pull right ahead of the 7700. Again, what we are seeing is poorly handled 4-thread-or-less gaming. Even if it doesn't get resolved (maybe it can't), is that really a frame loss (and a minimal one at that), or is it just an avenue of optimization for enthusiasts to pursue themselves? Just because a CPU isn't 100% optimized for every scenario doesn't make it defective. I don't remember Intel getting sued over the Pentium 4, C2Q, and so on, even though the performance drop on those was even more significant.

This is a side effect of the design, not a defect. It's less optimal than people might like. But unless they fudged numbers in commercials or demonstrations, or lied to their stockholders, I don't see how they could get sued, or how they would lose. Though it's obvious that this generation really is the entitlement generation. Me personally, I am looking forward to getting my Taichi, if for no other reason (besides having an AMD chip worth having once again) than that this platform looks like it might be fun to get up and running stable again. I loved, back in the day, actually having to find and implement little tricks to get the best performance. Lately it's been a little too turnkey for me.
 
Do you really, really, really believe this? That's almost insulting. Why, because there is some frame loss when threads are swapped between CCXs? That's pretty close to idiotic. What you are seeing is what happens when 4 or fewer threads get spread across two CCXs. But what will happen when 5 or more major threads are being processed (you know, when actual 8-core gaming happens)? It will pull right ahead of the 7700. Again, what we are seeing is poorly handled 4-thread-or-less gaming. Even if it doesn't get resolved (maybe it can't), is that really a frame loss (and a minimal one at that), or is it just an avenue of optimization for enthusiasts to pursue themselves? Just because a CPU isn't 100% optimized for every scenario doesn't make it defective. I don't remember Intel getting sued over the Pentium 4, C2Q, and so on, even though the performance drop on those was even more significant.

This is a side effect of the design, not a defect. It's less optimal than people might like. But unless they fudged numbers in commercials or demonstrations, or lied to their stockholders, I don't see how they could get sued, or how they would lose. Though it's obvious that this generation really is the entitlement generation. Me personally, I am looking forward to getting my Taichi, if for no other reason (besides having an AMD chip worth having once again) than that this platform looks like it might be fun to get up and running stable again. I loved, back in the day, actually having to find and implement little tricks to get the best performance. Lately it's been a little too turnkey for me.


If a court was able to hear the BD case, which was about architecture and design, then sure, this one will be worse.

This is not an effect of the design, man.

AMD's caching is set up extremely similarly to the way Intel's current chips are set up. Something is not happening correctly if we see a 3-fold increase in L3 cache latencies when sharing data across the CCXs.

Let me ask you this: how does cache latency increase because of this type of data sharing?

Cache is just a storage area, right?

So why does the latency increase?

Because it's kinda obvious, isn't it? There are two possibilities:

1) the same cache is being locked when written to and read from

2) the unit sending the info to cache is locked when doing so

It can't be anything other than these two things to explain the latency increase.

Intel's units don't have this behavior; they can read and write at the same time.
 
All this talk about waiting for ABC for Ryzen. It's really ridiculous. If you buy something, it should be usable right now. Whether the CPU is at 90% load or 40% load makes zero difference. The frame rate is what counts now, and neither I nor anyone else has the ability to force a patch for AMD Ryzen CPUs onto any developer.
What is happening with the Ryzen CPUs is simply a teething issue that's come about from being rushed by the powers (the money) that be. AMD missed Q4 2016 and could not let it slip to Q2 2017. Right now, AMD already has 2nd and 3rd revision CPUs without all the nonsense, and the unfortunate thing is that if you bought this round, you'll likely be stuck with this performance. It is something that AMD could have mitigated on their end without requiring any aid from developers. The dev kits sent out will help, of course, but if you're anyone hoping that an "investment" in Ryzen will pay off in future, you're sadly mistaken.
Both Skylake-X and the newer revision AMD CPUs are already ahead. AMD's execution was faulty here; it had nothing to do with partners and board vendors. As a proof of concept this is fantastic, but it's so much better with the newer CPUs.

Unlucky for you if you pre-ordered or bought up front. I got burned out of the gate with X99; this is why I was prepared to wait a while before owning an AM4.
 
The brute-force option is to treat each CCX as a distinct processor to eliminate cache thrash.


Then you only have a 4-core chip, period. That doesn't solve the issue. And we have seen this when parking cores: the performance drop from losing 4 cores is worse than the CCX issues.
 
The brute-force option is to treat each CCX as a distinct processor to eliminate cache thrash.
You still have to deal with intercommunication between processes spread across it, unless the scheduler is smart enough to only spread independent processes.
 
Gosh... I'm going to say an unpopular thing. But it needs to be said...

No one in this thread, no reviewer, and not even AMD knows what the real performance potential of Zen is. This is all preliminary BS.

In order to really know what the performance is, the following things need to happen:

1. Compiler optimizations for the Zen architecture. That hasn't happened.

2. Applications need to be compiled with those optimizations.

3. New processor drivers.

4. Changes to thread scheduling.

5. Optimization of the cache usage.

6. Video card driver optimizations that take advantage of the new Zen architecture.

If all of that is done, and done properly, maybe in a best-case scenario you could get a 15-20% performance increase, if the developer was really invested in getting the most out of the processor. But that's unlikely, since software bloat is driven by higher-level languages that produce bloated, slow code.

So maybe the best-case scenario is a 10% increase in the next 12 months for software.
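
On points 1 and 2 of that list, the compiler side at least already exists: recent GCC releases ship a Zen target (-march=znver1). A tiny sketch of a per-arch build check (the __znver1__ predefine is my assumption, by analogy with GCC's other -march macros):

```c
/* Build two ways and compare:
 *   gcc -O3 -o app app.c                 (generic build)
 *   gcc -O3 -march=znver1 -o app app.c   (Zen-tuned build)
 */
#include <stdio.h>

int main(void)
{
#if defined(__znver1__)              /* assumed GCC -march macro */
    puts("Zen-tuned build (-march=znver1)");
#else
    puts("generic build");
#endif
    return 0;
}
```

Whether a Zen-tuned build moves game frame rates at all is exactly the open question.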

But what really bothers me about the discussion are a couple of ideas which are pretty incorrect:

1. The idea that frame rates above 60fps are meaningful: they're not. It's a placebo effect. In fact, gaming is smoother when the frame rate does not vary. V-Sync is an absolute godsend for these purposes. I'd not enter a competitive situation without it. The variable-sync technologies are simply hyped technologies which get customers excited about a new (but meaningless) feature. In a sense, the entire gaming "industry" is rooted in ideas pushed out by marketing departments rather than real improvements in image quality or lower latency.

2. Running at a full-out frame rate, where either the processor or the card is pushing 100%, is hard on your hardware. It also might increase latency in a way that lags the game, but you don't notice because the game appears "smooth" under variable sync.

That being said, my perspective is that of a gamer, but also a workstation user and systems engineer. If you are a person who only uses your rig for gaming, I would honestly consider that a waste of money.
 
1. The idea that frame rates above 60fps are meaningful: they're not.
[citation needed]
V-Sync is an absolute godsend for these purposes. I'd not enter a competitive situation without it.
...
The variable-sync technologies are simply hyped technologies which get customers excited about a new (but meaningless) feature.
Let's get it straight: do you know what variable refresh rate is for?
 
If a court was able to hear the BD case, which was about architecture and design, then sure, this one will be worse.

This is not an effect of the design, man.

AMD's caching is set up extremely similarly to the way Intel's current chips are set up. Something is not happening correctly if we see a 3-fold increase in L3 cache latencies when sharing data across the CCXs.

Let me ask you this: how does cache latency increase because of this type of data sharing?

Cache is just a storage area, right?

So why does the latency increase?

Because it's kinda obvious, isn't it? There are two possibilities:

1) the same cache is being locked when written to and read from

2) the unit sending the info to cache is locked when doing so

It can't be anything other than these two things to explain the latency increase.

Intel's units don't have this behavior; they can read and write at the same time.

Or it's really simple, and yes, something they learned from BD. I am not sure you fully understand the design layout of Ryzen, or Zen as an arch.

Like the CMT modules that AMD implemented in BD, AMD designed a scalable arch around the idea of a module that they could scale based on requirements. This allows them to easily implement different silicon with minimal design changes for different markets. This is why I doubt that the R3s will be core-disabled chips. The idea actually goes farther back than BD and owes a lot to their GPU designs around the time of the 4870. What AMD learned is that they can't rely on people coding specifically for their stuff if it isn't up to snuff on legacy code as-is. The amount of effort it would have taken to make the BD arch competitive developer-wise would have been massive, and just that: a lot of recoding and changes in thought processes just to get it up to snuff.

But I digress. So you have these modules. The cores have their own L1 and L2 cache. The L3 cache for these cores is baked into the modules. That means if they add another module, that is another pool of L3. Since it wasn't designed as a single monolithic die, where the cores and the cache are all created together and therefore able to address everything, AMD developed an interconnect to help with the scalability of this arch. So when they lay out the die and fab the chip, the interconnect links these modules together. The problem is that the L3 pools are separate between the two modules, in the case of the R7. Back in the day, configurations like this would basically require the separate L3 pools to hold the same information, but that would hinder the CPU's performance in heavily threaded tasks unless the amount of cache was doubled up. For this reason, and again to make sure these modules could scale up and down without changes, AMD has the caches interconnected, so the data is refreshed into the new module when a thread moves. The communication between cores in this process is currently limited to around 20 GB/s.

Basically, it's not one pool of cache. It is two completely separate pools of cache that carry the same information. So latency is up. But that really isn't even the issue in games. It's the cross-talk between cores on different modules, which have to communicate through the interconnect (Infinity Fabric). That makes the latency and frame drops between the two CCXs a side effect of a design decision that lets AMD build one scalable option that can compete from the low end (and iGPU) all the way up to massive 2S 64C/128T 1U and 2U servers.
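
Taking that ~20 GB/s figure at face value, a quick sanity check shows why this reads as a latency problem rather than a bandwidth one: a single 64-byte cache line crossing the link costs only

$$t_{\text{line}} = \frac{64\ \text{B}}{20\ \text{GB/s}} \approx 3\ \text{ns},$$

which is nothing next to the roughly 3x L3 latency hit being discussed. So it's the round-trip coherence hop, not the raw transfer, that hurts.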
 
Or it's really simple, and yes, something they learned from BD. I am not sure you fully understand the design layout of Ryzen, or Zen as an arch.

Like the CMT modules that AMD implemented in BD, AMD designed a scalable arch around the idea of a module that they could scale based on requirements. This allows them to easily implement different silicon with minimal design changes for different markets. This is why I doubt that the R3s will be core-disabled chips. The idea actually goes farther back than BD and owes a lot to their GPU designs around the time of the 4870. What AMD learned is that they can't rely on people coding specifically for their stuff if it isn't up to snuff on legacy code as-is. The amount of effort it would have taken to make the BD arch competitive developer-wise would have been massive, and just that: a lot of recoding and changes in thought processes just to get it up to snuff.

But I digress. So you have these modules. The cores have their own L1 and L2 cache. The L3 cache for these cores is baked into the modules. That means if they add another module, that is another pool of L3. Since it wasn't designed as a single monolithic die, where the cores and the cache are all created together and therefore able to address everything, AMD developed an interconnect to help with the scalability of this arch. So when they lay out the die and fab the chip, the interconnect links these modules together. The problem is that the L3 pools are separate between the two modules, in the case of the R7. Back in the day, configurations like this would basically require the separate L3 pools to hold the same information, but that would hinder the CPU's performance in heavily threaded tasks unless the amount of cache was doubled up. For this reason, and again to make sure these modules could scale up and down without changes, AMD has the caches interconnected, so the data is refreshed into the new module when a thread moves. The communication between cores in this process is currently limited to around 20 GB/s.

Basically, it's not one pool of cache. It is two completely separate pools of cache that carry the same information. So latency is up. But that really isn't even the issue in games. It's the cross-talk between cores on different modules, which have to communicate through the interconnect (Infinity Fabric). That makes the latency and frame drops between the two CCXs a side effect of a design decision that lets AMD build one scalable option that can compete from the low end (and iGPU) all the way up to massive 2S 64C/128T 1U and 2U servers.


Err, you don't know that Intel uses a mesh for communication too? Ryzen is set up similarly to Intel's current chips, man.

That problem should not exist; it should not be using Infinity Fabric to communicate with the L3 cache.

Intel designed their chips the same way Ryzen is: each core (or pair of cores, for them) has its own L1 and shared L2 and shared L3 cache, not split between CCXs. The only time Infinity Fabric should be used in this case is when the chip needs to communicate with off-chip resources.

With AMD's increased L2 cache sizes, current Intel-optimized programs should have no problem at all where cache is concerned; granted, with the scheduler issues, we won't know until that is solved.
 
The idea that frame rates above 60fps are meaningful: they're not. It's a placebo effect. In fact, gaming is smoother when the frame rate does not vary. V-Sync is an absolute godsend for these purposes. I'd not enter a competitive situation without it. The variable-sync technologies are simply hyped technologies which get customers excited about a new (but meaningless) feature. In a sense, the entire gaming "industry" is rooted in ideas pushed out by marketing departments rather than real improvements in image quality or lower latency.
This is simply not true. For one, there are now (and have existed for years) monitors that refresh higher than 60Hz. I gamed on 120Hz for a while, I have a 144Hz monitor now, and there is a huge improvement to motion clarity at these higher speeds. VSync is good when you are getting solid FPS higher than the refresh rate, but it can introduce latency. VSync can, however, result in judder or stutter when frame rates drop (even briefly) below the refresh rate. VSync off gives the lowest latency and doesn't wait for the VBlank (so it's relatively smooth), but results in screen tearing. Variable refresh (like G-Sync) will update whole frames as soon as they are ready (up to the physical refresh rate), which eliminates tearing and results in an overall smoother experience, even at lower FPS, as you don't get VBlank-related stutter. I have a G-Sync monitor and I can tell you for a fact it is not marketing BS. It's a substantial improvement in the experience. With G-Sync, when I'm getting around 90fps+, the motion is very smooth and worlds better than 60Hz VSync. You can ask anyone here who owns a high-refresh and/or adaptive-sync monitor, and the response is overwhelmingly positive.
 
This is simply not true. For one, there are now (and have existed for years) monitors that refresh higher than 60Hz. I gamed on 120Hz for a while, I have a 144Hz monitor now, and there is a huge improvement to motion clarity at these higher speeds. VSync is good when you are getting solid FPS higher than the refresh rate, but it can introduce latency. VSync can, however, result in judder or stutter when frame rates drop (even briefly) below the refresh rate. VSync off gives the lowest latency and doesn't wait for the VBlank (so it's relatively smooth), but results in screen tearing. Variable refresh (like G-Sync) will update whole frames as soon as they are ready (up to the physical refresh rate), which eliminates tearing and results in an overall smoother experience, even at lower FPS, as you don't get VBlank-related stutter. I have a G-Sync monitor and I can tell you for a fact it is not marketing BS. It's a substantial improvement in the experience. With G-Sync, when I'm getting around 90fps+, the motion is very smooth and worlds better than 60Hz VSync. You can ask anyone here who owns a high-refresh and/or adaptive-sync monitor, and the response is overwhelmingly positive.

Your brain can only perceive changes at 13 to 16 Hz. Very few people can see the flicker of an incandescent bulb at 60Hz.

It's placebo. Above about 20fps you do not get any better as a gamer. You can't see more. It's a garbage marketing gimmick. Unless you are doing near-field VR with lenses 3 inches from your eyes, high refresh rates have no meaning.

In a blind test, no one could tell the difference between a constant 60fps and a 120fps average with a variable-sync system. It's science. Look it up.
 