Why does the Ryzen 7 1800X perform so poorly in games?

No, that was just the last straw. The 3.5 GB vs 4 GB was the issue.


No it wasn't, it still worked as a 4 GB card.

It's just that the latency of the last 0.5 GB was greater, so you couldn't get the full performance of that portion of the bandwidth.

https://topclassactions.com/lawsuit...tx-970-graphics-card-class-action-settlement/

“…(1) operate with a full 4 gigabytes of video random access memory, (2) have 64 render output processors, and (3) have an L2 cache capacity of 2 megabytes, or omitted material facts to the contrary.”

The two other parts of that lawsuit were never part of the original paperwork, because if you remember, the guy who found out about the partitioning of the RAM never knew anything about it, other than that the RAM was going slower.
 
It is difficult to speculate at this point on how a phantom chip will and will not perform. The R5 strikes the best value for money of the entire SKU stack and lets you pocket money towards a better graphics card, which makes most of the difference.
It is not really that difficult. Per AMD, these are the same chips with cores disabled and different frequencies; all you would have to do is disable cores on an 1800X and lock the clock speed.
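For anyone who actually wants to try it, something like this would do it on a Linux box. A minimal sketch, not a tested tool: the sysfs paths are the standard Linux CPU hotplug/cpufreq interface, but which logical CPUs map to which physical cores is machine-specific, so the 6/7/14/15 numbering and the 3.5 GHz cap below are assumptions; check lscpu -e first so you offline whole cores, not half of two cores.

```c
/* Rough sketch: fake an "R5" on an 1800X by offlining cores and capping
 * the clock through sysfs. Run as root. The CPU numbers are assumptions
 * about one particular topology -- verify yours with lscpu -e. */
#include <stdio.h>

static void write_sysfs(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return; }
    fputs(val, f);
    fclose(f);
}

int main(void)
{
    char path[128];
    /* Assumed layout: cores 6,7 with SMT siblings 14,15 -> 6c/12t left. */
    int off[] = { 6, 7, 14, 15 };

    for (int i = 0; i < 4; i++) {
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/online", off[i]);
        write_sysfs(path, "0");
    }

    /* Cap every CPU at 3.5 GHz (cpufreq takes kHz). */
    for (int cpu = 0; cpu < 16; cpu++) {
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_max_freq", cpu);
        write_sysfs(path, "3500000");
    }
    return 0;
}
```

Echoing the same values by hand works just as well; the point is only that a cut-down SKU is easy to approximate.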
 
Just like the 970 memory lawsuit,

There was no merit to that lawsuit on the original grounds it was filed on (it was filed over the partitioning of the RAM into a slow 512 MB segment). It was a 4 GB card and functioned like one too, albeit a portion of the RAM was slower due to, guess what, latency increases lol. But during discovery, the ROP counts were found to be inaccurate, and that is why nV settled, because they knew they couldn't win that part of it.
The marketing team did lie on that one, and it also had fewer ROPs than advertised. So it was technically not frivolous. I think this idea that there will be a lawsuit over Ryzen's performance is a bit far-fetched, mate. And I would know, I got my $60 settlement lol
 
No point in arguing with a know-it-all; you "win", I'm done.

Now can we please close this stupid pointless thread...
 
The marketing team did lie on that one, and it also had fewer ROPs than advertised. So it was technically not frivolous. I think this idea that there will be a lawsuit over Ryzen's performance is a bit far-fetched, mate.


We shall see, and hopefully not, but ambulance-chasing lawyers, there are many of them out there lol.

Yeah, and that was why nV couldn't win no matter what; they were forced to settle.
 
No point in arguing with a know-it-all; you "win", I'm done.

Now can we please close this stupid pointless thread...
There are plenty of other conversations going on here, so it does not need to be closed. If you don't like the conversations, go someplace else.
 
The marketing team did lie on that one, and it also had fewer ROPs than advertised. So it was technically not frivolous. I think this idea that there will be a lawsuit over Ryzen's performance is a bit far-fetched, mate.

There won't be a lawsuit over Ryzen. Let's be honest here, the majority of the people purchasing Ryzen are likely AMD fans, who wouldn't dream of suing AMD. In that respect they're similar to Apple fans. How else can anyone explain reviewers getting death threats over Ryzen reviews reporting lackluster gaming on this chip?
 
There won't be a lawsuit over Ryzen. Let's be honest here, the majority of the people purchasing Ryzen are likely AMD fans, who wouldn't dream of suing AMD. In that respect they're similar to Apple fans. How else can anyone explain reviewers getting death threats over Ryzen reviews reporting lackluster gaming on this chip?


Shit, death threats? It's probably going to be actual retaliation lol.
 
It's not a conversation, it's turned into an internet knowledge pissing contest.
Then let us piss all over ourselves. If you say something, then I say something back, then you reply, and we continue this little dance back and forth, then yes, it is a conversation. Maybe not a productive conversation, but still a conversation.

On topic though, Tim Sweeney just had a GDC interview pop up on PC Gamer, and he seems convinced multi-core gaming is coming this time, mostly due to the desire for realistically simulated humans in games with real physics and bone structures. Even so, I am not yet convinced that Ryzen is the multi-core CPU to get, since it performs worse in games than nearly any Broadwell-E part. Skylake-X is coming this fall, so if you want a multicore gaming chip it might be the one to get, as we all know how great the Skylake architecture is.
 
I just re-read the Guru3D review. I don't see why people are so disappointed. It is slower than the Broadwell-E stuff by 15% max for the most part, and that depends on the particular Broadwell you are comparing it to and on the game. Sometimes the gap is much closer than that.

IMO if you buy an R7 and don't like it, then sell it and get something else. We spend the moneys because hobby. This is part of the fun.
 
There won't be a lawsuit over Ryzen. Let's be honest here, the majority of the people purchasing Ryzen are likely AMD fans, who wouldn't dream of suing AMD. In that respect they're similar to Apple fans. How else can anyone explain reviewers getting death threats over Ryzen reviews reporting lackluster gaming on this chip?

Super cool generalisation there; if you were aiming to punt other people's fanboyism but in turn implicitly painted your own, then you nailed it. What do they say: trying to take the speck out of another's eye when you have a log in your own.
 
People who claim there is no difference between 30/60/144 Hz should be perma-banned. That's the amount of stupid I'm not willing to deal with.
 
Super cool generalisation there; if you were aiming to punt other people's fanboyism but in turn implicitly painted your own, then you nailed it. What do they say: trying to take the speck out of another's eye when you have a log in your own.

Sorry, I wasn't aware you were online. Figured you were benchmarking CB and Handbrake ad nauseam. It seems to be what Ryzen fans tend to do. I mean, why wouldn't they just use GPU acceleration (in which case Ryzen doesn't seem to do any better than Intel's offerings) to help render faster?

I'm kidding. But in all seriousness, if you don't like my generalizations, which are rooted in truth, or they somehow hurt your e-feelings, by all means ignore my posts and continue on your merry way.
 
Sorry, I wasn't aware you were online. Figured you were benchmarking CB and Handbrake ad nauseam. It seems to be what Ryzen fans tend to do. I mean, why wouldn't they just use GPU acceleration (in which case Ryzen doesn't seem to do any better than Intel's offerings) to help render faster?

I'm kidding. But in all seriousness, if you don't like my generalizations, which are rooted in truth, or they somehow hurt your e-feelings, by all means ignore my posts and continue on your merry way.

1) I have Intel and Nvidia setups, much like I had an Athlon 64 and a 1900XT back when they were the best value. So again, generalisation.

2) Rooted in truth? From a neutral's view you exhibit the equal and opposite fanboyism, akin to WCCFTECH, the compulsive need to troll threads like anyone really cares.

So if Ryzen is not as good as Intel's top end, what do you get out of this? Does it matter to you? Does it make your day less tedious? Or is it just trolling and the e-peen to go with it?

There is one 7700K thread, about 2 pages long; go parade with the rest of the Power Rangers over there, marvel at the collective circlejerk, maybe argue about who has the best Intel.
 
A lot of CRTs refresh much faster than an LCD and have zero latency compared to an LCD, so that could very well be why. Also, on a side note, a lot of pro or hardcore CS:GO players play at low resolutions because the larger the pixels are on your screen, the larger the hitbox.

I know, I realize all of that. My point was that people who think they game better because they have a refresh rate above 60 Hz are wrong.
 
I know, I realize all of that. My point was that people who think they game better because they have a refresh rate above 60 Hz are wrong.
Depends. Obviously technology does not grant skill, but it certainly can augment it. If you are already good, a high refresh rate can make it easier to be better, plus there are benefits in overall smoothness and experience.
 
This is exactly how the BD lawsuit started, and the initial suit is a fishing suit; they are looking for ways to find problems. Those are the phases of a civil case.

There is no burden of proof needed up front in a civil case.

Once the case has started,

there is a discovery and fact-finding phase where everyone involved shares information. At that point AMD will have to fully disclose what is going on.

Guys, you watch too much TV; this is not a criminal case, it doesn't work on proof up front, it works completely differently.

Granted, the outcome is up in the air, but it still can be done.

Do you even know what the Bulldozer lawsuit is supposed to cover? Probably not, since you think it is performance. No, the challenge is that they advertised it as an 8-core processor. Now sure, if it had performed well, its CMT setup wouldn't have raised any questions. But the point is that the civil lawsuit is about false advertising, and in the end the courts will end up setting a precedent for what a core is. Legally it has nothing to do with being slower than Intel. That would be like Chevy owners suing GM because the Z28 Camaro isn't as fast as the Mustang GT350R. What they are getting sued for is closer to the Tesla lawsuit over quoting an HP rating based on each motor's max output added together, rather than the max HP at max draw to both motors at the same time (which was significantly less).

I am still dumbfounded by the idea that IF, while a bottleneck, was such a terrible decision worthy of your scorn, when the CPU, even with this supposedly inept implementation, does so well in so many tasks (this includes gaming). It might prevent it taking the top spot here and there, and it generally drags it down a bit more than people would like in games. But it is not an anchor around the CPU; it seems more like an ankle weight. Also, I have stated why AMD developed it and why it was implemented the way it was. It's a seamless interconnect that will ease the mating of the GPU to the CCX in the APUs, allows easy scaling for server CPUs, and more importantly it is the actual interconnect that will be used to mate CUs on AMD's next GPU. The implementation on the CPUs is probably a three-way compromise between "enough performance" on desktop, how it could scale up to the GPU requirements and still be compatible, and most importantly transistor count, which would make a bigger difference on Naples and Vega.
 
Do you even know what the Bulldozer lawsuit is supposed to cover? Probably not, since you think it is performance. No, the challenge is that they advertised it as an 8-core processor. Now sure, if it had performed well, its CMT setup wouldn't have raised any questions. But the point is that the civil lawsuit is about false advertising, and in the end the courts will end up setting a precedent for what a core is. Legally it has nothing to do with being slower than Intel. That would be like Chevy owners suing GM because the Z28 Camaro isn't as fast as the Mustang GT350R. What they are getting sued for is closer to the Tesla lawsuit over quoting an HP rating based on each motor's max output added together, rather than the max HP at max draw to both motors at the same time (which was significantly less).

I am still dumbfounded by the idea that IF, while a bottleneck, was such a terrible decision worthy of your scorn, when the CPU, even with this supposedly inept implementation, does so well in so many tasks (this includes gaming). It might prevent it taking the top spot here and there, and it generally drags it down a bit more than people would like in games. But it is not an anchor around the CPU; it seems more like an ankle weight. Also, I have stated why AMD developed it and why it was implemented the way it was. It's a seamless interconnect that will ease the mating of the GPU to the CCX in the APUs, allows easy scaling for server CPUs, and more importantly it is the actual interconnect that will be used to mate CUs on AMD's next GPU. The implementation on the CPUs is probably a three-way compromise between "enough performance" on desktop, how it could scale up to the GPU requirements and still be compatible, and most importantly transistor count, which would make a bigger difference on Naples and Vega.


BD doesn't perform like an 8-core CPU, and that is what its premise is, but that won't work. Everyone who knew about the architecture knew it couldn't function like an 8-core chip because of the shared FPU; it was well known since the first review of it.

No, you are saying AMD's engineers are inept, because they would have known that using the fabric for L3 communication would be a problem!

Yes, by your own view AMD's engineers suck. Sorry, but there are no two ways around it.

Anyone on this forum knows that 22 GB/s is not going to cover cache traffic AT ALL.

I just feel stupid myself for having a discussion with someone who would even try to use a crappy excuse like that.
 
PCPer has done an extensive vid that covers all their earlier testing and the behaviour of latency for threads local to a CCX and across CCXs. Worth watching, as they cover it well, including why it can impact games more than applications/benchmark tools; IMO it is well done and well explained.
Only 20 mins in myself, but they do explain possible partial workarounds (some simple and some not) in the latter half, after the 20-minute mark.
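If you want to reproduce the core of what they measured, a ping-pong test is enough: two threads pinned to chosen CPUs bounce a flag through a shared cache line, and the round trip gets much longer when the pair straddles the CCX boundary. A rough sketch along those lines, not PCPer's actual tool; which logical CPUs sit on which CCX is machine-specific, so the 0/4 pairing below is just an assumption for illustration.

```c
/* Core-to-core latency ping-pong. Try a same-CCX pair vs a cross-CCX
 * pair (the 0/4 choice is an assumed topology). Build:
 *   gcc -O2 -pthread pingpong.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERS 1000000
static atomic_int flag = 0;

static void pin(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);
}

static void *pong(void *arg)
{
    pin(*(int *)arg);
    for (int i = 0; i < ITERS; i++) {
        while (atomic_load_explicit(&flag, memory_order_acquire) != 1) ;
        atomic_store_explicit(&flag, 0, memory_order_release);
    }
    return NULL;
}

int main(void)
{
    int a = 0, b = 4;   /* assumed: 0/1 same CCX, 0/4 across CCXs */
    pthread_t t;
    struct timespec t0, t1;

    pthread_create(&t, NULL, pong, &b);
    pin(a);
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++) {
        atomic_store_explicit(&flag, 1, memory_order_release);
        while (atomic_load_explicit(&flag, memory_order_acquire) != 0) ;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(t, NULL);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("average round trip: %.0f ns\n", ns / ITERS);
    return 0;
}
```

You would expect a noticeably higher round trip on a cross-CCX pair than a local one; that delta is essentially the whole story of the vid.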

 
You will get on topic, and address the topic at hand. Addressing the poster will get you banned.
 
PCPer has done an extensive vid that covers all their earlier testing and the behaviour of latency for threads local to a CCX and across CCXs. Worth watching, as they cover it well, including why it can impact games more than applications/benchmark tools; IMO it is well done and well explained.
Only 20 mins in myself, but they do explain possible partial workarounds (some simple and some not) in the latter half, after the 20-minute mark.


AMD also just released a statement saying the scheduler in Win10 is fine https://community.amd.com/community/gaming/blog/2017/03/13/amd-ryzen-community-update?sf62107357=1
My takeaway is that future games will probably be better optimized but don't expect miraculous improvements for already released games.
 
AMD also just released a statement saying the scheduler in Win10 is fine https://community.amd.com/community/gaming/blog/2017/03/13/amd-ryzen-community-update?sf62107357=1
My takeaway is that future games will probably be better optimized but don't expect miraculous improvements for already released games.

Yeah, no surprise.
I feel for PCPer and Allyn, as they took a lot of stick for their outside-the-box testing and analysis and for the conclusion that the issue is not technically the scheduler.
Read some pretty nasty attacks on them on some sites and on their own.
However, the scheduler may still be able to be improved to be a bit more dynamic (not simple) to help out the CCX design.
One primary point Allyn and Ryan make: this should have been discussed 6 months ago between AMD and Microsoft, and while PCPer will not assign blame, as they do not know what happened between AMD and Microsoft, one of them has failed in this regard.
Agree with them there.
Cheers
 
AMD also just released a statement saying the scheduler in Win10 is fine https://community.amd.com/community/gaming/blog/2017/03/13/amd-ryzen-community-update?sf62107357=1
My takeaway is that future games will probably be better optimized but don't expect miraculous improvements for already released games.

That would have been my probable takeaway. There is certainly a performance penalty when spanning CCXs, and maybe a tailored NUMA scheduler would have helped. But really, there is a chance something like that could have adverse effects on other application and job types, whereas this performance penalty was already accounted for in development. It'll probably be on the enthusiasts to figure out what tools and solutions they want to use to maximize performance in whatever application type they are using.
 
That would have been my probable takeaway. There is certainly a performance penalty when spanning CCXs, and maybe a tailored NUMA scheduler would have helped. But really, there is a chance something like that could have adverse effects on other application and job types, whereas this performance penalty was already accounted for in development. It'll probably be on the enthusiasts to figure out what tools and solutions they want to use to maximize performance in whatever application type they are using.
Worth watching the PCPer vid; they explain that NUMA may not actually work as intended because the design of the Ryzen core/CCX/cache/DRAM is not right for that type of approach. Not only that, but it would also require a lot of coding by developers, not just for games but also for office/rendering/etc. apps - even some well-known apps they use for PCPer office-related work (rendering and other stuff) do not support NUMA, as they have noticed with their 2S Xeon server.
They raise it in the last part of the vid, after the 20-minute mark.
 
PCPer has done an extensive vid that covers all their earlier testing and the behaviour of latency for threads local to a CCX and across CCXs. Worth watching, as they cover it well, including why it can impact games more than applications/benchmark tools; IMO it is well done and well explained.
Only 20 mins in myself, but they do explain possible partial workarounds (some simple and some not) in the latter half, after the 20-minute mark.

Now when did I say this?

https://hardforum.com/threads/leaked-amd-ryzen-benchmarks.1920876/page-40#post-1042854841

Wow that was March 3rd, right after I saw enough reviews to see what was going on.
 
Worth watching the PCPer vid; they explain that NUMA may not actually work as intended because the design of the Ryzen core/CCX/cache/DRAM is not right for that type of approach. Not only that, but it would also require a lot of coding by developers, not just for games but also for office/rendering/etc. apps - some apps they use for PCPer work do not support NUMA, as they have noticed with their 2S Xeon.
They raise it in the last part of the vid, after the 20-minute mark.


It won't work; NUMA has to be programmed for, and games are not NUMA-aware. Damn, most applications are not NUMA-aware.
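For anyone wondering what "NUMA-aware" actually means in code: the application has to explicitly ask for memory on a given node and keep its threads near that memory. A minimal sketch using libnuma on Linux (link with -lnuma); ordinary apps just call malloc and let threads land wherever the scheduler puts them, which is exactly the point.

```c
/* Minimal illustration of NUMA-aware allocation with libnuma.
 * Build: gcc numa_demo.c -lnuma */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }
    int node = 0;

    /* Explicitly allocate 64 MB on node 0 and run on that node's CPUs,
     * so every access to buf stays node-local. */
    void *buf = numa_alloc_onnode(64UL << 20, node);
    numa_run_on_node(node);

    /* ... do the actual work against buf here ... */

    numa_free(buf, 64UL << 20);
    return 0;
}
```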
 
It won't work; NUMA has to be programmed for, and games are not NUMA-aware. Damn, most applications are not NUMA-aware.
I know, that is what I am saying :)
Some of the well-known large apps they use for their own internal work at PCPer are not even NUMA-aware (they use a 2S Xeon for all their office/studio work), let alone games, and on top of that the architecture is not right for NUMA :)
 
I know, that is what I am saying :)
Some of the well-known large apps they use for their own internal work at PCPer are not even NUMA-aware, let alone games, and on top of that the architecture is not right for NUMA :)

Shit, Ryzen benchmarks look exactly like when I try to play games on my dual-Xeon workstation lol. It was pretty obvious to me what was going on!
 
In terms of the scheduling question and other questions, AMD has responded on that:

https://community.amd.com/community/gaming/blog/2017/03/13/amd-ryzen-community-update?sf62107357=1

Thread Scheduling
We have investigated reports alleging incorrect thread scheduling on the AMD Ryzen™ processor. Based on our findings, AMD believes that the Windows® 10 thread scheduler is operating properly for “Zen,” and we do not presently believe there is an issue with the scheduler adversely utilizing the logical and physical configurations of the architecture.
 
...
My takeaway is that future games will probably be better optimized but don't expect miraculous improvements for already released games.

Just coming back to this: oh man, it could be a right can of worms going forward for game devs, because of the nature of DX12 (highly threaded behaviour) and because devs not only need to optimise for AMD/Nvidia GPUs but now also need to optimise high-thread-count next-gen games for AMD and Intel, as the two will require different threading approaches to get the most out of both.
I wonder if AMD has just impacted the PC gaming market for 6-core and 8-core CPUs going forward, with devs either just coding for 4 cores + SMT to keep things simple for a while longer, or just ignoring the difference in thread structure now required by the CCX.
That last point is not me being critical of AMD, but I am wondering if it will have repercussions beyond Ryzen in devs' approach to games and multi-threading, potentially another level of complexity and cost in game development; a rough sketch of what CCX-aware threading could look like is below.
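Very roughly, "coding for the CCX" would mean something like pinning worker pools so that threads sharing data also share an L3, instead of letting the OS float them across the fabric. A sketch under the assumption that logical CPUs 0-3 sit on CCX 0 and 4-7 on CCX 1; real topologies vary, and SMT siblings complicate the numbering, so treat the layout as hypothetical.

```c
/* Two worker pools, one pinned per CCX, so threads that share data also
 * share an L3 (assumed layout: CPUs 0-3 on CCX 0, 4-7 on CCX 1).
 * Build: gcc -O2 -pthread ccx_pools.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to a contiguous range of logical CPUs. */
static void pin_range(int first, int last)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int c = first; c <= last; c++)
        CPU_SET(c, &set);
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);
}

static void *worker(void *arg)
{
    int ccx = *(int *)arg;
    pin_range(ccx * 4, ccx * 4 + 3);   /* stay inside one CCX */
    /* ... run only jobs whose data belongs to this CCX's pool ... */
    return NULL;
}

int main(void)
{
    pthread_t pool[8];
    int ccx_of[8];

    for (int i = 0; i < 8; i++) {
        ccx_of[i] = i / 4;   /* first 4 workers on CCX 0, rest on CCX 1 */
        pthread_create(&pool[i], NULL, worker, &ccx_of[i]);
    }
    for (int i = 0; i < 8; i++)
        pthread_join(pool[i], NULL);
    return 0;
}
```

The hard part for devs isn't the pinning, it's partitioning the game's data so each pool rarely needs the other CCX's cache, which is the extra complexity and cost I mean.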

It would be better if AMD can get Microsoft to redesign the scheduler to help as much as possible (it could be more 'intelligent' in how it weights assigned threads, though to emphasise: it was fine before Ryzen, and such an approach would not really resolve the issue fully), but that will require extensive development and testing/QA with a lot of CPUs, and discussions with not just AMD but Intel.
Sounds like AMD did not talk to Microsoft about its needs around the CCX.
Cheers
 
In terms of the CCX issue, it just raises more questions about the lower parts. How are they binned?

Is a 6-core 3+3 or 4+2? What about a quad: can it end up as 4+0, 2+2 or 3+1?
 
In terms of the CCX issue, it just raises more questions about the lower parts. How are they binned?

Is a 6-core 3+3 or 4+2? What about a quad: can it end up as 4+0, 2+2 or 3+1?


If they are binned like that, it's going to be a disaster. At least the APU versions coming out later this year won't have this problem.
 
At this point I am positive that the scheduler and the CCX have only a minor impact.

The real problem lies not with them, but with the uncore in general.
 
If MS does anything about this issue at all, it will be because of Naples, if they even need this done for that part.

It's good enough on desktop but likely not good enough on servers, is my guess. If we get anything out of a change, it will be because of that trickle-down. But I doubt it will add up to much with only two clusters on Ryzen.
 
Just coming back to this: oh man, it could be a right can of worms going forward for game devs, because of the nature of DX12 (highly threaded behaviour) and because devs not only need to optimise for AMD/Nvidia GPUs but now also need to optimise high-thread-count next-gen games for AMD and Intel, as the two will require different threading approaches to get the most out of both.
I wonder if AMD has just impacted the PC gaming market for 6-core and 8-core CPUs going forward, with devs either just coding for 4 cores + SMT to keep things simple for a while longer, or just ignoring the difference in thread structure now required by the CCX.
That last point is not me being critical of AMD, but I am wondering if it will have repercussions beyond Ryzen in devs' approach to games and multi-threading, potentially another level of complexity and cost in game development.

It would be better if AMD can get Microsoft to redesign the scheduler to help as much as possible (it could be more 'intelligent' in how it weights assigned threads, though to emphasise: it was fine before Ryzen), but that will require extensive development and testing/QA with a lot of CPUs, and discussions with not just AMD but Intel.
Sounds like AMD did not talk to Microsoft about its needs around the CCX.
Cheers
I think this comes down to how much money and assistance AMD throws at developers. If AMD is willing to hold devs' hands over the next few years to ensure performance parity with Intel, then we could see this work out in the end, but if AMD is not willing or simply can't afford to do this, it could be an issue.
 