Why does the Ryzen 7 1800X perform so poorly in games?

If MS does anything about this issue at all, it will be because of Naples, and only if they even need it done for that part.

It's good enough on desktop, but likely not good enough on server, is my guess. If we get anything out of a change, it will be because of that trickle-down. But I doubt it will add up to much with only two clusters on Ryzen.


The only way MS can do something about this is to make a NUMA-like system for AMD's CPUs. I highly doubt they will do something like this, because it would increase the complexity of programming for it to an extreme level.

NUMA programming is not easy because workloads have to be large enough to saturate each NUMA node. Games in particular are not a good fit for this, as distributing the main threads is quite complex given all the interdependence between them.
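To make the point concrete, here is a minimal sketch of how a program could pin itself to a single CCX. Everything here is an assumption for illustration: a Linux box (`os.sched_setaffinity` is Linux-only), Zen 1's two 4-core CCXes, and the enumeration where SMT siblings sit at logical CPU offset 8; real topologies vary, so you'd check `/sys/devices/system/cpu/cpu*/topology` before relying on it.

```python
import os

CCX_SIZE = 4        # physical cores per CCX on Zen 1 (assumption)
TOTAL_PHYS = 8      # 1800X has 8 physical cores (assumption)

def ccx_cores(ccx_index, smt=True):
    """Logical CPU ids belonging to one CCX, assuming logical CPUs
    0..7 are physical cores and 8..15 are their SMT siblings."""
    phys = range(ccx_index * CCX_SIZE, (ccx_index + 1) * CCX_SIZE)
    cpus = set(phys)
    if smt:
        cpus |= {c + TOTAL_PHYS for c in phys}
    return cpus

def pin_to_ccx(ccx_index):
    """Pin the calling process to one CCX (Linux-only syscall)."""
    os.sched_setaffinity(0, ccx_cores(ccx_index))
```

Pinning a game's main threads this way avoids cross-CCX hops, but the moment the workload needs more than one CCX's worth of cores the benefit evaporates, which is exactly the saturation problem described above.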
 
There is no change for Windows due to Ryzen/Naples. That's certain. AMD also says there is no issue on the software side. As in, it's a hardware issue.
 
I think this comes down to how much money and assistance AMD throws at developers. If AMD is willing to hold devs' hands over the next few years to ensure performance parity with Intel, then we could see this work out in the end; but if AMD is not willing, or simply can't afford to do this, it could be an issue.


That is what it's going to come down to. Developers have to ensure their code sets up each thread with the right core affinities. And PCPer is correct: DX12, and LLAPIs in general, are not a good mix for that.
 
Worth watching the PCPer vid. They explain that NUMA may not actually work as intended because the design of Ryzen's core/CCX/cache/DRAM is not right for that type of approach. Not only that, it would also require a lot of coding by developers, not just in gaming but also in office/rendering/etc. apps; even some well-known apps PCPer uses for office-related work (rendering and other stuff) do not support NUMA, as they have noticed with their 2S Xeon server.
They raise it in the last part of the vid, after the 20-minute mark.

Yeah, I read their TL;DR, but not the full article or the video. I am watching it now, actually. Again, not surprising: you don't want to lock an application into a CCX if it needs more threads, and it would take a lot more thinking in a scheduler than you want. I think if the reports are right and Scorpio will basically have an R7 and a Vega GPU, we will see more games tailored to Ryzen on the desktop. That should help. I am still of the mindset that I haven't really seen anything that impactful outside 144 Hz 1080p gaming.
 
Mr Sweeney's perspective http://www.pcgamer.com/tim-sweeney-...gital-humans-will-push-pc-technology-forward/

PC Gamer: With your insight, where do you see CPU utilization for games right now, and where might it be going in the next few years? Over the past 5-7 years games have been very slow to demand more from CPUs. People were still on dual-core CPUs that, if you weren't playing Battlefield or something, were probably still doing okay. The i5-2500K is still giving pretty solid performance in most games. What do you see changing there, if anything?

Tim Sweeney: It's improving quickly. You know, engines took a long time to become scalable to an arbitrary number of cores. The first major step we made in multithreading was adding a separate rendering thread, so dual-core CPUs could get full advantage of gameplay and rendering work occurring concurrently. The challenge now is to scale up to a lot more cores. There are going to be eight-core consumer CPUs that are incredibly high performance and high capability.

We've been, over the last three years, building Unreal Engine systems that are more and more scalable to that: for all parts of the engine, all the different tasks and rendering, plus loading and other parts of the process, to scale to multiple CPUs. So it's getting better and better, and I think you're going to measure a bigger and bigger difference in overall performance between a 2-core, a 4-core and an 8-core CPU.
 
Yeah, I read their TL;DR, but not the full article or the video. I am watching it now, actually. Again, not surprising: you don't want to lock an application into a CCX if it needs more threads, and it would take a lot more thinking in a scheduler than you want. I think if the reports are right and Scorpio will basically have an R7 and a Vega GPU, we will see more games tailored to Ryzen on the desktop. That should help. I am still of the mindset that I haven't really seen anything that impactful outside 144 Hz 1080p gaming.
That is what it's going to come down to. Developers have to ensure their code sets up each thread with the right core affinities. And PCPer is correct: DX12, and LLAPIs in general, are not a good mix for that.
I think we will move back to better abstractions; devs never wanted LLAPIs to begin with.
 
I think we will move back to better abstractions; devs never wanted LLAPIs to begin with.


Well, devs did want the benefits of LLAPIs, especially the AAA development teams, because there are many things that can be done with them performance-wise that just can't be done with legacy APIs. But indie devs will just have to forget them for now. It takes time to understand hardware and what programming techniques suit different hardware types. A team of 10 programmers can sit down and figure things out pretty quickly; for a team of 2 or 3 programmers with a hard deadline to keep their shop going, that is tough.
 
Well, devs did want the benefits of LLAPIs, especially the AAA development teams, because there are many things that can be done with them performance-wise that just can't be done with legacy APIs. But indie devs will just have to forget them for now. It takes time to understand hardware and what programming techniques suit different hardware types. A team of 10 programmers can sit down and figure things out pretty quickly; for a team of 2 or 3 programmers with a hard deadline to keep their shop going, that is tough.
AAA games have teams into the 100s. This affects even smaller devs who are not indie: anyone who can't afford a massive team like Activision or EA. That being said, they can use engines optimized for the newer APIs, like Unreal 4; Sweeney explicitly states that, but he kind of has to, lol.
 
AAA games have teams into the 100s. This affects even smaller devs who are not indie: anyone who can't afford a massive team like Activision or EA. That being said, they can use engines optimized for the newer APIs, like Unreal 4; Sweeney explicitly states that, but he kind of has to, lol.


True, I was just throwing out numbers in general :). Funny thing is, Iendra stated that a while back, maybe about a year ago: middleware engines will become the API of the future, lol.
 
The only way MS can do something about this is to make a NUMA-like system for AMD's CPUs. I highly doubt they will do something like this, because it would increase the complexity of programming for it to an extreme level.

NUMA programming is not easy because workloads have to be large enough to saturate each NUMA node. Games in particular are not a good fit for this, as distributing the main threads is quite complex given all the interdependence between them.

Any resemblance between a possible scheduler change and NUMA awareness is superficial. It'll just be some quick math that lets them figure out whether parking one cluster and loading the other is better than distributing across the fabric. Generally it won't be better, and any gains will be minimal. It's actually not worth working on, imo.
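That "quick math" could look something like this toy model. The latency numbers here are made up for illustration, not measured, and the spill policy (fill CCX0 first, overflow to CCX1) is an assumption:

```python
# Illustrative latencies only; real figures depend on clocks and fabric speed.
INTRA_CCX_NS = 40.0    # assumed core-to-core latency within a CCX
CROSS_CCX_NS = 140.0   # assumed latency across the data fabric

def park_one_ccx(n_threads, ccx_size=4):
    """Park the second cluster whenever the runnable threads fit on one."""
    return n_threads <= ccx_size

def mean_pair_latency(n_threads, ccx_size=4):
    """Average latency between a random pair of communicating threads,
    when CCX0 is filled first and the rest spill onto CCX1."""
    if n_threads <= 1:
        return 0.0
    if park_one_ccx(n_threads, ccx_size):
        return INTRA_CCX_NS
    a = ccx_size                       # threads on the first CCX
    b = n_threads - a                  # threads spilled to the second
    pairs = n_threads * (n_threads - 1) / 2
    cross = a * b                      # pairs that straddle the fabric
    return ((pairs - cross) * INTRA_CCX_NS + cross * CROSS_CCX_NS) / pairs
```

Under these made-up numbers, a 4-thread game that fits on one CCX sees 40 ns per hop while 8 busy threads average roughly 97 ns, which is why the gains from parking exist but stay minimal once an application actually scales past one cluster.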
 
In terms of the CCX issue, it just raises more questions about the lower parts. How are they binned?

Is a 6-core 3+3 or 4+2? What about a quad: can it end up as 4+0, 2+2 or 3+1?

I'm curious about that myself. If the CCX issue is why game performance is meh (and to be fair, that's just speculation at this point), it seems like the issue would only be exacerbated in quads. And that would be an awful, awful position to be in. The 8 cores can get away with so-so gaming because they're still an excellent option for productivity. The 6 cores can maybe get away with that. The 4 cores need to be good at gaming.
 
Yeah, no surprise.
I feel for PCPer and Allyn, as they took a lot of stick for their outside-the-box testing and analysis, with the conclusion that the issue is not technically the scheduler.
Read some pretty nasty attacks on them on some sites and on their own.
However, the scheduler may be able to be improved to be a bit more dynamic (not simple) to help out the CCX design.
One primary point Allyn and Ryan make: this should have been discussed 6 months ago between AMD and Microsoft, and while PCPer will not assign blame, as they do not know what happened between AMD and Microsoft, one of them has failed in this regard.
Agree with them there.
Cheers

Now that AMD officially agrees the scheduler is not the problem, PCPer has had some fun ridiculing the people that attacked them.

https://webcache.googleusercontent....Defame-Own-Products+&cd=1&hl=en&ct=clnk&gl=ca

Check the comment section. It is hilarious: the same people that attacked them in the past still insist that there is a problem with the scheduler, that it is "Microsoft's fault", and that Microsoft is already working on a patch to fix RyZen gaming! :eek:
 
Just like people were attacking us here.

 
In terms of the CCX issue, it just raises more questions about the lower parts. How are they binned?

Is a 6-core 3+3 or 4+2? What about a quad: can it end up as 4+0, 2+2 or 3+1?

The info I have for the commercial chips is 3+3 for the six-core, and 4+0 and 2+2 for the quad-core, because valid configurations are either symmetric (same number of active cores in each CCX) or have one entire CCX disabled.
 
Now that AMD officially agrees the scheduler is not the problem, PCPer has had some fun ridiculing the people that attacked them

https://webcache.googleusercontent....Defame-Own-Products+&cd=1&hl=en&ct=clnk&gl=ca

Check the comment section. It is hilarious: the same people that attacked them in the past still insist that there is a problem with the scheduler, that it is "Microsoft's fault", and that Microsoft is already working on a patch to fix RyZen gaming! :eek:


PCPer, Gamers Nexus, and everyone that got flamed about this should make a list of user names and IPs and post them so everyone can see; and any damn person that keeps this going on different accounts, different email addresses and whatnot, everyone will know who the true trolls are.

Boo hoo, so we talked about something that made AMD look bad. Well, tough luck guys, AMD f'ed up.
 
Now that AMD officially agrees the scheduler is not the problem, PCPer has had some fun ridiculing the people that attacked them

https://webcache.googleusercontent....Defame-Own-Products+&cd=1&hl=en&ct=clnk&gl=ca

Check the comment section. It is hilarious: the same people that attacked them in the past still insist that there is a problem with the scheduler, that it is "Microsoft's fault", and that Microsoft is already working on a patch to fix RyZen gaming! :eek:
I can understand why they took that post down, as it certainly looked mean-spirited, but they should have left it up.
 
PCPer, Gamers Nexus, and everyone that got flamed about this should make a list of user names and IPs and post them so everyone can see; and any damn person that keeps this going on different accounts, different email addresses and whatnot, everyone will know who the true trolls are.

Boo hoo, so we talked about something that made AMD look bad. Well, tough luck guys, AMD f'ed up.

I don't think AMD made a mistake with the architecture but once they had it out in the world they certainly learned what to improve. I think people going on about how awful it is, when it really isn't awful have made a mistake. My 8320 is awful. My Pentium D was truly awful. These 1700's aren't awful.

When the IPC numbers hit and we knew the max frequency, there was never any illusion that AMD was taking the gaming title, and that was before launch. Folks fooled themselves. That's what happens when the brakes come off a fandom train. It derails.

Anyone who's ever seen a new arch launch from anywhere can see the usual launch issues. No stock, sucky motherboards, BIOS issues with RAM, chipsets that kill drives and so on blah blah blah. They all do it. It's like buying a hot to the market phone and finding out there's a one in a zillion chance it will blow a hole in your leg at some point.

It's a CPU, it offers x features for x dollars, buy it or don't. This isn't a moon flight we're purchasing. Folks are acting like AMD shot their dog and then ate their cat while they were forced to watch.
 
I don't think AMD made a mistake with the architecture but once they had it out in the world they certainly learned what to improve. I think people going on about how awful it is, when it really isn't awful have made a mistake. My 8320 is awful. My Pentium D was truly awful. These 1700's aren't awful.

When the IPC numbers hit and we knew the max frequency, there was never any illusion that AMD was taking the gaming title, and that was before launch. Folks fooled themselves. That's what happens when the brakes come off a fandom train. It derails.

Anyone who's ever seen a new arch launch from anywhere can see the usual launch issues. No stock, sucky motherboards, BIOS issues with RAM, chipsets that kill drives and so on blah blah blah. They all do it. It's like buying a hot to the market phone and finding out there's a one in a zillion chance it will blow a hole in your leg at some point.

It's a CPU, it offers x features for x dollars, buy it or don't. This isn't a moon flight we're purchasing. Folks are acting like AMD shot their dog and then ate their cat while they were forced to watch.


While I wouldn't classify Ryzen as awful like you stated, I do think something went wrong: the latency of the CCX communication is way too high to have just happened. AMD would have known this in the design phase, well before even testing on software. We aren't talking about 5% or 10% or even 25%; we are talking about 2.5x the latency. Just running emulations, you would see that kind of latency difference.
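For anyone who wants to see thread-handoff cost first-hand, here is a rough sketch: two threads bouncing a pair of events. Note the caveats: this measures Python/OS handoff overhead, not raw cache-line latency, so treat the numbers as relative only; on Linux you could pin the two threads to same-CCX versus cross-CCX cores (e.g. with `os.sched_setaffinity` inside each thread) to expose the gap being discussed.

```python
import threading
import time

def handoff_latency(iters=20000):
    """Seconds per one-way thread handoff, measured by ping-ponging
    a pair of events between the main thread and a responder thread."""
    ping, pong = threading.Event(), threading.Event()

    def responder():
        for _ in range(iters):
            ping.wait()     # wait for the main thread's signal
            ping.clear()
            pong.set()      # answer back

    t = threading.Thread(target=responder)
    t.start()
    start = time.perf_counter()
    for _ in range(iters):
        ping.set()
        pong.wait()
        pong.clear()
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / (2 * iters)   # two handoffs per round trip
```

Each event is only ever cleared by its waiter and set by the other thread, so the ping-pong cannot lose a wakeup; the ratio between the pinned-same-CCX and pinned-cross-CCX runs is the interesting number, not the absolute value.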
 
While I wouldn't classify Ryzen as awful like you stated, I do think something went wrong: the latency of the CCX communication is way too high to have just happened. AMD would have known this in the design phase, well before even testing on software. We aren't talking about 5% or 10% or even 25%; we are talking about 2.5x the latency. Just running emulations, you would see that kind of latency difference.

I actually think they knew exactly what the limitations of the CCX were and went ahead anyhow. I think they figured the primary 4-core market wouldn't be affected by it, and the high-core-count market wasn't gamers for the most part. I'm pretty sure the degree of hatred caught them off guard, but I don't think they entered into this blind, if for no other reason than they didn't talk much about gaming and went carefully to full thread loads in their demos.

A 4C/8T APU or Ryzen at 4 GHz (via OC if possible, no idea if it is) isn't going to be a terrible chip for hobbyists. By the time they get here, the early adopters will have beat the hell out of the BIOS glitches. Motherboard makers might even have several revs out by then.

The competition is good. It sure beats thinking ...omg all I can afford is a sucky FX 61xx or a neutered Pentium, though of course those choices won't be here until the 5 and 3 series are.

I know I wouldn't mind using a 4C/8T Ryzen as my daily driver personally but I'm not the gaming market.
 
I actually think they knew exactly what the limitations of the CCX were and went ahead anyhow. I think they figured the primary 4-core market wouldn't be affected by it, and the high-core-count market wasn't gamers for the most part. I'm pretty sure the degree of hatred caught them off guard, but I don't think they entered into this blind, if for no other reason than they didn't talk much about gaming and went carefully to full thread loads in their demos.

A 4C/8T APU or Ryzen at 4 GHz (via OC if possible, no idea if it is) isn't going to be a terrible chip for hobbyists. By the time they get here, the early adopters will have beat the hell out of the BIOS glitches. Motherboard makers might even have several revs out by then.

The competition is good. It sure beats thinking ...omg all I can afford is a sucky FX 61xx or a neutered Pentium, though of course those choices won't be here until the 5 and 3 series are.

I know I wouldn't mind using a 4C/8T Ryzen as my daily driver personally but I'm not the gaming market.


Well, you can have your opinion, but too many things point to a problem that came up later on. If they knew about it, they would have talked to MS beforehand to see if they could minimize it somehow, instead of blaming the Windows scheduler right off the bat and then eating their own words, lol. Just makes them look foolish, right? I think they hoped people wouldn't look too deep and would just take AMD's word for it: developers have to do some work on their end to fix the issues. Too many times AMD tried to make it look like it's someone else's problem, is what I'm trying to say.
 
Well, you can have your opinion, but too many things point to a problem that came up later on. If they knew about it, they would have talked to MS beforehand to see if they could minimize it somehow, instead of blaming the Windows scheduler right off the bat and then eating their own words, lol. Just makes them look foolish, right? I think they hoped people wouldn't look too deep and would just take AMD's word for it: developers have to do some work on their end to fix the issues. Too many times AMD tried to make it look like it's someone else's problem, is what I'm trying to say.

And here AMD is saying it's not a Microsoft problem. They can say no one has optimized for their CPU, and that would be a really fair statement. Probably very few apps will go out of their way to do so in the future, either. I believe they knew perfectly well how the CPU was going to bench and how deeply people would look.

Sure, if you could segment threads to a cluster you might stop some latency problems on 1-4 threads, but the gains are going to be minimal. It just is what it is; learning why is fun, but it really won't change anything. That doesn't make it broken just because it's not Intel's design. It just makes it slower.
 
I think we are kinda on the same track. What I'm saying is, by design the problem should not have been there, but by the time they saw the issue it was too late, they couldn't fix it, and they knew about it well before the 2nd and 3rd spins of the chip.
 
This should mean that the 4-core parts should look better working on one CCX. I am happy with the performance though. I don't use my PC just for gaming, so it's a win for me. It's balanced, is how I see it.
 
No, that's not true. Look at this review showing Ryzen paired with the 1080Ti well above the 7700K in both min and avg performance.
http://www.eteknix.com/nvidia-gtx-1080-ti-cpu-showdown-i7-7700k-vs-ryzen-r7-1800x-vs-i7-5820k/4/

at least I've heard of that page :D
BTW
there's no difference between 1700 and 1800x, well as long as you overclock them
which they did

though I feel I would've loved seeing a 6900K at 4 GHz thrown in for comparison
but the 1700 and 7700k have the same price point

http://www.legitreviews.com/cpu-bot...ed-on-amd-ryzen-versus-intel-kaby-lake_192585

Both systems have Corsair Hydro series CPU water coolers and are running the same exact Corsair Vengeance LPX 16GB of DDR4 memory kit at 2933 MHz with CL14 timings.

We also gave the AMD Ryzen 7 1700 a head start by overclocking it from 3.0GHz (3.7GHz Turbo) stock clock speeds all the way up to 4GHz on all 8 cores. The Intel Core i7-7700K ‘Kaby Lake’ processor was left at stock.

The 1440P scores also showed that the average frame rate was 22% higher on the Intel platform

That said, the vast majority of people buying the NVIDIA GeForce GTX 1080 Ti ($699) will likely be gaming at 1440P or higher...

Seeing virtually no average frame rate increase in GTA V on the Ryzen 7 1700 with the GeForce GTX 1080 Ti at 1440P was a bit of a shock, but the numbers don’t lie.

I just love how a page I know of comes to a very different conclusion
 
I can understand why they took that post down as it certainly looked mean spirited, but they should have left it up.

It is archived forever

http://web.archive.org/web/20170313...-Sheckels-Renews-Contract-Defame-Own-Products

I think we are kinda on the same track. What I'm saying is, by design the problem should not have been there, but by the time they saw the issue it was too late, they couldn't fix it, and they knew about it well before the 2nd and 3rd spins of the chip.

CanardPC tested a second-gen engineering sample and came to a similar conclusion to what reviews of commercial chips find now: an i7 in rendering/encoding and an i5 in games. Recall also that before launch I said that RyZen has a latency problem affecting games. Does anyone really believe that AMD wasn't aware of the performance of its own chip? It is difficult to believe that AMD choosing 4K for the game demos was fortuitous...

And here AMD is saying it's not a Microsoft problem.

If you believe the official explanation will close threads in forums, then you are being as naive as I was. I believed this was going to finish; then I just checked the AT forums yesterday. Certain people reject the official explanation given by AMD, continue claiming that the problem of RyZen gaming performance is the W10 scheduler, and offer this new conspiracy theory: Microsoft plainly said, "we aren't going to fix it for you, AMD", so the marketing department and customer service give us another answer, because AMD prefers "to work with the specific game companies whose engines have issues rather than dealing with Wintel". :rolleyes:
 
Now that AMD officially agrees the scheduler is not the problem, PCPer has had some fun ridiculing the people that attacked them.

I still see posts, even on PCPer since the AMD statement, from several people who still do not understand that AMD has designed something outside the scope of the scheduler: a mix of approaches (driven by a cost consideration IMO, but also with very fast cores on the same CCX for latency-sensitive threads and good L1 and L2, while weak when crossing CCXes). AMD's CPU team must surely have known any changes would need a lot of work from Microsoft, and would also involve Microsoft discussing scheduler changes with Intel.

In summary, and to add: just because the scheduler could be improved with a fair amount of work does not mean it is Microsoft's fault and responsibility to change in response to Ryzen/Naples. After all, it does not sound like AMD has worked with Microsoft on this subject, and that would have needed to happen at least 6 months ago given all the parties it encompasses.
Just need to see how this relates to the uncore/fabric/switch, and whether that is exacerbating this or is part of a further underlying limitation; with IBM and Intel we have a greater understanding of their limits and specs from what they have presented.
I wonder if part of the reason they made the L3 victim-only is down to limitations *shrug*; neither Intel nor IBM has taken that route.
And before anyone responds that it is not: later slides show the design as victim-only L3, and Anandtech spoke to AMD on presentation day for clarification; the earlier slides' "mostly victim L3" did not help.

Cheers
 
Certain people at the AnandTech forums, home of many, many trolls, make claims but offer no proof to contradict things: unfounded statements about Microsoft that have no basis in any reality (they did fixes for Bulldozer and now they won't!). You peddle so much shit on any forum you visit.


Hey, the problem was there before launch and he was one of the first people to mention it, yet a few others here made shit up about him. That ain't cool. Now you are going to try it too?

Reality is, this problem is not fixable. AMD already stated there is no problem with the MS scheduler. You know why? Because they and MS looked into it, and MS told AMD: sorry, no problems here.

Do you remember the other thread, when I stated what I stated and was called a troll? They said I had no proof, that no one else was talking about the CCX like I was, or that no memory test was working properly to measure latency. Where are those people now? Yeah, they can't speak up or acknowledge I was right. I was right because I know what I'm talking about and have experience with these kinds of things. Not a wild guess.

Jurnga and a few others got their information well beforehand and knew something was wrong; exactly where, they didn't know, but the problem was there, ok?

Now do you understand who the real trolls are?

They aren't the guys trying to figure out the problem, or who already understand where the problem is; it's the people that don't want others to talk about it, calling them trolls or asking for more proof when, to some, it's common sense because of experience with the nature of such an issue. The reason to ask for proof in that manner is not because they thought I didn't know what I was talking about; it's because they thought AMD could do no wrong, so I must be wrong! But time and time again, it's the other way around, Pieter.

I have stated this many times: I don't BS. If I don't know something, I go and ask or look things up and figure it out, and in my posts I will say when I'm not sure.

Even in this case there were things I didn't know about. When a person on this board said it was unfixable, my first statement to him was that I wasn't sure, because I didn't know exactly where the problem was coming from; but when I looked into it, yeah, it's unfixable.
 
Nevermind. Edit: and Ryzen does not do poorly in games, just saying. And this is not the AMD of old (as in, the original Phenom and Bulldozer marketing days).
 
Throw enough darts and eventually one will stick. For now, Windows 7 seems to fix this issue that exists in Windows 10.
 
For now, Windows 7 seems to fix this issue that exists in Windows 10.

http://techreport.com/news/31579/amd-says-ryzen-cpus-are-performing-as-expected-after-launch

Just a little under two weeks from launch, AMD says that Ryzen is working just fine for the most part, and that no major changes should be expected in Windows or elsewhere to correct perceived performance issues—especially those observed by some testers in Windows 7 versus Windows 10. Instead, the company says it's working with developers to deliver "targeted optimizations" for software that "can better utilize the topology and capabilities of [AMD's] new CPU."
 
The issues really only appear in games, so logically it is a game-optimization issue from here on out. It is likely games are just more sensitive to timing, and the latency in the crosstalk between the CCX units, coupled with the victim scheme of the L3 cache, is going to take specific programming tricks to get around. What I think AMD really needs is statements from top-tier game publishers not named Bethesda committing to optimize for their platform; they need the two big players, Activision Blizzard and EA.
 
The issues really only appear in games, so logically it is a game-optimization issue from here on out. It is likely games are just more sensitive to timing, and the latency in the crosstalk between the CCX units, coupled with the victim scheme of the L3 cache, is going to take specific programming tricks to get around. What I think AMD really needs is statements from top-tier game publishers not named Bethesda committing to optimize for their platform; they need the two big players, Activision Blizzard and EA.
That's a ton of manpower to commit from studios that sometimes have a hard time already keeping the lights on, for a CPU architecture with less than 20% market share that so far many gamers are hesitant to pick up.

Game companies frequently use data on gamers' hardware to choose what to optimize for. I would not hold my breath thinking most of these studios are going to devote that kind of manpower to Ryzen.
 
That's a ton of manpower to commit from studios that sometimes have a hard time already keeping the lights on, for a CPU architecture with less than 20% market share that so far many gamers are hesitant to pick up.

Game companies frequently use data on gamers' hardware to choose what to optimize for. I would not hold my breath thinking most of these studios are going to devote that kind of manpower to Ryzen.
I don't think Activision Blizzard and EA are having a hard time keeping the lights on, and this is literally what AMD has to do.
 
I don't think Activision Blizzard and EA are having a hard time keeping the lights on, and this is literally what AMD has to do.
You mean AMD could have done? When you are producing million-dollar AAA games that do not always sell well, especially on PC hardware that has to be specifically coded for, every bit counts. That's why these studios are still in business. And supporting hardware that gamers specifically are hesitant about may not happen. Time will tell, with hardware-adoption statistics we do not have yet.

I do not think Ryzen is a flop or a failure, but I doubt we will see many changes this generation with games. And AMD has implied no changes will be made for current titles.
 
You mean AMD could have done? When you are producing million-dollar AAA games that do not always sell well, especially on PC hardware that has to be specifically coded for, every bit counts. That's why these studios are still in business. And supporting hardware that gamers specifically are hesitant about may not happen. Time will tell, with hardware-adoption statistics we do not have yet.

I do not think Ryzen is a flop or a failure, but I doubt we will see many changes this generation with games. And AMD has implied no changes will be made for current titles.
No, I literally meant what I said: if AMD wants gamers to use their CPU, they need to get commitments from the big-name publishers to optimize for their hardware. Getting Bethesda was cute, but the games not made by id are un-optimized, bug-riddled messes on PC.
 
No, I literally meant what I said: if AMD wants gamers to use their CPU, they need to get commitments from the big-name publishers to optimize for their hardware. Getting Bethesda was cute, but the games not made by id are un-optimized, bug-riddled messes on PC.

It was no secret to AMD. If they wanted a solid release, they would and could have reached out to publishers. Not working for AMD, I have no idea why they did not.

It's a moot point either way; it has been made by almost every legit review site when referring to Ryzen and gaming, especially in light of recent admissions by AMD.
 
Cool, you can link articles that in fact say Ryzen works better in Windows 7. So either Windows 10 has an issue, or Microsoft intended to cause a 10% penalty in gaming.

The article says that all those who claimed that there is a fundamental problem with the W10 scheduler and that Microsoft is working on a patch to 'fix' RyZen performance were wrong:

Just a little under two weeks from launch, AMD says that Ryzen is working just fine for the most part, and that no major changes should be expected in Windows or elsewhere to correct perceived performance issues

perceived != real
 
Cool, you can link articles that in fact say Ryzen works better in Windows 7. So either Windows 10 has an issue, or Microsoft intended to cause a 10% penalty in gaming. Guess we'll see if game developers push out Ryzen-aware patches or not.
Here's an interesting thought experiment: if the issue is in Win 10, why does it only affect Ryzen, and why did AMD say there is no Win 10 issue, just developers needing to code specifically for Ryzen's quirks?
 
The article says that all those who claimed that there is a fundamental problem with the W10 scheduler and that Microsoft is working on a patch to 'fix' RyZen performance were wrong:



perceived != real

Well, most of us were going by the fact that Windows 7 is 10% faster than Windows 10, so we figured it was a scheduler issue (we call that real world). Makes me wonder if Intel would show the same speed increase; someone would need to try it. We all know AMD and Intel took different approaches to hyperthreading, so it's not surprising to me that the new design is having issues with some things.
 