Why does Ryzen 7 1800X perform so poorly in games?

The reason game developers haven't bothered with greater-than-4-core chips is that gamers didn't have them. I mean, I have a 5960X, but the vast majority of the market isn't buying $1,000+ chips.

In any case, looking back I probably would have been better off with a faster 4-core, as I see benchmarks of cheaper chips doing better. But if Ryzen takes off (and if Intel counters with mainstream 8-cores) then we will see games take advantage.
 
I think it looks like a great CPU. It would definitely benefit streamers and content creators. It is pretty much 95% of a 6900K at half the price. And who buys a $500 CPU to game at 1080p anyway?
Being able to push out more frames at lower resolution might not matter at 4K today where a GTX 1080 performs roughly the same with a Ryzen, 6900K or 7700K, but the Ryzen chip isn't going to get any faster a year from now when we have new GPUs that handle 1440p and 4K like the GTX 1080 does today.

edit: How much of the stuff where AMD shows Ryzen besting Intel CPUs at content creation is done on the CPU as opposed to the GPU? And do streamers use software encoding for game streaming? I think OBS supports GPU encoding?

I believe game developers will adopt >4 cores faster than you might think. The processors in the newest consoles like the XB1 S or PS4 Pro are 8-core parts, much like Ryzen 7, and those cores are even slower than Ryzen's. So I would predict that once the problems with the Windows scheduler and UEFI are solved, Ryzen will do as good a job as Kaby Lake or even better, as devs are already experienced with 8-core processors.
The Jaguar cores in the consoles are AMD's Atom equivalent. They are clock for clock less than half as fast and they're only clocked half as fast as Ryzen.
 
Dear sir, games get more demanding as GPUs get faster. It's an even scale; your statement would only be true if games two years from now stayed at the same graphics level and required the same GPU hardware. Yeah, for older games you will be getting over 100 frames. You'd still be going in circles at that point, and there will be a new iteration of Zen by then. Games will likely scale better across cores as well, so you can't expect everything to stay stationary.
 
I'm not sure what you're talking about, but I was talking about the relevance of testing CPU performance in games where the GPU isn't the bottleneck (i.e. at 1080p). If the Intel chip can spit out 40% more frames at lower resolution but is roughly tied at 4K, then with a faster GPU the Intel CPU will pull ahead, while the Ryzen CPU with that same faster GPU will still perform roughly the same. The consoles are 8-core x86 designs and have been out for years. There are games that take advantage of >4 cores, but they're still not the norm.

GPUs have been getting faster at a much greater rate than CPUs for years now. I'm not sure what the fastest single card GPU was when the Sandy Bridge i7 was the undisputed fastest gaming CPU available, but I kind of doubt anyone who says that an overclocked Sandy Bridge is good enough for current games is saying the same about that GPU. We know they're not because nobody has heard of this venerable GPU and Sandy Bridge gets brought up constantly.
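To make the bottleneck argument concrete, here's a toy model, not from any review, with made-up numbers: the frame rate you actually see is capped by whichever of the CPU or GPU is slower for a given game and resolution.

```python
# Toy bottleneck model: observed FPS is capped by the slower component.
# All numbers below are hypothetical, for illustration only.
def observed_fps(cpu_fps: float, gpu_fps: float) -> float:
    return min(cpu_fps, gpu_fps)

# At 4K today the GPU is the wall, so the CPU gap is invisible:
print(observed_fps(cpu_fps=120, gpu_fps=60))   # faster-CPU rig: 60
print(observed_fps(cpu_fps=85,  gpu_fps=60))   # slower-CPU rig: 60 as well
# Drop to 1080p (or swap in next year's GPU) and the wall moves to the CPU:
print(observed_fps(cpu_fps=120, gpu_fps=200))  # faster-CPU rig: 120
print(observed_fps(cpu_fps=85,  gpu_fps=200))  # slower-CPU rig: 85, the gap appears
```

Same CPUs in both cases; only the GPU ceiling moved. That's why 1080p testing today predicts the gap you'd see at 1440p/4K on a future GPU.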
 
All I wanted was for it to equal a 6900K in games, and then it would have been great. We all knew beforehand that it would lose to a 7700K in lightly threaded games. That's not the point; the point is that it's doing significantly worse than an Intel chip with the same number of cores and threads and a lower clock speed. This is why I'm concerned.
 
I'm happy if it hits 90%. We all have degrees of happiness.
 
Seen things like this before. Either AMD's design isn't quite as good (we are splitting hairs over a few percent in some games here), or there is a patch/compiler/driver/BIOS/whatever fix needed.
I remember when the compiler wars were going on towards the end of AMD's CPU reign (yes, a five-year CPU reign over Intel, which people like to forget about); you'd see some pretty big disparities between the two camps.
Actually, AMD's reign in price-to-performance ratio was much, much longer than 5 years. It's just that it ended 11 years ago.
 
Excellent point. I was thinking more of high-end dominance, which was about that period. But yes, as a value proposition even the later K6s were already trading blows with, and in some cases beating, their Pentium opponents... the Athlon was next, and that was pretty damn competitive, if not dominant, when it came to max OC vs. OC until the Northwoods.
 
Significantly lower? Source pls? Ryzen seems to be on par with the 6900K on both productivity workloads and gaming, with a few exceptions of course.
AMD fans are AMD's worst enemy because they are so goddamn toxic and try to delegitimize any result that doesn't agree with their worship of AMD. GN gave a very honest review and I appreciated that; the truly unbiased tech enthusiasts will recognize it.


There are certainly trolls out there.

GN did a very poor job of reviewing Ryzen. I don't imply that he was being dishonest in any way. But the title is misleading and, in some ways, completely wrong. The title was "Ryzen, i5 in gaming, i7 in productivity". Yet in about half of the gaming benchmarks he omitted the i5 from the lineup because it was "too much work for him." (Oh, professionalism.) He also seems to miss the fact that, in minimum fps, Ryzen destroys the i5s. It is true that Ryzen is not as good as the i7 7700K, but it is certainly better than all the i5s. If I applied this logic to the i7 6900K, which isn't as good as the 7700K in gaming, then I could also say it is "i5 in gaming, i7 in productivity", which is clearly bullshit.

So, at best, he screwed up the logic in his review; at worst, he intended to trash Ryzen for whatever reason.
 
So [H] got called out for Kyle's article about AMD and its problems, right? So what's the difference here with Gamers Nexus? Guess what: nothing. Kyle's article came pretty much 100% true, lol. So do you think Gamers Nexus is making anything up in their review?

Question: do you think AMD's responses to Gamers Nexus's questions were apropos? If you don't think they were, then you are correct, they weren't. Those weren't answers they were giving; they were excuses, and ways for reviewers to show them in a better light by changing testing methodologies so they would no longer show the full potential of the CPU.
 
Which response, please?
 
testing @ 4k and so on? The three things they told Gamers Nexus to do.
They asked him to also include those resolutions so people who own monitors at those resolutions can see what Ryzen can do, which is totally fine by me.

The fishy one here, IMO, isn't AMD. It's Steve. Why would he record a private discussion if he didn't intend to use it?
 
I don't think it was private; I think he told the person he was recording. Did someone say that really happened?
 
That's just my assumption.

Anyway, AMD certainly didn't forbid anyone from reviewing at 1080p or lower.
 
They didn't forbid it, and I'm not saying they did, but they answered a straight question with a sideways answer.
 
I have a feeling that there's a flaw in Zen microarchitecture and that's why it performs so poorly in games.

It's not a fatal one unlike the one in Bulldozer.

AMD knows what it is, but it won't be corrected until Zen+, or possibly as early as Raven Ridge.

In fact, the reason that quad-core Ryzen won't be released until H2 2017 is probably because it will be based on Raven Ridge.

Until then, AMD will make BS excuses such as: games haven't been optimized for Ryzen yet.
 
Excellent point. I was thinking more of high-end dominance, which was about that period. But yes, as a value proposition even the later K6s were already trading blows with, and in some cases beating, their Pentium opponents... the Athlon was next, and that was pretty damn competitive, if not dominant, when it came to max OC vs. OC until the Northwoods.
I remember the price differences in those chips. You could spend so much less and overclock an AMD back then. The K6, K6-II, K6-III, and Athlon Thunderbird's cost-versus-performance domination of Intel lasted for so very long. In fact, even the earlier 286, 386, 486, and K5 parts were cost-to-performance competitive. Then the Socket 939s came out and were just amazing, and then shortly after, Intel released the Core series and ran away from AMD in overclocked price-to-performance for a very long time.
I would never have believed it would be 11 years before they would be back in the mix.
 
I have a feeling that there's a flaw in Zen microarchitecture and that's why it performs so poorly in games.

I haven't seen many CPUs in my lifetime perform well in all the synthetics but not in games.

It may not even be a flaw; the biggest performance flaws we've seen over the years have usually been software related.
Don't take me wrong here, I'm not defending anything, but let's let time be the judge this year instead of calling BS.
People saying Intel is the bad guy on the compiler front should pipe down; one cannot compile for something no one knows anything about.

I've seen Pentium 4s and A64s have this issue; hell, even the Core 2s had start-up issues where their performance was subpar in certain tests.

The "BS excuses" they've made are completely justified, as there are games out there that perform well on all platforms and run very well on a Ryzen platform, in fact sometimes better than on the latest and greatest i7.
The SMT issues some are finding, where Intel has none, prove the point: 5% is 5%. If it were working properly maybe we'd have 6% or 7%, and suddenly with some UEFI/BIOS changes we'd see 9-10% total. That's big in the CPU world, and finding 10% without changing the silicon is a benefit for us.

It will never rival the i7s in its current silicon form.

Even if they only find 4-5%, the 1700 and 1700X are still a hell of a CPU for the money, just as the 7700K is but for different tasks, namely gaming, which is the minority use case worldwide.
 
I have a feeling that there's a flaw in Zen microarchitecture and that's why it performs so poorly in games.
No useful arguments here. Just "feelings" and unsupported speculation; in other words, bullshit.
 
I agree and disagree with you. There is a flaw in the IMC that can be ameliorated but not fully corrected until Zen+ or Zen 1. There is also the issue of threads being bounced from one CCX to the other, which causes latency problems and also impairs performance. But turning off one CCX for single-threaded and low-thread games does markedly improve performance, as does not trying to populate all 4 DIMM slots and not going over 16 GB of memory. Do I like these limitations? No, they're not ideal, but they do raise performance to a competitive level. Meanwhile, some BIOS fixes and game optimizations will greatly help gaming performance, and they are coming soon. I trust that in the next major improvement, Zen+ (or Zen 1, as some call it), these two flaws will be healed altogether. I am optimistic. The chip is NOT an embarrassment or a dud as head lynch-mob leader Shintai asserts. It is a vast improvement over the best of the FX series in most applications, and in some games it shines above Intel. The price-to-performance for a high-performance 8-core chip has never been lower. I love it when monopolists like Intel have to scurry to adjust when the emperor has no clothes.
 
Doesn't Windows or the UEFI just need an update so games' threads are scheduled across physical cores better? I would think that Ryzen's logical cores are enumerated to the OS as C0T0, C0T1, C1T0, C1T1, C2T0, etc., as Intel's are, or is that not the case? AMD's SMT seems to work for general threaded applications, so maybe games are only spreading their workloads among different physical cores by checking if it's "[AMD's] competitor", then checking the number of cores and Hyper-Threading support, and assuming that if it's an AMD chip it doesn't have SMT because none of their previous products did?

Isn't there a way to make a Windows shortcut, batch file, or PowerShell script start a program with the affinity set to only use specific cores? That would seem like an effective workaround in the short term until things get ironed out.
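There is: cmd's start command accepts an /AFFINITY flag that takes a hex mask of logical CPUs, e.g. start "" /affinity FF game.exe restricts the process to logical CPUs 0-7 (0xFF = binary 11111111). The same thing can be scripted; below is a minimal sketch using the third-party psutil library. The game path, the enumeration order, and the CCX-to-logical-CPU mapping are assumptions for illustration, not verified behavior.

```python
import subprocess

import psutil  # third-party: pip install psutil

# Hypothetical game path; adjust for your own system.
GAME = r"C:\Games\SomeGame\game.exe"

# Assuming logical CPUs enumerate as C0T0, C0T1, C1T0, C1T1, ...
# the first CCX (physical cores 0-3) would map to logical CPUs 0-7.
FIRST_CCX = list(range(8))

proc = subprocess.Popen([GAME])                   # launch the game
psutil.Process(proc.pid).cpu_affinity(FIRST_CCX)  # then pin it to one CCX
print(f"pinned PID {proc.pid} to logical CPUs {FIRST_CCX}")
```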
 
It can be done, but you will effectively cut an 8-core Ryzen down to a 4-core one by doing so in those specific programs. Also, power consumption and voltages will go up because those 4 cores will be pressured more. So the end result is a trade-off, and not an ideal one.
 
testing @ 4k and so on? The three things they told Gamers Nexus to do.

and here i am, 4K monitor and VR in hand, and no one ('cept Kyle in VR) gave a shit about me on review day cause i'm not a 1080peasant.

fuck me, right?
 
It doesn't matter; once you see a 1080p benchmark that is CPU limited, you know full well where the CPU lands. The 4K stuff you know too, because you already looked up benchmarks for your graphics card and know the game is GPU limited, so you also know you can probably use a 2-core CPU and get the same frame rates lol.

So why don't you take your 4K monitor and whatever graphics card you have and pair it with an i3? You get the same frame rates anyway! So why would Ryzen matter then? Would you do that: sell off your current computer, get a 2-core i3 system, put in your graphics card, attach your monitor, and start playing? You might put a couple hundred bucks in your pocket doing that.

Is that what you would recommend other owners of 4K monitors and $500-and-up cards do?

When reviewers test for something in particular, in this case CPU performance, they are looking for that particular thing, not for something else.
 
i guess we'll never know.

because no one even bothered to do any. weird eh?

oh and here's your 2 core cpu.

[chart: GTA V CPU benchmark, 4K Very High (gta-v-cpu-4k-vh.jpg)]


look at it go.
 
So you take a benchmark where something wrong causes the game to fail as proof?

Just look at the i5 next to the i7: do you see any major change in frame rates? So why buy the i5?

Shit, you have an old 8-core AMD part in there too; get that one, it keeps up with the 4790K, so who needed it? Does that sound like AMD's marketing from their Bulldozer release? Yes, it was exactly that.

Trade your current setup for that, or for an Athlon 760K; there is only a 1 FPS difference on average between it and a 4790K.

You can't see the actual performance when it's GPU limited, and that is the whole point.

Edit: just tell me this, how do you purchase your graphics cards?

Do you look for a CPU-limited scenario to see graphics performance?

Would you like reviewers to do that from now on when testing graphics cards?

I think, just for you, you should ask Kyle or other reviewers to do that and see what their response is.
 
As for Intel's G3258, a dual-core Pentium solution for ultra-budget systems, we were unable to achieve stability for longer than a few minutes at a time. The two times that a test pass was completed, the 1% low and 0.1% low FPS outputs hit a dismal 7 and 5FPS (respectively), effectively making for an unplayable experience. Minutes into our burn-in (part of the low-confidence parity check), the game froze as the CPU failed to keep up with demand. It just isn't powerful enough and is too thread-limited for GTA V.

their words.

and GTA V is the most stable and well-optimized game ever released for PC.

you made the statement

the 4K stuff you know too, because you already looked up benchmarks for your graphics card and know the game is GPU limited, so you also know you can probably use a 2-core CPU and get the same frame rates lol

obviously not true, as the benchmark of a game that works beautifully shows.

that was just for fun though.

but there are some subtle differences in how a game plays that an average-frame chart doesn't show, which the writer also covered:

Decent FPS Doesn't Necessarily Mean 'Playable' – The Caveats
A final caveat: We took note of some visual artifacting that occurred during benchmarking, something that is only observable to an onlooker and won't be measured by FPS monitoring utilities.

The lower-end 760K CPU – an APU with the IGP disabled – exhibited occasional “flickering” and green flashes, indicating a performance limitation on the CPU. Although the CPU did push playable FPS by many definitions of the term, users of this CPU may observe annoying visual artifacts at times.

the 760K, for example, plays OK but flickers with the iGPU disabled.

without him benchmarking it at 4K, we'd never have known there is flickering.
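For context on the 1% low and 0.1% low figures GN quotes above: they come from per-frame times rather than an FPS counter, and the exact method varies by outlet. A common approach converts the 99th/99.9th-percentile frame time into an FPS figure, roughly like this (the capture data below is made up):

```python
import numpy as np

def fps_metrics(frametimes_ms):
    """Average, 1% low, and 0.1% low FPS from per-frame times in ms."""
    ft = np.asarray(frametimes_ms, dtype=float)
    avg = 1000.0 / ft.mean()
    low_1 = 1000.0 / np.percentile(ft, 99)     # 99th-percentile frame time
    low_01 = 1000.0 / np.percentile(ft, 99.9)  # 99.9th-percentile frame time
    return avg, low_1, low_01

# Hypothetical capture: mostly smooth 10 ms frames with occasional 80 ms hitches.
times = [10.0] * 980 + [80.0] * 20
avg, low_1, low_01 = fps_metrics(times)
print(f"avg {avg:.0f} fps, 1% low {low_1:.0f} fps, 0.1% low {low_01:.0f} fps")
# The average looks healthy (~88 fps) while the lows (~12 fps) expose the
# hitching, which is exactly why a G3258 can be "unplayable" despite its average.
```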
 
I believe game developers will adopt >4 cores faster than you might think. The processors in the newest consoles like the XB1 S or PS4 Pro are 8-core parts, much like Ryzen 7, and those cores are even slower than Ryzen's. So I would predict that once the problems with the Windows scheduler and UEFI are solved, Ryzen will do as good a job as Kaby Lake or even better, as devs are already experienced with 8-core processors.
The PS3 and Xbox 360 were multicore machines as well. We've had the current consoles on the market for nearly 4 years now and have not seen it really trickle down to PC gaming. Very likely, even though those consoles are essentially x86 machines, there are still significant enough differences between programming for a console and for a PC that we are not seeing the benefit carry over.
 
their words.

and GTA V is the most stable and well-optimized game ever released for PC.

you made the statement


IS that an i3? If that is the processor I think it is, it's a Pentium D. edit:

OK, it's a Haswell. Well, something went really wrong then, because it's running like crap compared to the 4-core Athlon; it should be running at around the same performance.


obviously not true, as the benchmark of a game that works beautifully shows.

that was just for fun though.

but there are some subtle differences in how a game plays that an average-frame chart doesn't show, which the writer also covered

I agree that you won't get the absolute details you need, but you will get an idea.


the 760K, for example, plays OK but flickers with the iGPU disabled.

without him benchmarking it at 4K, we'd never have known there is flickering.

You are talking about something that is peripheral to the crux of the discussion and pulling other things in. No, you can't test for CPU performance in GPU-limited scenarios.

If you haven't seen the edited post:

tell me this, how do you purchase your graphics cards?

Do you look for a CPU-limited scenario to see graphics performance?

Would you like reviewers to do that from now on when testing graphics cards?

I think, just for you, you should ask Kyle or other reviewers to do that and see what their response is.
 
I personally don't put any weight into "it needs to be optimized" or "will get better with time" crap. I base my decisions on reality - here and now. Right now I'd go with Intel.

Good to see AMD is within a stone's throw though.
 
IS that an i3? If that is the processor I think it is, it's a Pentium D.

I agree that you won't get the absolute details you need, but you will get an idea.

You are talking about something that is peripheral to the crux of the discussion and pulling other things in. No, you can't test for CPU performance in GPU-limited scenarios.

you said

the 4K stuff you know too, because you already looked up benchmarks for your graphics card and know the game is GPU limited, so you also know you can probably use a 2-core CPU and get the same frame rates lol

an i3 is 2 HYPERTHREADED cores.

a G3258 is a 2-core CPU.

semantics, but say what you mean.

also you're full of shit when you say


No, you can't test for CPU performance in GPU-limited scenarios.

if that were true then this wouldn't be possible.

http://www.anandtech.com/bench/CPU/1337

why is the i3 4330 @ 3.5GHz doing so badly?

i mean, it's the same 2 Haswell cores found in the 4770K @ 3.5GHz, and yet it's slower.

seems like a CPU bottleneck at 4K. weird, that.
 
So then you pick a game that isn't GPU limited? Er, I thought you were looking for GPU-limited games and settings? This is clearly not GPU limited.

Again, ask Kyle or any other reviewer to make, just for you, a set of CPU-limited reviews of a graphics card that you want to upgrade to.

PS: your G3258 is there too, and it's actually doing almost as well as a Sandy Bridge i7. A 2-core, 2-thread chip keeping up with a 4-core, 8-thread chip: who the hell needs an i7!
 
4k on ultra shadow of mordor? which brought the 970 to its knees?

ok bro.

 
Each year there will be a new iteration of Ryzen on the AM4 platform. New Intel CPU? New motherboard too. Total system cost for AMD, and the lower barrier to future upgrades that will almost certainly scale to higher clocks with better IPC gains, is vastly superior on the AMD side.
So you're saying there's a lot of room for improvement on Socket AM4? I'm trying to follow what you're saying here.
 
WTF are we talking about, man? Benchmarks that are GPU limited vs. benchmarks that are CPU limited, and what each shows. If you can't understand that, please ask and we can go from there.

if you BUY A GRAPHICS CARD, DO YOU WANT TO SEE CPU-LIMITED BENCHMARKS FOR THE CARD YOU WANT TO BUY?

Answer that question.

you ain't my bro, so don't call me that, ever!

Goalposts moving for you too: a 2-core Haswell putting up a fight with a Sandy Bridge i7, within 1 FPS, yet the first table you linked was flattering, yet you try to make bacon with eggs. Try that again, BRO.....
 