Why does the Ryzen 7 1800X perform so poorly in games?

The SMT issue raises some questions.
I've heard that when Intel first implemented Hyper-Threading they also had some teething issues, where Hyper-Threading actually offered less performance than having the feature disabled in some games. With time the issues were sorted out and it no longer caused performance decreases. But what about performance increases? Is Hyper-Threading a clear performance boost in most games? In how many?

There are many factors at play with this stuff; that's why the finger-pointing after one day is pretty shilly and shitty. New OP + thread with a controversial title to flood the forums? Narrative pushing 101.
Same shit with AMD GPUs even when they're competitive. OH MY GOD IT USES 30W MORE!11!! THE PCI SLOTS! lmfao.

Wait for drivers/fixes if stuff doesn't look right; even the immortally faultless Nvidia fucks up launch drivers too. Not game-ready for some recent games + W10, anyone?

Yes, when the P4s came out there were more than a few cases where Hyper-Threading would hurt performance. Can't remember if gaming was affected too, but it definitely hit things like big database/table workloads and some SQL applications; I remember discussing it with a mate who was running Intel at the time.
 
https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/

Interesting reading and results. They also feel microcode or similar updates will yield better per-core performance in some loads. But of course, most casual people will only see the negative stuff, remember the launch for its weaker per-core numbers, and stumble off to drool at the 1080 Ti while claiming AMD sucks.

850 points in Cinebench R15 at 30 W is quite telling. Or not just telling, but absolutely massive. Zeppelin can reach monstrous, previously unseen levels of efficiency, as long as it operates within its ideal frequency range.

Can't wait to see what it does in productivity-oriented laptops. Apple will love this.
 
Hmmm, maybe, but I think instead of talking about how their competitor fixed the same problem, they should probably talk about their timeline for implementing their own fix.
It's just damage control, I feel. We all know they're working on it, and while yes, some sort of time frame would help, I think it's pretty safe to assume that the only reason she brought it up in the first place was to point out: "Hey, don't worry, this isn't a flaw in our architecture, nor is it a die snafu."
 
I ended up cancelling my preorder today, before it went through, to wait on a bit more information. I live-stream on Twitch and use a dual-PC setup, so my encoding/rendering is currently done on an 8-core AMD with a GTX 1050 without any issues. My current gaming PC is an 8320 @ 4.5 GHz with 24 GB of memory, an SSD for boot and another SSD for games, alongside a GTX 1070. I was excited about the possibility of an AMD upgrade that would be future-proof and perform in games for a long time. I am now torn between going ahead and getting the Ryzen chip and hoping for optimization, or getting a 7700K, which has proven in-game performance that will surely last for years.

Ugh, the struggle of deciding... I feel like my 8320 is pushed to its limits and is bottlenecking my 1070 like crazy.
 
For people who can't get to reddit at work, here is what AMD CEO Lisa Su said (already..):

Thanks for the question. In general, we've seen great performance from SMT in applications and benchmarks but there are some games that are using code optimized for our competitor... we are confident that we can work through these issues with the game developers who are actively engaging with our engineering teams.

Seems like the same old compiler stuff happening, which was sorted out relatively quickly last time.
 
It's just damage control, I feel. We all know they're working on it, and while yes, some sort of time frame would help, I think it's pretty safe to assume that the only reason she brought it up in the first place was to point out: "Hey, don't worry, this isn't a flaw in our architecture, nor is it a die snafu."
I agree, it is damage control. It's just frustrating, because it seems like they want to push it off. I'm hoping their implementation doesn't take a year.
 
I ended up cancelling my preorder today, before it went through, to wait on a bit more information. I live-stream on Twitch and use a dual-PC setup, so my encoding/rendering is currently done on an 8-core AMD with a GTX 1050 without any issues. My current gaming PC is an 8320 @ 4.5 GHz with 24 GB of memory, an SSD for boot and another SSD for games, alongside a GTX 1070. I was excited about the possibility of an AMD upgrade that would be future-proof and perform in games for a long time. I am now torn between going ahead and getting the Ryzen chip and hoping for optimization, or getting a 7700K, which has proven in-game performance that will surely last for years.

Ugh, the struggle of deciding... I feel like my 8320 is pushed to its limits and is bottlenecking my 1070 like crazy.


The Ryzen chips are obviously good performers up against your 8320. Not to say an 8320 is junk.. it certainly isn't, it just doesn't hold up performance-wise against today's offerings. Even compared to a 7700K, Ryzen still performs pretty damn well.

If I were in your position, I'd grab an X370 board and the Ryzen 1700 with some decently fast memory. I feel the 1700X and 1800X aren't really worth the money.
 
The Ryzen chips are obviously good performers up against your 8320. Not to say an 8320 is junk.. it certainly isn't, it just doesn't hold up performance-wise against today's offerings. Even compared to a 7700K, Ryzen still performs pretty damn well.

If I were in your position, I'd grab an X370 board and the Ryzen 1700 with some decently fast memory. I feel the 1700X and 1800X aren't really worth the money.

I love my 8320 (running a 990FX chipset); for its age it is very fast, especially after watercooling and overclocking it. I am more concerned with it not holding up in single-threaded performance, being on older DDR3 and PCIe 2.0, and creating a bottleneck for the 1070.
I had preordered the 1700X initially. I've actually already purchased 32 GB of Team Nighthawk DDR4-3000 (meant to pick up the 3200, but I noticed today that wasn't what I was shipped lol) and a 120 GB Corsair NVMe SSD, and have them on my desk. I intended to migrate my current gaming PC down to be the streaming PC and give my father my current streaming PC, since it would be more than he would ever need for his business (which badly needs an upgrade from the Core 2 Duo laptop he uses, as he is too hard-headed to buy a desktop).

Thankfully, at this point the parts I already have can be used in either AMD or Intel builds no problem, so they aren't going to collect dust. I'm sure even with the current limitations Ryzen has shown, it will be a nice, noticeable upgrade from my 8320, but will I be looking to upgrade again in a year or so if it doesn't end up performing in newer games as they continue to come out?
 
I just got done playing 2 hours on my Rift and Touch... not a single issue. It games better than my 3930K did.

These chips are not meant solely for gaming. If you want that, you need to wait on the 4- and 6-core Zens or get a 7700K. I am tired of this "it's not a good gaming chip" b.s. argument, sigh.
 
Thankfully, at this point the parts I already have can be used in either AMD or Intel builds no problem, so they aren't going to collect dust. I'm sure even with the current limitations Ryzen has shown, it will be a nice, noticeable upgrade from my 8320, but will I be looking to upgrade again in a year or so if it doesn't end up performing in newer games as they continue to come out?


Well.. as far as "future-proofing" goes, who knows. Intel could have something up their sleeve. Hell, even AMD could release something toward the end of the year. In my opinion (take it for what it's worth), the world of CPUs seems to have hit a performance ceiling. It's not like the old days, where each new generation made a huge leap in performance.


I am tired of this "it's not a good gaming chip" b.s. argument, sigh

Agreed. It seems a lot of people are blinded by FPS numbers. Ryzen IS a good chip. Does it necessarily outperform Intel's offerings? No, but does it keep up and stay competitive? Certainly.

But I guess it all comes down to personal needs/expectations. A lot of people would say my 880K is junk and I should throw it in the trash, but it does everything "I" need and does it well. Ryzen would make a nice upgrade for me, but I wouldn't be able to make use of 8 cores, so I'll wait for the 4/6-core models to come out.
 
I just got done playing 2 hours on my Rift and Touch... not a single issue. It games better than my 3930K did.

These chips are not meant solely for gaming. If you want that, you need to wait on the 4- and 6-core Zens or get a 7700K. I am tired of this "it's not a good gaming chip" b.s. argument, sigh.

It's quite hilarious. I've seen the critique of 'AMD always has to wait for future performance' often since launch; meanwhile, they bag it with 'Ryzen fails muh 640p' (but it's fine at 4K lol) and justify that as relevant because of future GPU power? By the time that disparity is an issue, it'll either be mostly fixed as a compiler issue, or you'll be upgrading the fucking rig anyway due to old ports and out-of-date subsystems.


Sorry, but the circular logic is hilarious.
 
Something to remember about Hyper-Threading: if you don't code for Intel's specific implementation, you can get worse performance. So yeah, everything that's properly coded for Hyper-Threading is coded for Intel's version.

If AMD's version differs in some meaningful way, whether in how virtual cores are assigned, how workloads are distributed, or something else, that's going to cause issues and leave performance on the table.

We can't yet say whether the issue is fixable in currently shipping steppings, but if it is, we're likely to see a part that's a bit more competitive in certain workloads.
 
Something to remember about Hyper-Threading: if you don't code for Intel's specific implementation, you can get worse performance. So yeah, everything that's properly coded for Hyper-Threading is coded for Intel's version.

If AMD's version differs in some meaningful way, whether in how virtual cores are assigned, how workloads are distributed, or something else, that's going to cause issues and leave performance on the table.

We can't yet say whether the issue is fixable in currently shipping steppings, but if it is, we're likely to see a part that's a bit more competitive in certain workloads.

SMT is very different on AMD. From the outside looking in, it's the same old thing, but from the inside looking out.. a totally different monster than Intel's Hyper-Threading algorithms.
 
Something to remember about Hyper-Threading: if you don't code for Intel's specific implementation, you can get worse performance. So yeah, everything that's properly coded for Hyper-Threading is coded for Intel's version.

If AMD's version differs in some meaningful way, whether in how virtual cores are assigned, how workloads are distributed, or something else, that's going to cause issues and leave performance on the table.

We can't yet say whether the issue is fixable in currently shipping steppings, but if it is, we're likely to see a part that's a bit more competitive in certain workloads.

You don't code for SMT. It's simply there as logical cores, and the OS knows it.
 
Seems like SMT has a better performance benefit than HT when it works, as well.

As for turning off SMT: you're taking away performance that the CPU can deliver. I would use affinity to set the CPU configuration for any problematic program while keeping SMT active, so the OS and other programs could use that extra processing power. That would also let the program set with affinity work faster, since other programs would have more CPU resources to work with instead of tapping into the problematic program (maybe a game). So I would expect that with SMT left on and the game set to use logical cores only, maybe even a single CCX, I would get better performance than with SMT off.

I will have to wait to test this out when I get my setup up and running.
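For anyone wanting to try that affinity experiment, the core selection can be sketched in a few lines. This is a minimal sketch, not AMD's or Microsoft's tooling, and the core numbering is an assumption: logical-CPU layout differs between OSes, so both common 2-way SMT layouts are handled.

```python
import os

def one_thread_per_core(n_logical=16, siblings_adjacent=False):
    """Pick one logical CPU per physical core on a 2-way SMT chip.

    siblings_adjacent=False assumes the layout described later in
    this thread (0-7 physical, 8-15 their SMT siblings); True assumes
    the adjacent pairing (0/1 share a core) seen on some systems.
    """
    if siblings_adjacent:
        return set(range(0, n_logical, 2))   # 0, 2, 4, ...
    return set(range(n_logical // 2))        # 0 .. n/2-1

cpus = one_thread_per_core()
print(sorted(cpus))  # [0, 1, 2, 3, 4, 5, 6, 7]

# On Linux, the set can be applied directly to a running process:
# os.sched_setaffinity(target_pid, cpus)
```

On Windows, Task Manager's "Set affinity" dialog or `start /affinity` does the same job without any code.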
 
Also of note: how are the graphics drivers working with the new arch? Are the Nvidia/AMD drivers Ryzen-aware, and are they using logical or virtual cores, or both? The drivers themselves could come into play for game performance, depending on how they use the Ryzen CPU.
 
Also of note: how are the graphics drivers working with the new arch? Are the Nvidia/AMD drivers Ryzen-aware, and are they using logical or virtual cores, or both? The drivers themselves could come into play for game performance, depending on how they use the Ryzen CPU.

It's handled by the OS, and the OS is aware.
 
You don't code for SMT. It's simply there as logical cores, and the OS knows it.

Sure, that's part of it, but:

Same as always. The blame game is in motion, and it can't be AMD's fault or the Red Team viral arm that infests forums and sites.

...when a game isn't coded to account for SMT/Hyper-Threading, you can wind up assigning similar workloads to both a physical and a virtual core, and this degrades performance. Current SMT/HT implementations bank on having different workloads, i.e. one FP and one integer/logic thread executing at once. Put two FP threads on a single physical core by assigning the second to the virtual core, and you'll just slow both threads down.
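That pairing problem can be illustrated with a toy assignment (purely illustrative; the function name and the adjacent sibling numbering are assumptions, not how any real scheduler is written):

```python
def assign(threads, n_physical=8, smt_aware=True):
    """Map worker threads to logical CPUs on a 2-way SMT chip,
    assuming adjacent sibling numbering (0/1 share a core).

    smt_aware: fill one slot per physical core first, then the
    sibling slots. Naive: fill logical CPUs 0, 1, 2, ... in order,
    which stacks the first two threads onto one physical core.
    """
    if smt_aware:
        order = (list(range(0, 2 * n_physical, 2)) +
                 list(range(1, 2 * n_physical, 2)))
    else:
        order = list(range(2 * n_physical))
    return {t: order[i] for i, t in enumerate(threads)}

# Two FP-heavy threads, as in the example above:
print(assign(['fp0', 'fp1'], smt_aware=False))  # {'fp0': 0, 'fp1': 1} -> same core
print(assign(['fp0', 'fp1'], smt_aware=True))   # {'fp0': 0, 'fp1': 2} -> separate cores
```

The naive order is exactly the "two FP threads on one physical core" case: both land on siblings of core 0 and fight over the same execution units.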
 
Uhh, have you ever programmed a thread before? I have.

The BIOS tells the OS about the cores. Windows will schedule tasks on cores 0-7 first; those will be the physical cores. Only after that will it use cores 8 to 15, the SMT cores.

Sure, that's part of it, but:

...when a game isn't coded to account for SMT/Hyper-Threading, you can wind up assigning similar workloads to both a physical and a virtual core, and this degrades performance. Current SMT/HT implementations bank on having different workloads, i.e. one FP and one integer/logic thread executing at once. Put two FP threads on a single physical core by assigning the second to the virtual core, and you'll just slow both threads down.

As I said, Windows will schedule to the physical cores before using SMT. It's no different than when an i3 uses SMT while an i7 would still use "real cores" before going to SMT cores. You don't code for that.

When you disable SMT, you release resources in the CPU that are shared. And that's at a pure CPU level.
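Whether the scheduler can actually prefer physical cores depends on the topology the firmware reports. On Linux that mapping is visible in sysfs; a small stdlib-only parser (hypothetical helper names, a sketch rather than anything official) can group logical CPUs into physical cores to check what the OS sees:

```python
from pathlib import Path

def parse_siblings(text):
    """Parse a thread_siblings_list string like '0,8' or '0-1'
    into a sorted tuple of logical CPU ids."""
    ids = []
    for part in text.strip().split(','):
        if '-' in part:
            lo, hi = part.split('-')
            ids.extend(range(int(lo), int(hi) + 1))
        else:
            ids.append(int(part))
    return tuple(sorted(ids))

def physical_cores(sysfs='/sys/devices/system/cpu'):
    """Group logical CPUs into physical cores; each tuple is one
    core with its SMT siblings. Linux-only."""
    cores = set()
    for f in Path(sysfs).glob('cpu[0-9]*/topology/thread_siblings_list'):
        cores.add(parse_siblings(f.read_text()))
    return sorted(cores)

# A '0,8' style pairing would match the 0-7 physical / 8-15 SMT claim above
print(parse_siblings('0,8'))   # (0, 8)
print(parse_siblings('2-3'))   # (2, 3)
```

If the reported pairs are wrong or missing, the OS can't tell siblings from real cores, which is exactly the kind of thing a BIOS/microcode update could fix.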
 
I'm more curious how the R5 and R3 will fare.
Unless they clock significantly higher than the 1800X, or there actually is some massive software issue waiting to be fixed, it doesn't look good.
Anyway, Ryzen seems like an amazing deal for productivity tasks, but a disappointment otherwise, at least as the benchmarks currently stand.
 
I'm more curious how the R5 and R3 will fare.
Unless they clock significantly higher than the 1800X, or there actually is some massive software issue waiting to be fixed, it doesn't look good.
Anyway, Ryzen seems like an amazing deal for productivity tasks, but a disappointment otherwise, at least as the benchmarks currently stand.

They clock lower at stock. The 1800X is the fastest-clocked.

 
FYI, to this day MS recommends disabling HT for Exchange servers, and maybe SQL.

Take that for what it's worth.
 
The BIOS tells the OS about the cores. Windows will schedule tasks on cores 0-7 first; those will be the physical cores. Only after that will it use cores 8 to 15, the SMT cores.



As I said, Windows will schedule to the physical cores before using SMT. It's no different than when an i3 uses SMT while an i7 would still use "real cores" before going to SMT cores. You don't code for that.

When you disable SMT, you release resources in the CPU that are shared. And that's at a pure CPU level.
Is that happening? It doesn't look like it. If the OS were controlling it as you say, then you wouldn't see the performance bump with SMT off. I guess we'll see what AMD/Microsoft/developers say and what they come up with.

I will test with affinity, keeping SMT on but setting an affected game to use logical cores only, to see if performance is the same, better, or worse than with SMT off. I'd like to see real data, in other words. That will be over a week out due to motherboard shipment.

This will probably take a few months to settle out before we really know the real limitations, besides the rather hard wall that appears at 4.1 GHz.
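If others want to run the same before/after comparison, the pinned launch can be scripted. A sketch under assumptions: the 0xFF mask presumes logical CPUs 0-7 are the eight physical cores (as discussed above), and `game.exe` is a hypothetical stand-in path:

```python
import subprocess
import sys

# Assumption: bits 0-7 = the eight physical cores (SMT siblings at 8-15)
PHYSICAL_MASK = 0xFF

def pinned_launch_cmd(exe):
    """Build a command that starts `exe` restricted to PHYSICAL_MASK."""
    if sys.platform == 'win32':
        # cmd.exe built-in: start /affinity <hex mask> <program>
        return ['cmd', '/c', 'start', '/affinity',
                format(PHYSICAL_MASK, 'X'), exe]
    # Linux equivalent via util-linux taskset
    return ['taskset', hex(PHYSICAL_MASK), exe]

cmd = pinned_launch_cmd('game.exe')  # hypothetical binary
print(cmd)
# subprocess.run(cmd)  # uncomment to actually launch the pinned run
```

Comparing frame times from a pinned run (SMT on) against an SMT-off BIOS run would help separate scheduling effects from the shared-resource effects mentioned above.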
 
The BIOS tells the OS about the cores. Windows will schedule tasks on cores 0-7 first; those will be the physical cores. Only after that will it use cores 8 to 15, the SMT cores.


As I said, Windows will schedule to the physical cores before using SMT. It's no different than when an i3 uses SMT while an i7 would still use "real cores" before going to SMT cores. You don't code for that.

When you disable SMT, you release resources in the CPU that are shared. And that's at a pure CPU level.

Good job not answering the question.
 
Did any sites test games at 720p? I really want to see AOTS, Watch Dogs 2, and BF1 at 720p.
 
I've seen some chatter that power usage was abnormally low in some games where Ryzen did poorly. It could be a case of the clock/voltage/boost not working as expected, which could potentially be improved by a BIOS update.

I'd love to see some real-world gameplay testing done, with power/temps recorded during a playthrough.
 
I ended up cancelling my preorder today, before it went through, to wait on a bit more information. I live-stream on Twitch and use a dual-PC setup, so my encoding/rendering is currently done on an 8-core AMD with a GTX 1050 without any issues. My current gaming PC is an 8320 @ 4.5 GHz with 24 GB of memory, an SSD for boot and another SSD for games, alongside a GTX 1070. I was excited about the possibility of an AMD upgrade that would be future-proof and perform in games for a long time. I am now torn between going ahead and getting the Ryzen chip and hoping for optimization, or getting a 7700K, which has proven in-game performance that will surely last for years.

Ugh, the struggle of deciding... I feel like my 8320 is pushed to its limits and is bottlenecking my 1070 like crazy.


In theory, a Ryzen chip would be enough to do every task you listed under one rig... encoding, streaming, gaming, and even having 40 tabs of pron to fap to in between matches. GHz isn't everything in this world... well, maybe to some folks on hard it is.
 
I've seen some chatter that power usage was abnormally low in some games where Ryzen did poorly. It could be a case of the clock/voltage/boost not working as expected, which could potentially be improved by a BIOS update.

I'd love to see some real-world gameplay testing done, with power/temps recorded during a playthrough.

Couldn't this be fixed by just turning off the power-saving features and locking the CPU in at 3.5 GHz, or whatever the stock boosted clocks are? I did see a review, I think Shintai shared it, where they did this and saw the CPU throttling around 75C.
 
In theory, a Ryzen chip would be enough to do every task you listed under one rig... encoding, streaming, gaming, and even having 40 tabs of pron to fap to in between matches. GHz isn't everything in this world... well, maybe to some folks on hard it is.

True, I have always built AMD computers going back to the 486 days and it was fine. Six months ago I was still happy with an FX-8320E build; then I picked up VR, and maintaining 45+ fps no longer cuts it for me, I need minimums of 90 now. This weekend I am going to Microcenter: do I A) pick up a $259 i7-6700K, or B) spend $329 for less gaming performance? I might just buy both, because I am having a hell of a time deciding between what the heart wants and what the brain tells me to do.

The irony here. I remember being disgusted with people buying Pentium 4s when AMD's chips were clearly cheaper and faster; now here I am, about to buy an AMD that is slower for my workload and more expensive... *sigh*
 
True, I have always built AMD computers going back to the 486 days and it was fine. Six months ago I was still happy with an FX-8320E build; then I picked up VR, and maintaining 45+ fps no longer cuts it for me, I need minimums of 90 now. This weekend I am going to Microcenter: do I A) pick up a $259 i7-6700K, or B) spend $329 for less gaming performance? I might just buy both, because I am having a hell of a time deciding between what the heart wants and what the brain tells me to do.

The irony here. I remember being disgusted with people buying Pentium 4s when AMD's chips were clearly cheaper and faster; now here I am, about to buy an AMD that is slower for my workload and more expensive... *sigh*

I've been wondering about that, actually. Core/thread count made a huge difference for me (i3 vs. i7, and 6700K vs. 6700K sans HT) in VR. Going to the i3 (and to a lesser extent the non-HT i7) was vomit-inducing for me in some games (I assume due to physics calculations being handled by the CPU) because of reprojection. With VR liking that extra oomph, we may see more work being done to utilize >8 threads. And if disabling SMT in the meantime is a temporary fix toward getting closer to i7 speeds, you may see the 1700X with similar performance, and possibly even as a better option, in the near future.

Besides, it's not BAD as of now, it's just not quite as good. Personally, I got the 1700X, but the system isn't dedicated strictly to gaming.
 
True, I have always built AMD computers going back to the 486 days and it was fine. Six months ago I was still happy with an FX-8320E build; then I picked up VR, and maintaining 45+ fps no longer cuts it for me, I need minimums of 90 now. This weekend I am going to Microcenter: do I A) pick up a $259 i7-6700K, or B) spend $329 for less gaming performance? I might just buy both, because I am having a hell of a time deciding between what the heart wants and what the brain tells me to do.

The irony here. I remember being disgusted with people buying Pentium 4s when AMD's chips were clearly cheaper and faster; now here I am, about to buy an AMD that is slower for my workload and more expensive... *sigh*

I dunno, I'm itching to upgrade from my 2600K, as are a lot of gamers/enthusiasts, but to what?

The definition of "gamer" has, in my opinion, changed a lot during the past 3-4 years; people are just doing so much more, and multi-processing past 4 cores is becoming a thing (streamers/content creators/multimedia boxes).

People who argue that Ryzen isn't 'fast' enough are looking for a drag racer in a monster truck competition. We can look at graphs for days, but at the end of the day, Ryzen isn't a little tiny Honda that's riced out to 500 HP, nor is it a behemoth rendering chip that's not optimized for gaming. It's like.. somewhere in the middle lol.

Ryzen is buggy, people are finding inconsistencies, and months from now, if and when it matures, the people that nag and bitch will quietly go back to their holes, and what we will have is a chip that has effectively put itself between a gaming and workstation rig at a crazy value price. If you want to wait till it matures after a year or two, then by all means do it.

I think this will be a big battleground for Intel and AMD though, and AMD has a head start. I think Ryzen is a landmark chip, just like the XP 2500+ (if you remember). Since then I went Intel, but Ryzen to me is worthwhile to come back to. I'm just looking to see what boards come out that are better suited for this processor.

As far as your VR comment, even that platform isn't mature, but eventually it will benefit from having multiple cores/threads.
 
I think it looks like a great CPU. It would definitely benefit streamers and content creators. It's pretty much 95% of a 6900K at half the price. Who buys a $500 CPU to game at 1080p, anyway?
 
I think it looks like a great CPU. It would definitely benefit streamers and content creators. It's pretty much 95% of a 6900K at half the price. Who buys a $500 CPU to game at 1080p, anyway?

1080p gamers still make up 43% of Steam users, the biggest group to date. Some like using triple-monitor setups but aren't willing to shell out 3 grand on three-way SLI for triple-4K gaming lol.

I think the bigger question is: what sort of gamer would pay X amount less for a chip that can do both gaming and workstation tasks? And AMD answered that question.

This would be like making a graphics card that sits in between a GeForce/Radeon card and a FireGL/Quadro card. People would scream that it doesn't go any faster than their gaming cards.. but wait, it's up to par with the features of workstation cards at half the price? (And no, that is not the Titans.) It's an untapped market.
 
So... maybe the real money chip we're waiting on is an 8-core/8-thread part? Wondering if no SMT would allow a little extra clock? I mean, 8 physical cores is nothing to sneeze at, and might push the gaming performance to where it needs to be, or at least into the ballpark, while still being solid at the more compute-intensive work.
 