The Definitive AMD Ryzen 7 Real-World Gaming Guide @ [H]

I hear ya. All that [H]ard work for it all to become kinda moot with the new AGESA.
Everyone has to pick a moment, and the wait was long on AMD's end for anyone to do any testing. Maybe in a future test, do a 3600MHz RAM @ 4GHz run on a game or two to see what, if any, numbers change vs. this article. It would be a very welcome follow-up.

My opinion is that this will not affect GPU-limited, real-world gaming situations (like those we tested) to any great degree or change the playable gameplay experience. It might get you a few percent higher in very CPU-limited/bound scenarios, multitasking content creation workloads, and so forth, but in real-world gaming, as we have tested, I don't think it's going to be a "magical" boost in performance.

Typically memory bandwidth isn't something that greatly affects gaming performance past a certain point.
 
Full quote please, Kyle! I did mention, "Maybe in a future test, do a 3600MHz RAM @ 4GHz run on a game or two to see what, if any, numbers change vs. this article."
I do not think my fully quoted post has me at all believing everything I read on the internet. The fact that I suggested a game or two with high-speed RAM to augment your extensive [H]ard work should not be met with canned, misdirected nonsense about benchmarks.
Simply a request that many would like to see. Hopefully when you are more relaxed, you will consider it.
Well you know us, of course we will never test anything ever again.....
 
My opinion is that this will not affect GPU-limited, real-world gaming situations (like those we tested) to any great degree or change the playable gameplay experience. It might get you a few percent higher in very CPU-limited/bound scenarios, multitasking content creation workloads, and so forth, but in real-world gaming, as we have tested, I don't think it's going to be a "magical" boost in performance.

Typically memory bandwidth isn't something that greatly affects gaming performance past a certain point.
Well ya, we all know what they say about opinions.
Why so defensive, guys? You put in all the [H]ard work. Now we get:
"My opinion", "It might get", "I don't think"..... You have a solid article. What are you guys getting so sideways about?
 
I'd be very interested to see how much speed AMD gains if DRAM overclocks of DDR4-4000 are possible. Now that the latest AGESA is out, we may finally know.
They are not possible any more than they were before. You can set it, but you will almost never get it stable (unless you have a near-ambient chiller).
This is what you get in some games with Ryzen.
These actually are not Ryzen results; they're 7700K results cropped by MSI marketing :)
 
A lot of work went into this review, so I am going to apologize for what I am going to say next. I quietly read CPU reviews from a gamer perspective and have wondered why, for years now, reviewers keep creating scenarios completely antithetical to multi-tasking when reviewing multicore CPUs and their impact on gaming.

First off, I get the methodology of creating a clean Windows slate. I get that it is good practice and lowers variables, and I get that limiting what Windows is doing ensures that the only thing using the CPU (as much as possible) is the game being tested. However, we perpetually create unrealistic scenarios, scenarios that possibly only exist for professional gamers.

I've been gaming for most of my life, and the days of shutting down every app before I launch something are dead and gone. These days I game with 50 Chrome tabs open in the background, Windows Defender running, multiple chat programs, Steam, streaming software, Foobar2k, 3rd-party log monitors, gaming wikis, YouTube, etc.

Reviewers should be brainstorming the kinds of scenarios that could possibly be solved by a high core count. Here are a few personal ones:
1. I used to run 24 EverQuest windows from the same workstation. This is called "botting", and in these kinds of multi-game scenarios where you dedicate a core per process, you quickly see the CPU eclipse the GPU as the necessary resource.
2. I like to run server and client combinations from the same high-end workstation, such as a Minecraft server that I am also connected to. Perhaps it is rare, but I would like to think that many "enthusiast gamers" like to host their own servers. The upstream bandwidth to do these kinds of things easily exists these days, where it used to be the bottleneck. (Minecraft is also a game that doesn't really do much with the GPU.)
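(A minimal sketch of the core-dedication idea from #2, assuming Python with psutil installed; the jar path, JVM flags, and core numbers are all hypothetical and depend on your own box:)

import subprocess

import psutil

# Launch the server process (path and flags are placeholders)
server = subprocess.Popen(["java", "-Xmx4G", "-jar", "server.jar", "nogui"])

# Pin it to four logical cores so the game client keeps the rest;
# which logical cores map to which physical core depends on your topology/SMT
psutil.Process(server.pid).cpu_affinity([12, 13, 14, 15])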

Additionally, there are a number of games that appear to become CPU dependent as time goes on, e.g., turn 300 in any Civ game or space sim: Distant Worlds: Universe, Stellaris, etc.

The most interesting thing about the review you've written is the massive dip in Fallout 4. Obviously multiple cores did not help here; however, it was a situation that potentially touched on an actual CPU bottleneck instead of just a bunch of roughly paired GPU graphs.

The ultimate question, for me, still has not been answered, and it is a genuine one: can someone gaming with Ryzen gain a benefit from 8 cores over 4 cores, and will the next wave of much higher core counts be of any benefit?

The biggest jump in office computing was the dual-core CPU, not because it made what you were running any quicker, but because it off-loaded virus scanning to another core. If you closed down the virus scanner to benchmark the office PC, you would have completely missed the largest benefit of a multi-core system.
 
I must have missed it, but what RAM frequency was being run on the Ryzen and Intel rigs? I saw the noted frequency for the 2600, but missed the 1700X and 7700k frequencies.
 
A lot of work went into this review, so I am going to apologize for what I am going to say next. I quietly read CPU reviews from a gamer perspective and have wondered why, for years now, reviewers keep creating scenarios completely antithetical to multi-tasking when reviewing multicore CPUs and their impact on gaming.

First off, I get the methodology of creating a clean Windows slate. I get that it is good practice and lowers variables, and I get that limiting what Windows is doing ensures that the only thing using the CPU (as much as possible) is the game being tested. However, we perpetually create unrealistic scenarios, scenarios that possibly only exist for professional gamers.

I've been gaming for most of my life, and the days of shutting down every app before I launch something are dead and gone. These days I game with 50 Chrome tabs open in the background, Windows Defender running, multiple chat programs, Steam, streaming software, Foobar2k, 3rd-party log monitors, gaming wikis, YouTube, etc.

Reviewers should be brainstorming the kinds of scenarios that could possibly be solved by a high core count. Here are a few personal ones:
1. I used to run 24 EverQuest windows from the same workstation. This is called "botting", and in these kinds of multi-game scenarios where you dedicate a core per process, you quickly see the CPU eclipse the GPU as the necessary resource.
2. I like to run server and client combinations from the same high-end workstation, such as a Minecraft server that I am also connected to. Perhaps it is rare, but I would like to think that many "enthusiast gamers" like to host their own servers. The upstream bandwidth to do these kinds of things easily exists these days, where it used to be the bottleneck. (Minecraft is also a game that doesn't really do much with the GPU.)

Additionally, there are a number of games that appear to become CPU dependent as time goes on, e.g., turn 300 in any Civ game or space sim: Distant Worlds: Universe, Stellaris, etc.

The most interesting thing about the review you've written is the massive dip in Fallout 4. Obviously multiple cores did not help here; however, it was a situation that potentially touched on an actual CPU bottleneck instead of just a bunch of roughly paired GPU graphs.

The ultimate question, for me, still has not been answered, and it is a genuine one: can someone gaming with Ryzen gain a benefit from 8 cores over 4 cores, and will the next wave of much higher core counts be of any benefit?

The biggest jump in office computing was the dual-core CPU, not because it made what you were running any quicker, but because it off-loaded virus scanning to another core. If you closed down the virus scanner to benchmark the office PC, you would have completely missed the largest benefit of a multi-core system.

Except we already know that helps with Ryzen. Kyle was attempting to show us what the difference was in a pure gaming scenario, and that's useful information. We know that we'll see a benefit with more cores when it comes to multitasking, running background processes, etc. So if you're lazy about that sh*t (as I am, also), a 6- or 8-core CPU starts to look real nice. If you're a purist gamer, and you don't install a lot of extra sh*t - because it's a gaming build - then it's clear that Ryzen isn't for you, as per Kyle's testing here.
 
I don't see how these sorts of conclusions are made. We really are in the multicore era. The average user who works and games on his machine gets everything with Ryzen EXCEPT a 5% benchmark-queen status, which is completely meaningless; the only people I know who need and understand why they want high-refresh, high-FPS gaming are competitive CS:GO players. My cousin is a professional competitor in that game, and even he downclocked his 7700K from 5GHz because he couldn't notice a difference.

For the rest of us, say myself, we may do normal gamer things like use a Steam Link. I have an R7 1700 with a 1060 FE in a Node 202 - a pretty reasonable modern system for 2017, 8C/16T. With CPU encoding I'm able to stream a game to my TV over Steam Link using the max setting of 8 threads. This results in much lower latency and vastly superior image quality compared to NVENC, and especially compared to QuickSync or VCE. Because the CPU can handle it, I'm able to use the unlimited bandwidth setting too (100Mbit is the max for the Steam Link NIC).

Why all these review sites aren't glowing about Ryzen considering these sorts of benefits that can affect any gamer is truly beyond me. 8+ cores or bust for me from here on out. It makes my general-purpose work faster, multitasking better, and FPS in "real world gaming" higher, because I don't close everything to run ideal conditions like reviews do. "Real world gaming" articles like these need to note in big bold letters that this is absolute best-case-scenario stuff. Put more load on those already 100%-maxed 8 threads of the 7700K (which is what modern AAA titles do to it), and see how much girth it's got left then.

People can tell themselves their 7700K or 2600K is still great, but with a little thought it's clear the 4-core era is over on the enthusiast desktop.
 
As expected: no matter how in-depth you go, no matter how much work you put into investigating something, you'll still get people coming in with 1001 ideas about how their preferred vendor's product is actually better than represented if you just change x, y, z. And even if you do go back and look at those factors, they'll just come up with some others instead.
 
Yeah, that's it. We're attacking the authors or preferring a given company. I use an AMD CPU + NV GPU; that's a legit fanboy right there. It's never that maybe the analysis is fundamentally flawed or so overly specific in its scope that it ends up being misleading. That's always the case. Just ask anyone on the internet.

How ridiculous; that's why there are a lot of people who don't even bother posting, and we shouldn't - it may upset sensitive folks who can't handle the truth. Point blank, there isn't anything [H]ard about quad-core CPUs with an IGP where the other half of your cores and cache should be. Sorry you prefer that.

Everything uses the CPU, so you may want a core count worthy of 2017; that goes beyond gaming - but PCs are never solely for gaming (you're probably reading this on your desktop now), and if they are, you're doing it wrong. A PC isn't just a glorified Xbox that runs a single game by itself, no matter how many kids wish it were so.
 
Exceedingly thorough, thank you guys for this!

Very impressive showing from Ryzen at 4GHz vs. Sandy at 4.5GHz and Kaby at 5GHz.

Still puzzled that AMD didn't release higher-clocked, lower-core-count parts, but I assume that had something to do with the way the "modules" were configured for the quad-core parts (2x2 instead of 1x4) and/or marketing reasons. But as more D3D12/Vulkan stuff comes out that takes better advantage of multithreading, maybe we'll see extra performance out of the higher-core-count parts.

Also, I'm very interested to see how 2nd-generation Ryzen looks, if they do some optimizations to increase IPC and clock speeds. It's already done what AMD needed it to do, IMO - provide a good alternative to Intel hardware - so if they can close the gap enough to get the fence-sitters interested, we could all be in for another few years of good competition and performance increases.
 
What kind of hard drive was used for the test? I'm curious whether Intel Optane + a 2TB spinner can offer any sort of improvement in FPS.
 
You know... this thread is proof you can never make everybody happy. Kyle did a bang-up job answering a question that a lot of us cared about: how does a maxed out Ryzen compare to the best gaming CPU of today, and the best gaming CPU of yesterday in an OC'd state (because the regular reviewers don't do this). Because, you know, some people actually want an answer to that question. We knew it wasn't going to win against the fastest gaming CPU of today, but it's useful to see how much difference there really is. And it's very useful to see how it compares to the fastest gaming CPU of yesteryear, too. Answer was somewhere in between, though closer to yesteryear's fastest gaming CPU, than to today's.

We already knew Ryzen is great for a productivity and/or mixed-use scenario, and a heavy multitasking scenario. Kyle didn't need to prove that.

So thank you, Kyle and all [H] staff for doing this.
 
He did a great job. But I agree that if we are going to do real-world gaming tests, then the Ryzen system should have had 3200MHz C14 memory, which is easy to achieve, as well as anti-virus, browsers, chat apps, etc. in the background. If we were catering a review to the 10 pro gamers who need 144 FPS on a 144Hz monitor and have bare-bones Windows installs to compete, I get that. But for the other 99.99% of us, Ryzen will be faster in games on a NORMAL running Windows computer with the proper RAM. Trust me, I've tried all this over the past 3 months.
 
He did a great job. But I agree that if we are going to do real-world gaming tests, then the Ryzen system should have had 3200MHz C14 memory, which is easy to achieve, as well as anti-virus, browsers, chat apps, etc. in the background. If we were catering a review to the 10 pro gamers who need 144 FPS on a 144Hz monitor and have bare-bones Windows installs to compete, I get that. But for the other 99.99% of us, Ryzen will be faster in games on a NORMAL running Windows computer with the proper RAM. Trust me, I've tried all this over the past 3 months.

The main question is: will Ryzen be faster than a 7700K? In your scenario, I doubt it, so the article's conclusion stands.
 
No, my point is that with 3200MHz C14 or faster memory, and gaming in Windows with all those other apps and programs running, Ryzen will DEFINITELY be faster than a 7700K. I can play AAA titles at over 60 FPS WHILE all 16 threads are being used for Acronis image backups or anti-virus scans. The 7700K would drop to 20 FPS, like my old 2600K at 4.6GHz did. For a multi-tasker (which is EVERYONE but Twitch pro gamers), Windows will always be running a hundred programs - check your Task Manager, buddy - and Ryzen will be faster in games because there is always a ton of background stuff going on. Hell, your browser is inactive yet loading pictures and commercial video as pre-fetch. No one games in a vacuum except professionals, and that is very few people.
 
I am one of those rare high-FPS birds, which is why I use a GTX 1080 even at 1080p. Ryzen is a nice CPU, but at high framerates the 7700K can lead by nearly 40%.
 
It's not 1999. Most of the background tasks the average gamer has running while gaming aren't going to stretch a $330 CPU. The vast majority of games aren't taking full advantage of all 8 threads either. So unless you are looking at a niche of a niche, like professional streamers, all this discussion about needing extra multitasking capability (beyond 8 threads) in a review thread about gaming is, at best, irrelevant - or, at worst, deliberately trying to sidetrack the issue at hand.

The recommendation in this price bracket is the same as it always was. If you are primarily doing productivity work, get the 1700X. If you are primarily gaming, get the 7700K. It's a no-brainer. Any debate is only going to crop up if you do a mix, and that doesn't include the people who feel the need to run full-priority virus scans and a BOINC project while gaming and then complain about a slowdown.
 
It's not 1999. Most of the background tasks the average gamer has running while gaming aren't going to stretch a $330 CPU. The vast majority of games aren't taking full advantage of all 8 threads either. So unless you are looking at a niche of a niche, like professional streamers, all this discussion about needing extra multitasking capability (beyond 8 threads) in a review thread about gaming is, at best, irrelevant - or, at worst, deliberately trying to sidetrack the issue at hand.

The recommendation in this price bracket is the same as it always was. If you are primarily doing productivity work, get the 1700X. If you are primarily gaming, get the 7700K. It's a no-brainer. Any debate is only going to crop up if you do a mix, and that doesn't include the people who feel the need to run full-priority virus scans and a BOINC project while gaming and then complain about a slowdown.

Absolutely this. Unless you are attempting to deliberately swamp your processor with something like HandBrake, Windows and modern operating systems in general are very adept at handling background processing and threading.

The 1700X for a workstation and the 7700K for gaming seems like a very safe recommendation. Under some specific circumstances, like gaming + streaming for a casual streamer, the 1700X would likely be the better option. If you plan to be a "full time" streamer and dedicate all of your time to that, you should really look into a dedicated rig or hardware setup for lossless.
 
Typically memory bandwidth isn't something that greatly affects gaming performance past a certain point.

Which is borne out by the fact that quad-channel LGA 2011 systems don't curb-stomp dual-channel 115x systems in gaming. If games were very sensitive to memory bandwidth, no enthusiast would even consider dual channel.
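For anyone who wants the raw numbers behind that, here's a quick back-of-the-envelope in Python (peak theoretical figures; real-world throughput is lower):

# Each DDR4 channel is a 64-bit (8-byte) bus: peak GB/s = MT/s * 8 * channels / 1000
def peak_bandwidth_gbs(mt_per_s, channels):
    return mt_per_s * 8 * channels / 1000

print(peak_bandwidth_gbs(3200, 2))  # dual-channel DDR4-3200:  51.2 GB/s
print(peak_bandwidth_gbs(3200, 4))  # quad-channel DDR4-3200: 102.4 GB/s

If games scaled with raw bandwidth, doubling it like that would show up on every graph; it largely doesn't.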
 
You don't understand that Ryzen's Infinity Fabric, which communicates between the two 4-core CCXs, runs at half the RAM transfer rate, so on Ryzen, RAM speed DOES GREATLY affect gaming performance.
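To put rough numbers on that relationship (the half-rate rule is the point; treat the exact mapping as an approximation):

# Infinity Fabric runs at the real memory clock (MEMCLK),
# which is half the DDR4 effective transfer rate
for ddr in (2133, 2666, 2933, 3200):
    print(f"DDR4-{ddr} -> fabric clock ~{ddr // 2} MHz")

So stepping from DDR4-2133 to DDR4-3200 raises the CCX-to-CCX link clock by roughly 50%, which is why RAM speed matters more on Ryzen than on Intel's monolithic parts.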
 
One item that I didn't see addressed is why the 1700X was chosen for testing over the 1800X. (I can already hear Brent's moan of "Another round of testing? I'm not a robot!" but we can't get enough numbers, can we?) Even if the 1800X had been used, I don't think it would change the end result: you really can't tell that much of a difference between the CPUs. That's really useful information for me.

I've been on an FX-8350 for several years because for work I usually run two or three VMs at the same time doing small-scale enterprise development, and if I want to fire up a game at lunch I can without shutting anything down. With a kid in college and a mortgage, bang for the buck matters. For my needs, Ryzen (maybe even Threadripper) seems like it will allow me to have my cake and eat it too. If I needed a dedicated gaming rig, I'd look at the 7700K. I'd like both, but I'm cheap, and I'll probably have the college kid go that route.
 
It won't be getting fixed; there's nothing to be fixed. Yes, RAM helps, but Infinity Fabric is not a major issue if you really do your research. The CCX cache miss is a total myth meant for forumgoers who read "tech" sites.

Think of it this way to keep it simple: do you want Infinity Fabric, or do you want an IGP taking up half the die? Ripping that out of Intel's Kaby Lake/Coffee Lake lineup is not trivial.
There's no better, more scalable, flexible, cost-effective way to do it than AMD's Infinity Fabric, and it doesn't cause Ryzen to get destroyed by Intel HEDT either. Not a mistake - smart engineering. If you can do it better, show me the whitepaper detailing how.
 
I don't see how these sorts of conclusions are made. We really are in the multicore era. The average user who works and games on his machine gets everything with Ryzen EXCEPT a 5% benchmark-queen status, which is completely meaningless; the only people I know who need and understand why they want high-refresh, high-FPS gaming are competitive CS:GO players. My cousin is a professional competitor in that game, and even he downclocked his 7700K from 5GHz because he couldn't notice a difference.

For the rest of us, say myself, we may do normal gamer things like use a Steam Link. I have an R7 1700 with a 1060 FE in a Node 202 - a pretty reasonable modern system for 2017, 8C/16T. With CPU encoding I'm able to stream a game to my TV over Steam Link using the max setting of 8 threads. This results in much lower latency and vastly superior image quality compared to NVENC, and especially compared to QuickSync or VCE. Because the CPU can handle it, I'm able to use the unlimited bandwidth setting too (100Mbit is the max for the Steam Link NIC).

Why all these review sites aren't glowing about Ryzen considering these sorts of benefits that can affect any gamer is truly beyond me. 8+ cores or bust for me from here on out. It makes my general-purpose work faster, multitasking better, and FPS in "real world gaming" higher, because I don't close everything to run ideal conditions like reviews do. "Real world gaming" articles like these need to note in big bold letters that this is absolute best-case-scenario stuff. Put more load on those already 100%-maxed 8 threads of the 7700K (which is what modern AAA titles do to it), and see how much girth it's got left then.

People can tell themselves their 7700K or 2600K is still great, but with a little thought it's clear the 4-core era is over on the enthusiast desktop.

In 99% of cases it won't change a thing, because games are GPU bound and CPUs are mostly less than 90% busy, sometimes much lower - there is space in there for background tasks.
The only real scenarios that CAN affect performance are the ones previously mentioned: hosting your own game server or streaming video encoded on the CPU. Even listening to YT video/music in the background via the Steam overlay doesn't tax the CPU much; other players like Spotify, Tidal, or even AIMP/Winamp take at most a few percent. Comm programs like Discord may take another few percent, and one percent for Afterburner.
All in all, when starting a game I have 4-5% CPU utilization, and I'm running a lot of apps:
- Tidal
- Discord
- Skype
- VMWare tray
- f.lux
- Riva tuner + afterburner
- Teamviewer
- SteelSeries app
- Steam
- Process Hacker with 6 monitoring elements
- ThrottleStop
- ProcessLasso
- NetTime time synchronization
- FileZilla server
- ESET
- Disk encryption
- 80+ tabs in Chrome
- some other apps I will not mention here :)
Now, I did some testing recording idle CPU usage overnight and during part of a day when I was not at home, and never in that time frame did it exceed 10%; granted, it probably did not run an antivirus scan, as that would jump to about 12% for the scanner alone.
What I'm trying to say is that background tasks are just that - background - and unless you are seriously loading your CPU with something big like an active server or CPU encoding, real-life game testing will see no impact from them.
If you do such tasks regularly, you can buy an 8-core CPU blindly.
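(If anyone wants to repeat that idle-usage test, a minimal logging sketch with Python's psutil; the one-second interval and log filename are arbitrary choices:)

import time

import psutil

# Sample total CPU utilization once per second and log it;
# leave running overnight, then check the max/average afterwards
with open("cpu_log.csv", "w") as log:
    while True:
        pct = psutil.cpu_percent(interval=1.0)
        log.write(f"{time.time():.0f},{pct}\n")
        log.flush()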
 
Hah. So, memory-resident programs do not equal CPU utilization. That's been backed up by your scientific test, and it's been true since we first had personal computers in the late 70s.

Run a game like Battlefield 1, which can peg a 7700K at 100% on all 8 threads, then have any of the programs on that long list start pegging a thread or two, and see what happens to your FPS benchmark results then. There's no headroom. No one needs more than 640K or 4 cores. Where you choose an idle IGP, Ryzen buyers are saying they prefer 4 more cores and twice the cache.

YouTube would impact your gaming if you ran high-res without GPU decoding. For encoding, the GPU is vastly inferior, and even run-of-the-mill pedestrian Steam Link in-home streaming benefits greatly from encoding on the CPU. You guys keep on with your Intel IGP, it's pretty awesome. But the rest of us know that "better is better", and we're not downgrading to quad-cores anytime soon.
 
Hah. So, memory-resident programs do not equal CPU utilization. That's been backed up by your scientific test, and it's been true since we first had personal computers in the late 70s.

Run a game like Battlefield 1, which can peg a 7700K at 100% on all 8 threads, then have any of the programs on that long list start pegging a thread or two, and see what happens to your FPS benchmark results then. There's no headroom. No one needs more than 640K or 4 cores. Where you choose an idle IGP, Ryzen buyers are saying they prefer 4 more cores and twice the cache.

YouTube would impact your gaming if you ran high-res without GPU decoding. For encoding, the GPU is vastly inferior, and even run-of-the-mill pedestrian Steam Link in-home streaming benefits greatly from encoding on the CPU. You guys keep on with your Intel IGP, it's pretty awesome. But the rest of us know that "better is better", and we're not downgrading to quad-cores anytime soon.
Yes, exactly: I have a lot of programs in the background, but they do not burden my CPU; they don't even lower benchmark scores when testing 3D performance - so why would I cry for more cores?

Right, like my graphics settings in BF1 (or any other game for that matter, I don't have a 1080 Ti after all) are so low that the GPU is waiting for the CPU. Get real, friend: Battlefield is loading my CPU at less than 60%, so the rest won't happen.
And why would I run YT without hardware decoding? When gaming on my Nvidia card I can decode video in hardware on the IGP - no stress.
Ah yes, the magical streaming-while-playing card, where GPU encoding is "vastly" inferior to CPU encoding... Actually, I don't care about that; my friends or kids or anyone else can suffer a 10% drop in quality, or even a 30% subjective drop - when the time comes that it matters to me, I will do something about it. On the other hand, I have streamed from an R1700 box while playing CS:GO using OBS, and it is a nightmare playing with CPU affinities (I finally used Process Lasso) to force it onto the other CCX (away from the game), or else you get stuttering like all hell. I agreed in my post above (which you have not even read fully) that there are cases where Ryzen is best, but by no means is it a universal thing - only a few use cases when playing games, and outside of that a lot of semi-professional and professional apps.
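(That affinity juggling can also be scripted instead of clicked through in Process Lasso; a sketch with psutil, where the "obs" process-name match and the assumption that logical cores 8-15 are the second CCX are both guesses that depend on the chip and SMT layout:)

import psutil

# Move OBS onto the second CCX (assumed here to be logical cores 8-15,
# with the game left on 0-7) to avoid cross-CCX stutter
for proc in psutil.process_iter(["name"]):
    name = (proc.info["name"] or "").lower()
    if name.startswith("obs"):
        proc.cpu_affinity(list(range(8, 16)))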
I build custom systems for a living and have played with the R7 a lot. It is also on my "to buy for myself" list, but I will not glorify it, nor will I bend reality (as to real-life CPU usage) to justify migration; buying for the sake of buying is a great example of stupidity.
The R7 is mostly equal in pure gaming (5-10% is not a deal breaker for me) and much more powerful in multitasking than the 7700K, but buying it would mean the option of hardware-encoded streaming is no longer available to me, and I am loath to fight with OBS again to get it to work reliably - there were times I considered going back to FRAPS just to get it to work sensibly. Oh, and I don't do Steam Link.

In the end, Ryzen is a great CPU, but it is just not for everyone; keep that in mind while trying to rant.
 
A couple of things:

Great review and fair conclusion, especially as stated by Kyle.

WTH is the deal with having a multitude of Chrome tabs open? Some kind of contest?

And based on that review, I am not so sure there is a clear distinction between gaming and whatever else. The biggest takeaway would be: given equal pricing, would most not go with the higher core count?
 
Yes, exactly: I have a lot of programs in the background, but they do not burden my CPU; they don't even lower benchmark scores when testing 3D performance - so why would I cry for more cores?

Because no one made that point except for you. You created your own strawman then proceeded to beat it up.
Next.
 
I could see Raven Ridge being a lot better for gaming than Ryzen 7 if it ships with mature firmware and can clock into the mid-4's reliably.

The 4-core Ryzen 3 parts might be better for apps that like cache, but it doesn't seem like they will be binned well for gaming.
 
I have an overclocked 1080 Ti and a 4K monitor, and I noticed my 6600K OC'd to 4.5GHz at 80-95% utilization in 64-player BF1. So I sold that one and got a 7700K that could very nicely run at 5GHz. I was surprised when CPU utilization had only marginally gone down... perhaps to 80-90% utilization in BF1. Both CPUs ran great EXCEPT for little to no headroom for anything else running... or the random spikes in intense gameplay that would cause issues.

I never planned on getting a Ryzen, but after seeing that the prices on my new 7700K and a 1700X were identical, I returned the Intel and just got a new motherboard. I run the 1700X at 3.8GHz. The same 3200MHz memory which ran at 3200 on Intel runs at 2933 on Ryzen.

Well, in the exact same 64-player BF1 sessions, my CPU utilization is now at 30% most of the time. All three processors max out my GPU utilization at the settings I run in BF1.

Quality-wise I see no difference at all... still pegged at 60 FPS in 4K with identical settings... but no more issues with random spikes in gameplay action, which are of course impossible to replicate in a benchmark.

Overall, I'm happy I made the switch, just to future-proof and to have a PC that can run games the same AND do other things at the same time if needed.

I really liked the review, thank you. However, if there's one thing to add - perhaps I just missed it - it's CPU utilization while running the benchmarks.
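(For anyone wanting to capture that themselves, per-core utilization during a run is a single psutil call; the five-second sample window is an arbitrary choice:)

import psutil

# Per-logical-core utilization sampled over 5 seconds mid-game;
# pegged threads stand out immediately
for core, pct in enumerate(psutil.cpu_percent(interval=5.0, percpu=True)):
    print(f"core {core:2d}: {pct:5.1f}%")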
 
No, my point is that with 3200MHz C14 or faster memory, and gaming in Windows with all those other apps and programs running, Ryzen will DEFINITELY be faster than a 7700K. I can play AAA titles at over 60 FPS WHILE all 16 threads are being used for Acronis image backups or anti-virus scans. The 7700K would drop to 20 FPS, like my old 2600K at 4.6GHz did. For a multi-tasker (which is EVERYONE but Twitch pro gamers), Windows will always be running a hundred programs - check your Task Manager, buddy - and Ryzen will be faster in games because there is always a ton of background stuff going on. Hell, your browser is inactive yet loading pictures and commercial video as pre-fetch. No one games in a vacuum except professionals, and that is very few people.


I am all for Ryzen whooping on Intel and everything, but who the hell is going to be running Acronis image backups and anti-virus scans in the background all the time? Hell, I don't remember the last time I ran a scan manually, lol.

I am not a pro Twitch gamer... I have a couple Chrome tabs and a music player in the background; am I delusional to think those couple of apps would make Ryzen beat out a 7700K? LOL, nope. (Unless you have TweakTown or porn sites running in those tabs, then Ryzen would totally beat the 7700K by a landslide.)

Oh and great write-up Kyle/Brent, great article to read before heading to work this morning =)
 
I am all for Ryzen whooping on Intel and everything, but who the hell is going to be running Acronis image backups and anti-virus scans in the background all the time? Hell, I don't remember the last time I ran a scan manually, lol.

I run nightly incremental backups (not Acronis), and AV on-access scans are always running. Any process in the background that touches a file gets scanned - most AV works like that.
 
This is one of the best [H] articles I have read in years. Thank you for the hard work; it's illuminating. I think I'll stick with Intel for now, but I'm going to be keeping a serious eye on future Ryzen generations. It looks promising, and I'm always eager to fund competition to Intel if I can justify it.
 
Great review, guys! Even though I have that "upgrade" itch, my 4770K and GTX 1080 should keep me in good shape for 1440p gaming a while longer.
 
Great article. Thank you for putting the tremendous effort and time into this article. I should share my limited experience, since it was quite different from your result. I recently upgraded from an Intel i5 2500K (OC'd to 4.6GHz, 16GB DDR3) to a Ryzen 1700X (OC'd to 3.9GHz, 16GB DDR4). Both rigs run a single GTX 1080 Ti at 3440x1440 with max graphical settings. Same drivers, OS, and hardware.
- The Intel rig had an average frame rate in the 80's. Peaks would be into the low 90's. Dips would be in the 50's when there were a lot of explosions and firefights. It rarely reached the low 100's, and there was more subjective stuttering, likely due to the frame rate dips.
- The AMD rig hovers in the 90's to 100's most of the time. The peaks go up into the 130's. Dips stay in the 60's. Gameplay is smoother overall.
I was very surprised by my result, because I was anticipating similar performance; at 3440x1440, the GPU should be the limiting factor rather than the CPU. Any thoughts on this anomaly?
 
Great article. Thank you for putting the tremendous effort and time into this article. I should share my limited experience, since it was quite different from your result. I recently upgraded from an Intel i5 2500K (OC'd to 4.6GHz, 16GB DDR3) to a Ryzen 1700X (OC'd to 3.9GHz, 16GB DDR4). Both rigs run a single GTX 1080 Ti at 3440x1440 with max graphical settings. Same drivers, OS, and hardware.
- The Intel rig had an average frame rate in the 80's. Peaks would be into the low 90's. Dips would be in the 50's when there were a lot of explosions and firefights. It rarely reached the low 100's, and there was more subjective stuttering, likely due to the frame rate dips.
- The AMD rig hovers in the 90's to 100's most of the time. The peaks go up into the 130's. Dips stay in the 60's. Gameplay is smoother overall.
I was very surprised by my result, because I was anticipating similar performance; at 3440x1440, the GPU should be the limiting factor rather than the CPU. Any thoughts on this anomaly?

What anomaly? They were testing an i7, not an i5.
 
What anomaly? They were testing an i7, not an i5.

The i7 2600K has Hyper-Threading vs. none for the i5. As far as I know, BF1 does not utilize Hyper-Threading, so if the overall clock speed is similar, the performance between the i7 2600K and the i5 2500K should be similar, no? I will record my frame rate on the two systems again and see if my experience mirrors Kyle's.
 