The Definitive AMD Ryzen 7 Real-World Gaming Guide @ [H]

Excellent review!
I'm curious how an i5-7600K would compare. Do you know if any of the games tested take advantage of Hyper-Threading?
 
The i7-2600K has Hyper-Threading; the i5 does not. As far as I know, BF1 does not utilize Hyper-Threading, so if the overall clock speed is similar, the performance between the i7-2600K and the i5-2500K should be similar, no? I will record my frame rate on the two systems again and see if my experience mirrors Kyle's.

More cache as well.
 
Fantastic evaluation here.

Suspected this was the case, but there's no reason to upgrade my 4670K since I'm gaming at 1440p. Do you guys think it's more a symptom of the workload being offloaded to GPUs, lack of multi-threading support, a little of both, or something else that's allowing a six-year-old "mid-enthusiast" level CPU to largely hold steady with a new ultra-enthusiast CPU?
 
It's definitely both. Games just aren't designed for parallelism on the CPU. Some games do better at it, but you have to understand what exactly a game uses a CPU for, and that's primarily draw calls. One exception is strategy games, which use the CPU for AI, which is why it's not surprising to see strategy games scale well across multiple cores. But for your average game, most of the heavy lifting is on the GPU, and all that matters is whether the CPU is fast enough to deliver data to the GPU, hence why the i7-7700K does so well for high-frame-rate gaming.
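As a rough illustration of that split, here is a minimal Python sketch (all function names are hypothetical stand-ins, not from any real engine): per-unit AI work is independent and can be spread across a process pool, while draw-call submission has to happen in order on a single thread, so it mostly benefits from clock speed and IPC.

Code:
# Minimal sketch: per-unit AI parallelizes, draw-call submission does not.
# Everything here is a hypothetical stand-in, not code from a real engine.
from concurrent.futures import ProcessPoolExecutor

def update_unit_ai(unit_id):
    # Pathfinding/decision logic for one unit; independent of other units,
    # so it can run on any core.
    return unit_id * 2  # placeholder result

def submit_draw_call(obj_id):
    # Draw calls are recorded in order into a single command stream,
    # so this loop is effectively bound to one fast core.
    pass

if __name__ == "__main__":
    units = range(1000)
    with ProcessPoolExecutor() as pool:   # scales with core count (strategy-game AI)
        ai_results = list(pool.map(update_unit_ai, units))
    for obj in range(1000):               # serial: favors high clocks/IPC (most games)
        submit_draw_call(obj)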
 
It's great work of course, and I will never rebuke hard work.
It's always appreciated, so thank you [H]!
The average Joe doesn't have the funds to purchase and test all of this equipment without having to mortgage a house.
So being able to have access to these numbers is amazing.

My impression of the portrayal of these results truly reminds me of back when AMD released the Athlon, and Intel began to constantly repeat 'We're better, cause!' hoping it would be believed.
This is not an attack on or accusation toward you all, just my perception based on the test setup, comments, and presented opinions; but I don't let it sway the data. There are no #AlternativeFacts here.

My take on the results is that in this very specific control case, where the system OS and games are running at their potential peak, one of Intel's finely tuned best comes out on top by a tiny margin on average.
Then the unoptimized AMD offering, running unoptimized games, possibly not at its fullest potential due to memory, and at a full 1GHz less, comes in second with its second-best (on paper) available chip.

As an enthusiast who purchased and still uses an i7-920 from day one in 2008, I'm of the opinion that AMD did not win this match-up, but Intel lost it.

So, as a 'real-world' enthusiast, AMD is the way to go right now since there are no discernible gameplay differences and costs are similar. This becomes more true if AMD continues to cut prices on Ryzen and Intel does not respond.

In addition, there is the indeterminable shortening of lifespan on an overclocked 7700K (against the manufacturer's recommendation), and the question of how (if encountered at all, RNG ftw!) said temperature spikes may affect performance and life expectancy after a month, six months, or a full year of 20-40 hours of weekly gaming.

As time goes on, games and applications will hopefully make better use of the available Ryzen CPUs, which would allow for better future-proofing compared to the 2600K (which is why I'm not even counting it here), and who knows, in a few quarters we may be reading 'THE Definitive AMD Ryzen 7 Real-World Gaming Guide 2.0 @ [H]'. :)

Until then, I will continue to dream of owning 'a better computer' and when I have enough money, I will revisit the facts and make a decision based on them and my needs.

So should we all. . .
 
My take on the results is that in this very specific control case, where the system OS and games are running at their potential peak, one of Intel's finely tuned best comes out on top by a tiny margin on average.
Then the unoptimized AMD offering, running unoptimized games, possibly not at its fullest potential due to memory, and at a full 1GHz less, comes in second with its second-best (on paper) available chip.
How long would you like them to wait for AGESA fixes so that they could run the memory at a more optimal rate? On that same note, how long should they wait for games to become "optimized"? At some point, you have to call it and do the work with what you have today.

For the two processors of interest, these are probably the most common speeds achieved when overclocking, and this is, after all, an enthusiast website.

In addition, there is the indeterminable shortening of lifespan on an overclocked 7700K (against the manufacturer's recommendation), and the question of how (if encountered at all, RNG ftw!) said temperature spikes may affect performance and life expectancy after a month, six months, or a full year of 20-40 hours of weekly gaming.
That rep is just choosing to toe the company line regarding overclocking. Intel would love to sell you the additional warranty that covers overclocking. Are there any instances of these "spikes" causing premature failures?
 
Games are not a performance guide anymore. As another user said:
Intel lost. 25% extra frequency for a small margin of improvement over the Ryzen 7. Now enable background processes in the OS, because that is how people actually run their games.
Game developers have been parallelizing their games better.
 

Yes, that. All the reviews doing 720p low-detail gaming results are predicated on that being a future predictor of gaming performance. Most "tech sites" don't even know why they do that; they say they're isolating the CPU. But this method was created back when I was using 3dfx cards in the mid-'90s. It was valid for the long-lasting single-core era.

Of course, the reasons and goalposts are always changing to suit "tech" site agendas. But the most recent example we have for "future predictor of gaming performance" is actually Bulldozer vs Sandybridge. 8-core Bulldozer over the years has grown to be faster in modern gaming than 4-core Sandybridge.
Moar coars is the best we have to go on as a predictor of future performance. It sure as hell isn't 320x200 benchmarks. Things and times change, they have changed. The quadcore era is over, AMD killed it and mounted it on their trophy wall already.
Most people take a while to reevaluate their old religion before seeing they had a false god. If they have an agenda, they'll never change and live and die with their stupid ideas. Intel needs a couple years to come up with a real answer to Ryzen, but the 7700K ain't stopping nothing. Not even the 1700X. http://www.legitreviews.com/intel-core-i7-7700k-versus-amd-ryzen-1700x-14-game-cpu-showdown_192508/3

Sure, 1 fps lost, and I run something much, much weaker than a 7700K. What now?

Well, since you asked.

Now hook up a Steam Link to your TV, enable CPU encoding (best quality and lowest latency by far) with the max settings allowed in Steam (8 threads / uncapped bandwidth), and start playing Pac-Man, the one from 1980. Watch your system stutter and be brought to its knees. When done with that, try it again with Doom 2016 and watch your CPU burst into flames before your eyes.
 
Now hook up a Steam Link to your TV
I have neither a Steam Link nor a TV because both are useless to me. And even if I had a TV, I would just use the fucking HDMI cable I have and hook the PC up to the TV. Better quality and lower latency than any form of streaming, by far.
Watch your system stutter and be brought to its knees. When done with that, try it again with Doom 2016 and watch your CPU burst into flames before your eyes.
Since I neither can nor have a use for doing what you describe, 1 fps of cost it is. And full disclaimer: I am really just waiting for a good AM4 ITX board to pull the trigger at last.
 
So more cores is apparently the best predictor of future gaming performance, yet here we are, almost 6 years after Sandy Bridge, and the fastest gaming processor is still one with 4c/8t.

When you need to go through the mental gymnastics that you are doing, drastically changing the scenario and then exaggerating the impact of some of the new factors, that should tell you all you need to know about the quality of your argument.

This is a review on gaming performance. Normal background processes are not going to be a statistically significant factor for a 7700k.
 
Great article, guys; it pretty much solidifies my stance on waiting for Threadripper or Intel's Skylake-X/Coffee Lake updates before I decide what to do. I've gotten 4+ years out of each of my last two Intel systems, and I just don't have the confidence that this first generation of Ryzen has the clockspeed/IPC to be able to do the same.
 
Of course, the reasons and goalposts are always changing to suit "tech" site agendas. But the most recent example we have for "future predictor of gaming performance" is actually Bulldozer vs Sandybridge. 8-core Bulldozer over the years has grown to be faster in modern gaming than 4-core Sandybridge.
Moar coars is the best we have to go on as a predictor of future performance. It sure as hell isn't 320x200 benchmarks. Things and times change, they have changed. The quadcore era is over, AMD killed it and mounted it on their trophy wall already.
Do you have a source to back up that particular claim I bolded? The closest thing I could find was the following review: http://www.gamersnexus.net/guides/2898-amd-phenom-ii-cpu-revisit-in-2017-x6-1090t-1055t/page-3

The conclusion is exactly the opposite of your claim so if you have a source for that I would love to see it.

Now hook up a Steam Link to your TV, enable CPU encoding (best quality and lowest latency by far) with the max settings allowed in Steam (8 threads / uncapped bandwidth), and start playing Pac-Man, the one from 1980. Watch your system stutter and be brought to its knees. When done with that, try it again with Doom 2016 and watch your CPU burst into flames before your eyes.
Why would you choose to use CPU encoding? With Steam in-home streaming or Nvidia GameStream you are on your own internal network, which means you can typically run 50-80Mbps without issues on Ethernet and more than 20Mbps on decent wireless. At the aforementioned bitrates, the difference between GPU and CPU encoding is negligible in use.
 
I'll provide answers to people's questions, but not if you're just going to argue. If you're looking for actual information and may actually change your mind based on new information, fine, but I'm not going to get into a pissing match where nothing presented is accepted. Moving the goalposts won't receive any further answers from me; just talk to other people who have an agenda and don't legitimately want to discuss this stuff. Forums are usually cesspools of ignorance and fanboyism; I'm not sure how many times I've been told that my love for Founders Edition cards was misguided by people on forums who have never owned an FE card. I may be backing out of this discussion soon because it's almost certainly a complete waste of time. But if you're really interested and not just defending Intel out of some sort of mentally ill grudge, I'm happy to discuss.

Speaking of changing "why I'd need this" excuses to defend what one already owns:
I have neither a Steam Link nor a TV because both are useless to me. And even if I had a TV, I would just use the fucking HDMI cable I have and hook the PC up to the TV. Better quality and lower latency than any form of streaming, by far.

Since I neither can nor have a use for doing what you describe, 1 fps of cost it is. And full disclaimer: I am really just waiting for a good AM4 ITX board to pull the trigger at last.

A 'fucking' HDMI cable? Ryzen's 8-core extravaganza got you mad, bro!
So ultimately here you're simply endorsing less capable machines. You're saying, in so many words, that half your die being taken up by an IGP is OK with you. Half the cores and half the cache, with an IGP there instead, is a "fine setup" in your minds. Ryzen owners have the power to do it either way. They have options because they have dat AMD CPU girth.

But I'll go deeper into this subject.
So HDMI + dedicated HTPC = good enough for you. That's OK, but don't pretend there's no legitimate reason why anyone would want Ryzen or more cores. By saying "I don't do that," you didn't prove that Ryzen is useless compared to your Intel CPU, which is half IGP. You only proved that you're willing to backpedal on your heels to defend it. More is better, faster is better. Better is better.

Here's what you can't do with 4 cores and a long-range HDMI cable setup:
1. Run HDMI more than 50'; that's the limit for HDMI cables unless you start using signal repeaters. I've never tried repeaters, but I did use a 50' HDMI cable for years instead of a Steam Link.
2. Control it easily. USB has a max cable length too without a repeater. Good luck testing and getting reliable results out of 50'+ USB extension cables, or out of wireless that reliably works that far in open air or through walls.
3. Easily keep the resolution on your desk LCD and your TV matched or scaling properly. You'll have to find a program to conveniently enable/disable the TV when you want to use it, or you'll have a 2nd/3rd screen active while at your desk for programs to open up on.
4. This one is easier and quicker to do now that we have Windows 10, but easily switching audio outputs is also annoying.
5. Wake-on-LAN may be flaky with long USB extensions without some auxiliary power. Not sure, never did this one myself.
6. In sum, you lose the ability to get anywhere even close to the 328'+ that I can put between my PC and my Steam Link/TV with CAT6 in an easy and slick way. Or you get a max of 50' with HDMI and a bunch of problems to solve.

There are probably more issues involved that I'm forgetting. I moved to a Steam Link and never looked back, but I did what you suggested for many years and actually solved all these problems myself.

HDMI is inferior to a Steam Link unless you have a dedicated HTPC. If you want to live like it's still the dark ages and mind-numbingly set up a PC for every task in the house, go ahead. There are two types of PC users: kids who view it as an upgradable Xbox, and those who do more with it than treat it as a glorified Xbox. Ultimately you don't need a computer if you don't use one. Then you don't need more than 4 cores either. ;) With my setup, I can put my R7 1700 at my TV and plug it in, since it's got that amazing 65W TDP 8C/16T AMD IPC-per-watt efficiency, or I can use a Steam Link.
The point is that Ryzen owners have the power to choose. As long as you're using that quad-core Intel IGP abomination, you don't.

So let's all just stop pretending going from 4 to 8 cores isn't a huge deal. EVERYTHING your computer does uses the CPU. It's a huge deal. Ryzen is glorious. And if you're really willing to back up your hypothesis, don't buy that Ryzen that you want, and don't buy 6+ cores from Intel either. Stick to your guns for the foreseeable future. Buy yourself Intel's marvel of engineering, the i5-7640X with a 112W TDP and 4C/4T. Completely embarrassing, but it's good enough for you and half the people posting here, by your own admission.

On a friendly note though, I do have a Biostar X370GTN with an R7 1700 (and an 1800X previously, which is for sale), a GF1060FE, and a Samscum 960 Pro 1TB. The Biostar is rock solid; it ran my RAM at 3200MHz on the 1800X. I have no reservations recommending it and I'd buy it again. I am going to replace it, but only with an Asus for the additional USB ports they usually supply and the front-mounted M.2. Heat is far more of an issue with M.2 than these sites suggest; mine throttles all the time and I'm putting a heatsink on it. If you're willing to wait, I'd probably pass on that ASRock personally and just wait all the way to the finish line for an Asus mITX.
I decided to get in on the first batch of Biostar boards and then swap it out when a Z270 ROG feature-level board lands. The ASRock is nice too; if you don't care about what ROG boards usually bring to the table, I'd get the newly launched ASRock for sure. Just saying, maybe this will help you with your decision.

"But the most recent example we have for "future predictor of gaming performance" is actually Bulldozer vs Sandybridge. 8-core Bulldozer over the years has grown to be faster in modern gaming than 4-core Sandybridge."
Do you have a source to back up that particular claim I bolded? The closest thing I could find was the following review: GN LINK REMOVED

The conclusion is exactly the opposite of your claim so if you have a source for that I would love to see it.

The closest you could find is GN? Kyle and the boys here are much closer to being on-point than the clowns at GN. I don't click on GN links. :)
You do sound pretty reasonable, and not like Intel took advantage of your mother with Ryzen's release, so I'm happy to oblige and dig something up.

source0


Why would you choose to use CPU encoding? With Steam in-home streaming or Nvidia GameStream you are on your own internal network, which means you can typically run 50-80Mbps without issues on Ethernet and more than 20Mbps on decent wireless. At the aforementioned bitrates, the difference between GPU and CPU encoding is negligible in use.

To answer your question on why I choose CPU encoding for my in-home streaming

source0
source1

source2


There's probably more I'm forgetting but those were some sources. Yes, NVENC is the 2nd best choice. It has decent image quality in slower-moving games. It does not match CPU encoding with fast-action game streams nor does the latency come close.
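For anyone who wants to see the CPU-load/quality tradeoff outside of Steam's own streaming settings, here is a minimal sketch, assuming an ffmpeg build with NVENC support and a test recording named input.mp4 (both are placeholders I'm introducing for illustration, not anything Steam itself uses): it encodes the same clip once with x264 on the CPU and once with NVENC, using low-latency-style settings at the same bitrate.

Code:
# Minimal sketch: compare a CPU (x264) and a GPU (NVENC) low-latency encode of the
# same clip. Assumes ffmpeg with NVENC support is installed and "input.mp4" exists.
import subprocess, time

def encode(codec_args, out_name):
    start = time.time()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mp4", *codec_args, "-b:v", "20M", "-an", out_name],
        check=True)
    return time.time() - start

cpu_time = encode(["-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency"],
                  "out_cpu.mp4")
gpu_time = encode(["-c:v", "h264_nvenc", "-preset", "llhq"],  # low-latency NVENC preset
                  "out_gpu.mp4")

print(f"x264 (CPU): {cpu_time:.1f}s, NVENC (GPU): {gpu_time:.1f}s")
# Compare the two outputs in a fast-motion scene: at the same bitrate the CPU encode
# typically holds detail better, at a much higher CPU cost.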
Basically I'm a PC guy, have been since 1986. If willing to be mediocre, may as well use an Xbox (or stick with my old "good enough" ;) quadcore). I research what I'm doing and try to maximize everything I have. I also have no budget on PC parts, I buy exactly whatever I want. I'm never "jealous" of anything, I have what I want and also never do the "I own a Chevy so Chevy is the best" crap.

I use a Ryzen R7 1700 / GF1060FE / 960 Pro 1TB in a Node202 for technical and functional reasons, not fanboy reasoning. Best of breed is what fanboy bandwagon I'm always on.

So more cores is apparently the best predictor of future gaming performance, yet here we are, almost 6 years after Sandy Bridge, and the fastest gaming processor is still one with 4c/8t.

False. Many have bought into "tech" review website benchmark lies/misrepresentations based on outdated and inaccurate methodologies with no relation to how you use your system. And there were already posts in this thread directly contradicting your statement.

I like HardOCP because they've done their best to find ways to provide more relevant benchmarks with the "usable settings" concept. That's not exactly what I would do personally, but these guys definitely understand where the problem is with benchmarks and reviews and it's better than the alternative sites.

source0
http://www.legitreviews.com/intel-core-i7-7700k-versus-amd-ryzen-1700x-14-game-cpu-showdown_192508
source1

source2
 
The closest you could find is GN? Kyle and the boys here are much closer to being on-point than the clowns at GN. I don't click on GN links. :)
You do sound pretty reasonable, and not like Intel took advantage of your mother with Ryzen's release, so I'm happy to oblige and dig something up.
Thanks for linking the video; they definitely have some interesting results. In the video, it looks like they mostly referenced the 2500K for the comparison with Bulldozer. I would imagine the 2600K would hold up better in the testing they did, but it is interesting in any case.


To answer your question on why I choose CPU encoding for my in-home streaming
<snip>

There's probably more I'm forgetting but those were some sources. Yes, NVENC is the 2nd best choice. It has decent image quality in slower-moving games. It does not match CPU encoding with fast-action game streams nor does the latency come close.
Basically I'm a PC guy, have been since 1986. If willing to be mediocre, may as well use an Xbox (or stick with my old "good enough" ;) quadcore). I research what I'm doing and try to maximize everything I have. I also have no budget on PC parts, I buy exactly whatever I want. I'm never "jealous" of anything, I have what I want and also never do the "I own a Chevy so Chevy is the best" crap.

I use a Ryzen R7 1700 / GF1060FE / 960 Pro 1TB in a Node202 for technical and functional reasons, not fanboy reasoning. Best of breed is what fanboy bandwagon I'm always on.
That reddit post is a bit odd to me. I just can't say that it aligns with my own experience. I have one gaming PC along with three Nvidia Shields and use GameStream all over the house. My encoding time per Moonlight is almost always 1ms with a 1080 Ti.

I usually stream at 60-80Mbps and don't really notice any macro-blocking with fast motion. The only thing I have had trouble with is the occasional gamma differences in dark games which require adjustment in the menus.

In the past when I tried software encoding with a 4930k the quality was great, but the latency suffered compared to GPU encoding. I was keeping an eye on the CPU usage with Afterburner's OSD so I know I wasn't overwhelming it.

I actually have a Ryzen 1700x on its way for some testing with ESXi so I may have to bust out the Steam Link again and see if software encoding has improved at all.
 

Aye, that's why I wanted to be clear in my earlier posts that it's the i5/4T vs Bulldozer rather than the 2600K/8T. It's an important distinction, but it does provide the best evidence available that "more cores is the best indicator of future gaming performance" beats "720p/low-settings CPU-bound benchmarks are the best indicator of future gaming performance" as an argument.

On your streaming results: interesting. I ran my own tests using NVENC vs 8-thread CPU encoding and was able to duplicate similar (though better) results myself. I will say 1ms seems astonishing, and I'm not sure exactly what each tool is measuring; I may dig into this further based on what you're saying and sort it all out. Seems like a good topic for a tech site to cover, eh? ;) I think I know what people don't know they want yet for reviews and information, but I can't really turn a YouTube channel into a full-time job right now either. I will say, if you watch all those videos I linked, as you already know, NVENC is pretty great. If for some reason I didn't want my Ryzen 7 mITX setup today, I'd use a 7700 non-K with a Pascal card and NVENC myself. I couldn't resist playing around with this stuff upon Ryzen's release, though. Sounds like you're in a similar boat use-case-wise.

Damn. Shame you already have a 1700X on the way; I have my 1800X for sale for $400, lightly used for a week, with unopened paste/heatsink and, for some reason, a mousepad in the box... just stuff I'm never going to use now. I'll be interested in whatever tests and results you run when you get it set up.
 
Good stuff... but when are we going to get some more VR testing? I pretty much only do VR now, so while this is interesting, I really want to see how things shake out when there's some more CPU overhead at play in VR games. Maybe as some AAA titles get VR support they'll make their way in?
 
Now here's the sort of article we've needed for a long while!

I would've liked to see some infamously single-threaded CPU/memory-limited games in the benchmark roundup, though. Stuff like ArmA III, PlanetSide 2 and DCS World that are generally held back more by the CPU than the GPU, contrary to common gaming performance rhetoric.

Granted, PS2 won't be so easy to benchmark for obvious reasons, but if you can swing a BF1 multiplayer match, I'm sure you could work something out for that.

ArmA III and DCS are considerably easier to benchmark just for having singleplayer functionality, at the very least. Yet Another ArmA Benchmark even graphs it out for you!

Good stuff... but when are we going to get some more VR testing? I pretty much only do VR now, so while this is interesting, I really want to see how things shake out when there's some more CPU overhead at play in VR games. Maybe as some AAA titles get VR support they'll make their way in?
I'd like VR benchmarks too (doubly so for DCS and possibly IL-2: Battle of Stalingrad), but the whole dependency on head-tracking throws a big variable into the mix for real-world game testing.

That and I want to know if a typical Ryzen board's USB 3.0 controller hubs will play nicely with Oculus' infamously picky sensor cameras, particularly when it has to host three or four of them.
 
I'm wondering what the results would be on systems that have been running for several months and have a bunch of things installed and running. My guess is that the Ryzen would be faster due to having the extra cores running things in the background.


A couple of years ago I was wondering this same thing, and a guy was telling me I would do better with an i5 over an i7, that the i7 was just wasted money. Then another guy pointed out that many gamers run additional apps like TeamSpeak or Ventrilo and sometimes have reference docs or webpages open as they play. Some run their favorite music playlists.

I decided I'd rather spend a little more and have 4 cores and not need them, than only pay for two and need more.

I also think that it wouldn't be out of line to test systems in such a manner, because that really is how many people use their systems.
 
Now here's the sort of article we've needed for a long while!

I would've liked to see some infamously single-threaded CPU/memory-limited games in the benchmark roundup, though. Stuff like ArmA III, PlanetSide 2 and DCS World that are generally held back more by the CPU than the GPU, contrary to common gaming performance rhetoric.

Granted, PS2 won't be so easy to benchmark for obvious reasons, but if you can swing a BF1 multiplayer match, I'm sure you could work something out for that.

ArmA III and DCS are considerably easier to benchmark just for having singleplayer functionality, at the very least. Yet Another ArmA Benchmark even graphs it out for you!


I'd like VR benchmarks too (doubly so for DCS and possibly IL-2: Battle of Stalingrad), but the whole dependency on head-tracking throws a big variable into the mix for real-world game testing.

That and I want to know if a typical Ryzen board's USB 3.0 controller hubs will play nicely with Oculus' infamously picky sensor cameras, particularly when it has to host three or four of them.

I don't know that that's much different from a manual playthrough with mouse look; it's a similarly uncontrollable variable.

I'll say I've had zero USB-related issues with my Oculus; I do have a standalone PCIe USB controller installed (I have 10+ USB peripherals connected at any time), but the Oculus doesn't seem to care where it's plugged in so far. This would be interesting to test, though. I'm wondering if Kyle is saving the VR stuff for his new venture?
 
Now, I am glad that the review was done with 2933MHz RAM because it clearly showed the gaming experience was virtually the same between all three CPUs. Of course, faster RAM may increase the numbers somewhat, but it would not improve the gaming experience or what the user will get from it, from what I am seeing. Non-gaming benchmarks show little to no improvement with increased memory speed as well. So if you have a Ryzen rig and can get to 2933MHz memory, you will basically have max gaming performance from the memory end. CPU speed and the GPU probably become more important once you hit 2933MHz memory or higher for gaming.
 
I have seen certain games perform even better with faster RAM (like The Witcher 3), but it is definitely a diminishing-returns issue. You don't get a steady increase across a wide swath of games past 3000MHz.
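For a rough sense of why the returns diminish, here is a quick back-of-the-envelope calculation of theoretical dual-channel DDR4 bandwidth at a few common speeds (a sketch only; it ignores timings and real-world efficiency, which also matter a great deal for games):

Code:
# Back-of-the-envelope: theoretical peak bandwidth for dual-channel DDR4.
# 64-bit (8-byte) bus per channel, two channels; real-world throughput is lower,
# and latency/timings matter at least as much as raw bandwidth for games.
for mt_s in (2133, 2933, 3200, 3600):
    gb_s = mt_s * 1e6 * 8 * 2 / 1e9
    print(f"DDR4-{mt_s}: ~{gb_s:.1f} GB/s peak")
# DDR4-2933 -> ~46.9 GB/s vs DDR4-3200 -> ~51.2 GB/s: under a 10% difference,
# which lines up with the diminishing frame-rate gains past this point.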
 
I am confused as to whether I should get the Ryzen 7 1700 if I want to game at 1080p with a 1080 Ti. Will it be bottlenecked in all games? Should I just get a 7700K?
 
Did you read the conclusion statements?
 
I don't know that that's much different from a manual playthrough with mouse look; it's a similarly uncontrollable variable.

I'll say I've had zero USB-related issues with my Oculus; I do have a standalone PCIe USB controller installed (I have 10+ USB peripherals connected at any time), but the Oculus doesn't seem to care where it's plugged in so far. This would be interesting to test, though. I'm wondering if Kyle is saving the VR stuff for his new venture?
That's good to know. The Oculus community is still Intel-centered for obvious reasons, not to mention fixated on those particular Inateck cards when they could find cheaper alternatives with the same FL1100 chipset, and that one quad-channel StarTech card actually uses NEC/Renesas controllers.

I have to split the load up on my quad-sensor setup; two on the add-on FL1100 card (and the Rift itself, for that matter), two on the default Intel USB 3.0 controller for Z87/Haswell. Putting three on either with USB 3.0 is asking for "frame dropped" errors.

Beyond that, I have loads of other USB peripherals connected as well - a USB 3.0 hard drive on the Intel hub, a mouse (keyboard's on PS/2 because it's still a superior interface for that), a UPS (just so the system knows the battery status), a Cintiq Companion Hybrid, and a few USB 2.0 hubs because I'd never be able to connect all my game controllers at once otherwise - mostly flight sim stuff like my TM HOTAS Cougar + MFD Cougar setup that take up three USB ports altogether, but also my wireless Xbox One gamepad receiver. I CAN NEVER HAVE ENOUGH USB PORTS!

I have seen certain games perform even better with faster RAM (like The Witcher 3), but it is definitely a diminishing-returns issue. You don't get a steady increase across a wide swath of games past 3000MHz.
I suppose ArmA III's one of the big outliers, then? That game loves fast RAM in a way most people don't expect, such that I think it's actually half the reason people see big boosts going from Sandy Bridge to Kaby Lake with it - also implicitly going from DDR3 to DDR4.

There's no way that game could love all the single-threaded performance increases between architectures that much to warrant a nearly doubled frame rate!
 
This article smacks of Anti AMD sentiment/Intel Bias on the part of Brent Justice, which is confirmed by Kyle having to step in and add his somewhat more balanced perspective at the end. Perhaps Brent called Intel before he reviewed.
 
Ranger... you're not going to get much mileage posting dumb shit like that [if it was a joke/sarcasm... I missed it].

What I take away is that in 8 of 10 games, Ryzen is only somewhere between less than 1 frame and 4 frames behind the 7700K at 1080p. It seems that after 2 months Ryzen is indeed settling in, and you now need some pretty dedicated testing to see any difference between the two systems.

Given how conspicuous they are by their absence ...not what the usual AMD thread crappers were hoping for!
 
Agreed, and I would think most on this forum game above 1080p to begin with, which makes any of the CPU choices good enough for gaming. I thought that review was outstanding and actually nails what a user will get. I see no bias in this review, just cold hard facts.
 
I did. The RX 480 and GTX 1060 were slower on the Ryzen than on the i7 at 1080p, so this implies I can't pair a GTX 1080 Ti with a Ryzen, as the gap would only worsen, but then there's the fact that the Ryzen AM4 platform will be supported longer. Hence I am confused as to which to get.
Here is what I said.

Summary


The narrative that "Ryzen is horrible for gaming" simply does not ring true overall. In most of our examples here today, we have seen Ryzen put up competitive framerates overall. Only when you start to look at 1080p gaming does there really seem to be any appreciable real-world gaming differences.



The gaming demographic that really needs to be most concerned with this would be those folks looking to capture extremely high framerates at 1080p and 1440p with 144Hz screens. Of course, there might be some other "twitch gamers" who are looking for extreme framerates as well, and more gigahertz and IPC are what you need to focus on when it comes to a CPU.


If you are using your system for "gaming only," a highly clocked Intel CPU is still the way to go. If you do any kind of encoding or content creation, or anything that can take advantage of multiple cores with a box that is also your gaming rig, the Ryzen 7 is right up your alley.


Desktop gaming is still largely GPU-limited, even when looking at a lot of 1080p scenarios with low-end enthusiast GPUs. I would be remiss if I did not mention our six-year-old Intel Core i7-2600K; quite frankly, it makes Intel and AMD look bad on the gaming front.
 
Nope. I stand corrected. Apologies to Brent, it was a very good article.
Thank you for taking the time to read the article and making a fair comment. I honestly do not mind criticism if it is fair.
 
I can't believe AMD is at the top of my list for a new build after reading both Ryzen articles... Are these new AMD setups rock-solid stable at 4GHz clocks over "extended periods of time" like our Intel setups have been? Last time I had an AMD setup, at least one blue screen a month was typical. I've got to have something that's bulletproof while overclocked, even if it costs me more.
 
I've used a lot of AMD and Intel, and that's a legit concern. Not a lot of reviews are touching on the topic, but the days you recall are long gone. I have both an 1800X and a 1700, used with a Biostar board (fairly maligned), and they're rock-solid stable. I didn't OC my 1800X, but it hit 4.0GHz all the time on its own thanks to XFR. Ryzen is as much an SoC as anything else, reducing reliance on the chipset. I had the same fear but couldn't resist 8C/16T in the mainstream. No regrets; I wouldn't even think twice about doing it again. For air-cooled mini-ITX I'd use my R7 1700; for ATX I'd use an AIO with the 1800X or a 16C/32T Threadripper. I'd say the biggest question mark is the USB for VR, though with an ATX build you can always add a USB PCIe card if necessary.
 
Question... Wouldn't a simple, crude way to make gaming go multi-core be to have multiple instances of the program running concurrently, each dealing with a portion of the screen?
 
That was almost the idea behind the original 3dfx SLI implementation: the top half of the screen rendered by one Voodoo card, the bottom by a second one. Since it was application-agnostic and took place after the game engine, it worked. That early implementation required exactly the same cards with the same clocks to keep everything in time as much as possible (with vsync off). Doing the same concept with a game engine is difficult due to keeping each portion of the game in sync; if one thread is off time, it's a huge mess. At this point most engines are splitting the work into different threads for audio/physics/rendering, but it's still not easy due to the same sync issues. I'm a web developer, so I know a bit about code, and while I've never programmed a game engine, I can assure you that dividing a 1080p screen into 4 quadrants and somehow keeping correlated physics/video in time would be quite a nightmare. Anything is possible, but I doubt you'll see anyone attempt it. Rather, they'll do what Valve did with Source years ago: split the threads up into physics, draw calls, audio, input, etc. Renderers are parallelized, but I don't know enough about them to speak with any authority beyond what I've read about DirectX over the years.

The shorter answer is that it's already extremely complex; adding that sort of parallelization makes it that much harder, which is why it's taken a couple of decades just to get where we are today. Notice how the game engine market is shrinking: the bar is rising, and rolling your own with all these issues is prohibitive.
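To make the subsystem-per-thread idea concrete, here is a minimal Python sketch (all names are made up for illustration, not from Source or any real engine): a physics thread hands finished ticks to a serial render loop through a queue, which is the sync point where one thread "being off time" turns into stutter.

Code:
# Minimal sketch of the subsystem-per-thread split: physics simulates on its own
# thread, the render loop consumes one finished tick per frame. Illustrative only;
# names are made up, not from any real engine.
import threading, queue, time

updates = queue.Queue(maxsize=1)   # physics -> renderer handoff, at most one tick ahead

def physics_thread(frames):
    for tick in range(frames):
        # pretend simulation step for this frame
        updates.put({"tick": tick, "t": time.time()})
    updates.put(None)              # sentinel: no more frames

def render_loop():
    while True:
        state = updates.get()      # sync point: renderer stalls here if physics is late
        if state is None:
            break
        print("submitting draw calls for frame", state["tick"])  # serial, single thread

t = threading.Thread(target=physics_thread, args=(5,))
t.start()
render_loop()
t.join()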
 
Has anyone done any tests with games that are single-thread-limited by nature (e.g. Factorio on large maps with lots of stuff built on them; it can use 2 threads, but those are for other tasks)? A lot of games like Total Annihilation or Supreme Commander can be CPU-bound late game.

There are like 3 games I play that are very hard CPU-limited due to being mostly single-threaded on the main thread, which sways me towards an Intel 7700K at 5GHz. But on the other side of it, everything else I do on the PC would benefit from a Ryzen (or a 10-core Threadripper soon, so 4x8GB would be easier to use), and I could just eat the 10% loss in IPC vs the 7700K for the gain of 8/10 Ryzen/Threadripper cores.

I am coming from an i7-920/24GB RAM, so a 7700K or Ryzen is still 2x faster than what I have now in terms of single-threaded performance (but multi-core, the Ryzen is many times faster and less likely to get bogged down).

And as others have posted, under normal use you would not have a clean PC after you have installed all your software and a year has passed (Steam, Discord, antivirus + Malwarebytes especially, and so on), which would likely impact a 4-core 7700K. Background stuff is likely not going to affect a 6-8 core Ryzen as much as it would an Intel 4-core.

I'm just waiting for Threadripper to see how it lines up with Ryzen, given it has 2x 8 cores under the lid (I'm looking at the 10-core version as it's clocked very close to 4GHz and supports quad-channel memory, but I'm more interested in making 4x8GB work).
 
What you guys have to understand is that with clean installs you get a baseline, and in the case of this site and the review we are posting in, it keeps the testing fair and consistent. I'm not sure there is a need for the argument, since Kyle's conclusion pretty much states that between the 3 systems there is no discernible difference to the end user behind the screen. As far as having the extra cores goes, numerous users have stated that their multiplayer gameplay was greatly enhanced coming from their 4c/8t Intel CPUs (pre-7700K).
 
I guess if you're playing games that are using 4 or more threads, then yes, 8 cores is very likely going to make things run smoother, as other system tasks are just not going to affect the game at all, and even a program that you're working in could run smoother when you've got a CPU-intensive task running.

I'd still like to see games that are CPU-bound by nature (not GPU-bound at all),
like most simulation games:

Cities: Skylines when you've got all tiles unlocked and a massive city
Factorio late game (just join a multiplayer server late in the game with lots of stuff on it and save it locally)
Total Annihilation / Planetary Annihilation / Supreme Commander (if you get to the point where the unit count is bogging the CPU down)
 
Nice review, although I feel it's a bit premature to call it "definitive" and "once and for all" since, as you state, it's only representative for now.
Also, it could be interesting to see how a Ryzen 5 with two fewer cores (and $100 less) compares at the same clock speed, since its IPC is the same as (or marginally better than) that of Ryzen 7.
And a full 8MB of L3 for a mere 4 cores. I wonder as to its effect.

As per the story, the extra threads ~don't help atm.
 