AMD Ryzen 1700X CPU Review @ [H]

They can't really do that; that's the bulk of their GPU sales. In any case, until the bandwidth limitation is circumvented it shouldn't be much of a problem, but it will keep their APU performance in check with what Intel has right now. On top of that, they need to iron out the issues with RAM frequencies, because those are going to be hard on the APUs.
Considering we probably won't see any APU till the fall at the earliest, they have plenty of time to work that out. Then again, they had plenty of time to launch Ryzen in a better state as well, lol.
 
The pic posted on the previous page is an FPS chart. KL is short for klatek, which is Polish for frames, so yes, those minimums look great.
 
Considering we probably won't see any APU till the fall at the earliest, they have plenty of time to work that out. Then again, they had plenty of time to launch Ryzen in a better state as well, lol.


I think they will work it out, or motherboard manufacturers will have another revision of their boards to fix what ever is going on.
 
In FO4 it's due to the single-threaded nature of that game; everything runs on 2 or 3 cores at best, and it requires high single-thread performance to be acceptable. That was one of the few games that forced me to upgrade to my 6700K and OC it to 4.8 GHz, as I'm one of "those" 1080p high-refresh-rate peasants.



That would basically require a new engine. Nothing much can be done beyond forcing a game to ignore SMT.



Great, you game a lot on an FX platform, but how much of that experience has been with an Intel platform? Your experience with gaming performance means nothing if you haven't compared the hardware first hand, side by side, without that AMD bias.

You are starting to sound like ManofGod, who switched from AMD to Intel and told everyone his computer was faster at everything, even installing apps, then sold the machine and downgraded to an FX platform because he couldn't stand using an Intel platform, as if he were sinning or being unfaithful to AMD. After that, he said he couldn't tell any difference between his FX 8300 and the OC'd 6700K in any of his tasks. Then he upgraded to a Ryzen platform, and suddenly everything is faster again? Pretty much meh; that's exactly what blind bias causes, for both Intel and AMD fanboys.

I have A LOT of active Intel and FX platforms, and I defend both where they are good. I have used both AMD and Nvidia GPUs, and I still can't understand why people have to be so biased toward one side or the other that using the competitor's platform feels wrong to them. You fall into the same category as him.

If you want to present your gaming experience as hardware expertise and debate on that basis, you should at least have used both platforms equally often, because the two sides are entirely different.

Hey, at least you remembered me. :) Let's see, do I try to defend myself when I do not need to, or do I give you a big you-know-what and move along? Hmmmmmm.......

Oh, and the 6700K was not really an upgrade but a short-term purchase. (It would not last for what I do on computers.) Had I purchased the 5820K, I probably would have switched to that, but now I do not need to. Besides, I would have been better off not upgrading at all until now, since it became a waste of money at that time.

I will probably upgrade my FX 8350 at work to a 1700 with a B350 motherboard, but I cannot imagine doing that until late summer or early fall. It depends on RAM prices at that point.
 
I expect this too. I think OEMs will be happy with the stock clocks and performance for the price and power usage. It's simply plug it in and sell it, and they don't have to do much. That will probably be the win for AMD: selling cheap 8-core systems will be a good marketing point for OEMs, especially in the more economical models.

APUs based on Zen will be pretty good as well. I think AMD will gain some market share back with Zen. It's just a good CPU; not as great as Intel at getting you every frame in every game, though that will likely improve a little as the platform matures. If I were building a system, I would seriously consider the 1700 model and run it at 3.9 or 4 GHz. I am not a hardcore gamer and I am not after every frame, as long as the gameplay experience is there, which seems to be the case.

I think AMD did well here; in a sense, the initial inventory seems to be mostly 1700X and 1800X.

Because the people who game but aren't that hardcore about everything probably jumped on those. I would have done the same because I am impatient. But if all three were available, I wouldn't bother with anything but the 1700 model; I'd clock it up and save myself some money. Lol


Regardless of the platform's initial hiccups, it looks like they did sell out the initial inventory on these. It's a good chip for the vast majority and decent for gamers.

Cool thing is, I am also on a platform now where, if I want to upgrade in the future, all I have to do is swap my CPU out; no board or RAM upgrade required. The X370 chipset is supposed to be a long-term investment, at least from what I understand.
 
Was there a power/heat usage section coming? Or did I miss something?

So in a nutshell, if you were deciding whether to go on eBay and buy a 2600K or a new 1700X, you have an option..... Well, I'm certainly going to wait for the mobile chips to come out, because I've enjoyed my A9 more than the last two Intel offerings I've had, but I'll stick with my 4770K for some time, I think.
 
In FO4 it's due to the single-threaded nature of that game; everything runs on 2 or 3 cores at best, and it requires high single-thread performance to be acceptable. That was one of the few games that forced me to upgrade to my 6700K and OC it to 4.8 GHz, as I'm one of "those" 1080p high-refresh-rate peasants.



That would basically require a new engine. Nothing much can be done beyond forcing a game to ignore SMT.



Great, you game a lot on an FX platform, but how much of that experience has been with an Intel platform? Your experience with gaming performance means nothing if you haven't compared the hardware first hand, side by side, without that AMD bias.

You are starting to sound like ManofGod, who switched from AMD to Intel and told everyone his computer was faster at everything, even installing apps, then sold the machine and downgraded to an FX platform because he couldn't stand using an Intel platform, as if he were sinning or being unfaithful to AMD. After that, he said he couldn't tell any difference between his FX 8300 and the OC'd 6700K in any of his tasks. Then he upgraded to a Ryzen platform, and suddenly everything is faster again? Pretty much meh; that's exactly what blind bias causes, for both Intel and AMD fanboys.

I have A LOT of active Intel and FX platforms, and I defend both where they are good. I have used both AMD and Nvidia GPUs, and I still can't understand why people have to be so biased toward one side or the other that using the competitor's platform feels wrong to them. You fall into the same category as him.

If you want to present your gaming experience as hardware expertise and debate on that basis, you should at least have used both platforms equally often, because the two sides are entirely different.
Guess you aren't remembering right as far as ManofGod and his FX went. He said that the difference wasn't that huge, as he gamed at 4K when he gamed. But he was doing far less of that these days and didn't see the huge investment being worth it, so he returned to the FX to tide him over. Again, a personal choice you obviously don't share, but that is the essence of the term personal choice: it doesn't have to make sense to anyone else.

As far as gaming on Intel, yes I have, and I am no fool. I have often recommended i7s and i5s when someone has no real bias toward AMD or Intel. But with that said, i3s are terrible for true gaming or any desktop use other than light tasks and Facebook games. I have never led a single person to believe anything other than actual facts, not some half-baked crap backed by slanted reviews or cherry-picked benchmarks.

Now, after stating that, did you have any real topic for debate, or just some random desire to be an ass like half the arrogant Intel elitists here? Sorry, not trying to be too harsh here yet, so maybe you didn't convey your message right. But since you have no evidence of what you allude to, maybe you can explain it better, or is my previous statement spot on?
 
Anyone order from B&H before? I just ordered a MSI Tomahawk from them. It showed as in stock and then right after my order went through it shows up as a "new item - coming soon". So either I got probably the very last one they had or they were already out of stock and the system hadn't caught up yet. Guess I'll be finding out on Monday.
 
Anyone order from B&H before? I just ordered a MSI Tomahawk from them. It showed as in stock and then right after my order went through it shows up as a "new item - coming soon". So either I got probably the very last one they had or they were already out of stock and the system hadn't caught up yet. Guess I'll be finding out on Monday.

I've ordered things from them off and on throughout the years, solely because they had the best price at the time, and I have no complaints.
 
Was there a power/heat usage section coming? Or did I miss something?

So in a nutshell, if you were deciding whether to go on eBay and buy a 2600K or a new 1700X, you have an option..... Well, I'm certainly going to wait for the mobile chips to come out, because I've enjoyed my A9 more than the last two Intel offerings I've had, but I'll stick with my 4770K for some time, I think.

Waiting for the firmware updates and OS updates is a good idea. With these CPUs, when you get lucky and hit a perfect optimization you can have 3200 MHz DRAM and stable core usage, but trying to tweak further can throw the system out completely, and reverting doesn't get you the same result back. The boards are very bad at the moment.
 
Anyone order from B&H before? I just ordered a MSI Tomahawk from them. It showed as in stock and then right after my order went through it shows up as a "new item - coming soon". So either I got probably the very last one they had or they were already out of stock and the system hadn't caught up yet. Guess I'll be finding out on Monday.
I order stuff from them all the time. Never had any issues myself.
 
Right now, the 1700 is by far the best deal for non-gamers and 1440p+ gamers. There is no arguing that. If the 1600X can reach 4.5 GHz thanks to its fewer cores, it may be a very compelling choice for ALL types, including 1080p gamers.
 
I think that the "less than stellar" gaming performance that the new Ryzen CPUs provide is due to a new architecture that is different than what Intel offers in Haswell, Broadwell/Broadwell-E and Skylake, and because many games still make heavy use of the FPU for physics calculations and other things. AMD's position is that heavy FPU computation should take place on the GPU itself, but developers have to write their games so that they work best on a wide variety of platforms, especially on two and four core CPUs.

To put it in simple terms, because AMD thinks that GPUs are better for heavy floating point workloads, they designed the FPU in Ryzen to be narrower than Intel's designs. It's a tradeoff and a gamble in order to increase the L2 cache to 512KB per core, as opposed to Intel's 256KB per core. AMD wanted to keep the cores small.

So no, gaming performance won't improve with drivers, or BIOS/uEFI updates, or anything like that. It will improve once new games come out that are optimized for the Zen architecture. These games will be coded to do floating point calculations on the GPU.

Just my two cents, so please guys, tell me if I'm wrong.
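To put rough numbers on the "narrower FPU" point above: Zen 1 splits 256-bit AVX operations across its two 128-bit FMA units, so per-core peak FMA throughput is half of Broadwell-E's. A back-of-the-envelope sketch; the per-cycle figures and clocks are my own assumptions from public microarchitecture descriptions, not from this thread:

```python
# Theoretical peak double-precision throughput: cores * FLOPs/cycle/core * clock.
# Assumed figures: Zen 1 = 2x128-bit FMA units (8 DP FLOPs/cycle),
# Broadwell-E = 2x256-bit FMA units (16 DP FLOPs/cycle).

def peak_gflops(cores, flops_per_cycle, ghz):
    """Peak GFLOPS on paper, ignoring memory bandwidth and thermal limits."""
    return cores * flops_per_cycle * ghz

ryzen_1800x = peak_gflops(cores=8, flops_per_cycle=8, ghz=3.6)
core_6900k = peak_gflops(cores=8, flops_per_cycle=16, ghz=3.2)

print(f"1800X ~{ryzen_1800x:.0f} GFLOPS vs 6900K ~{core_6900k:.0f} GFLOPS")
```

Real game code rarely sustains anywhere near peak, but it illustrates why an FPU-heavy inner loop could favor the wider design.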
 
What makes people think devs will optimize for Ryzen? Intel has an even bigger CPU market share than Nvidia does GPU share.
 
What makes people think devs will optimize for Ryzen? Intel has an even bigger CPU market share than Nvidia does GPU share.

They probably won't, but I was just pointing out that it would be the only way for gaming performance to improve on Ryzen.

As I said in my comment, most devs optimize for the most common processors because they want to sell to the broadest audience. Not everyone can afford high end parts, but most can afford to buy games.
 
What makes people think devs will optimize for Ryzen? Intel has an even bigger CPU market share than Nvidia does GPU share.
It mostly seems Windows scheduler related. Perhaps some MS VC++ optimizations can make it in as well down the road. So I don't think those are unrealistic. Ryzen seems to fare very well in Linux, which seems to already have the necessary changes in.
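The scheduler angle is concrete: the 1800X is two four-core CCXes, and migrating a thread across the CCX boundary pays a cross-fabric latency penalty. A toy sketch of the topology question a scheduler has to answer; the logical-CPU numbering (SMT siblings adjacent, CCX 0 holding cores 0-3) is an assumption for illustration:

```python
# Map a logical CPU to its physical core and CCX on a hypothetical 1800X layout:
# SMT siblings numbered adjacently (logical 0-1 = core 0, 2-3 = core 1, ...),
# cores 0-3 on CCX 0, cores 4-7 on CCX 1.

def topology(logical_cpu, smt_ways=2, cores_per_ccx=4):
    core = logical_cpu // smt_ways
    ccx = core // cores_per_ccx
    return core, ccx

def same_ccx(cpu_a, cpu_b):
    """True if two logical CPUs share an L3 without crossing the fabric."""
    return topology(cpu_a)[1] == topology(cpu_b)[1]

print(same_ccx(0, 7))   # core 0 and core 3, both on CCX 0 -> True
print(same_ccx(0, 8))   # core 0 (CCX 0) vs core 4 (CCX 1) -> False
```

A scheduler (or a manually set affinity mask) that keeps a game's handful of busy threads on one CCX avoids the cross-CCX hop entirely.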
 
It mostly seems Windows scheduler related. Perhaps some MS VC++ optimizations can make it in as well down the road. So I don't think those are unrealistic. Ryzen seems to fare very well in Linux, which seems to already have the necessary changes in.
Ryzen does even worse in Linux gaming (which is almost always CPU limited, because YAY for shitty ports) than in Windows.

And yes, reportedly it should have had all the necessary stuff in it since 4.10.

EDIT: http://www.phoronix.com/scan.php?page=article&item=amd-ryzen-cores&num=2

Holy shit
 
I think that the "less than stellar" gaming performance that the new Ryzen CPUs provide is due to a new architecture that is different than what Intel offers in Haswell, Broadwell/Broadwell-E and Skylake, and because many games still make heavy use of the FPU for physics calculations and other things. AMD's position is that heavy FPU computation should take place on the GPU itself, but developers have to write their games so that they work best on a wide variety of platforms, especially on two and four core CPUs.

To put it in simple terms, because AMD thinks that GPUs are better for heavy floating point workloads, they designed the FPU in Ryzen to be narrower than Intel's designs. It's a tradeoff and a gamble in order to increase the L2 cache to 512KB per core, as opposed to Intel's 256KB per core. AMD wanted to keep the cores small.

So no, gaming performance won't improve with drivers, or BIOS/uEFI updates, or anything like that. It will improve once new games come out that are optimized for the Zen architecture. These games will be coded to do floating point calculations on the GPU.

Just my two cents, so please guys, tell me if I'm wrong.


Excellent theory, and the most interesting I've heard so far. If true, it raises the question of why AMD designed Ryzen this way when the best GPU they offer is only good for 1080p gaming.

Also, wouldn't this issue scale linearly? An issue at 1080p comparable to the issue at 1440p and 4K? I mean, if the FPU were the root cause, wouldn't the lackluster performance show at any resolution, instead of only being observable when you remove the GPU bottleneck?

Thoughts?
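One reason it wouldn't scale linearly: each frame costs roughly max(CPU time, GPU time), not their sum, so a CPU deficit only shows once the GPU stops masking it. A toy model (all millisecond figures invented for illustration):

```python
# Frame rate is limited by whichever of CPU or GPU takes longer per frame,
# so a slower CPU only registers when the GPU's per-frame cost is lower.

def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

CPU_A_MS = 6.0  # hypothetical faster CPU: 6 ms of game logic per frame
CPU_B_MS = 8.0  # hypothetical slower CPU: 8 ms per frame

for label, gpu_ms in [("1080p", 5.0), ("4K", 20.0)]:
    print(label, round(fps(CPU_A_MS, gpu_ms)), "vs", round(fps(CPU_B_MS, gpu_ms)))
# At 1080p the 2 ms CPU gap is visible (167 vs 125 fps);
# at 4K both CPUs land on the GPU's 50 fps.
```

Under this model, a fixed CPU-side cost per frame disappears entirely from the benchmark once the GPU's frame time exceeds it, which is exactly the pattern the reviews show.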
 
It is a bit of a puzzle at the moment. Logically it should scale linearly, but it's a new, less than perfectly optimized platform. The fact that it does not scale linearly is a great sign that it will be ironed out. It seems like there is more than one issue at the moment.
 
It is a bit of a puzzle at the moment. Logically it should scale linearly, but it's a new, less than perfectly optimized platform. The fact that it does not scale linearly is a great sign that it will be ironed out. It seems like there is more than one issue at the moment.

I don't think so. If it is the FPU, then it can't be fixed per se, just masked, like AMD wants, by having people turn the GPU into the bottleneck.
 
I don't think so. If it is the FPU, then it can't be fixed per se, just masked, like AMD wants, by having people turn the GPU into the bottleneck.

It is not FPU related; it is power-state related and a consequence of the choice of 14nm LP, which I don't think AMD will carry over to the next update. If it were so universally broken, then why are there something like 12 prominent gaming titles where the CPU can easily match and even beat comparable Intel parts? The issues will be debugged; how fast, I can't really say, but the product launched at least 2 months too early.
 
It is not FPU related; it is power-state related and a consequence of the choice of 14nm LP, which I don't think AMD will carry over to the next update. If it were so universally broken, then why are there something like 12 prominent gaming titles where the CPU can easily match and even beat comparable Intel parts? The issues will be debugged; how fast, I can't really say, but the product launched at least 2 months too early.
Show me a single title where Ryzen beats the 6900K beyond margin of error, let alone 12. And if you reference Joker's video again, you've got to address whether he was honest at all first.
 
It does not have to beat Intel by a mile. The fact that it already trades blows with it and has higher minimums is more than good enough.
 
It does not have to beat Intel by a mile. The fact that it already trades blows with it and has higher minimums is more than good enough.

Could you link the higher minimums? Broad selection, please.
 
Could you link the higher minimums? Broad selection, please.
Wait... oh right, because it shows AMD doing well, you "don't know of it." It has been posted and discussed a few times already. Even how the 7700K has a far greater spread in frame times than the 1800X has been shown numerous times, but I am sure you are unaware of that also.
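For reference, the "minimums" these reviews report are usually the 99th-percentile frame time (equivalently, the 1% low FPS) rather than the single worst frame. A minimal sketch of that computation; the sample frame times are made up:

```python
# 99th-percentile frame time: sort the frame times and take the smallest value
# that 99% of frames are at or below; 1% low FPS is just its reciprocal.

def percentile_ms(frame_times_ms, pct=99):
    ordered = sorted(frame_times_ms)
    # nearest-rank method: smallest value covering pct% of samples
    rank = max(1, -(-pct * len(ordered) // 100))  # ceiling division
    return ordered[rank - 1]

# A mostly smooth run with a few spikes (fabricated numbers):
frames = [8.0] * 96 + [12.0, 14.0, 20.0, 25.0]
p99 = percentile_ms(frames)
print(f"99th-percentile frame time: {p99} ms -> 1% low: {1000 / p99:.0f} fps")
```

This is why a CPU can lose on average FPS yet win on minimums: the percentile metric rewards consistency, not peaks.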
 
It mostly seems Windows scheduler related. Perhaps some MS VC++ optimizations can make it in as well down the road. So I don't think those are unrealistic. Ryzen seems to fare very well in Linux, which seems to already have the necessary changes in.


Gaming in Linux? No, it doesn't; it mirrors what happens in Windows. So no, it's not a Windows scheduler problem.
 
Great review. I still have a 2600K that runs at 4.5 GHz day in, day out. Good to see that thrown in for comparison.
My question is this: which VR setup were you testing on, the Rift or the Vive?
I'm interested in VR, and if I can get away with using my 2600K setup, it will be much easier to afford the VR.
If I need a new CPU, motherboard, and RAM, it's no longer affordable.
 
The CL14 TridentZ I have isn't on the QVL for the B350 MSI.

Any suggestions on how to handle that? Never dealt with a situation like this. I know memory can be finicky. Should it at least POST for me?

EDIT: lol neither is my 1TB 960 EVO m.2. This may get interesting.
 
The CL14 TridentZ I have isn't on the QVL for the B350 MSI.

Any suggestions on how to handle that? Never dealt with a situation like this. I know memory can be finicky. Should it at least POST for me?

EDIT: lol neither is my 1TB 960 EVO m.2. This may get interesting.
Boot with one stick and set looser timings, then put the other sticks in.
Try different slots if it's still a pain.
 
Wait... oh right, because it shows AMD doing well, you "don't know of it." It has been posted and discussed a few times already. Even how the 7700K has a far greater spread in frame times than the 1800X has been shown numerous times, but I am sure you are unaware of that also.

No, I asked because I haven't seen that that's the case.

http://techreport.com/review/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed/5

[Attached charts from the TechReport review: frame-time results for GTA V, Deus Ex: Mankind Divided, and Watch Dogs 2]
 


Of course he doesn't know, because he won't look at anything that shows otherwise, or it goes in one ear and out the other. Now watch, you will try to reason your way around this one. And we all know that rationalizing something is just placating the problem.
 
Then next year, or the year after, when performance graphics cards deliver what today's Titan X or Ti does and much more, you will be CPU limited at 4K. What happens then? A new CPU in 2 years?

This is the problem with "I only game at 4K, so the 1080p results don't matter." They do matter: they show you what happens when the GPU is not the limiting factor, and every generation you get a 50% improvement per bracket in GPU performance, unless you buy only AMD GPUs, which have shown they can't keep up in a timely manner.
Well, it doesn't really work that way, does it? First, it takes new game engines for a real push in requirements. So a new $500 card that is as fast as a Titan X will just be that much better than the previous $500 cards, but the benchmarks wouldn't change as much; it means better settings at 4K. Second, for the most part it looks like a latency issue where AMD just can't keep feeding information to the video card at the rate Intel can. The fact that Ryzen generally has a small but measurable lead in high-res games also makes me wonder: if there are legitimate CPU bottlenecks in future games, would Ryzen be better suited for them, while just not being great at feeding simple tasks at insane rates like 100+ and 200+ FPS? Ryzen isn't exactly hurting when games put real pressure on the CPU, performing as well if not better in situations that truly test the platform as a whole.
 
Well, it doesn't really work that way, does it? First, it takes new game engines for a real push in requirements. So a new $500 card that is as fast as a Titan X will just be that much better than the previous $500 cards, but the benchmarks wouldn't change as much; it means better settings at 4K. Second, for the most part it looks like a latency issue where AMD just can't keep feeding information to the video card at the rate Intel can. The fact that Ryzen generally has a small but measurable lead in high-res games also makes me wonder: if there are legitimate CPU bottlenecks in future games, would Ryzen be better suited for them, while just not being great at feeding simple tasks at insane rates like 100+ and 200+ FPS? Ryzen isn't exactly hurting when games put real pressure on the CPU, performing as well if not better in situations that truly test the platform as a whole.


Games are not done that way, at least not the graphics portion. Feature-wise, yes, but not in how much they push the GPU. Let's say today I start making a game.

I look at today's best hardware and make sure what I'm planning will run at 30 fps. I know it's going to be GPU limited, and the CPU is going to get hit hard too. I expect per-clock instruction throughput to go up, but I'm not planning on CPUs doubling their processing power over the two (actually three) generations before the game is released; for GPUs, I am planning on double the performance in many tasks over those generations.

So by the time the game is released, the GPU-bound scenarios should drop on higher-end cards, and lower-end mainstream cards can run the game fine too. By dropping settings, older-generation cards should have no problem either.

As for the CPU, any CPU from the last 5 years should be able to run the game.

And this is why the average upgrade cycle for CPUs is 4.5 years while for GPUs it is 2.5 years.

So effectively, next-generation games will be GPU bound on older processors; that is the norm. But as GPU tech moves faster than software development, the GPU-bound scenarios drop, until the next next-gen games come out.
 