AMD Ryzen 1700X CPU Review @ [H]

Your numbers are off. R7 1700 is $329 everywhere. i7-7700K is only $299.99 at Microcenter. It's $349 at Newegg where the R7 1700 is $329.

You're right. $350 stuck in my head for some reason. 7700k is currently $292 at Staples.
 
As you get future, more powerful cards, the GPU bottleneck even at higher resolutions starts to drop away.

People can just ask themselves how the FX-8150 performs today vs. the 980X while they keep pretending CPU performance doesn't matter. Just wait... yes, wait, and it will only get slower and slower.

[Image: "AMD FX-8150 Performance Revealed, On Par with Intel's Core i7-980X" benchmark chart]
 
If that is the case, this is not an easy solve; they will have to work with MS to fix it, and it's just a band-aid, not an all-around fix.

NUMA node cache coherency problems on-die? There's an innovation we could've done without. :facepalm:
 
Because the 8-core chips carry higher margins, so all of their (presumably) limited silicon is going to where the money is. I'm sure they're gradually building up chips with bad cores for the 6- and 4-core parts, but I'd think almost all of their effort is going to the higher-margin parts right now.

Higher margin for a smaller market? If that's what they want, why throw away such a huge margin, since AMD demonstrably shows the 1800X is competitive against the 6950X? Why move the price point of the 1700 and 1700X close to the price point of the 7700K? It's obvious to me that the pricing is anchored by Intel's 6700K/7700K pricing, because that's what AMD wants to compete against (for volume). The 1800X pricing is in turn anchored by the 1700 and 1700X pricing. I will never understand why a company would not attack a larger market if it can. It's not because of limited volume if you know how big GlobalFoundries is. The 14nm process has been in volume production since late 2014.
 
RYZEN is a GREAT CPU. If you're unhappy you can buy an i7-6850K for $600. :rolleyes:
 
Well, that settles that. Was looking forward to building an AMD machine but I guess I'll have to stick with my 6700K.
 
Because resolution is the only thing that has to do with wanting a higher core CPU...did you think before you posted that?
Your point? It matches or beats Intel in encoding and everything else, other than low-res gaming. Yes, I did think before I posted that. You just missed all the reviews.
 
I mostly agree.

There is a large subset of the gaming crowd these days that insists on very high frame rates on 144Hz Free/G-Sync screens.

For them this will be a measurable difference in their real world gaming. I question - however - whether it is meaningful. I find games more than sufficiently smooth and playable as long as they don't ever drop below 60fps. There is some evidence to suggest that at the borders higher framerates are noticeable, but as you start climbing above 60fps the diminishing returns set in very quickly, and by 90fps any further returns are pretty much negligible.

I'm not sure how long you've been gaming, but I go back to 1980.

I remember the 3dfx Glide API. Back then the target was a sustained 24fps, the same frame rate as a movie. We used to game in the dark, since 24fps looks pretty bad in a lighted room.

So it's all relative. My rig does a sustained 60fps with anything I play. So I'm good.
 
So, just trying to make sense of the reviews. If I game at 6000x1200, the Ryzen should be fine, correct? The poor scores at lower resolutions: do they affect anything other than gaming at 1080p or below, and even at 1080p, is there a huge difference? Also, my buddy just got a 4K monitor since prices are on par with 1080p. No concerns with Ryzen there, are there? Does the price per performance still work out in Ryzen's favor?
 
You play at 1080p on a 40" monitor? :confused: 1080p is a peasant resolution in 2017. It's a fact.

That's true. But LCD tech is a dead end that I have spent enough money on. By the time I can afford a 4K HDR OLED screen, 4K-capable GPUs will have hit their stride.
 
Sounds like a convenient excuse. I won't believe it unless multiple developers confirm it.

Check out https://software.intel.com/en-us/articles/optimization-notice:
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Simply put, if you're writing a game (or anything, really) for PC, using the Intel compiler will give much better performing code on Intel chips. They make this statement, by the way, under protest. Intel was caught many years ago deliberately compiling things in a way to cripple AMD chips and settled in court over it.

That said, when you're an AAA game developer and 80% of your target market is running Intel, it's a no-brainer to use Intel's compiler for free (well, zero programming cost) gains. So, in turn, it makes sense for a customer right now to go Intel.

It would be great if AMD were to similarly put out a compiler (or even better, contribute to open source ones) that will optimize for their own chips to balance the scales, but doing that costs money that they'll never get if customers right now continue to go Intel.

It's a spiral.
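For illustration, here's a minimal sketch (GCC/Clang on x86, using __get_cpuid from <cpuid.h>) of the kind of vendor-string check a runtime dispatcher can make. This is not Intel's actual dispatcher code, just the general idea that was objected to: keying code paths on "GenuineIntel" rather than on the feature bits alone.

```cpp
// Sketch of a vendor-based dispatch check (GCC/Clang, x86 only).
// NOT Intel's dispatcher; it only illustrates the idea of branching on the
// CPUID vendor string instead of on feature bits such as SSE2/SSE3/SSSE3.
#include <cpuid.h>
#include <cstdio>
#include <cstring>

static bool is_genuine_intel() {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return false;
    char vendor[13];
    std::memcpy(vendor + 0, &ebx, 4);   // CPUID leaf 0 returns the vendor
    std::memcpy(vendor + 4, &edx, 4);   // string in EBX, EDX, ECX order
    std::memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';
    return std::strcmp(vendor, "GenuineIntel") == 0;
}

int main() {
    // A feature-based dispatcher would test the actual instruction-set bits
    // here instead; keying on the vendor string is what the lawsuit was about.
    std::printf("vendor check: %s\n",
                is_genuine_intel() ? "GenuineIntel path" : "generic path");
    return 0;
}
```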
 
NUMA node cache coherency problems on-die? There's an innovation we could've done without. :facepalm:


Well, you have the four MESI states, and when a line is in S or I, which is the multithreaded case where work has to move between cores, latency should be the last thing you have to think about as a programmer, because it really shouldn't be a problem. Now, if the cache is too small to hold typical workloads, that creates huge headaches, because you are no longer pulling from the cache directly attached to that core; pulling from another cache increases latency considerably, stalling the core until it gets the information it needs. That can't be fixed by developers unless every single developer wants to rework all their engines. Does that sound likely for 18% of the market?

Easier to have MS do it in the OS, right? And even that is still tricky. It's not a straightforward fix.
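For what the developer-side band-aid would look like, here is a minimal, hypothetical Linux sketch using pthread_setaffinity_np: pin the worker threads to one CCX so their shared data stays in that CCX's L3. The assumption that logical CPUs 0-7 make up the first CCX is mine and would need to be verified on a given box with lstopo or /proc/cpuinfo.

```cpp
// Hypothetical sketch: keep a thread (and the threads it spawns) on one CCX
// so its shared working set stays in that CCX's L3. Linux only; the 0-7
// logical-CPU mask for "CCX 0" is an assumption that must be checked, since
// SMT sibling numbering varies between systems. Build with: g++ -pthread
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <pthread.h>
#include <sched.h>
#include <cstdio>

static int pin_to_first_ccx() {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    for (int cpu = 0; cpu < 8; ++cpu)      // assumed CCX 0 logical CPUs
        CPU_SET(cpu, &mask);
    return pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask);
}

int main() {
    int rc = pin_to_first_ccx();
    if (rc != 0) {
        std::fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);
        return 1;
    }
    // On Linux, threads created after this point inherit the affinity mask,
    // so the whole worker pool stays on one CCX.
    std::printf("pinned to assumed CCX 0 (logical CPUs 0-7)\n");
    return 0;
}
```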
 
So, just trying to make sense of the reviews. If I game at 6000x1200, the Ryzen should be fine, correct? The poor scores at lower resolutions: do they affect anything other than gaming at 1080p or below, and even at 1080p, is there a huge difference? Also, my buddy just got a 4K monitor since prices are on par with 1080p. No concerns with Ryzen there, are there? Does the price per performance still work out in Ryzen's favor?

At high res the CPU is a good gamer, and you won't see much difference from Intel's top offerings at that spec. You need a stronger GPU, so 4K gaming suits it, as you put your money where it matters most: a top-end GPU.
 
I'd caution Intel fanbois against celebrating too early. Those of us from the 90s who remember games needing processor patches, particularly for the Athlon, know this is routine for a new architecture, which we really haven't seen in a very long damn time.

This is not routine for a new architecture.

I remember those days as well. You have to consider the underlying reasons why things were that way back in the day. Often, the days when processor patches were beneficial were before DirectX and the APIs that prevent hardware-level access by software. CPUs back then also supported different instruction sets and, in some cases, had particularly pronounced weaknesses with some of them. In this case I don't think anything can be done unless there is a specific issue that needs to be addressed in the Windows scheduler. There are a lot of reasons why you don't see processor-specific patching in games anymore: the nature of Windows, the known quantities of the game engines, and the cross-licensing agreements on instruction set compatibility between Intel and AMD all go hand in hand to prevent this sort of thing.

Believe it or not, we've seen changes in architecture on the Intel side significant enough that "processor patches" would have been needed in games if that were still how things worked. Despite the fact that it wasn't a huge performance improvement over Haswell or Devil's Canyon, Skylake was a new architecture (according to Intel) with different design goals in mind than earlier CPUs. Nehalem and Sandy Bridge were vastly different from Core 2, and yet we saw no game patches for processor compatibility at that time.

The Athlon in the 1990s was a different beast because of how the architectures were in those days. In the 1990s, AMD processors were generally reverse-engineered Intel CPUs and, due to certain changes, didn't work properly with some software. This was a non-issue by the time later Athlons rolled around.
 
So, just trying to make sense of the reviews. If I game at 6000x1200, the Ryzen should be fine, correct? The poor scores at lower resolutions: do they affect anything other than gaming at 1080p or below, and even at 1080p, is there a huge difference? Also, my buddy just got a 4K monitor since prices are on par with 1080p. No concerns with Ryzen there, are there? Does the price per performance still work out in Ryzen's favor?

Ryzen isn't getting faster at higher res. At best you just hide the CPU limitation for a while.
 
At 1440p the game becomes entirely GPU-bottlenecked, so any advantage or disadvantage the CPUs have goes away.

If it isn't getting faster, then why does it beat everything but the 7700K at 1440? Shouldn't it remain roughly in the same performance hierarchy?
 
Lay off the fanboy wars; the Frenchmen have found a reason why Ryzen behaves so weirdly in games and compression benches:
http://www.hardware.fr/articles/956-22/retour-sous-systeme-memoire.html

Looks like the issue is the CCX choice; once again AMD gets bitten by its clustering.

Basically, anything that does not fit in a single CCX's L3 is as good as being located outside of L3.

I can't read that or take the time to translate it at work right now. If I'm getting the gist of things, it's as I said: certain architectures handle certain workloads better than others. It's as simple as that. AMD's CPUs aren't as good as Intel's at gaming. That's hardly surprising. I think we need to see more stuff at GPU-limited resolutions and see if there is a real difference before anyone really calls Ryzen a complete turd for gaming.

If a design decision in the architecture led to poor performance in gaming then people who expect huge gains from patches or microcode updates are in for massive disappointment because you can't redesign the CPU with software.
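If you want to see the working-set effect hardware.fr describes for yourself, here's a rough, hypothetical pointer-chase sketch (not their methodology; the sizes and iteration count are arbitrary choices of mine). It walks a single-cycle random permutation at several working-set sizes; on a part with 8 MB of L3 per CCX, you'd expect the per-load latency to step up well before the full 16 MB is exhausted if data outside the local CCX's L3 is effectively a miss.

```cpp
// Rough pointer-chase sketch: measure average load latency as the working
// set grows past what one CCX's L3 can hold. Sizes/iterations are arbitrary.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

static double chase_ns(std::size_t bytes) {
    const std::size_t n = bytes / sizeof(std::size_t);
    std::vector<std::size_t> next(n);
    std::iota(next.begin(), next.end(), std::size_t{0});
    // Sattolo's algorithm: a single-cycle permutation, so the chase really
    // touches the whole working set instead of a small sub-cycle.
    std::mt19937_64 rng{42};
    for (std::size_t i = n - 1; i > 0; --i) {
        std::uniform_int_distribution<std::size_t> pick(0, i - 1);
        std::swap(next[i], next[pick(rng)]);
    }

    std::size_t idx = 0;
    const std::size_t iters = 10'000'000;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < iters; ++i)
        idx = next[idx];                       // serial, dependent loads
    auto t1 = std::chrono::steady_clock::now();
    volatile std::size_t sink = idx;           // keep the loop from being optimized away
    (void)sink;
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / iters;
}

int main() {
    // Sizes straddle the 8 MB of L3 per CCX on the 8-core Ryzen parts.
    for (std::size_t mib : {2, 4, 8, 12, 16, 32})
        std::printf("%3zu MiB working set: %.1f ns per load\n",
                    mib, chase_ns(mib << 20));
    return 0;
}
```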
 
So, just trying to make sense of the reviews. If I game at 6000x1200, the Ryzen should be fine, correct? The poor scores at lower resolutions: do they affect anything other than gaming at 1080p or below, and even at 1080p, is there a huge difference? Also, my buddy just got a 4K monitor since prices are on par with 1080p. No concerns with Ryzen there, are there? Does the price per performance still work out in Ryzen's favor?
You won't have any issues at 4K. 1080p is fine, I guess, but I don't see any CPUs running 144fps in new games if you have a 144Hz monitor. Don't give in to the fanboy wars. Some games are performing lower than expected but are still very playable. If he has a 4K monitor, I don't see an issue there.
 
If it isn't getting faster, then why does it beat everything but the 7700K at 1440? Shouldn't it remain roughly in the same performance hierarchy?


Bottlenecks don't work on an all-or-nothing principle; even within each frame of rendering, the bottleneck can shift between the GPU, the CPU, VRAM, system RAM, ROPs in the GPU, TMUs in the GPU, ALUs in the CPU, etc.
 
Not sure why anyone would expect a 4.5GHz KBL to not continue to be the winner over a Ryzen in games. I know I didn't expect that. To me, the fact that the Broadwell-E 6800K/6850K win out over it in that application is cause for a bit of disappointment.
I was not expecting Ryzen to beat Kaby Lake, but I did not expect it to lag nearly 20% behind.
 
So is the best value to buy a $329 1700 and overclock it to 4.0GHz? If we want an AMD gaming rig, that is.
 
Sounds like a convenient excuse. I won't believe it unless multiple developers confirm it.

It is a fact of technology that software these days is optimized (on desktop/laptop operating systems) for Intel-based processors. AMD makes Intel-compatible processors, but that doesn't mean they handle every optimization with the same degree of efficiency, and there will be some things that AMD's architectural differences can do better than Intel's if the code is optimized and written to make use of those differences.

This isn't even a point of debate, it's simply how things actually work in this day and age with respect to the CPUs.
 
If it isn't getting faster, then why does it beat everything but the 7700K at 1440? Shouldn't it remain roughly in the same performance hierarchy?

Probably attributable to platform differences; even at 1080p the differences are pretty much within the noise floor of the benchmark.
 
You won't have any issues at 4K. 1080p is fine, I guess, but I don't see any CPUs running 144fps in new games if you have a 144Hz monitor. Don't give in to the fanboy wars. Some games are performing lower than expected but are still very playable. If he has a 4K monitor, I don't see an issue there.
At 4K right now it performs fine, but as newer GPUs remove the bottleneck for higher framerates even at 4K, this CPU will bottleneck hard. FPS games, even newer ones, run at 144Hz on an i7-7700K paired with a GTX 1080. Obviously a game like the new Deus Ex doesn't. But here is the thing: games look better in motion over 90Hz, and this CPU struggles to do that even at 1080p. That is a big deal to lots of gamers.
 
At 1440p the game becomes entirely GPU-bottlenecked, so any advantage or disadvantage the CPUs have goes away.

I don't know that that's entirely true; the GPU may be more dependent on a different type of calculation by the CPU to keep it fed (i.e., geometry is no longer the limiter). That benchmark isn't the only one where Ryzen is faster at higher res, even if it's just by a couple of fps.
 
Well, you have the four MESI states, and when a line is in S or I, which is the multithreaded case where work has to move between cores, latency should be the last thing you have to think about as a programmer, because it really shouldn't be a problem. Now, if the cache is too small to hold typical workloads, that creates huge headaches, because you are no longer pulling from the cache directly attached to that core; pulling from another cache increases latency considerably, stalling the core until it gets the information it needs. That can't be fixed by developers unless every single developer wants to rework all their engines. Does that sound likely for 18% of the market?

Easier to have MS do it in the OS, right? And even that is still tricky. It's not a straightforward fix.

I still have my old Sun E220R with dual UltraSPARCs; planning around the interconnect latency was common back then! ;) Releasing updated engines isn't unheard of. Dragon Age: Origins and Oblivion released updated executables when everybody started bitching about their quad-cores only using one core, IIRC. I expect the MMO types will recompile, since they're in it for the long haul at a monthly fee. Everyone else is unlikely to unless sales boom due to the price.
 
Check out https://software.intel.com/en-us/articles/optimization-notice:


Simply put, if you're writing a game (or anything, really) for PC, using the Intel compiler will give much better performing code on Intel chips. They make this statement, by the way, under protest. Intel was caught many years ago deliberately compiling things in a way to cripple AMD chips and settled in court over it.

That said, when you're an AAA game developer and 80% of your target market is running Intel, it's a no-brainer to use Intel's compiler for free (well, zero programming cost) gains. So, in turn, it makes sense for a customer right now to go Intel.

It would be great if AMD were to similarly put out a compiler (or even better, contribute to open source ones) that will optimize for their own chips to balance the scales, but doing that costs money that they'll never get if customers right now continue to go Intel.

It's a spiral.


And this is why AMD is/was hoping to get gamers on their systems: it will make developers aware of them and get them working with AMD. Now let's see how it goes.
 
Bottlenecks don't work on an all-or-nothing principle; even within each frame of rendering, the bottleneck can shift between the GPU, the CPU, VRAM, system RAM, ROPs in the GPU, TMUs in the GPU, ALUs in the CPU, etc.

Then why do we have benchmarks above 1080?
 
I can't read that or take the time to translate it at work right now. If I'm getting the gist of things, it's as I said: certain architectures handle certain workloads better than others. It's as simple as that. AMD's CPUs aren't as good as Intel's at gaming. That's hardly surprising. I think we need to see more stuff at GPU-limited resolutions and see if there is a real difference before anyone really calls Ryzen a complete turd for gaming.

If a design decision in the architecture led to poor performance in gaming then people who expect huge gains from patches or microcode updates are in for massive disappointment because you can't redesign the CPU with software.

I think it is a bit of both. Microcode may help iron out things like the IMC latencies and allow it to run at higher DRAM frequencies, and it could make more efficient use of the cache, etc. But yeah, if one is hoping for 30% gains, it will not happen; 10% is possible.
 
I've seen some 2011-v3 CPUs that fail to hit more than 4.2GHz, and people loved those.
Almost nobody was a fan of Broadwell-E's overclocking potential. It had the nice property that you could lock it to 4GHz and push the voltage down to something like 1-1.1V, but if you wanted performance, BDW-E sucked compared to HSW-E because of the OC limit. Good thing Skylake-X looks to lift that one, according to recent rumors.
If that is the case, this is not an easy solve; they will have to work with MS to fix it, and it's just a band-aid, not an all-around fix.
Not an easy solve? It looks borderline impossible to me; any communication between two CCXs is bound to be a performance bottleneck. And well, if your CCXs can't communicate, that L3 cache is as good as none. I mean, it already is, but whatever.
 
I was not expecting Ryzen to beat kaby Lake but I did not expect it to lag nearly 20% behind.

Considering the IPC deficit and the clock frequency disparity, I'm not sure why people weren't expecting the exact results we are seeing. I think many people in this thread were unrealistic about what to expect, despite AMD underselling Ryzen to us in the various leaks and official statements prior to the New Horizon stream, or whatever it was, where they told us the CPU's name.
 
Yeah, I'm sure those big corporations are all happy to spend time optimising games that sell for $5 today ;)
 