Zen 2 Review Summary

Intel is 7-12% ahead, depending on which model is compared.

Depends on where you're getting your averages. I've seen averages across reputable reviewers and a collection of many games ranging from 3% to 12%, at either 720p or 1080p. 5% was the value I predicted, and the most comprehensive 1080p average I've seen so far suggested 4.97% as the average gap. I accept this value for 1080p. I have not yet done enough digging on the 720p results to agree or disagree with any specific value there.
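For what it's worth, here's a minimal sketch of how that kind of cross-game average is typically computed. The per-game FPS pairs below are invented for illustration, not taken from any review:

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    // {Intel fps, AMD fps} per game at 1080p -- invented sample data.
    std::vector<std::pair<double, double>> games = {
        {144.0, 138.0}, {210.0, 202.0}, {98.0, 92.0}, {165.0, 157.0}};

    // Geometric mean of per-game ratios, so no single high-FPS title
    // dominates the way it would in a plain arithmetic FPS average.
    double logSum = 0.0;
    for (const auto& [intelFps, amdFps] : games)
        logSum += std::log(intelFps / amdFps);
    double geomean = std::exp(logSum / games.size());

    std::printf("average gap: %.2f%%\n", (geomean - 1.0) * 100.0);
    return 0;
}
```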

"Tomorrow" can mean the next year or the one after. And the best current CPUs will continue being relevant for gaming. If you play to games at 4K and you don't plan to update your GPU, then you don't need the best gaming GPU, you don't need a $400 Zen2 CPU either, an older $100 i3 is enough.

Of course it is. Sandy Bridge (and maybe even Nehalem) are still fine. Really, anybody with SB or higher doesn't need to upgrade their CPU. Not for gaming, anyway.

"Tomorrow" doesn't mean a decade away. Much faster GPUs will be available in one or two years, but the number of cores/threads used by game engines will not increase in the same proportion in the same time period, not even close. Reviewers know this and that is why they perform "CPU tests" of gaming.

For a time. As we approach silicon limits, I imagine GPUs may reach a similar wall as CPUs are approaching.

I have been hearing the myth of parallelism in games since Bulldozer ("soon eight cores will be the minimum required for games!"), and I heard it again with Piledriver, with the PS4 and Xbox, with Mantle, with DX12, with Vulkan... The reality is that an ancient four-core i7 is still valid for a gaming build today.

It's not a myth. Parallelism in games has increased. There was a time when single-core CPUs were sufficient. When dual cores first came out, it took a while for devs to use those resources more directly (though dual core was always better if you were multitasking). It was the same when quads came out, then quads with HT. Today, near-max performance is attainable with either 4c/8t or 6c/6t CPUs. Gamers Nexus noted that the Ryzen 3600 was a better buy than the 9600K even though the 9600K had a higher average frame rate, because the 9600K was subject to frame time spikes and stuttering that the 6c/12t Ryzen part wasn't (not that the 9600K was bad). So it may be that 6c/12t or 8c/8t will soon become the default minimum for high-end gaming performance.

Point is, parallelism has increased over the years. It's not a myth. The myth was that it would increase overnight, or that an abysmally bad uarch like Bulldozer would ever see enough of a gain to be even remotely competitive with Intel on anything except price.

This is the master thread; everything else (rendering, AI, sound processing, ...) can be moved to slave threads executed on additional cores, but those slave threads are launched, controlled, synchronized, and ended by the master thread.

So the performance of the game will continue to be set by the master thread. If the core running the master thread bottlenecks, then it doesn't matter if you have eight cores or a hundred cores executing slave threads; the game will run exactly the same, because the extra cores will be waiting on the core executing the master thread.
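A minimal sketch of the fork/join pattern being described, assuming a simple per-frame loop; the task names and empty workloads are invented for illustration:

```cpp
// The master thread fans out per-frame jobs, then blocks until all of
// them finish. Task names and workloads are invented for illustration.
#include <thread>
#include <vector>

void updateAI() { /* pathfinding, decision making, ... */ }
void mixAudio() { /* sound processing, ... */ }
void buildRenderCommands() { /* fill command buffers, ... */ }

void runFrame() {
    std::vector<std::thread> workers;
    workers.emplace_back(updateAI);
    workers.emplace_back(mixAudio);
    workers.emplace_back(buildRenderCommands);

    // simulateWorld();  // hypothetical work the master can't hand off

    // The join is the synchronization point: no matter how many cores
    // run workers, the frame can't advance faster than the master
    // thread's own serial work plus this wait.
    for (auto& w : workers) w.join();
}

int main() {
    for (int frame = 0; frame < 3; ++frame) runFrame();
    return 0;
}
```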

True to a point. Games will likely always have one thread that is more critical than the others for this reason. However, necessity is the mother of invention here. The tough part of our current circumstances is that increasing core/thread count is becoming easier than increasing single-thread speed. New nodes are becoming very difficult and expensive. The silicon wall approaches, my friend. We both know this. Hell, we may witness the death of x86 altogether at some point, because ARM is becoming quite impressive. Hard to say where things are going to go. But if I were a game developer, I'd see the writing on the wall for raw single-threaded performance, and be looking for ways to better utilize parallelism.
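Amdahl's law puts rough numbers on that master-thread ceiling. A small sketch, assuming (purely for illustration, not a measured figure for any engine) that 40% of a frame's work is stuck on one thread:

```cpp
// Amdahl's law: speedup = 1 / (serial + parallel/cores).
// The 40% serial fraction is an assumption for illustration only.
#include <cstdio>

int main() {
    const double serial = 0.40;  // assumed fraction stuck on one thread

    for (int cores : {2, 4, 8, 16, 100}) {
        double speedup = 1.0 / (serial + (1.0 - serial) / cores);
        std::printf("%3d cores -> %.2fx speedup (hard cap: %.2fx)\n",
                    cores, speedup, 1.0 / serial);
    }
    return 0;
}
```

With those numbers, going from 8 to 100 cores buys almost nothing, which is exactly why reducing the serial fraction matters more than adding cores.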
 
I only partially agree with this sentiment. The only thing you save on is the motherboard, but unfortunately, that comes with a different kind of cost. The fact is, even mid-range AM4 motherboards feature more cost cutting than you often see on similar Intel boards; hence the whole BIOS upgrade fiasco. Many of them can't be flashed without a CPU and RAM installed. Any board with a 128Mbit (16MB) flash ROM may also lose some features to accommodate the larger AGESA code required to support Ryzen 3000 series CPUs. This includes lost support for APUs, the UEFI GUI, and even RAID support in some cases.

Like it or not, Intel switching chipsets and sockets so often prevents issues like these. It does cost more money, but there are far fewer potential headaches than come with broad CPU compatibility over two or three generations. Those same motherboards with 128Mbit BIOS ROMs will be even more problematic if a fourth-generation Ryzen stays on AM4. While X570 has better VRMs across the broader spectrum of models, X370 and X470 motherboards often have cut-rate VRMs incapable of supporting some of the higher core count CPUs beyond stock speeds. This could even potentially impact boost clocks. We saw similar issues with AM3/AM3+ motherboards, where some could only handle 95W TDP CPUs while the 130W options required higher-end motherboards. That's basically where we are headed with AM4.

True, although I'll note that with the BIOS issue most features will be added back in. MSI has stated that this round of lightweight beta BIOSes is just a bridge for users, and claims that in a few months it will release full-featured BIOSes for most chipsets, with a version that basically dumps all CPU support before Zen 2 to make room for the nice GUI again. Even if this doesn't happen, RAID support is absolutely being patched back in once they work out the initial bugs.

Finally, most people who were smart about this got boards with a decent VRM setup, like the Gaming Pro Carbon, etc.
 

We will see on the BIOS. That will take time, though. On the subject of VRMs, boards like the X470 Gaming Pro probably don't have the chops to really push the upcoming Ryzen 9 3950X. That sort of issue is common when you look at many of the AM4 boards out there. You're even less likely to get capable VRMs among B350/B450 motherboards. I wouldn't use more than an eight-core CPU on many of them.
 

This is why, when I bought my X370 board back when, I chose a barebones board with decent-ish VRMs (the Asus X370 Prime Pro). Cheap, but it didn't cut corners too much where it counted. I'll report back when I get a 3900X and see how well it performs, but it handled a 2700X at 4.3GHz for a while (before I decided to drop back down and just use PBO).
 

The 2700X isn't nearly as demanding as the 12c/24t Ryzen 9 3900X. I don't know the VRM configuration of that specific motherboard offhand, so I won't speculate as to how well it will do.
 
^

THE case for the 3700X. It should run just fine (BIOS dependent) on all but the most garbage-quality existing AM4 boards, and it serves as a way to max out a system currently running a Ryzen 1600 or lower. Way cheaper than a new system, and it will be great for most users for several years more.
 
"Tomorrow" can mean the next year or the one after. And the best current CPUs will continue being relevant for gaming. If you play to games at 4K and you don't plan to update your GPU, then you don't need the best gaming GPU, you don't need a $400 Zen2 CPU either, an older $100 i3 is enough.

An i3 is nowhere near enough for 4K gaming. It's not even enough to max out current games at 1080p.
 
The 2700X isn't nearly as demanding as the 12c/24t Ryzen 9 3900X. I don't know the VRM configuration of that specific motherboard offhand, so I won't speculate as to how well it will do.

40A dual N-FETs, just like on the Crosshair VI Hero, except six phases instead of eight.
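Back-of-envelope on what that configuration means for a 3900X: the 142W PPT is the stock package power limit for 105W-TDP Ryzen parts, while the load voltage is an assumption, and real power stages derate with heat, so actual headroom (especially when overclocked past PPT) is smaller than this paper math suggests:

```cpp
// Rough VRM headroom check for the 6x40A configuration above.
// Vcore is an assumption; ratings derate with temperature.
#include <cstdio>

int main() {
    const double phases = 6.0;
    const double ampsPerPhase = 40.0;                 // per power stage
    const double vrmRatingA = phases * ampsPerPhase;  // 240 A on paper

    const double pptWatts = 142.0;  // stock limit, 105 W-TDP Ryzen
    const double vcore = 1.3;       // assumed load voltage

    double cpuAmps = pptWatts / vcore;  // ~109 A of output current
    std::printf("CPU draw ~%.0f A vs %.0f A rating (%.0f%% utilized)\n",
                cpuAmps, vrmRatingA, 100.0 * cpuAmps / vrmRatingA);
    return 0;
}
```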
 
We will see on the BIOS. That will take time, though. On the subject of VRMs, boards like the X470 Gaming Pro probably don't have the chops to really push the upcoming Ryzen 9 3950X. That sort of issue is common when you look at many of the AM4 boards out there. You're even less likely to get capable VRMs among B350/B450 motherboards. I wouldn't use more than an eight-core CPU on many of them.
The Gaming Pro Carbon has the VRM for it; it's quite an upgrade VRM-wise over the non-Carbon version.
 
An i3 is nowhere near enough for 4K gaming. It's not even enough to max out current games at 1080p.

Correct.

[Image: CPU scaling benchmark chart]
 
That is not a 4K result. When the game is truly GPU-limited, the CPU choice becomes less relevant.
 
An i3 is nowhere near enough for 4K gaming. It's not even enough to max out current games at 1080p.

This is a bit backward: while CPU dependency does increase a bit with resolution, GPU dependency increases more, so an i3 like the 8350K above should be better suited to 4K than to 1080p. You're still not going to max anything out with four cores and four threads, but it's certainly not useless either.
 
The 3900X runs on a 350 chipset as well. Someone had asked if the X370 was going to be compatible... :D

So you can theoretically go from a dual-core Bristol Ridge to a 16-core 3950X on the same socket?

That's got to be like a 50x performance boost :p
 
Point is, parallelism has increased over the years. It's not a myth. The myth was that it would increase overnight

Exactly. And that is the myth I quoted: "soon eight cores will be the minimum required for games!"

An i3 is nowhere near enough for 4K gaming. It's not even enough to max out current games at 1080p.

There is no GPU bottleneck at 1080p, but there is a GPU bottleneck at 4K:

[Image: relative gaming performance at 4K/2160p]
 

Pick a chart that shows the minimum frames and you will see the difference. And who the hell buys a 2080 Ti and pairs it with an i3? Average frames don't tell the whole story of how a game will play.
 
I'm also getting really tired of posts that show only average framerates.

Same. At this point, most modern CPUs can sustain a very good average framerate. The frametime spikes are more interesting to me. I don't care too much about e-peen numbers anymore, and I don't care about absolute fps overmuch, so long as I'm generally north of 60 fps (I'm not some e-sports gamer).

But I do like a seamless, smooth, nice-looking gameplay experience. Nothing jarring or annoying, especially since I am a fan of very immersive games.
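For anyone curious, this is roughly how the 1% / 0.1% low figures reviewers quote are derived from a frametime log; methodologies vary (some take the percentile frametime itself rather than averaging the tail), and the sample data here is invented:

```cpp
// "1% low" / "0.1% low": average the worst N% of frametimes, then
// convert to FPS. Sample data below is invented for illustration.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <vector>

double percentLowFps(std::vector<double> framesMs, double percent) {
    // Sort worst (longest) frames first.
    std::sort(framesMs.begin(), framesMs.end(), std::greater<>());
    size_t n = std::max<size_t>(
        1, static_cast<size_t>(framesMs.size() * percent / 100.0));
    double sum = 0.0;
    for (size_t i = 0; i < n; ++i) sum += framesMs[i];
    return 1000.0 / (sum / n);  // mean of the worst slice, as FPS
}

int main() {
    std::vector<double> framesMs(1000, 8.0);  // a steady ~125 fps run...
    framesMs[100] = 40.0;                     // ...with two stutter spikes
    framesMs[500] = 55.0;

    std::printf("1%% low: %.1f fps, 0.1%% low: %.1f fps\n",
                percentLowFps(framesMs, 1.0),
                percentLowFps(framesMs, 0.1));
    return 0;
}
```

The average here still looks like ~124 fps, while the 0.1% low collapses to under 20 fps, which is exactly the stutter that average-only charts hide.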
 
Pick a chart that shows the minimum frames and you will see the difference. And who the hell buys a 2080 Ti and pairs it with an i3? Average frames don't tell the whole story of how a game will play.

It seems you don't understand the concept of a GPU bottleneck. If you select a cheaper, slower GPU than the 2080 Ti used for that 4K graph, then the GPU bottleneck is bigger and the gap between CPUs shrinks. The slowest Ryzen in that 4K graph wouldn't be at 93.2% relative performance but at 100%.

A GPU bottleneck affects both average and minimum frames. I can pick minimum-frame charts showing the same behavior at 4K:

[Images: 4K minimum-framerate benchmark charts]
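The whole argument reduces to a min() of two rates. A toy model with made-up numbers, not measurements:

```cpp
// Delivered FPS is roughly min(what the CPU can feed, what the GPU
// can draw). All figures below are invented for illustration.
#include <algorithm>
#include <cstdio>

int main() {
    const double fastCpuFps = 160.0;  // what CPU A could feed, uncapped
    const double slowCpuFps = 140.0;  // what CPU B could feed, uncapped

    // Two GPU limits: a high one (1080p-like) and a low one (4K-like).
    for (double gpuFps : {220.0, 90.0}) {
        double fast = std::min(fastCpuFps, gpuFps);
        double slow = std::min(slowCpuFps, gpuFps);
        std::printf("GPU limit %3.0f fps -> CPU gap %.1f%%\n",
                    gpuFps, (fast / slow - 1.0) * 100.0);
    }
    return 0;
}
```

With the high GPU limit the CPU gap shows up in full; with the 4K-like limit both CPUs land on the same number and the gap goes to zero.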
 
Ended up getting a new system, and I am happy with it.

R5 3600X - was on promo for marginally more than the 3600 and comes with the Wraith Spire, which is a pretty RGB stock cooler.
Gigabyte B450 I Aorus
G.Skill Trident Z RGB 16GB 3200 CL16
Super Flower Leadex II Gold 500 Modular - carry over
GTX 1660 Ti - it has been very close to 980 Ti performance so far; really happy with this card, and for the games I play it runs much cooler than the 980 Ti.
Phanteks Enthoo Evolv - carry over
SSDs - carry over

Overall heat output has been better (the 4790K and 980 Ti ran a lot hotter), and performance is definitely better now; games like Rust and Battlefield 5 have far less hitching and stuttering.
 
[Image: 1440p benchmark chart]


While it doesn't include an i3, I think the notion that an i3 for 2K and up is a bad suggestion is borne out here. In titles like Battlefield 5 and Far Cry 5 that lean on threads, we see that the 7600 is a bad choice, and even the stock 9900K shows massive frame drops. Going forward I would not touch an i3 for anything beyond 1080p, and even then the i3 has almost no value compared to better options.
 
Going forward I would not touch an i3 for anything beyond 1080p, and even then the i3 has almost no value compared to better options.

The chart above actually makes a pretty good case for gaming on an i3 like the 8350K and future iterations, supposing it is overclocked (as it should be). If the price is right, the potential is there to push even the 0.1% lows up to competitive levels. The 1% lows would definitely be above 60 FPS, and the 0.1% lows wouldn't be far behind, using the i5 7600K as a point of reference and considering the overclock, which is just wildly easy on that CPU (I have two).
 
[While it doesn't include an i3, I think the notion that an i3 for 2K and up is a bad suggestion is borne out here. In titles like Battlefield 5 and Far Cry 5 that lean on threads, we see that [...]

Second time in this thread that someone throws a low-resolution benchmark into a discussion of 4K gaming. LOL
 
Waiting on a comparison of the 3600X to an OC'd SB/IB i7, or of the 3600X to a fire-sale $160 2600X (before they're gone).
 

You probably won't see that. The fact is, reviewers probably don't have a ton of 7+ year old hardware lying around. I may still have a 3770K floating around; I doubt I have a 2600K or 2700K.
 
Waiting on a comparison of the 3600X to an OC'd SB/IB i7, or of the 3600X to a fire-sale $160 2600X (before they're gone).

Tech Deals is talking about doing tests on older CPUs in a generational comparison. It looks like SB will be included, specifically the 2600K.
 
Second time in this thread that someone throws a low-resolution benchmark into a discussion of 4K gaming. LOL

At least he showed a chart with the processors we're actually discussing. If you want to try to make a point, I suggest you stick with 1080p and lower, where someone who wants high frame rates above 144 fps might see a difference with the 9900K over a 3900X. At 4K, in your current argument, not one soul will be able to tell the difference between those CPUs. Also, people that build 4K rigs are not going to opt for a 4-core chip, because they want the best performance they can get, and if you mix in a guy that likes to stream, well, things get one-sided, and not in Intel's favor.
 
You probably won't see that. The fact is, reviewers probably don't have a ton of 7+ year old hardware lying around. I may still have a 3770K floating around; I doubt I have a 2600K or 2700K.
More than a few sites and YT reviewers have thrown overclocked Sandy or Ivy into the mix for Ryzen 2xxx reviews.
The single biggest group of holdouts is probably those users in the 4.3 to 4.6GHz range.
 

I had Sandy Bridge chips that would do 5.1GHz. I'll see if I can dig one up. If you guys are interested and I can find one, I'll do the tests.
 
Appreciated. You really won the silicon lottery with that one. ~4.5GHz would be more useful, as it's a typical OC for most. Ideally, RAM should be clocked above stock if you can.
 
At least he showed a chart with the processors we're actually discussing. If you want to try to make a point, I suggest you stick with 1080p and lower, where someone who wants high frame rates above 144 fps might see a difference with the 9900K over a 3900X. At 4K, in your current argument, not one soul will be able to tell the difference between those CPUs. Also, people that build 4K rigs are not going to opt for a 4-core chip, because they want the best performance they can get, and if you mix in a guy that likes to stream, well, things get one-sided, and not in Intel's favor.

He pretended to use a 1440p chart to discuss 4K gaming. Moreover, he quoted a single game, whereas I am not relying on one. I gave 4K performance for "the processors we're actually discussing" in #415.

You change the argument in each new reply. First it was average frames, then minimum frames, now it is gaming and streaming... :rolleyes:

2K is not low resolution

2K is lower than 4K.
 

This is a Zen 2 thread, stick with that or don't bother replying; no one cares how first-gen Zen performs in this thread. Your post in #415 shows a 1.4% difference between a 3900X and a 9900K, a pointless difference. Minimum frames still matter, but you want to time travel to first-gen Zen as a reply to me, which has nothing to do with Zen 2. 2K gaming is actually a better test, as most are trying to hit 144 fps, or above 100 fps to keep it smooth, so the processor matters a bit more than at 4K, where fps are typically near 60. However, your ridiculous statement that an i3 is enough for 4K gave us all a good laugh, well, unless you meant a 4K movie :LOL:

No surprise you don't want to post a chart about streaming and gaming; it looks all bad for Intel there. And I never mentioned average frames, it was just minimum frames. Easy to get confused when trying to argue with everyone. I look forward to the next reply and a good laugh.
 
More than a few sites and YT reviewers have thrown overclocked Sandy or Ivy into the mix for Ryzen 2xxx reviews.
The single biggest group of holdouts is probably those users in the 4.3 to 4.6GHz range.

I stayed on my SB rig until SATA ports started dying on the motherboard, and the memory started acting... weird (I have no better description than that). It was clear the mobo was on its last legs.
 
https://www.techspot.com/article/1876-4ghz-ryzen-3rd-gen-vs-core-i9/

Clock vs. clock it's very impressive; now the only real limitations on Ryzen are:

a) Sustainable boost clocks. Perhaps this gets worked out with more firmware updates that allow the CPUs to maintain higher clocks, i.e. 4.3-4.5GHz instead of 4GHz; that would improve performance a bit.

b) Overclocking headroom. Looks like that will be the only deficit.

It is still impressive to say that Intel and AMD have parity, though.
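For a rough sense of what (a) could be worth: the 4.0 and 4.4GHz figures come from the point above, while the scaling factor is purely an assumption, since games are partly memory/GPU-bound and rarely scale 1:1 with clock:

```cpp
// Rough ceiling on what higher sustained boost could buy. Clocks are
// from the post above; the game-scaling factor is an assumption.
#include <cstdio>

int main() {
    const double baseGhz = 4.0;
    const double boostedGhz = 4.4;   // midpoint of the 4.3-4.5GHz range
    const double gameScaling = 0.7;  // assumed fraction of clock gains realized

    double clockUplift = (boostedGhz / baseGhz - 1.0) * 100.0;  // 10%
    std::printf("clock uplift: %.0f%%, realistic gaming gain: ~%.0f%%\n",
                clockUplift, clockUplift * gameScaling);
    return 0;
}
```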
 
This is a Zen 2 thread, stick with that or don't bother replying; no one cares how first-gen Zen performs in this thread.

I am discussing Zen 2.

Your post in #415 shows a 1.4% difference between a 3900X and a 9900K, a pointless difference.

Because there is a GPU bottleneck at 4K.

Minimum frames still matter, but you want to time travel to first-gen Zen as a reply to me, which has nothing to do with Zen 2.

GPU bottlenecks don't care about CPU uarch. GPU bottlenecks affect Piledriver, Zen, Zen+, Zen 2, Coffee Lake, Haswell, ...
 
https://www.techspot.com/article/1876-4ghz-ryzen-3rd-gen-vs-core-i9/

Clock vs. clock it's very impressive; now the only real limitations on Ryzen are:

a) Sustainable boost clocks. Perhaps this gets worked out with more firmware updates that allow the CPUs to maintain higher clocks, i.e. 4.3-4.5GHz instead of 4GHz; that would improve performance a bit.

b) Overclocking headroom. Looks like that will be the only deficit.

It is still impressive to say that Intel and AMD have parity, though.

You aren't going to get higher boost clocks on all cores through AGESA code improvements. More consistent single-core boosts? Maybe. We've actually seen that already; however, it didn't necessarily improve performance. On all cores? No. I seriously doubt it.
 
Gamers Nexus saw gains of 2% or less with their update. It's just that a couple of reviewers, like Anandtech, were really pushed off the mark. Most other reviewers were fine-ish. The ~5% 1080p gaming gap vs the 9900K isn't going anywhere. Buy on the other merits, or don't, as your use case demands.

I don't expect much change in the near term.

Long term, there may be a slow shift toward more parallelism, mitigating things for higher core count parts. Or not. We'll see. But we're talking years there.
 