AMD Ryzen 9 3900X Review Round Up

Agreed. Clock-for-clock measurements seemed interesting back in the PIII vs. Athlon days, but we've learned a lot since then.

Some architectures allow for higher clocks, others don't. It only makes sense to consider how an architecture performs when clocked as high as it will go. Any other measure is not really valuable.

You can have the highest-IPC CPU in the world, but if it's stuck at 200MHz, it's probably not going to be a great performer in 2019. It makes next to no sense to slow other architectures down to 200MHz to see how they perform.

I ran them at the same clocks, at their standard base/boost clocks, and overclocked as high as I could take them. I think the only real metrics that matter are how they perform at base/boost clocks for people who don't overclock and at overclocked speeds for those who do. The latter crowd needs both numbers to know that their performance will fall somewhere in between, depending on what they can achieve. The clock-for-clock IPC comparisons are interesting. However, they are more for background and to satisfy curiosity than anything else. That information does tell a story on its own. What I saw was that, oftentimes, Intel only had a lead over AMD due to its clock speed advantage. This tells us that AMD's IPC is generally on par with Intel's, if not greater.
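
For anyone who wants the napkin math behind that: to a first order, performance is just IPC times clock speed. Here's a minimal sketch of that relationship, using completely made-up numbers for illustration (these are not measured figures from my testing):

```python
# First-order model: performance ~ IPC * clock speed.
# The IPC values below are made up for illustration, not measured data.

def relative_performance(ipc: float, clock_ghz: float) -> float:
    """Throughput estimate in arbitrary units."""
    return ipc * clock_ghz

# Hypothetical: a part with ~10% higher IPC at 4.6GHz
# vs. a baseline-IPC part at 5.0GHz.
higher_ipc_part = relative_performance(ipc=1.10, clock_ghz=4.6)    # 5.06
higher_clock_part = relative_performance(ipc=1.00, clock_ghz=5.0)  # 5.00

print(f"higher-IPC part:   {higher_ipc_part:.2f}")
print(f"higher-clock part: {higher_clock_part:.2f}")
# Roughly a wash: an IPC advantage can offset a clock deficit, which is
# why neither number means much in isolation.
```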

This is actually a repeat of situations we've seen before. Back in the day, the Cyrix 6x86 PR200+ was generally faster than Intel's 200MHz Pentium while running at a mere 150MHz. It had a weak FPU implementation that caused it to suffer in games, but generally speaking it had greater IPC. AMD's K5 was much the same: it had lower clocks than Intel and Cyrix but still managed to do better per clock. It just didn't clock worth a damn. Today, we have AMD at lower clocks competing very well with a more highly clocked Intel product. AMD often beats Intel at lower clocks, and people simply want to understand why. Sometimes it's just due to a core count advantage, but other times it's due to the way the architecture is built, and its features, such as L3 cache sizes, make a big difference in certain applications.

So why do we care? Why do we even talk about IPC? For the same reason we talk about architecture or feature improvements: people reading this stuff are interested in how things work. It's why car sites and magazines talk about the engine specs or drivetrain details of a car rather than just posting track times and fuel economy data. It adds context, which can be helpful, but it also satisfies curiosity, increases product knowledge, and, if nothing else, entertains people. The more you know, the more you can understand how the product may perform for your specific needs. It's also helpful because some people might just look at Intel's Core i9-9900K specs, see a 5.0GHz boost clock, and assume it's all-around faster than AMD's Ryzen 9 3900X when it isn't.

Listen, I don't care how the performance is achieved. I don't care if we have a 3GHz processor that nets me 240FPS in Destiny 2 at 4K or a 5.5GHz processor that does the same thing. As long as both get the job done with reasonable cooling and at a reasonable price, I couldn't care less how the target performance was achieved. However, I find it extremely interesting to know how it was achieved, even though it's only the end result that truly matters. People are naturally curious to know how and why things are the way they are. In this case, a direct IPC comparison is helpful for that. Additional context like that may also help answer questions later that you didn't know to ask now.
 
I agree that it can be interesting from an "understanding the architecture" perspective, and from the perspective of predicting the future ("what might happen when the process matures") and that type of discussion, which is why I like to see the analysis.

But when it all comes down to how it performs it doesn't matter to me if that performance comes from higher clocks or from higher IPC. It's the results that matter.
 
From what I can see, most of the improvement over Zen+ comes from the increased L3 cache, with some mild IPC gains. That, coupled with a couple hundred MHz of extra clock, results in the gains. That being said, again, I think it's mostly down to the doubling of the L3 cache per chiplet.
 
I think you mean Zen 2. The Ryzen 3000 series gets its performance increases from a variety of changes; there are a lot more of them than simply the doubling of the L3 cache. That's probably the biggest gain in gaming, and AMD pretty much stated this. However, other gaming gains come from increased memory controller performance and from a reduction in latency across CCX complexes. No longer is there a complicated pathing structure and NUMA-like design; now each CCD and its CCX complexes have equal access to the I/O die and memory controllers. The Infinity Fabric bandwidth was also increased, etc. The process node shrink gave us greater core density, improved energy efficiency, and a reduction in heat. There are a ton of changes, and very few of them amount to anything on their own. It's when you add them all together that you get the improvements we see.
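
To illustrate why the bigger L3 punches above its weight in games: the textbook average-memory-access-time formula is AMAT = hit time + miss rate × miss penalty, and a larger cache helps mainly by cutting the miss rate. A quick sketch with invented numbers (these are not measured Zen 2 latencies):

```python
# Average Memory Access Time: AMAT = hit_time + miss_rate * miss_penalty.
# All numbers below are invented for illustration, not measured Zen 2 figures.

def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average time per memory access, in nanoseconds."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Treat the L3 as the cache level and DRAM as the miss penalty.
# Doubling the L3 might plausibly cut a game's miss rate from,
# say, 20% to 12% (hypothetical values).
before = amat(hit_time_ns=10.0, miss_rate=0.20, miss_penalty_ns=70.0)  # 24.0 ns
after = amat(hit_time_ns=10.0, miss_rate=0.12, miss_penalty_ns=70.0)   # 18.4 ns

print(f"smaller L3: {before:.1f} ns per access")
print(f"larger L3:  {after:.1f} ns per access")
# A modest miss-rate reduction yields a sizable drop in effective memory
# latency, which is exactly the kind of pressure games put on a CPU.
```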

The clock speed increase is probably the least significant change. Functionally, we are talking about 100MHz on an all-core overclock, as Ryzen generally topped out at 4.2GHz in my experience, and I see 4.3GHz on the 3000 series so far. Boost clocks are far higher on paper, but I have yet to see mine do more than 4.5GHz, and even that required using PBO plus the offset.
 
Well, let's wait and see what new BIOS updates start coming out for B350 and 470 boards. I don't want to buy a new board, dammit.
 
My 6600k still plays everything just fine @ 1440p. I really wish I needed to upgrade.

Then you just have some very low standards and/or are just oblivious to hitching and stuttering. Your CPU has absolutely zero chance of maintaining a smooth frame rate in some of the newer games, regardless of what the actual FPS numbers show. And in some games you most certainly cannot maintain 60FPS. Most of the culprits are Ubisoft games, of course. Mafia 3 is another one that will eat four cores for lunch and is not remotely smooth if I disable Hyper-Threading. But again, like I said, pretty much most modern games will have four cores pegged most of the time, and in some cases all of the time. And even just leaving your browser open in the background can cause additional hitching, because you don't have any CPU power to spare. Heck, even all 8 threads of my CPU get fully pegged at times in many, if not most, newer games.
 

I think his 6600k is fine. You are totally blowing it out of proportion.
 
I'm a little disappointed that the clocks are as low as they are, but the IPC improvements are nice enough to make up for that. The cache and latency improvements seem to have paid off, which isn't surprising since those were weaknesses before.

At this point I'm more interested in seeing if a 3950X is worth throwing in my system when it gets here, but a 3700X or higher would still be a decent upgrade, so if there are some deals next holiday season I might be tempted. I saw somewhere, but haven't confirmed, that my CH7 motherboard actually supports PCIe 4.0 on the lanes from the CPU, which is interesting. Like someone already mentioned, that's less important than the lanes from the chipset, but being able to fully utilize a PCIe 4.0 NVMe drive would be a nice bonus.

I do think I'm going to grab a 3600 for a budget gaming build for a friend.

Oh man, these model numbers are getting confusing AF. I was all like, "WHAT?! There's an R9 3970X? I thought they only went up to 3950."

Tell me about it. I was trying to look up info on the 3600 earlier and kept seeing the Athlon 64 3600+ pop up in the results, and I even clicked on a couple of them that just used "AMD 3600" in the title.
 
I think his 6600k is fine. You are totally blowing it out of proportion.

Depends how close you are to the screen, too. I can notice every little hitch on my main rig (34” at 2.5 ft-ish), whereas on my TV (55” at 7’) 30FPS is almost playable.

But yeah, 6600k isn’t exactly a slouch.

I want to see reviews of the 3950X. Hoping it clocks slightly higher, like the boost clock suggests. The 3900X is solid; I just want to see all the options.
 
Yeah, but due to its turn-based nature it's not really something that you can feel all the time. I haven't played a big Civ VI game in a while, but the CPU limitation mostly shows up between turns, and the actual FPS during turns isn't as relevant. AI turns going faster is nice, but it's not something that truly affects gameplay.

Most of what I play (simulation city builders and such) hits the CPU limitation hard when there are too many entities involved in the game. Most modern games of this nature try to simulate every single person/car/object and assign every single one of them a task; the true end game is making sure that all these unique objects can perform their tasks efficiently while not running into one another. My 8-core, 16-thread 1700 is basically stuck at 20FPS in my biggest Cities: Skylines cities, all cores at around 80% load, while my GPU sits at maybe 50% or less because it's simply not getting the data fast enough to hit a higher frame rate.
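
For the curious, here's a rough sketch of why entity count eats frame rate in these sims: per-frame simulation cost grows with the number of entities, and FPS is the inverse of frame time. The constants are invented for illustration; this is not profiled Cities: Skylines data:

```python
# Why agent-heavy sims become CPU-bound: per-frame simulation cost grows
# with the number of entities. All constants are invented for illustration.

def cpu_fps(n_entities: int, base_ms: float = 5.0,
            cost_per_entity_us: float = 0.4) -> float:
    """CPU-limited FPS given a fixed per-frame cost plus per-entity work."""
    frame_ms = base_ms + n_entities * cost_per_entity_us / 1000.0
    return 1000.0 / frame_ms

for n in (10_000, 50_000, 100_000):
    print(f"{n:>7} entities -> ~{cpu_fps(n):5.1f} FPS (CPU-limited)")
# 10k -> ~111 FPS, 50k -> ~40 FPS, 100k -> ~22 FPS: the GPU idles
# because each frame is gated on simulation work, not rendering.
```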



Go to around 8:40 for Universe Sandbox testing on 3700x.
 
https://www.msi.com/blog/the-latest-bios-for-amd-300-400-series-motherboard

Looks like MSI is supporting most of their 300 series motherboards as well.

Yeah, which is funny, because they got some flak when one of their reps made a comment about some of their boards not supporting the new CPUs. In a way they actually have the best support at launch, because they have the largest selection of boards where the BIOS can be updated without a CPU; Asus has that on some higher-end boards, but not on any mid-range or mATX boards.
 
Just for shits, I benched my now-ancient 4930K @ 4.5GHz in Cinebench R20:

(Attached screenshot: CB_R20_s.png)

Single-core score is identical to that of the Ryzen 5 1600, while the multi-core score is a smidge better (from TechSpot).

I did not realize just how much my aging 4930K was holding me back. :eek: I figured it'd at least be equal to an 1800X in ST performance, but it's not even close.

Might be useful for any Sandy/Ivy holdouts wondering if these are worth the upgrade lol.

This is only at 4.4GHz; I'll try 4.7GHz later. Either way, it really shows how far ahead Ryzen is now.
 

Go to around 8:40 for Universe Sandbox testing on 3700x.

Thanks for that. I love messing around in Universe Sandbox 2, but it completely destroys my R5 1600. I'll probably pick up either the 3600 or 3600X this week.
 
I'm hoping he does the same benchmarks with the 3900X; it's one of the cases where I feel more threads are going to have almost a direct correlation with more performance. I'll see what happens in the next few days.
 
Thanks for that. I love messing around in Universe Sandbox 2, but it completely destroys my R5 1600. I'll probably pick up either the 3600 or 3600X this week.

IMO, anyone considering a 3600/3600X should be considering a 2700/2700X instead if their primary workloads can use the threads. Yes, the 3600/X will be faster for single-core things, but the two extra cores of the 2700 make it score better in any multi-threaded workload. The 2700 is only $199 new now and will overclock to 4.1GHz all-core no problem, giving it scores similar to a 2700X.

At only $200, the 2700 is a serious monster for the price, and I don't see the value in the 3600/X, quite honestly.
 

The 2700X just doesn't do anything for me; most of the applications I use top out around 8 threads, and the extra 4 threads on the 1600 I'm currently using just let me do other shit while those things are running. I'd rather have the performance gain of the 3600/X. I'm not buying the 3600 because I'm penny-pinching; price doesn't matter to me. I just buy things that I can actually use, not just because I want them.

I'm hoping he does the same benchmarks with the 3900X; it's one of the cases where I feel more threads are going to have almost a direct correlation with more performance. I'll see what happens in the next few days.

Sadly, that's not how US2 works. The simulation itself is still single-threaded; while the extra threads help for non-simulated functions, you see better gains from higher clocks and IPC. They still need to do some work on thread utilization, because no matter what you throw at it, it'll never exceed 80% CPU load on the simulation thread.
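
That's basically Amdahl's law at work: if the simulation thread is the serial portion, extra cores barely move the needle. A quick sketch, where the parallel fraction is a made-up number and not a measured property of US2:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction of the work.
# p = 0.3 below is a made-up value for illustration, not measured US2 data.

def amdahl_speedup(p: float, n_cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / n_cores)

p = 0.3  # hypothetical: 70% of frame time lives on the single simulation thread
for n in (4, 8, 16, 32):
    print(f"{n:>2} cores -> {amdahl_speedup(p, n):.2f}x speedup")
# 4 -> 1.29x, 8 -> 1.36x, 16 -> 1.39x, 32 -> 1.41x: piling on cores
# flatlines fast, while a clock/IPC bump helps the serial 70% directly.
```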
 
The 2700X just doesn't do anything for me; most of the applications I use top out around 8 threads, and the extra 4 threads on the 1600 I'm currently using just let me do other shit while those things are running. I'd rather have the performance gain of the 3600/X. I'm not buying the 3600 because I'm penny-pinching; price doesn't matter to me. I just buy things that I can actually use, not just because I want them.
You realize the 3600 is 6 cores, correct? If you aren't penny-pinching, why not get the 8-core 3700X/3800X? If your stated goal is running something that can use more threads, it doesn't make sense to limit yourself if you aren't penny-pinching.
 
IMO, anyone considering a 3600/3600X should be considering a 2700/2700X instead if their primary workloads can use the threads. Yes, the 3600/X will be faster for single-core things, but the two extra cores of the 2700 make it score better in any multi-threaded workload. The 2700 is only $199 new now and will overclock to 4.1GHz all-core no problem, giving it scores similar to a 2700X.

At only $200, the 2700 is a serious monster for the price, and I don't see the value in the 3600/X, quite honestly.

Not to pick on you, but as a 2700X owner I generally disagree with that. The 3600 comes a lot closer in multi-core performance than the 2700(X) comes in single-core performance. The 3600 really is a great deal when paired with a B450 board. I'm not going to sidegrade to a 3600, but if I were looking to build new for cheap (which I am, for a friend), the 3600 is the best deal for a budget gaming PC right now, IMO.
 

If the goal is a budget multi-threaded build, is it really worth $50 more? I disagree, but that's just me. $200 vs. $250 is a big gap, and for things like rendering the last-gen 8-core part is still faster, and the reality is that the single-threaded gains aren't going to let you play any games at any higher settings.

Now, if it’s just a gaming PC I agree, get the 3600 and get marginally higher frame rates which is all you’re going to get at real GPU bound resolutions.
 

The 3600 is $200, not $250. But yeah, it's a budget gaming build, and I consider 6c/12t the new 4c/8t in terms of baseline core count. If it were strictly for well-threaded multi-core stuff, then I would go with the 2700(X) if prices were the same.
 

That was my point: if someone is talking about running stuff that more cores will help, I have to question getting the 3600 in that specific scenario.
 
I honestly don't get what the big deal is. With the 3900X, AMD basically released a 7920X two years later than Intel. Granted, at about half the price, but HEDT CPUs that are just as fast as the 3900X have been out there for a long time. The only reason the 3900X wins in applications over the 9900K is the extra four cores. In gaming, the 9900K is still clearly the leader. You could basically have had a 3900X years ago with Skylake-X if you ponied up a little more ca$h.

Sure, AMD has come out with a good value proposition, but that is all I see. The IPC and frequency certainly aren't anything to write home about.
 

Well, there are a lot of people who want the 3900X who don't just game. When it comes to productivity, the 3900X just plain owns for around the same price as a 9900K. Even PCIe 4.0 RAID is so much faster than anything Intel has currently.

If all you care about is gaming, then yes, the 9900K is the top dog. And this is the reason you don't see the big deal; it seems like all you care about is gaming performance, and that is perfectly fine. The 3900X is great because not only does it whoop any Intel CPU around $500 when it comes to productivity, it can game decently as well. This is why they call them Personal Computers.
 
No, I get all that, but once again it comes down to price. For two years you could have had a 7920X that does basically everything as well as a 3900X, save PCIe 4.0.

I was hoping AMD would surpass Intel, not just come out with a cheaper 7920X two years late...
 

Not only a cheaper 7920X, but also a faster version of it, for half the price. Intel can't do that... yet.
 
No, I get all that, but once again it comes down to price. For two years you could have had a 7920X that does basically everything as well as a 3900X, save PCIe 4.0.

I was hoping AMD would surpass Intel, not just come out with a cheaper 7920X two years late...

Two years earlier at a $700 premium, which basically means giving up a 2080. If you only game and do nothing else, that would've been a total waste and an outrageously dumb purchase.
 
No, I get all that, but once again it comes down to price. For two years you could have had a 7920X that does basically everything as well as a 3900X, save PCIe 4.0.

I was hoping AMD would surpass Intel, not just come out with a cheaper 7920X two years late...


Bruh, AMD has had Threadripper for a while now...
Here we go again, comparing HEDT to mainstream...
 
People just find excuses not to spend money, lol. It's just called self-justification: not buying the new thing because you were never going to in the first place.

Can't argue with that; I use it sometimes as well, to keep myself from spending money I don't need to, lol.
 
I never really understand these "last year's performance, but today!" arguments either. The vast majority of PC users are probably 4+ years behind on CPUs/GPUs anyway, so why does it really matter? Especially if the company that caught up is offering that performance for cheaper, along with increased performance in other areas? I get it, Ryzen ain't for everyone, but some of the mental gymnastics people go through to downplay the whole lineup is staggering.

Anyway, I see these CPUs doing great in the future. The majority of PC users don't have a dedicated gaming PC, and sacrificing a few percent in specific gaming loads to save some cash and get better general performance is going to be an obvious choice. I'm already sold on the platform (thanks for the reviews!), but I'm going to hold out until possible holiday sales (also to let the platform mature a little). I have a 1600 at the moment, so maybe a 3600, up to a 3800X max.

On a side note, anyone have any experience with the Aorus Ultra/Master? I planned on sticking with Gigabyte, and those two looked good for the NVMe support.
 
3900X ordered; it'll push through compiles and VM loads while running a gaming session with ease.
Not that my 1700 had any issues, but I need to give my Unraid box an upgrade, and there's no better way than upgrading my primary rig :D
 
I want to see how a 3900X performs in a single-PC streaming setup. Since it handles encoding so efficiently, would this processor negate the need for a dedicated streaming PC?

Using software encoding, my PC could stream with an 1800X at 1080p with a little headroom when not pushing settings to the max.
I did a little testing after building my 1800X & Vega 64 rig at the end of 2017, using Prey, and was pleased with the results.
Were they YT quality? Of course not... but not horribly grainy when my internet was able to keep up.
 
Looks like AnandTech posted their retested results for the 3900X this morning (they are still working on the 3700X).

Article Testing Methodology Update (July 9th):

We've updated the article benchmark numbers on the Ryzen 9 3900X. We've seen 3-9% improvements in exclusive ST workloads. MT workloads have remained unchanged, Gaming had both benefits and negatives. We continue to work on getting updated 3700X numbers and filling out the missing pieces.

Seems like the impact was smaller, at least for them, than I had hoped.

Still no mention of whether the BIOS impacted the ability of PBO+AutoOC to get higher clocks.

I'm looking forward to Dan_D's testing.
 
I honestly don't get what the big deal is. With the 3900X, AMD basically released a 7920X two years later than Intel. Granted, at about half the price, but HEDT CPUs that are just as fast as the 3900X have been out there for a long time. The only reason the 3900X wins in applications over the 9900K is the extra four cores. In gaming, the 9900K is still clearly the leader. You could basically have had a 3900X years ago with Skylake-X if you ponied up a little more ca$h.

Sure, AMD has come out with a good value proposition, but that is all I see. The IPC and frequency certainly aren't anything to write home about.

Talk about trying to twist something to make Intel look better, yikes.

Zen 2's IPC either beats Intel's mainstream CPUs or nips right at their heels. They went from a fairly large IPC deficit to effectively matching Intel over the course of a YEAR. If not for Intel's massive clock speed advantage, they wouldn't even have the gaming wins. If Intel doesn't have a good answer next year (and they might, we'll see) and AMD can deal with their clock speed limitations, Intel could even lose that gaming win.
 
Don't forget that Intel will likely see a clock regression on their 10 nm parts even when they make it to desktop.
 
Talk about trying to twist something to make Intel look better, yikes.

Zen 2's IPC either beats Intel's mainstream CPUs or nips right at their heels. They went from a fairly large IPC deficit to effectively matching Intel over the course of a YEAR. If not for Intel's massive clock speed advantage, they wouldn't even have the gaming wins. If Intel doesn't have a good answer next year (and they might, we'll see) and AMD can deal with their clock speed limitations, Intel could even lose that gaming win.

The whole gaming argument is still bogus, in my opinion. The only area where you'll gain anything by going Intel at this point is if you are a low-resolution, very-high-frame-rate gamer. Anyone else gaming at 1440p+ these days won't see any benefit from going 9900K over the 3800X/3900X when it comes to gaming. The vast majority are going to be better off spending that $500 on the 3900X and getting superior MT performance going forward.
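
The resolution point follows from the fact that delivered frame rate is roughly the minimum of the CPU-limited and GPU-limited rates, so once the GPU is the cap, a faster CPU buys you nothing. A toy sketch with invented numbers (not benchmark results):

```python
# Delivered FPS is roughly min(CPU-limited FPS, GPU-limited FPS).
# All figures are invented for illustration, not benchmark results.

def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    return min(cpu_fps, gpu_fps)

# Hypothetical CPU-limited rates for two CPUs, and GPU-limited rates by resolution.
cpu_limits = {"faster CPU": 180.0, "slower CPU": 160.0}
gpu_limits = {"1080p": 200.0, "1440p": 120.0, "4K": 60.0}

for res, gpu_fps in gpu_limits.items():
    line = ", ".join(f"{cpu}: {delivered_fps(fps, gpu_fps):.0f} FPS"
                     for cpu, fps in cpu_limits.items())
    print(f"{res}: {line}")
# At 1080p the CPU gap shows (180 vs. 160); at 1440p and 4K both CPUs
# deliver identical frame rates because the GPU is the bottleneck.
```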
 
Don't forget that Intel will likely see a clock regression on their 10 nm parts even when they make it to desktop.

It could be so, although AMD did not have a clock regression at 7nm.
 
The whole gaming argument is still bogus, in my opinion. The only area where you'll gain anything by going Intel at this point is if you are a low-resolution, very-high-frame-rate gamer. Anyone else gaming at 1440p+ these days won't see any benefit from going 9900K over the 3800X/3900X when it comes to gaming. The vast majority are going to be better off spending that $500 on the 3900X and getting superior MT performance going forward.

X2

Bring on all those new [H] 1080p builds. LOL
 