AMD Ryzen 9 3900X Review Round-Up

Do you do anything besides gaming at all? (encoding, rendering etc) What resolution do you game at mainly?



I meant another 5-6% on top of the 9900K's existing lead. Sorry if it was confusing.

Only gaming. 1440p 60fps but it will be 1080 when the new games come out.
 
Hmmm that's kinda tough. I guess it would depend on how much you can save with a 3700X + mobo over 9700K + mobo, and whether the savings is enough to buy you the next tier of GPU performance. Since you're only gunning for 1440p/60, I'm leaning towards the 3700X.
 
What's up with the 3800x? Has nobody received a unit to review?

I'm just going to lean toward a 3700x for myself. Given the OC headroom I'm not sure why the 3800x exists. I'd pay that premium if the boost was higher but 100mhz?
 
When you go up to 4K gaming it's a wash. The 9900K is no better than the 3900X. Really, the 3700X performs just as well as the 9900K. At 1440p the 9900K has about a 5% advantage. I'd still take the 3900X over the 9900K. It's better, literally, at everything else.
 
What's up with the 3800x? Has nobody received a unit to review?

I'm just going to lean toward a 3700x for myself. Given the OC headroom I'm not sure why the 3800x exists. I'd pay that premium if the boost was higher but 100mhz?

We did not get a 3800X in our kit...
 
What's up with the 3800x? Has nobody received a unit to review?

I'm just going to lean toward a 3700x for myself. Given the OC headroom I'm not sure why the 3800x exists. I'd pay that premium if the boost was higher but 100mhz?

I think AMD only sent out the 3700X and 3900X to reviewers. Everything else will probably come later.
 
When you go up to 4K gaming it's a wash. The 9900K is no better than the 3900X. Really, the 3700X performs just as well as the 9900K. At 1440p the 9900K has about a 5% advantage. I'd still take the 3900X over the 9900K. It's better, literally, at everything else.

When you go up to 4k an oc'd 3930k is just as good as all of those.
 
I am very impressed with the new Ryzen... clock speed aside, they are so freaking close to a 9900K in the gaming tests when the graphics card is taken out of the equation. If this version of Ryzen had been released when I was shopping, I would've gone Ryzen instead of my 8700k, 100%.
 
When you go up to 4k an oc'd 3930k is just as good as all of those.

Ya, as excited for these reviews as I was, my favorite takeaway was the i3-9100f being within a few % of relative performance on all of the techpowerup charts. Just goes to show how far games have to catch up before I can actually get excited over CPUs again.
 
Ya, as excited for these reviews as I was, my favorite takeaway was the i3-9100f being within a few % of relative performance on all of the techpowerup charts. Just goes to show how far games have to catch up before I can actually get excited over CPUs again.
Now actually go play those demanding games on a pathetic 4-core/4-thread CPU and get back to me. I can tell you right now that most demanding modern games will have a CPU like that pegged at 100% nearly the whole time, even if you had nothing else running in the background. Regardless of the fps numbers you see, some of those games would be a stuttery shitshow.
 
Just for shits I benched my now ancient 4930K @ 4.5 in Cinebench R20:

[Screenshot: Cinebench R20 result]


Single-core score is identical to that of a Ryzen 5 1600, while the multi-core score is a smidge better (from TechSpot):

[Chart: Cinebench R20 multi-core results]

[Chart: Cinebench R20 single-core results]

I did not realize just how much my aging 4930K was holding me back. :eek: Figured it'd at least be equal to an 1800X in ST performance, but nah, not even close.

Might be useful for any Sandy/Ivy holdouts wondering if these are worth the upgrade lol.
 
Can someone PLEASE explain why all of these reviews are in Hardware Tech News and not CPUs/GPUs? OPs, you do realize that these threads will soon get buried and lost? It would be far better to put them in the GPU/CPU sub-forums.

Shouldn't HardwareNews be like "Intel leaks more info on their upcoming CPUs"?

Now that Tech News is open to all, it seems to be a new fad to post all threads there.
 
Just for shits I benched my now ancient 4930K @ 4.5 in Cinebench R20:

[Screenshot: Cinebench R20 result]

Single-core score is identical to that of a Ryzen 5 1600, while the multi-core score is a smidge better (from TechSpot):

I did not realize just how much my aging 4930K was holding me back. :eek: Figured it'd at least be equal to an 1800X in ST performance, but nah, not even close.

Might be useful for any Sandy/Ivy holdouts wondering if these are worth the upgrade lol.

Make sure you do at least 3 to 5 (or more) runs of Cinebench. You will gain a little performance as you do more runs. For whatever reason it seems to work a bit better as the CPU warms up, might have something to do with how boost clocks work.
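
If you want to script the repeat runs rather than clicking through them, something like this minimal sketch works; the benchmark path is a placeholder, not a real Cinebench CLI invocation, it just launches the run back to back and times each pass so the warm-up effect is easy to spot:

```python
# Sketch of automating back-to-back benchmark passes. The executable path is
# a placeholder; Cinebench still reports its score in its own window.
import subprocess
import time

BENCH_CMD = [r"C:\path\to\your\benchmark.exe"]  # placeholder executable
RUNS = 5

for i in range(1, RUNS + 1):
    start = time.perf_counter()
    subprocess.run(BENCH_CMD, check=True)        # blocks until the run exits
    print(f"run {i}: {time.perf_counter() - start:.1f}s wall time")
```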

Can someone PLEASE explain why all of these reviews are in Hardware Tech News and not CPUs/GPUs? OPs, you do realize that these threads will soon get buried and lost? It would be far better to put them in the GPU/CPU sub-forums.

Shouldn't HardwareNews be like "Intel leaks more info on their upcoming CPUs"?

Now that Tech News is open to all, it seems to be a new fad to post all threads there.

Maybe you should actually look in the AMD CPU section. There is a thread there for all Zen 2 reviews. Also, FPN is probably still one of the most visited sections on the forum.
 
Just for shits I benched my now ancient 4930K @ 4.5 in Cinebench R20:

[Screenshot: Cinebench R20 result]

Single-core score is identical to that of a Ryzen 5 1600, while the multi-core score is a smidge better (from TechSpot):

I did not realize just how much my aging 4930K was holding me back. :eek: Figured it'd at least be equal to an 1800X in ST performance, but nah, not even close.

Might be useful for any Sandy/Ivy holdouts wondering if these are worth the upgrade lol.

You can always get a Xeon from eBay and double your cores, though you're probably better off upgrading to Ryzen/Threadripper unless you don't want to buy new RAM. (Mine is OC'd):
 

[Attachment: cinebenchr20.jpg (Cinebench R20 result)]

Nah I game most of the time, with maybe 10-15% of encoding thrown in, so more jiggahertz > MOAR COARZ. (well ideally I'd like to have both, but Intel can go pound sand with their asking price for the 9900K :))
 
Now actually go play those demanding games on a pathetic 4-core/4-thread CPU and get back to me. I can tell you right now that most demanding modern games will have a CPU like that pegged at 100% nearly the whole time, even if you had nothing else running in the background. Regardless of the fps numbers you see, some of those games would be a stuttery shitshow.

My 6600k still plays everything just fine @ 1440p. I really wish I needed to upgrade.
 
My 6600k still plays everything just fine @ 1440p. I really wish I needed to upgrade.


There is no way that is not a stutter fest, even with a 4.6-4.8GHz OC. With all the security patches in place, it gets even worse.

I noticed my 5.2GHz all-core OC'd 3770K with DDR3-2600 (which is faster than or on par with your 6600K) took a huge hit in minimum fps and frame times vs. my 1600 @ 4.1GHz (3200C16)... that was over a year ago, and Intel has had even more security mitigations since.
 
Eh, the difference is mostly within the margin of error at high resolution; I wouldn't classify that as "wrecking" AMD.

GPU-limited tests are irrelevant for CPU testing purposes. I knew my comment would incite some interesting conversation. In most reviews I'm seeing, a stock 9900K is ~5-15% faster in gaming than the 3900X. Add in that my 9900KF is running at 5.3 GHz and we are talking a 20+% performance increase over the 3900X, since it overclocks so poorly. 20%+ in the CPU world is "Wrecking" to me in a component field that has stagnated for years.
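
Rough arithmetic behind that 20+% figure, as a sketch only: it assumes the midpoint of the quoted 5-15% stock gap and optimistic linear scaling with clock speed, and the 4.7 GHz stock all-core number is approximate.

```python
# Back-of-envelope math for the claimed 20+% lead, assuming (optimistically)
# that CPU-bound game performance scales linearly with core clock.
stock_gap = 0.10     # midpoint of the quoted ~5-15% stock 9900K lead over the 3900X
stock_clock = 4.7    # approximate stock 9900K all-core turbo, GHz
oc_clock = 5.3       # the 9900KF overclock mentioned above, GHz

total_lead = (1 + stock_gap) * (oc_clock / stock_clock) - 1
print(f"estimated CPU-bound lead: {total_lead:.0%}")  # ~24% under these assumptions
```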

You playing games at 640x480 resolutions?

Gamers are nuts. Any modern CPU is good enough for gaming.

Uh no. I play games like PUBG at 4K @ 144 Hz/FPS which is VERY demanding on both the CPU and GPU. Especially when you are trying to not drop below 144 FPS ever due to using a back-light strobing monitor. 20% CPU speed differential is substantial in my usage scenario.


All at 4 GHz:

[Chart: per-game benchmarks with both CPUs at 4 GHz]


(no game was faster on the AMD than the Intel all being at 4 GHz)
 
What's up with the 3800x? Has nobody received a unit to review?

I'm just going to lean toward a 3700x for myself. Given the OC headroom I'm not sure why the 3800x exists. I'd pay that premium if the boost was higher but 100mhz?

The 3800X exists because of its extra 40W of TDP headroom for PBO. Given that the 99th-percentile numbers of the 3700X were frequently better than those of the 3900X, I'm guessing the 3800X wasn't sampled because it will likely end up being a better gaming chip than the 3900X overall. It seems like all-core OCs are pretty meaningless as well, since PBO will get you (almost literally) 98% of the way there for multithreaded loads with no tweaking and will be faster in lightly threaded loads.

GPU-limited tests are irrelevant for CPU testing purposes. I knew my comment would incite some interesting conversation. In most reviews I'm seeing, a stock 9900K is ~5-15% faster in gaming than the 3900X. Add in that my 9900KF is running at 5.3 GHz and we are talking a 20+% performance increase over the 3900X, since it overclocks so poorly. 20%+ in the CPU world is "Wrecking" to me in a component field that has stagnated for years.



Uh no. I play games like PUBG at 4K @ 144 Hz/FPS which is VERY demanding on both the CPU and GPU. Especially when you are trying to not drop below 144 FPS ever due to using a back-light strobing monitor. 20% CPU speed differential is substantial in my usage scenario.


All at 4 GHz:

[Chart: per-game benchmarks with both CPUs at 4 GHz]

(no game was faster on the AMD than the Intel all being at 4 GHz)

Yet you get to 1440p at stock clocks and the difference is just 4%/6% (avg/99th-percentile FPS respectively) in BFV, which will decrease even more as you go to 4K. Also, why the hell would you use a 7940X over something like a 9900K if you're worried about max framerates? Kind of just seems like sour grapes. EDIT: just re-read and saw the 9900KF bit. Given Intel's muckery with MCE/PL2 on all of the motherboards, I'm willing to bet the difference is MUCH smaller than the 20% you claim. I'm sure we'll see more comparisons in the future that speak to that, though.
 
GPU-limited tests are irrelevant for CPU testing purposes. I knew my comment would incite some interesting conversation. In most reviews I'm seeing, a stock 9900K is ~5-15% faster in gaming than the 3900X. Add in that my 9900KF is running at 5.3 GHz and we are talking a 20+% performance increase over the 3900X, since it overclocks so poorly. 20%+ in the CPU world is "Wrecking" to me in a component field that has stagnated for years.



Uh no. I play games like PUBG at 4K @ 144 Hz/FPS which is VERY demanding on both the CPU and GPU. Especially when you are trying to not drop below 144 FPS ever due to using a back-light strobing monitor. 20% CPU speed differential is substantial in my usage scenario.


All at 4 GHz:

[Chart: per-game benchmarks with both CPUs at 4 GHz]

(no game was faster on the AMD than the Intel all being at 4 GHz)
What does a 20% speed differential at low resolution/settings translate to at 4K Ultra settings though? I doubt it's going to be a 1:1 scaling going from say 1080p Low to 4K Ultra.

That all being said, yes, in your particular usage I think Intel is the better option.
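
One way to reason about it, as a minimal sketch: treat each frame as limited by whichever of the CPU or GPU takes longer (ignoring pipelining), with made-up frame times.

```python
# Toy frame-rate model: a frame is bounded by the slower of the CPU and GPU.
# Ignores pipelining/overlap, so it's only illustrative; all numbers are made up.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_fast, cpu_slow = 5.0, 6.0  # hypothetical per-frame CPU times, 20% apart

# 1080p Low: the GPU is barely loaded, so the full CPU gap shows up.
print(fps(cpu_fast, 3.0), fps(cpu_slow, 3.0))    # 200.0 vs ~166.7 fps

# 4K Ultra: the GPU frame time dominates and the CPU gap disappears.
print(fps(cpu_fast, 12.0), fps(cpu_slow, 12.0))  # ~83.3 vs ~83.3 fps
```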
 
Cooling really didn't seem like the issue. Even with outlandish voltage settings, the CPU never pulled that much power. You can set the voltage manually, but all that means is that it can use up to that much; it never came close to doing that at those higher clocks. 4.4GHz and 4.5GHz were just right out. 4.4GHz occasionally gave me hope of working, but firing up anything CPU-intensive would always result in a crash or sudden restart.

I thought more about this overnight, and something about this just doesn't make sense to me.

We know the boost clock is 4.6GHz. That means this is what the cores are capable of hitting individually.

Usually the reason you can't get an all core overclock is because of heat.

If heat isn't it, and we know the cores are capable of hitting the clocks, then what is it? Power delivery?

Or maybe the BIOS/microcode isn't quite there yet?
 
What does a 20% speed differential at low resolution/settings translate to at 4K Ultra settings though? I doubt it's going to be a 1:1 scaling going from say 1080p Low to 4K Ultra.

That all being said, yes, in your particular usage I think Intel is the better option.

It's not just low resolution/settings. With a 2175 MHz RTX Titan and a not incredibly demanding game (such as PUBG), even at 4K I max out my 5.3 GHz 9900KF running minimum 144 FPS. No one is arguing that the 3900x isn't a better value. I'm just stating the 9900K is still the king of gaming, especially overclocked.
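
For a sense of how tight that target is, some simple arithmetic against a hard 144 FPS floor; the per-frame CPU times below are hypothetical.

```python
# Frame-time budget for a hard 144 FPS floor, and what a ~20% faster CPU buys.
# The per-frame CPU times are hypothetical, purely for illustration.
budget_ms = 1000.0 / 144            # ~6.94 ms total per frame
cpu_slow_ms = 7.5                   # ~133 FPS: misses the floor
cpu_fast_ms = cpu_slow_ms / 1.2     # ~6.25 ms: back under the 6.94 ms budget

print(f"budget {budget_ms:.2f} ms | slow CPU {1000 / cpu_slow_ms:.0f} FPS | "
      f"fast CPU {1000 / cpu_fast_ms:.0f} FPS")
```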
 
I thought more about this overnight, and something about this just doesn't make sense to me.

We know the boost clock is 4.6GHz. That means this is what the cores are capable of hitting individually.

Usually the reason you can't get an all core overclock is because of heat.

If heat isn't it, and we know the cores are capable of hitting the clocks, then what is it? Power delivery?

Or maybe the BIOS/microcode isn't quite there yet?

I don't think it's power delivery. The MSI MEG X570 GODLIKE may not do anything innovative on the VRM front, but its VRMs are more than capable of delivering enough power to reach 5.2GHz on LN2. The CPU also never cracked 78c on the bench at full load. It rarely even got to that point, so it wasn't a heat issue. My money would be on immature BIOS. I actually have an updated BIOS for the board that was released on launch day, so I'll be taking a look at it and seeing how it impacts boost clocks.

EDIT: I think I misread the above post. I thought we were specifically addressing the reason why the CPUs aren't hitting their boost clocks, where heat doesn't seem to be a factor. On an all-core overclock, I'd almost certainly bet that heat would be an issue. I never got my chip to heat up that much at 4.5GHz, but only because it crashed too fast. I don't think it's a matter of power output specifically, as these chips have been overclocked over 5.0GHz on LN2; that requires far more power than you could pull on air or water. Looking at the VRMs on this test board, power shouldn't be an issue.
 
I don't think it's power delivery. The MSI MEG X570 GODLIKE may not do anything innovative on the VRM front, but its VRMs are more than capable of delivering enough power to reach 5.2GHz on LN2. The CPU also never cracked 78c on the bench at full load. It rarely even got to that point, so it wasn't a heat issue. My money would be on immature BIOS. I actually have an updated BIOS for the board that was released on launch day, so I'll be taking a look at it and seeing how it impacts boost clocks.

Do you think maybe AMD put an artificial cap on it? Maybe to protect questionable motherboards from being overloaded?
 
Isn't Civ VI fairly CPU limited in late-game scenarios?
Yeah, but due to its turn-based nature it's not really something that you can feel all the time. I haven't played a big Civ VI game in a while, but the CPU limitation is mostly just what happens between turns, and the actual FPS during turns isn't as relevant. AI turns going faster is nice, but it's not something that truly affects gameplay.

In most of what I play (simulation city builders and such), the CPU limitation hits hard when there are too many entities involved in the game. Most modern games of this nature try to simulate every single person/car/object and assign every single one of them a task; the true endgame is making sure that all these unique objects can perform their tasks efficiently while not running into the other ones. My 8-core, 16-thread 1700 is basically stuck at 20fps in my biggest Cities: Skylines cities, all cores at 80% workload or so, while my GPU sits at maybe 50% or less because it's simply not getting the data fast enough to hit a higher framerate.
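
To make that concrete, here's a toy sketch of the per-tick agent update those games run; it is not actual Cities: Skylines code, just an illustration of why the work scales with entity count rather than resolution.

```python
# Toy agent-simulation tick: every entity gets its task advanced every tick,
# so CPU cost grows with entity count, independent of rendering resolution.
# Not real Cities: Skylines code, purely illustrative.
import random

class Agent:
    def __init__(self) -> None:
        self.pos = [random.random(), random.random()]
        self.target = [random.random(), random.random()]

    def step(self, dt: float) -> None:
        # Move toward the assigned target; a real game would also run
        # pathfinding and collision avoidance here, which is the expensive part.
        for i in range(2):
            self.pos[i] += (self.target[i] - self.pos[i]) * dt

def simulate_tick(agents: list, dt: float = 0.016) -> None:
    for agent in agents:  # O(n) on the CPU, every single frame
        agent.step(dt)

city = [Agent() for _ in range(100_000)]  # a "big city" worth of entities
simulate_tick(city)  # at this scale the tick itself caps the frame rate
```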
 
Sure, I get what you're saying. I was just suggesting it as a potential analogue since Civ shows up in a few benchmark suites, but maybe it's not particularly applicable.
 
Yeah, but due to its turn-based nature it's not really something that you can feel all the time. I haven't played a big Civ VI game in a while, but the CPU limitation is mostly just what happens between turns, and the actual FPS during turns isn't as relevant. AI turns going faster is nice, but it's not something that truly affects gameplay.

In most of what I play (simulation city builders and such), the CPU limitation hits hard when there are too many entities involved in the game. Most modern games of this nature try to simulate every single person/car/object and assign every single one of them a task; the true endgame is making sure that all these unique objects can perform their tasks efficiently while not running into the other ones. My 8-core, 16-thread 1700 is basically stuck at 20fps in my biggest Cities: Skylines cities, all cores at 80% workload or so, while my GPU sits at maybe 50% or less because it's simply not getting the data fast enough to hit a higher framerate.

Agreed. But even late game on a huge map, I've never felt that the turn times were too long on my old hexacore. I'd imagine it could be bad on something anemic, but any halfway decent CPU can handle it.
 
GPU-limited tests are irrelevant for CPU testing purposes. I knew my comment would incite some interesting conversation. In most reviews I'm seeing, a stock 9900K is ~5-15% faster in gaming than the 3900X. Add in that my 9900KF is running at 5.3 GHz and we are talking a 20+% performance increase over the 3900X, since it overclocks so poorly. 20%+ in the CPU world is "Wrecking" to me in a component field that has stagnated for years.



Uh no. I play games like PUBG at 4K @ 144 Hz/FPS which is VERY demanding on both the CPU and GPU. Especially when you are trying to not drop below 144 FPS ever due to using a back-light strobing monitor. 20% CPU speed differential is substantial in my usage scenario.


All at 4 GHz:

[Chart: per-game benchmarks with both CPUs at 4 GHz]

(no game was faster on the AMD than the Intel all being at 4 GHz)


What review is that graph from? I wanna see more. Thanks.
 
Make sure you do at least 3 to 5 (or more) runs of Cinebench. You will gain a little performance as you do more runs. For whatever reason it seems to work a bit better as the CPU warms up, might have something to do with how boost clocks work.

I've noticed this too, but I think it has more to do with Windows having cached various things already.
 
It's not just low resolution/settings. With a 2175 MHz RTX Titan and a not incredibly demanding game (such as PUBG), even at 4K I max out my 5.3 GHz 9900KF running minimum 144 FPS. No one is arguing that the 3900x isn't a better value. I'm just stating the 9900K is still the king of gaming, especially overclocked.


Since when is PUBG not demanding?

Granted I haven't played it since the summer of 2017, but at that point it brought my Titan to its knees at 4k.

In order to get to even 60fps, I had to run at a custom lower resolution (21:9 ultrawide inside 4k, letterboxed).

Maybe something has changed since then?

Fun game, but got old after I won my first couple of chicken dinners. Not much replay value, IMHO.
 
Do you think maybe AMD put an artificial cap on it? Maybe to protect questionable motherboards from being overloaded?

No, I do not. Even if they did, enabling PBO should fix that. Precision Boost Overdrive takes the PPT, EDC and TDC values of the motherboard when PBO is engaged. These values are set by the motherboard manufacturer, not AMD. Basically, the conditions monitored by the processor which the algorithm checks come from the processor's preset limits from AMD when it's set to automatic or PB2. The difference between the two is aggression. With PBO engaged, the processor's OEM-defined limits are suspended and the Precision Boost 2 / PBO algorithm simply uses the new values. These are values the user can actually adjust as well, in the UEFI or in Ryzen Master. I actually set these during my testing to try and get the boost clocks up, but it had no discernible impact. The only thing that did was the new PBO offset value. You can enable PBO and then add an offset of up to 200MHz. This simply adds 200MHz on top of whatever AMD's algorithm sets the boost clocks to at the time.

This was taken from AMD's reviewer's guide:
  • Package Power Tracking (“PPT”): The PPT% indicates the distance to configured maximum power (in watts) that can be delivered to the processor socket. “Limit” notes total motherboard-provided capacity if PBO is enabled.
  • Thermal Design Current (“TDC”): The TDC% indicates the distance to the configured current limit (in amps) that can be delivered by the motherboard voltage regulators when thermally constrained. “Limit” notes total motherboard-provided capacity if PBO is enabled.
  • Electrical Design Current (“EDC”): The EDC% indicates the distance to the configured current limit (in amps) that can be delivered by the motherboard voltage regulators in a peak/transient condition. “Limit” notes total motherboard-provided capacity if PBO is enabled.
It's possible that the motherboard BIOS was using more conservative values, or that something in the BIOS code threw the algorithm off, reducing the maximum headroom of the boost clocks. I'd lean towards something in the AGESA code that came from AMD. I don't think specific manufacturers had anything to do with this, as the ASUS, ASRock and MSI test samples all seemed to have this same issue.
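
For reference, a small sketch of how those "distance to limit" percentages work out; the limit and telemetry numbers below are illustrative examples, not values pulled from AMD or any board vendor.

```python
# How the PPT/TDC/EDC "distance to limit" readouts are derived. All numbers
# below are illustrative examples, not AMD or motherboard defaults.
limits    = {"PPT (W)": 142.0, "TDC (A)": 95.0, "EDC (A)": 140.0}
telemetry = {"PPT (W)": 120.0, "TDC (A)": 70.0, "EDC (A)": 125.0}

for name, limit in limits.items():
    used = telemetry[name] / limit
    print(f"{name}: {used:.0%} of limit, {1 - used:.0%} headroom")

# Precision Boost 2 keeps opportunistically raising clocks until one of these
# (or temperature) runs out of headroom; enabling PBO swaps the stock limits
# for the motherboard's larger ones, and the offset raises the frequency
# ceiling by up to 200MHz on top of whatever the algorithm picks.
```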
 
I don't think it's power delivery. The MSI MEG X570 GODLIKE may not do anything innovative on the VRM front, but its VRMs are more than capable of delivering enough power to reach 5.2GHz on LN2. The CPU also never cracked 78c on the bench at full load. It rarely even got to that point, so it wasn't a heat issue. My money would be on immature BIOS. I actually have an updated BIOS for the board that was released on launch day, so I'll be taking a look at it and seeing how it impacts boost clocks.


I did some googling. The line of thinking that an updated BIOS may help is pretty popular out there, but there are also some who say it is not likely.

Who knows really.
 
An extra 100-200mhz isn’t going to help much at normal gaming resolutions. It’ll help for the benchmarks, but that’s about it.
 
I did some googling. The line of thinking that an updated BIOS may help is pretty popular out there, but there are also some who say it is not likely.

Who knows really.

Well, the initial post is wrong about one thing: I was able to hit 4.3GHz on all cores with a fairly modest voltage. It never even pulled that much, and that was under Cinebench and Blender, which hit the CPU far harder than the games do. Through PBO + offset, I hit over 4.5GHz, just not all the way to 4.6GHz or beyond.

I hear what the guy is saying, and he did the math and all that, but my observations with the processor disagree somewhat. He's talking about the chips being thermally limited, preventing the clocks from reaching the advertised values, and that's just not what I observed. My CPU never cracked 78c no matter what I did to it. On the other hand, the Intel Core i9 9900K seemed more thermally limited, as it routinely hit 85c and pushed upwards of 90c; at 5.1GHz mine throttled consistently in several tasks. I've also seen my own Threadripper system throttle here and there during testing in some workloads. Again, my Ryzen 9 3900X just doesn't seem to be hitting any thermal walls that I could see.
 
GamersNexus says 4.3-4.4GHz is where all of his CPUs pooped out. No more real scaling with voltage after that.
 