AMD Ryzen R9 3900X Review Round Up

My 2950X would hit 165-ish single-thread in R15 at 4.1 GHz.

The 3700X they had on demo in Mcenter today would hit 201 at 4.3 GHz.

Umm, that's not a result of a clock speed bump. That's far more efficient core performance.
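A quick points-per-GHz check bears this out (a rough sketch using only the two scores quoted above; Cinebench R15 single-thread results, treated as a crude proxy for per-clock throughput):

```python
# Per-clock comparison using the R15 single-thread scores quoted above.
# Illustrative arithmetic, not a benchmark.
def per_clock(score: float, ghz: float) -> float:
    """Cinebench points per GHz -- a crude proxy for per-clock throughput."""
    return score / ghz

zen_plus = per_clock(165, 4.1)   # 2950X: ~40.2 points/GHz
zen2 = per_clock(201, 4.3)       # 3700X: ~46.7 points/GHz

clock_gain = 4.3 / 4.1 - 1            # ~4.9% from frequency alone
per_clock_gain = zen2 / zen_plus - 1  # ~16% from the core itself

print(f"frequency bump: {clock_gain:.1%}")
print(f"per-clock gain: {per_clock_gain:.1%}")
```

So roughly 5% of the jump comes from the clock bump and around 16% from the core, which is the poster's point.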

Intel guys just don't know how to yield this one to AMD.

So why not let AMD eat their cake this time. You'll get your 10nm Intel soon enough, and then you can eat your cake.


Kind of what I was saying earlier: AMD have made massive performance gains over the last generation without clock speed being the reason, which is impressive on its own. If there is a Zen 2+, maybe they can squeeze some extra juice out of the process. I am interested in a 3700 but will likely just go with the 3600, which showed gaming performance on par with Intel's 8th- and 9th-generation non-K parts and even showed remarkably better minimum frames than the 9600K in known thread-junkie games. On a clock-for-clock basis AMD is pretty much level with Intel now. Intel have the buffer of more clock speed and a very good memory controller, but this is still a very convincing release.
 
A large improvement in Crysis 3 frame times (https://techreport.com/wp-content/uploads/2019/07/Crysis_3_time_spent_7.png) compared to the previous-generation Ryzen chips.

Overall performance seems to be what I was expecting after looking at a few of the reviews. A more refined CPU that performs better than Intel for most tasks but is let down by its inability to ramp clock speeds as high as Intel's, so it can't outpace Intel when it comes to gaming.
 
Eh, the difference is mostly within the margin of error at high resolution, I wouldn't classify the difference as "wrecking" AMD. Is Intel faster? Yes. Is it a noticeable percentage? I guess that depends. Most people are GPU limited and even if you are not, it's such a small difference overall, vs the performance increase in almost everything outside of games...

I want one of these just for the PCI-E 4.0 support and twice the cores at the same price.

He posted the same thing on other forums, some people really have an agenda.

https://www.overclock.net/forum/225...ryzen-3000-series-reviews-6.html#post28029756
 
long live the new king of CPUs, AMD!...it took a long time, but congrats to AMD!...now if they could only get their GPUs to match Nvidia
 
So the IPC claims were 100% true. I was hoping to see these able to OC to around 4.5-4.7 GHz on ambient, however; it's a minor disappointment that causes the chips to trail behind the Intel parts a bit in games and single-core, but no biggie. Still a really strong buy. I'll be grabbing an 8-core and calling it a year.
 
Dan_D
The Tom's Hardware review saw high power consumption on the ASRock X570 Taichi with Prime95:
Finally, a test that really stands out! ASRock’s X570 Taichi consumed far more power at full load, and a quick search for the cause revealed that this board, and only this board, was running the 3700X at 1.31V and 4.1GHz under Prime95 small-FFTs. The other boards were running less than 1.2V, at 3.9 to 4.0 GHz in this test.
Is that because the CPU queried the motherboard and the motherboard said, "yes, I can give you more voltage!"?
 
Just finished reading Dan's review on tfpsr. Good writeup. Pretty much what I expected, though I had hoped it would overclock better with good cooling. Judging by the fact that his instability during manual overclocks came at relatively low temps, this may not be the case, which is a shame.

Either way, I'm still on board for a Ryzen 9 3950x come September.
 

Cooling really didn't seem like the issue. Even with outlandish voltage settings, the CPU never pulled that much power. You can set the voltage manually, but all that means is that it can use up to that much. It never came close to doing that at those higher clocks. 4.4GHz and 4.5GHz were just right out. 4.4GHz gave me hope of working on occasion but firing up anything CPU intensive would always result in a crash or sudden restart.
 

I wonder if "golden sample" 3900Xs will be able to stay stable at 4.4 GHz. It will be interesting to see what Silicon Lottery is able to get when they put Zen 2 CPUs up for sale.
 
Dan_D
The Tom's Hardware review saw high power consumption on the ASRock X570 Taichi with Prime95:
Finally, a test that really stands out! ASRock’s X570 Taichi consumed far more power at full load, and a quick search for the cause revealed that this board, and only this board, was running the 3700X at 1.31V and 4.1GHz under Prime95 small-FFTs. The other boards were running less than 1.2V, at 3.9 to 4.0 GHz in this test.
Is that because the CPU queried the motherboard and the motherboard said, "yes, I can give you more voltage!"?

I haven't looked at that motherboard specifically. I have no idea what its VRMs look like, and I have no recent experience with ASRock motherboards, so I am not sure what's going on. There were three possible motherboards in the review kits. I think everyone got the MSI and then either the ASRock or an ASUS Crosshair. I didn't get the ASRock board; we got the ASUS and the MSI. Anything I could say would be entirely speculative. My initial feeling is that it's likely a firmware issue. The motherboard may simply be over-supplying voltage to the CPU.
 
Cooling really didn't seem like the issue. Even with outlandish voltage settings, the CPU never pulled that much power. You can set the voltage manually, but all that means is that it can use up to that much. It never came close to doing that at those higher clocks. 4.4GHz and 4.5GHz were just right out. 4.4GHz gave me hope of working on occasion but firing up anything CPU intensive would always result in a crash or sudden restart.

Looks like you ended up being right and these chips will have a hard limit they can all reach; I figured it would be a thermal limit instead.
 
I will have to say that overall, I think the 3700X is fairly disappointing compared to the 2700X. It isn't that much faster. Was hoping for a bigger bump. The IPC uplift was about what I expected, but I was hoping to see clocks go up further. Looking at more benchmarks, it doesn't seem too worthwhile and I may just continue using the 2700X.
 
You playing games at 640x480 resolutions?

Gamers are nuts. Any modern CPU is good enough for gaming.

Yes and no. A modern 4-core is garbage for AAA gaming. Mid-range mainstream platform CPUs are fine for 1080p at 60+ Hz and 1440p at 60 Hz. 1440p above 60 Hz and 4K at 60 Hz are going to see a benefit from the higher-end mainstream processors. 1440p at 120-144 Hz can create both CPU and GPU bottlenecks, while at 4K 60 Hz most high-end mainstream CPUs will be relatively close in performance.
 
Disappointed in X570. Will probably hold off for X670 or whatever supports USB 4.0.
Will stick with my X470 board for now.
 
This doesn't even make sense. IPC is instructions per clock... Being "clockspeed boosted" doesn't increase IPC. Also, "Clock vs clock IPC" is simply nonsense because the "per clock" part of IPC factors out the clock speed.
what's even more silly is the number of people who liked his comment...
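The point can be sketched in one line of arithmetic: throughput is roughly IPC times frequency, so IPC already has the clock divided out (illustrative Python with made-up IPC numbers, not measured values):

```python
# Performance ~= IPC * clock frequency. "Clock vs clock" comparisons hold
# frequency equal, so only IPC differs -- saying "clock vs clock IPC" is
# redundant, since IPC already has frequency divided out.
def performance(ipc: float, ghz: float) -> float:
    # Billions of instructions/sec: (instructions/cycle) * (cycles/sec)
    return ipc * ghz

# Placeholder IPC values, purely illustrative:
chip_a = performance(ipc=1.0, ghz=5.0)   # higher clock, lower IPC
chip_b = performance(ipc=1.15, ghz=4.4)  # lower clock, higher IPC

print(chip_a, chip_b)  # 5.0 vs ~5.06: higher IPC can offset a clock deficit
```

Boosting the clock raises the product but leaves the IPC term untouched, which is exactly why a clock bump doesn't "increase IPC."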
 
Cooling really didn't seem like the issue. Even with outlandish voltage settings, the CPU never pulled that much power. You can set the voltage manually, but all that means is that it can use up to that much. It never came close to doing that at those higher clocks. 4.4GHz and 4.5GHz were just right out. 4.4GHz gave me hope of working on occasion but firing up anything CPU intensive would always result in a crash or sudden restart.
It's possible it's limiting voltage to save the chip from heat. der8auer said he got about 50 MHz per 20°C, without much change in voltage. The 3900X used 190W max all-core (with standard cooling at ~4.3 GHz, I assume).
 
Man, bummer on the clock speeds. I see that performance is great and I am happy about that. Not really complaining; the OC'er in me just wants more clocks.

Odd that with a new process node we get basically the same clocks as last gen, maybe a 100 MHz bump. Oh well. They still look like great chips; not sure I am compelled to upgrade from my 2600X with the light stuff I do.
 
So when do we get more salty clickbait rumors on the next threadshredder?
 
Man, bummer on the clock speeds. I see that performance is great and I am happy about that. Not really complaining; the OC'er in me just wants more clocks.

Odd that with a new process node we get basically the same clocks as last gen, maybe a 100 MHz bump. Oh well. They still look like great chips; not sure I am compelled to upgrade from my 2600X with the light stuff I do.

TSMC 7nm isn't that great. 7nm+ might give us what we want.

Look what Intel can do with 14nm++++++++++++++++++++++++++ compared to TSMC latest process.
 
Oh yeah, people do have an agenda, no doubt. But his post isn't wrong either.
But he’s right

Depends on how you define "wrecking". Yes, it is up to 15-20% in three games at 1080p (Far Cry 5, Delaware Doctor, and Hitman 2, although Tom's has it at about 10% in Hitman 2) using a $1200 GPU. In everything else it's around 5%, and those percentages shrink as you go to 1440p, so "wrecking" would be the wrong descriptor. If you game exclusively at 1080p AND already own a 2080 Ti or plan to buy one AND you need every last frame possible, then sure, Intel "wrecks" AMD and you should pick up a 9900K.

Wrecking would imply a clear-cut choice, but this is decidedly not the case here.
 

Once you take into account that the 9900K can still be overclocked from its stock configuration, while the 3900 Ryzen cannot, it does get wrecked. Now, that's not to take anything away from the 3900. But the thing that is holding back Ryzen 3000 is clock speed. Clock for clock, Ryzen 3000 is faster than the newest Intel CPUs, but Intel still has them beat with the clock speed advantage.

So, Vega is right...even though he does have his own agenda...

But that won't stop me from upgrading to a 3800/3900 Ryzen ;)
 
TSMC 7nm isn't that great. 7nm+ might give us what we want.

Look what Intel can do with 14nm++++++++++++++++++++++++++ compared to TSMC latest process.

One has absolutely nothing to do with the other. Intel and AMD architectures are vastly different, so assuming speed differences between the TSMC and Intel processes is silly. Also look at what Intel can't do on 10nm, and that's deliver anything you want for the desktop.
 
Once you take into account that the 9900K can still be overclocked from its stock configuration, while the 3900 Ryzen cannot, it does get wrecked. Now, that's not to take anything away from the 3900. But the thing that is holding back Ryzen 3000 is clock speed. Clock for clock, Ryzen 3000 is faster than the newest Intel CPUs, but Intel still has them beat with the clock speed advantage.

So, Vega is right...even though he does have his own agenda...

But that won't stop me from upgrading to a 3800/3900 Ryzen ;)

Sure, except the headroom on the 9900K is small: you have another 300-400 MHz left in the tank (all-core OC; even smaller if we're talking just 1 or 2 cores), for maybe what, an extra 5-6%? Still hardly "wrecking". Let's just call hyperbole for what it is, ok?
 

I mean, just look at Dan's (sorry, Dan :( ) own review. Overclock that 9900K to 5 GHz and it's way bigger than 5-6%... I mean, I am speaking the truth; the results are out there.

https://www.thefpsreview.com/2019/07/07/amd-ryzen-9-3900x-cpu-review/8/
 
OK, my mistake, I was thinking of the Navi review!

Yeah, we were talking in Discord a little bit last night while each of us was working on our respective reviews. I'm not sure when he got done, but I finished at 5:00 AM Central.

On a separate note, the Core i9 9900K can easily reach 5.0-5.1GHz. I actually used one of the presets in the UEFI for the Maximus XI APEX to see if it worked. It's marked as a 24/7 stable 5.0GHz overclock.
 
I mean, just look at Dan's (sorry, Dan :( ) own review. Overclock that 9900K to 5 GHz and it's way bigger than 5-6%... I mean, I am speaking the truth; the results are out there.

https://www.thefpsreview.com/2019/07/07/amd-ryzen-9-3900x-cpu-review/8/

Here is the way I see it: you have to overclock that 9900K to get that performance, while the 3900X pretty much maxes out without any intervention on my part. Both are great chips for gaming, and one is far better for productivity. Intel will get to cling to the fastest gaming CPU, but the margin is so small, and it gets its ass kicked in productivity so badly, that I think most won't care about that tiny win in gaming.
 
I mean, just look at Dan's (sorry, Dan :( ) own review. Overclock that 9900K to 5 GHz and it's way bigger than 5-6%... I mean, I am speaking the truth; the results are out there.

https://www.thefpsreview.com/2019/07/07/amd-ryzen-9-3900x-cpu-review/8/

Yes, by dialing settings to low to force even more of a CPU bottleneck. Do you know anybody who buys a $500 chip and a $1200 GPU to game at 1080p on low? Maybe hardcore competitive gamers who do it for a living, but that's definitely not the norm. I stand by what I said earlier, "wrecking" = hyperbole.

Hell Dan even says as much himself:

Tests at these settings are in no way representational of what you should expect to see in actual gaming unless you really use a $1,299.99 video card to game at 1920×1080 with potato mode levels of detail in games

And honestly, all this argument is kinda moot anyway. I'd wager anybody who drops $1200+ on a GPU is likely looking for all out best performance regardless of cost, in which case they'd be looking at the 9900K and nothing else anyway.

For all other people who are more price conscious, the gap at 1080p is likely to hover around 10-15% or less once you back away from a 2080 Ti. You now have a $150 price gulf between 9900K and 3700X. That $150 is much better spent buying the next tier up GPU performance instead of worrying about CPU bottlenecks. Thus once again, this whole "wrecking" thing is just silly.
 
Here is the way I see it: you have to overclock that 9900K to get that performance, while the 3900X pretty much maxes out without any intervention on my part. Both are great chips for gaming, and one is far better for productivity. Intel will get to cling to the fastest gaming CPU, but the margin is so small, and it gets its ass kicked in productivity so badly, that I think most won't care about that tiny win in gaming.

I'm reminded of the 1800X vs. the 7700K.
 
Yes, by dialing settings to low to force even more of a CPU bottleneck. Do you know anybody who buys a $500 chip and a $1200 GPU to game at 1080p on low? Maybe hardcore competitive gamers who do it for a living, but that's definitely not the norm. I stand by what I said earlier, "wrecking" = hyperbole.

Hell Dan even says as much himself:



And honestly, all this argument is kinda moot anyway. I'd wager anybody who drops $1200+ on a GPU is likely looking for all out best performance regardless of cost, in which case they'd be looking at the 9900K and nothing else anyway.

For all other people who are more price conscious, the gap at 1080p is likely to hover around 10-15% or less once you back away from a 2080 Ti. You now have a $150 price gulf between 9900K and 3700X. That $150 is much better spent buying the next tier up GPU performance instead of worrying about CPU bottlenecks. Thus once again, this whole "wrecking" thing is just silly.

Totally agree. All I am saying is that even when you max out settings, the 9900K is still farther ahead than just 5-6%.
 
So you guys are telling me I should return the 9700K I bought this morning and get the 3700X with a B450M motherboard?

Do you do anything besides gaming at all (encoding, rendering, etc.)? What resolution do you game at, mainly?

I'd say keep the 9700K if both of the following are true:
- the most CPU-intensive thing you do is gaming
- you game at 1080p exclusively

Totally agree. All I am saying is that even when you max out settings, the 9900K is still farther ahead than just 5-6%.

I meant another 5-6% on top of the 9900K's existing lead. Sorry if it was confusing.
 