AMD Ryzen 9 3900X Review Round-Up

GamersNexus says 4.3-4.4GHz is where all of his CPUs pooped out. No more real scaling with voltage after that.

I can confirm. Clocking all cores, this is absolutely true. However, the issue here is that these chips aren't hitting their advertised 4.6GHz boost clocks. I couldn't even do it using PBO with a +200MHz offset.
 
GamersNexus says 4.3-4.4GHz is where all of his CPUs pooped out. No more real scaling with voltage after that.

I was going to upgrade, but I assumed there'd be more OC headroom. I'm on a 4.8GHz 6700K and am finding it difficult to justify upgrading at 4K now.
 
Just for shits I benched my now-ancient 4930K @ 4.5GHz in Cinebench R20:

[Screenshot: Cinebench R20 result]

Single-core score is identical to that of a Ryzen 5 1600, while the multicore score is a smidge better (from TechSpot):

I did not realize just how much my aging 4930K was holding me back. :eek: Figured it'd at least be equal to an 1800X in ST performance, but nah, not even close.

Might be useful for any Sandy/Ivy holdouts wondering if these are worth the upgrade lol.
Will test my 3970X this evening. I suspect an even worse score. It's good and bad news lol.
 
Yes and no. A modern 4-core is garbage for AAA gaming. Mid-range mainstream platform CPUs are fine for 1080p at 60+Hz and 1440p at 60Hz. 1440p above 60Hz and 4K at 60Hz will see a benefit from the higher-end mainstream processors. 1440p at 120-144Hz can create both CPU and GPU bottlenecks, while at 4K 60Hz most high-end mainstream CPUs will be relatively close in performance.
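To put rough numbers on those refresh-rate targets: the per-frame budget is just 1000ms divided by the target frame rate (a quick back-of-envelope sketch, nothing platform-specific):

```python
# Frame-time budget per refresh-rate target: the CPU and GPU work for
# a frame must both fit inside this window or the frame rate drops.
for target_fps in (60, 120, 144):
    budget_ms = 1000 / target_fps
    print(f"{target_fps:>3} FPS -> {budget_ms:5.2f} ms per frame")

#  60 FPS -> 16.67 ms per frame
# 120 FPS ->  8.33 ms per frame
# 144 FPS ->  6.94 ms per frame
```

At 144Hz the CPU has under 7ms to prepare each frame, which is why high-refresh 1440p exposes the CPU long before 4K 60Hz does.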

I consider Doom (2016) a modern AAA game. The best in recent years, in fact. My 2500K didn't seem to hold my 1070 back, as I was playing at 90-170FPS on a 120Hz panel. That's hardly garbage. We need better games is all.
 
I can confirm. Clocking all cores, this is absolutely true. However, the issue here is that these chips aren't hitting their advertised 4.6GHz boost clocks. I couldn't even do it using PBO with a +200MHz offset.

Question, Dan: when you say "doesn't hit advertised boost," do you mean all-core or single-core?
 
Well, I was planning on waiting for the 3950X in September anyway.

I'm staying tuned, hoping this winds up being a BIOS teething issue.
 
Well, the initial post is wrong about one thing: I was able to hit 4.3GHz on all cores with a fairly modest voltage. It never even pulled that much, and that was under Cinebench and Blender, which hit the CPU far harder than games do. Through PBO+offset, I hit over 4.5GHz, just not all the way to 4.6GHz or beyond.

I hear what the guy is saying and he did the math and all that, but my observations with the processor disagree somewhat. He's talking about the chips being thermally limited, preventing the clocks from reaching the advertised values, and that's just not what I observed. My CPU never cracked 78°C no matter what I did to it. On the other hand, the Intel Core i9 9900K seemed more thermally limited, as it routinely hit 85°C and pushed upwards of 90°C; at 5.1GHz, mine throttled consistently in several tasks. I've also seen my own Threadripper system throttle here and there during testing in some workloads. Again, my Ryzen 9 3900X just doesn't seem to be hitting any thermal walls that I can see.
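For anyone wanting to sanity-check the thermal-wall claim on their own chip, this is the kind of logging I mean; a minimal sketch assuming Linux with the k10temp driver loaded (on Windows you'd reach for HWiNFO or Ryzen Master instead):

```python
import glob
import time
from pathlib import Path

# Log per-core clocks and CPU temperature once a second while a stress
# test runs in another window. Linux-only sketch: clocks come from the
# cpufreq sysfs nodes, temperature from the k10temp hwmon driver.

def core_mhz():
    paths = sorted(glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"))
    return [int(Path(p).read_text()) / 1000 for p in paths]  # kHz -> MHz

def cpu_temp_c():
    # Find the k10temp hwmon device (Ryzen); readings are millidegrees C.
    for name in glob.glob("/sys/class/hwmon/hwmon*/name"):
        if Path(name).read_text().strip() == "k10temp":
            return int(Path(name.replace("name", "temp1_input")).read_text()) / 1000
    return float("nan")

while True:
    mhz = core_mhz()
    print(f"max {max(mhz):4.0f} MHz | avg {sum(mhz) / len(mhz):4.0f} MHz | "
          f"temp {cpu_temp_c():.1f} C")
    time.sleep(1)
```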

Hmm.

Another report I found while googling suggests a weird problem with Nvidia drivers: something about them was preventing higher boost clocks.

This reportedly went away when an AMD GPU was used.

You used a 2080 Ti in your official testing (as you should have, to minimize any GPU impact), but I wonder if you experienced this driver bug.

Have you also tested using an AMD GPU? Would it boost better then?

Edit:

Forgot to include the source.
 
Hmm.

Another report I found while googling suggests a weird problem with Nvidia drivers: something about them was preventing higher boost clocks.

This reportedly went away when an AMD GPU was used.

You used a 2080 Ti in your official testing (as you should have, to minimize any GPU impact), but I wonder if you experienced this driver bug.

Have you also tested using an AMD GPU? Would it boost better then?

Edit:

Forgot to include the source.

No, I haven't. The most recent AMD GPU I have is a Radeon HD 7970 GHz Edition. I could break that out and run something that hits the CPU but not the GPU, though that wouldn't produce results comparable to the other tests.
 
Mixed feelings. Gaming-wise it still looks Intel-dominant, and sorry, gaming is what most people here do. An 8700K beating a 3700X in gaming at 1440p? Are we still in 2017? I'm doing a wholesale changeover next year: CPU/GPU/mobo. It might just be a full Intel build. Isn't this basically what's going in the consoles next year? I was looking at what I'd need for next-level gaming, and all the hype was pointing at either the 3600X or the 3700X. The 3800X and 3900X look like overkill for just gaming. But after you guys' comments and the reviews, I'm better off holding the wholesale changeover until next year.
 
Mixed feelings. Gaming-wise it still looks Intel-dominant, and sorry, gaming is what most people here do. An 8700K beating a 3700X in gaming at 1440p? Are we still in 2017? I'm doing a wholesale changeover next year: CPU/GPU/mobo. It might just be a full Intel build. Isn't this basically what's going in the consoles next year? I was looking at what I'd need for next-level gaming, and all the hype was pointing at either the 3600X or the 3700X. The 3800X and 3900X look like overkill for just gaming. But after you guys' comments and the reviews, I'm better off holding the wholesale changeover until next year.


I'm holding off on making any decisions until all the issues with the Nvidia GPU drivers and the buggy BIOS releases are put to rest.

Something is preventing these chips from boosting properly.

Seems like a massive fail on AMD's part not to have caught this before launch day, as now they're getting worse reviews than they should.

All that said, my gaming tends to be heavily GPU limited anyway, so my next CPU will still likely be a Ryzen, but I do want to see what they are capable of.
 
Haven't bought a motherboard yet; I'm waiting for some of the bugs to be worked out of the X570-series boards. Guess I could have held off on the 3900X, but I was tempted by the devil: liked what I saw from the reviews and went for it.
 
I'm holding off on making any decisions until all the issues with the Nvidia GPU drivers and the buggy BIOS releases are put to rest.

Something is preventing these chips from boosting properly.

I would certainly like to see how they perform if there are fixes down the road. I have a 2700X and was hoping for a big 10-15% performance jump across the board, but the IPC gains paired with practically zero clock gains just don't add up upgrade-wise.
 
I would certainly like to see how they perform if there are fixes down the road. I have a 2700X and was hoping for a big 10-15% performance jump across the board, but the IPC gains paired with practically zero clock gains just don't add up upgrade-wise.

what's going on here

[Chart: AMD/MSI firmware update boost-clock changes]
 
I was gunna upgrade the whole kit 'n' caboodle, but I just watched a YouTube video of a guy who put a 3900X into an A320 board. Not "officially supported," and who knows the lifespan of that board. But I've got a B350 with decent VRMs that got a BIOS update for 3000-series Ryzen. Guess I'll plop a 3900X into it!

I know, I know... no PBO2, etc., etc. But hey, it saves me from buying a new $200 motherboard, and I'll keep using my existing build :)
 
I was gunna upgrade the whole kit 'n' caboodle, but I just watched a YouTube video of a guy who put a 3900X into an A320 board. Not "officially supported," and who knows the lifespan of that board. But I've got a B350 with decent VRMs that got a BIOS update for 3000-series Ryzen. Guess I'll plop a 3900X into it!

I know, I know... no PBO2, etc., etc. But hey, it saves me from buying a new $200 motherboard, and I'll keep using my existing build :)

I think the biggest limitation of doing this is that you lose the Gen4 PCIe links.

I'm not talking about the ones to the GPU; those are irrelevant right now, and Gen3 is fine for that purpose, likely for some time. I'm talking about the four lanes that link to the chipset.

For most people this is probably fine. For me it wouldn't work so well, because I need those extra PCIe lanes off of the chipset for my expansion.
 
I was gunna upgrade the whole kit 'n' caboodle, but I just watched a YouTube video of a guy who put a 3900X into an A320 board. Not "officially supported," and who knows the lifespan of that board. But I've got a B350 with decent VRMs that got a BIOS update for 3000-series Ryzen. Guess I'll plop a 3900X into it!

I know, I know... no PBO2, etc., etc. But hey, it saves me from buying a new $200 motherboard, and I'll keep using my existing build :)
We'd better not see a future thread of you talking about how you're RMA'ing the CPU, board, or both :p.
 
I don't know how to embed tweets, but this is interesting.


Those poor reviewers won't get to catch up on sleep any time soon.

From the chart they posted, it looks like it is now boosting all the way to 4.6GHz, as well as responding more quickly.

I wonder if this also fixes PBO AutoOC?

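If anyone wants to put numbers on the "responding more quickly" part, the crude approach is to sample the reported clock at fine granularity the moment a load starts. A rough Linux-only sketch (the sysfs path and the sampling scheme are my own assumptions, not anyone's official method):

```python
import os
import time

# Crude boost-ramp probe: pin ourselves to core 0, start spinning, and
# record core 0's reported clock as fast as we can for half a second.
os.sched_setaffinity(0, {0})
FREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

samples = []
start = time.perf_counter()
while time.perf_counter() - start < 0.5:
    with open(FREQ) as f:
        samples.append((time.perf_counter() - start, int(f.read()) / 1000))

# Print every 50th sample so the ramp from idle to boost clock is readable.
for t, mhz in samples[::50]:
    print(f"{t * 1000:6.1f} ms  {mhz:.0f} MHz")
```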
 
I think the biggest limitation of doing this is that you lose the Gen4 PCIe links.

I'm not talking about the ones to the GPU; those are irrelevant right now, and Gen3 is fine for that purpose, likely for some time. I'm talking about the four lanes that link to the chipset.

For most people this is probably fine. For me it wouldn't work so well, because I need those extra PCIe lanes off of the chipset for my expansion.

But that's only taken advantage of by having PCIe 4.0 NVMe drives, right? JayzTwoCents said in his interview with AMD that they flat-out said X570 only gets you PCIe 4.0, nothing else over X470. I won't lose out on anything performance-wise (except PBO2, XFR, etc.), but other than that, it should be a similar experience. Edit: Re-read your post; this is exactly what you're saying :) Not a problem, as I'll keep using the SATA drives in that build for now. No need for 5GB/s read/write speeds ATM.



He said it got a BIOS update, so it's an officially supported configuration.

And I have a top-down cooler to help the VRMs not melt while running! Maybe I'll get some cheapy VRM heatsinks to stick on top?
 
But that's only taken advantage of by having PCIe 4.0 NVMe drives, right? JayzTwoCents said in his interview with AMD that they flat-out said X570 only gets you PCIe 4.0, nothing else over X470. I won't lose out on anything performance-wise (except PBO2, XFR, etc.), but other than that, it should be a similar experience. Edit: Re-read your post; this is exactly what you're saying :) Not a problem, as I'll keep using the SATA drives in that build for now. No need for 5GB/s read/write speeds ATM.





And I have a top-down cooler to help the VRMs not melt while running! Maybe I'll get some cheapy VRM heatsinks to stick on top?

Gen4 to the chipset gives the chipset more bandwidth, which allowed AMD to offer more expansion off of it. This is why you see X570 boards having either an extra x8 PCIe slot or more M.2 slots off of the chipset than X370 or X470.
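The raw link math backs this up. A quick sketch using the standard per-lane PCIe rates (nothing board-specific assumed):

```python
# Usable bandwidth of the CPU-to-chipset x4 link per PCIe generation.
# Gen3 runs 8 GT/s and Gen4 16 GT/s, both with 128b/130b encoding.
def lane_gb_s(gt_s):
    return gt_s * (128 / 130) / 8  # GB/s per lane after encoding overhead

for gen, rate in (("PCIe 3.0", 8), ("PCIe 4.0", 16)):
    print(f"{gen} x4 chipset link: {4 * lane_gb_s(rate):.2f} GB/s")

# PCIe 3.0 x4 chipset link: 3.94 GB/s
# PCIe 4.0 x4 chipset link: 7.88 GB/s
```

Doubling that uplink is what lets X570 hang an extra x8 slot or more M.2 drives off the chipset without starving them.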
 
The good part is, we don't have quite as many tests as some sites, and I just need to rerun them on the one machine. If I had to redo the entire lineup, it would take a week.

Keeping fingers crossed for 3.8GHz PBO AutoOC results :p
 
Gen4 to the chipset gives the chipset more bandwidth, which allowed AMD to offer more expansion off of it. This is why you see X570 boards having either an extra x8 PCIe slot or more M.2 slots off of the chipset than X370 or X470.

Forgot to mention this is an ITX build. Not much future expansion available :)
 
GPU-limited tests are irrelevant for CPU-testing purposes. I knew my comment would incite some interesting conversation. In most reviews I'm seeing, a stock 9900K is ~5-15% faster in gaming than the 3900X. Add in that my 9900KF is running at 5.3GHz, and we're talking a 20+% performance increase over the 3900X, since it overclocks so poorly. 20+% in the CPU world is "wrecking" to me, in a component field that has stagnated for years.



Uh, no. I play games like PUBG at 4K @ 144Hz/FPS, which is VERY demanding on both the CPU and GPU, especially when you're trying to never drop below 144FPS because you're using a backlight-strobing monitor. A 20% CPU speed differential is substantial in my usage scenario.


All at 4 GHz:

[Chart: game benchmarks with every CPU locked at 4GHz]

(no game was faster on the AMD than on the Intel, with both at 4GHz)

That is the most useless fucking metric on the face of the earth.

Look at these two wildly different architectures and see how setting them to an arbitrary clock makes them perform differently!
 
Hmm.

Another report I found while googling suggests a weird problem with Nvidia drivers: something about them was preventing higher boost clocks.

This reportedly went away when an AMD GPU was used.

You used a 2080 Ti in your official testing (as you should have, to minimize any GPU impact), but I wonder if you experienced this driver bug.

Have you also tested using an AMD GPU? Would it boost better then?

Edit:

Forgot to include the source.

As if GameGimpworks wasn't enough, nVidia is now gimping AMD CPUs through driver hacks! :eek::D
 
9900K still wrecks AMD for gaming. That's all I need to know.

Maybe right at this moment. But every new game release is using more and more cores. Also, everyone who buys an AMD CPU is using it for more than just gaming. Lots of peeps here use multithreaded apps.

AMD will ultimately pass Intel in gaming performance. Intel ain't got shit for their next gen.

I just don't understand the Intel fanboys on this site, which is a PC hardware enthusiast site, not an Intel lovefest.
 
Maybe right at this moment. But every new game release is using more and more cores. Also, everyone who buys an AMD CPU is using it for more than just gaming. Lots of peeps here use multithreaded apps.

AMD will ultimately pass Intel in gaming performance. Intel ain't got shit for their next gen.

Intel's large clock-speed advantage is going to make up for the core-count difference in games going forward. Games are still going to rely heavily on core speed for quite a while. At best they will reach parity, though I suspect the 9900K will remain the gaming champion for a few more years, unless AMD can get their clock speeds up next generation.
 
That is the most useless fucking metric on the face of the earth.

Look at these two wildly different architectures and see how setting them to an arbitrary clock makes them perform differently!

Agreed. Clock-for-clock measurements seemed interesting back in the PIII vs. Athlon days, but we've learned a lot since then.

Some architectures allow for higher clocks, others don't. It only makes sense to consider how an architecture performs when clocked as high as it will go. Any other measure is not really valuable.

You can have the highest-IPC CPU in the world, but if it's stuck at 200MHz, it's probably not going to be a great performer in 2019. It makes next to no sense to slow other architectures down to 200MHz to see how they compare.
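The arithmetic behind that point: sustained single-thread throughput is roughly IPC times clock, so a crippled clock swamps any plausible IPC advantage. A toy sketch (both IPC figures are invented purely for illustration):

```python
# Single-thread throughput is roughly IPC * clock; the IPC values
# below are made up solely to show why the clock ceiling dominates.
chips = {
    "huge-IPC design stuck at 200 MHz": (4.0, 200e6),
    "modest-IPC design at 5.0 GHz":     (2.0, 5.0e9),
}
for name, (ipc, hz) in chips.items():
    print(f"{name}: {ipc * hz / 1e9:4.1f} billion instructions/s")

# huge-IPC design stuck at 200 MHz:  0.8 billion instructions/s
# modest-IPC design at 5.0 GHz:     10.0 billion instructions/s
```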
 
Intel's large clock-speed advantage is going to make up for the core-count difference in games going forward. Games are still going to rely heavily on core speed for quite a while. At best they will reach parity, though I suspect the 9900K will remain the gaming champion for a few more years, unless AMD can get their clock speeds up next generation.

Probably true.

But this is only relevant for low resolution gaming or for very old titles.

For just about everything else, while the 9900K will have the benchmark edge, there will be no difference, as 99% of users will be GPU-limited.

All AMD needs to do is get close enough to sit in that "GPU-limited for most users" space, and they got there already with the first Ryzen.
 
Hope not to derail things, but one thing I found really weird is how cores are being used on Ryzen 3rd gen. As in, as I believe Hardware Unboxed reported (still searching), something weird is going on where games get scheduled across CCXs and even dies, when in reality lower-threaded games could be lumped onto the same CCX for improved performance.

Data:
This was testing the 3700X, but it should be similar for the 3900X: IPC is higher than Intel's,
but when frequency-locked, the Intel processor is still faster in games.
I can't seem to find the graph that showed the weird core utilization, but I'm looking for it right now.

I don't know if it's a Windows scheduler thing or something else, but it looks to me like there's untapped performance in these Ryzen processors simply due to, well, inadequate optimization. I'm not sure if that comes from the game itself, the Windows scheduler, or something else. Just hoping that with software optimizations and additional load balancing we'll see improved results over the next few months.
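In the meantime, you can force the issue by hand with processor affinity. A minimal sketch using psutil; the PID and the logical-CPU list below are placeholders, since on a 3900X each CCX is three cores and the logical-CPU numbering differs between Windows and Linux, so check your own topology first (lstopo on Linux, Coreinfo on Windows):

```python
import psutil

# Pin a game's process to the logical CPUs of one CCX so its threads
# share a single L3 slice instead of bouncing across CCXs and dies.
GAME_PID = 12345                   # hypothetical game process ID
CCX_CPUS = [0, 1, 2, 12, 13, 14]   # assumed: one 3-core CCX + SMT siblings

proc = psutil.Process(GAME_PID)
proc.cpu_affinity(CCX_CPUS)
print("new affinity:", proc.cpu_affinity())
```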
 
What? I thought Intel couldn't figure out how to engineer 10nm and had apparently scrapped it? What proof is there that Intel is even capable of 10nm? I think they're talking bullshit.

10nm has been very delayed and scaled back in capability, but Intel has actually been selling 10nm parts for some time now. Initially it was just low-power laptop CPUs for the Chinese market.

Ice Lake will be the big 10nm debut, and it is due out any day now.

Many questions remain as to how much better the gimped 10nm process will be compared to 14nm. I guess we'll see when it launches.
 
10nm has been very delayed and scaled back in capability, but Intel has actually been selling 10nm parts for some time now. Initially it was just low-power laptop CPUs for the Chinese market.

Ice Lake will be the big 10nm debut, and it is due out any day now.

Many questions remain as to how much better the gimped 10nm process will be compared to 14nm. I guess we'll see when it launches.

Hmmm, interesting indeed. I was not aware of this; thanks for the info.
 
Regardless, I'm impressed. If you want something for the future that packs a punch at a good cost, there's negligible reason to go Intel now. AMD is really strong right now, on 7nm with better IPC per clock, but I don't think it will last; they'll be crushed once Intel ships 10nm. We don't know what AMD has coming next, but I can imagine they'll struggle, since they've spent their advantage on the node shrink. Heck, I'll buy AMD just to piss off Intel at this point; a 12- or 16-core will last way past 5 years right now. My aging 5820K still doesn't show much sign of slowing down.
 