Summary of Benchmark Results: 3900X vs 9900k

DuronBurgerMan

[H]ard|Gawd
Joined
Mar 13, 2017
Messages
1,340
I've gone through a gazillion benchmarks (Toms, Anand, FPS, GN, various YouTubers (Linus, Jay, etc...), Puget Systems) and have *approximate* values re: use cases.

3900X vs 9900K, both stock (positive = 3900X ahead):

1080p Gaming (high end GPU): -6%

4k Gaming: -2%

Photoshop (all tests): +2%

Premiere (all tests): +15%

After Effects (all tests): -5% disclaimer: high variance between tests

3D Rendering (CPU, all): +45%

3D Rendering (GPU, all): +15%

Standard Productivity (Office, Browsers, etc...): ~tie

Compression/Decompression: +25% disclaimer: would be much higher w/o WinRAR

Compiling (common, all): +30%

Encoding (CPU, no GPU/QuickSync): +20% disclaimer: high variance between tests
 
Essentially, Ryzen 3000s are good CPUs, but nowhere near insurmountable for Intel, even in the short term. If they are willing to slightly cannibalize their HEDT CPUs by bringing some of them down to their mainstream platforms, they're back on top in pretty much every performance metric, except power consumption.

The CPU war is far from over. I certainly hope AMD is not resting on its laurels like it did after Athlon 64. They need to figure out how to scale this thing beyond simply adding more cores, or their time at the top will be short-lived.
 
Thanks for putting this data together; it really is a nice quick snapshot of what it does worse, the same, and better. Honestly, it's hard to go wrong with either one at this point, aside from some growing pains that I'm hoping are resolved soon.
 
I believe that the 9900K is still the better overall CPU. X570 still has bugs to work out. The 3900X can game of course, but Intel, if they ever manage to pull their head out of their ass and install a half-decent CEO, could crush AMD without much effort. Historically, every win that AMD had came when Intel fucked up. And this time around, Intel is far from a Pentium 4-level fuck-up.

If they are willing to slightly cannibalize their HEDT CPUs by bringing some of them down to their mainstream platforms, they're back on top in pretty much every performance metric, except power consumption.

They can't simply do that. Their CPUs are monolithic dies, so anything ported to the mainstream platform has to have the AVX-512 silicon cut out; otherwise the dies are too big. They could probably glue two 9900K dies together and release something like that... but it would be power-hungry.

Where Intel fucked up, and they don't want to talk about it, is that they slacked and failed to upgrade their fabs. Intel was looking to diversify in the past few years, and it shows. They thought that PC demand would fall off a cliff. Sadly, they don't have much to show for it. And their CEO is absolutely useless. Knowing Intel, they will get their fabs in order and a better CEO, and then they will get back on top again. I like how things are right now; they needed to be brought down a few pegs.
 
Nice work, but it would serve you well to link to sources.

That would take much, much, much longer than the post itself. You don't want to know how many of these I've dug through.

However, I will give you some quick cites to verify.

Hardware Unboxed (embedded video) *comparison with SMT off

Puget Systems

https://www.pugetsystems.com/labs/a...adripper-2-Intel-9th-Gen-Intel-X-series-1536/

https://www.pugetsystems.com/labs/a...adripper-2-Intel-9th-Gen-Intel-X-series-1537/

https://www.pugetsystems.com/labs/a...adripper-2-Intel-9th-Gen-Intel-X-series-1535/

https://www.pugetsystems.com/labs/a...adripper-2-Intel-9th-Gen-Intel-X-series-1538/

https://www.pugetsystems.com/labs/a...adripper-2-Intel-9th-Gen-Intel-X-series-1529/

PC Mag

https://www.pcmag.com/review/369176/amd-ryzen-9-3900x

Anandtech *note that results were updated after BIOS fix

https://www.anandtech.com/show/14605/the-and-ryzen-3700x-3900x-review-raising-the-bar

Tomshardware

https://www.tomshardware.com/reviews/ryzen-9-3900x-7-3700x-review,6214.html

Hexus

https://hexus.net/tech/reviews/cpu/132374-amd-ryzen-9-3900x-ryzen-7-3700x/

Extreme Tech

https://www.extremetech.com/tag/ryzen-3000

Linus (embedded video)

GN (embedded video)

Hot Hardware *note: results were much lower for both CPUs than most other reviewers

https://hothardware.com/reviews/amd-ryzen-9-3900x-vs-core-i9-9900k-ipc-shootout

Digital Trends *note gaming results were unusually high for 3900X

https://www.digitaltrends.com/computing/amd-ryzen-9-3900x-vs-intel-core-i9-9900k/

PC World

https://www.pcworld.com/article/3405567/ryzen-3000-review-amds-12-core-ryzen-9-3900x.html

Tech Powerup

https://www.techpowerup.com/review/amd-ryzen-9-3900x/10.html

There are more...
 
Some interesting quirks I've found:

1. 1080p gaming performance ranges from roughly -10% to almost 0% (and a couple of reviewers claimed an AMD victory - this was BS IMHO). I believe the Hardware Unboxed 36-game test is about the most accurate at -6% - and the average of all benchmarkers comes out to roughly that value anyway (see the averaging sketch after this list).

2. For Photoshop, Puget Systems and PC World (running the Puget benchmark suite) claimed a modest AMD win. GN claimed a modest AMD loss. Toms ran a different benchmark under 'photo editing' that resulted in a functional tie. Techpowerup claimed an AMD loss with a different benchmark than Puget/PC World. Averaged out, it's a slight AMD win.

3. For rendering, the numbers are remarkably consistent across reviews and need no further investigation.

4. For encoding, Premiere, and After Effects, Puget Systems investigated a number of things. The 3900X had a mild loss in After Effects, but in everything else had a heavy win... except when measuring encode performance with QuickSync enabled. Even after accounting for this, the 3900X still wins overall in each program/function category (except in AE), so I have chosen this value. However, QuickSync is controversial here since it results in a quality loss. So, if anything, AMD's win here is UNDERstated. Note that TechPowerup disagreed slightly with Puget Systems, resulting in a near-tie.

5. Standard productivity is a mix of various benchmarks that do not fall into the other buckets, and could just as easily become "misc" or "general". In this we see a roughly similar number of Intel victories vs AMD victories. There is sometimes great variation from application to application. Some notes: AMD wins one of the browser benchmarks, Intel wins the other two. AMD is very strong on Excel, weak in Word. It's back and forth like that for most of the rest.
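
For the curious, the "averaging" above just means folding each reviewer's 3900X/9900K performance ratio into one number. Here's a minimal sketch of the idea in C using a geometric mean - the ratios below are made up for illustration, not my actual data set:

```c
/* Sketch: combine per-review 3900X/9900K performance ratios into a
 * single delta via the geometric mean. Ratios are hypothetical.
 * Build with: gcc avg.c -lm */
#include <stdio.h>
#include <math.h>

int main(void) {
    double ratios[] = { 0.90, 0.94, 0.95, 0.97 };  /* one ratio per review */
    int n = sizeof ratios / sizeof ratios[0];
    double log_sum = 0.0;
    for (int i = 0; i < n; i++)
        log_sum += log(ratios[i]);          /* geometric mean via logs */
    double gmean = exp(log_sum / n);
    printf("combined delta: %+.1f%%\n", (gmean - 1.0) * 100.0);  /* ~ -6.0% */
    return 0;
}
```

A geometric mean keeps one outlier review from dragging the whole average around, which matters given the spread noted in point 1.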
 
I believe that the 9900K is still the better overall CPU. X570 still has bugs to work out. The 3900X can game of course, but Intel, if they ever manage to pull their head out of their ass and install a half-decent CEO, could crush AMD without much effort. Historically, every win that AMD had came when Intel fucked up. And this time around, Intel is far from a Pentium 4-level fuck-up.

You have two separate claims here.

1. "9900k is still the better overall CPU".

No. This is false. For "overall" or "general" results, we must look at all applications/use cases. The 3900X wins a majority of them no matter how you slice and dice it, and where it loses, the margin is small. Zen 2 has caught up enough in single-thread performance that the remaining deficit no longer outweighs both raw core count and superior SMT scaling. In some throughput cases, AMD even wins outright in single-thread performance. As an "overall" CPU, the 3900X is clearly superior. However, in specific use cases, gaming being the most obvious, the 9900K is superior. Some video applications can benefit from QuickSync or high single-thread performance, and this allows Intel a few victories in some video editing scenarios, but the 3900X's raw core count usually wins the day here as well.

I agree, however, that the 9900K is not a fuck-up, or a P4 situation. It's a great CPU. If you're building the ultimate gaming rig (and don't intend to do much else with it), you should buy it. If the teething issues of a new launch/platform bother you, then sure, the 9900K is a great alternative there too.

2. "Could crush AMD without effort."

Perhaps. Perhaps not. I've mentioned before that AMD and Intel have very different approaches at a business level. Intel has a lot of money, but it also has a lot of baggage. AMD's advantage is that some smart wizard in the basement can come up with some hare-brained idea and run with it, resulting in occasional flares of unexpected brilliance. AMD's weakness is exactly the same thing. Look at Bulldozer as a hare-brained idea that turned to crap. Look at Zen 2's chiplet design as a hare-brained idea that turned to gold. AMD is, at its core, inconsistent, but not stupid and not easy to count out.

They can't simply do that. Their CPUs are monolithic dies, so anything ported to the mainstream platform has to have the AVX-512 silicon cut out; otherwise the dies are too big. They could probably glue two 9900K dies together and release something like that... but it would be power-hungry.

Where Intel fucked up, and they don't want to talk about it, is that they slacked and failed to upgrade their fabs. Intel was looking to diversify in the past few years, and it shows. They thought that PC demand would fall off a cliff. Sadly, they don't have much to show for it. And their CEO is absolutely useless. Knowing Intel, they will get their fabs in order and a better CEO, and then they will get back on top again. I like how things are right now; they needed to be brought down a few pegs.

Correct. The HEDT parts can't just come down to the mainstream because of power requirements and a host of other things. This is where AMD's CCX and, later, CCD/chiplet concept proved excellent. It allowed for rapid, cheap, and (relatively) high-clock core scaling. It was a great idea. Intel doesn't have an equivalent yet, and has continued to rely on the fact that they've really perfected their 14nm process and really perfected Skylake. That and AMD's single-thread weakness kept AMD from really running away with a victory. But the single-thread weakness is almost gone - and is now almost solely due to clock speed. And I doubt we see much higher - if any higher - clocks from Skylake derivatives. If Intel drops a 10-core - and it seems like they will - AMD has a 16-core answer. The status quo is preserved. The status quo being, as per the first part of this reply: at almost any given price tier Zen 2 competes in, it's faster generally (but not necessarily specifically).
 
You forgot power usage... jk
We all know it's lower.

I didn't forget it, but I decided not to compile any averages based on it, because frankly, it's really hard to compare reviews on this due to different methods of measuring, different programs used for "load" or "stress" testing, etc... Too much variation. More efficient? Yeah, probably. Can I compile averages? Probably not.
 
Bulldozer was a major factor in the decline of AMD's CPU business. Management aiming a double-barrelled shotgun at both feet and simultaneously pulling the triggers almost finished the job until no-bullshit Lisa Su stepped in.
 
Bulldozer was a major factor in the decline of AMD's CPU business. Management aiming a double-barrelled shotgun at both feet and simultaneously pulling the triggers almost finished the job until no-bullshit Lisa Su stepped in.

Agree, but it was also part of AMD's salvation. Without Bulldozer, there would be no Ryzen.
 
I've heard a variation on this theme before and I don't buy it. The only reason why Ryzen seems so good (even first gen) is because Bulldozer/Piledriver were utter failures. That AMD managed to pull out of the tailspin before crashing and burning is very much due to realizing that they had to go in a completely different direction to survive.

Imagine where AMD would be now if they hadn't made Bulldozer.
 
I believe that the 9900K is still the better overall CPU. X570 still has bugs to work out.

Gonna have to disagree here based on my understanding of what "overall" means

X570 does have some bugs, but that's a chipset, not a CPU. Beyond that, the bugs aren't game-breaking. A few boost/voltage issues here and there. I have those issues on mine, and it hasn't affected its usability one bit. If not for monitoring software, I wouldn't even know about them.
 
I would love to see some benchmarks that will actually mirror my usage.

Cities: Skylines with cities over 100k population.

4k gaming and 4k youtube.

4k gaming AND simultaneous 4k real-time x265 conversion. Possible? Probably not. What I want? YES! However, I doubt I'll ever see anyone testing it.
 
I've heard a variation on this theme before and I don't buy it. The only reason why Ryzen seems so good (even first gen) is because Bulldozer/Piledriver were utter failures. That AMD managed to pull out of the tailspin before crashing and burning is very much due to realizing that they had to go in a completely different direction to survive.

Imagine where AMD would be now if they hadn't made Bulldozer.
That makes zero sense. Ryzen doesn't just seem good compared to Bulldozer; it seems good compared to Intel too.
 
What he's saying (my interpretation) is that if Bulldozer/Piledriver had actually been good, the gap/jump to Ryzen would have been smaller, and thus viewed not as a substantial gain but as an incremental change (like Intel has been doing for years). It was such a large jump in performance from their previous attempt because their previous attempt was such crap.
 
What he's saying (my interpretation) is that if Bulldozer/Piledriver had actually been good, the gap/jump to Ryzen would have been smaller, and thus viewed not as a substantial gain but as an incremental change (like Intel has been doing for years). It was such a large jump in performance from their previous attempt because their previous attempt was such crap.

True. But Ryzen would still occupy its current niche regardless.

Though there are a lot of folks (myself included) who are happy to see competition again. And maybe that gives AMD a little boost it might not otherwise have gained. However, it wouldn't have gone to pot because of Bulldozer either.

Either way, it's all speculation.
 
Not if you like sensible file sizes and quality... (CANNOT have both with a GPU encoder; already tried numerous tests with NVENC, AMD VCE, and QuickSync.)
I mean, that is the whole point of x265...

Correct. GPU encoding incurs a quality hit. It's just not as flexible as software encoding can be. Don't get me wrong, though, it's a nice option to have when top quality/best compression is not required. But it's not a catch-all solution.
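
For anyone who wants to see the size trade-off firsthand, here's a rough sketch that encodes the same clip both ways and compares the output sizes. It assumes an ffmpeg build with libx265 and hevc_nvenc support on your PATH, and "input.mp4" is just a placeholder name - adjust to taste:

```c
/* Sketch: encode one clip with software x265 and with NVENC, then
 * compare file sizes. Assumes ffmpeg with libx265 + hevc_nvenc;
 * "input.mp4" is a placeholder clip. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

static long file_size(const char *path) {
    struct stat st;
    return (stat(path, &st) == 0) ? (long)st.st_size : -1;
}

int main(void) {
    /* CPU encode: slow, but CRF rate control targets constant quality. */
    system("ffmpeg -y -i input.mp4 -c:v libx265 -crf 24 -preset medium cpu.mp4");
    /* GPU encode: much faster, typically larger at comparable quality. */
    system("ffmpeg -y -i input.mp4 -c:v hevc_nvenc gpu.mp4");
    printf("x265 (CPU):  %ld bytes\n", file_size("cpu.mp4"));
    printf("NVENC (GPU): %ld bytes\n", file_size("gpu.mp4"));
    return 0;
}
```

Crude, but on most clips the x265 file comes out noticeably smaller for similar visual quality, which is exactly the point being made above.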
 
I've gone through a gazillion benchmarks (Toms, Anand, FPS, GN, various YouTubers (Linus, Jay, etc...), Puget Systems) and have *approximate* values re: use cases.

3900X vs 9900K, both stock (positive = 3900X ahead):

1080p Gaming (high end GPU): -6%

4k Gaming: -2%

Photoshop (all tests): +2%

Premiere (all tests): +15%

After Effects (all tests): -5% disclaimer: high variance between tests

3D Rendering (CPU, all): +45%

3D Rendering (GPU, all): +15%

Standard Productivity (Office, Browsers, etc...): ~tie

Compression/Decompression: +25% disclaimer: would be much higher w/o WinRAR

Compiling (common, all): +30%

Encoding (CPU, no GPU/QuickSync): +20% disclaimer: high variance between tests

I can explain the After Effects results. During my testing for my 3900X article, I had only one RTX 2080 Ti video card and a bunch of configurations to test. In order to speed things along, I tried to see what the results were with some other video cards I had laying around, the idea being that I could use a different graphics card for non-GPU-oriented tests and test concurrently. If you're doing WinRAR, Cinebench, etc., it makes no difference what graphics card you use. However, After Effects CC has GPU acceleration. You absolutely must test this application with the same GPU, or your results may vary considerably. I saw results ranging between 800 and 1020 depending on the memory bandwidth, clock speeds, and core counts of the test configurations. For example, an RTX 2080 Ti and a Titan X (Maxwell) score about the same here; you gain a few points on the RTX card. However, for whatever reason, I saw a result of 2200 when I tested with a GTX 780 Ti. :eek:. Obviously, not a comparable result.

If you look at the test configurations, many reviewers tested with GTX 1080 Tis, and others with RTX cards like the 2080 or the 2070, etc. This is what skewed the results to where they aren't remotely comparable. One of the other things I noted was physical memory size in After Effects; this showed up in the Threadripper system test, which had 32GB of RAM compared to 16GB on everything else. In short order, I determined the best thing to do was to quit sleeping and just use the one card I had on hand. For the retest after the AGESA code update, I had a second RTX 2080 Ti to test with, so that made things much easier. Unfortunately, that card was on loan to me for a few days, and now I'm back down to the one.
 
That makes zero sense. Ryzen doesn't just seem good compared to Bulldozer; it seems good compared to Intel too.

Not first gen. It was a good release and better than most people expected (including myself) but these very forums are replete with evidence that Intel was still a viable choice for a lot of workloads. Two gens later, there's almost no use case where Intel should even be considered but any objective observer has to admit that first gen wasn't a total and complete win.
 
Not first gen. It was a good release and better than most people expected (including myself) but these very forums are replete with evidence that Intel was still a viable choice for a lot of workloads. Two gens later, there's almost no use case where Intel should even be considered but any objective observer has to admit that first gen wasn't a total and complete win.

In a pure gaming rig, I can see going Intel. Especially if you are one of the 1920x1080, low-settings guys who run a 240Hz monitor.
 
I can explain the After Effects results. During my testing for my 3900X article, I had only one RTX 2080 Ti video card and a bunch of configurations to test. In order to speed things along, I tried to see what the results were with some other video cards I had laying around, the idea being that I could use a different graphics card for non-GPU-oriented tests and test concurrently. If you're doing WinRAR, Cinebench, etc., it makes no difference what graphics card you use. However, After Effects CC has GPU acceleration. You absolutely must test this application with the same GPU, or your results may vary considerably. I saw results ranging between 800 and 1020 depending on the memory bandwidth, clock speeds, and core counts of the test configurations. For example, an RTX 2080 Ti and a Titan X (Maxwell) score about the same here; you gain a few points on the RTX card. However, for whatever reason, I saw a result of 2200 when I tested with a GTX 780 Ti. :eek:. Obviously, not a comparable result.

If you look at the test configurations, many reviewers tested with GTX 1080 Tis, and others with RTX cards like the 2080 or the 2070, etc. This is what skewed the results to where they aren't remotely comparable. One of the other things I noted was physical memory size in After Effects; this showed up in the Threadripper system test, which had 32GB of RAM compared to 16GB on everything else. In short order, I determined the best thing to do was to quit sleeping and just use the one card I had on hand. For the retest after the AGESA code update, I had a second RTX 2080 Ti to test with, so that made things much easier. Unfortunately, that card was on loan to me for a few days, and now I'm back down to the one.

That makes a lot of sense.
 
True. But Ryzen would still occupy its current niche regardless.

Though there are a lot of folks (myself included) who are happy to see competition again. And maybe that gives AMD a little boost it might not otherwise have gained. However, it wouldn't have gone to pot because of Bulldozer either.

Either way, it's all speculation.

I'm not disagreeing, just responding and trying to help clarify what I thought he meant. I am excited to see AMD competing again. After the GPP fiasco I have AMD in every single one of my desktops, even though I could have probably gotten a little better from Nvidia (my old trusty R9 Fury, two 460s, a 470, and an old 280X). I actually have all Intel CPUs because, at the time I did my upgrades, AMD didn't have anything I was looking for. My next build will be an AMD system, as I do more development (compiling) and video encoding than gaming nowadays. The point, though, was that it looked like a big leap and catch-up for AMD precisely because AMD was so far behind. If they weren't, Intel would have been forced to start adding cores long ago, and the landscape would be completely different.
 
Yeah. Some dude - I forget who - was saying something like only gaming performance mattered, and who really does any of these multithreaded tasks anyway? I think the implication was that people were fanboying and pretending they did [insert multithreaded workloads here] to justify their fanboy purchases.

Frankly, I wish I had the time to game more often than I work. It took me two months to get through the main storyline of Witcher 3 because I had to play in 30-minute to 1-hour chunks here and there as I found the time. So it's really the other way around for at least some of us. I'm older now, can't game 10 hours a day, and frankly, the faster I get my work done, the more time I have left for games.

So in a really weird kind of way, Zen CPUs really are better gaming CPUs for me. I get my work done a little faster, and thus have a few extra minutes for gaming, regardless of the FPS (and I play in 4k anyway, so there's no real difference worth mentioning anyhow).
 
Out of curiosity, does that bug only apply to Zen 2 on X570? Or does it also apply to Zen 2 on X470/X370? Haven't read up on it.
AFAIK, it's a Zen 2 thing regardless of the chipset used. There is an instruction related to generating random numbers (RDRAND or something like that) that is not working as desired. I think it's also why Linux had systemd issues during bootup on some distros, because it wants to generate UUIDs early in the boot process. But I don't have all of the details and have not been following closely.
 
Out of curiosity, does that bug only apply to Zen 2 on X570? Or does it also apply to Zen 2 on X470/X370? Haven't read up on it.

The bug applies to any system equipped with a Ryzen 3000 series processor. As I understand it, Destiny 2 makes use of the RDRAND instruction, and there is some sort of issue with doing that on the Ryzen 3000 series processors. Since the fix can't be issued via a game patch, it has to be something about the implementation as it relates to the 3000 series specifically. This is also why the issue is motherboard/chipset agnostic. Unfortunately, this requires an AGESA code update (and therefore a BIOS update) on any system using a Ryzen 3000 series processor to correct the problem.

AFAIK, it's a Zen 2 thing regardless of the chipset used. There is an instruction related to generating random numbers (RDRAND or something like that) that is not working as desired. I think it's also why Linux had systemd issues during bootup on some distros, because it wants to generate UUIDs early in the boot process. But I don't have all of the details and have not been following closely.

This is correct. I don't use Linux myself, but that is in fact the reason it doesn't work.
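
If you want to poke at your own chip, here's a minimal sketch of a check (my own illustration, not an official diagnostic). The bug's signature is that RDRAND sets the carry flag as if it succeeded but returns all 1s. Build with gcc -mrdrnd:

```c
/* Sketch: sample RDRAND a few times and flag the Zen 2 failure mode
 * (instruction reports success but always returns 0xFFFFFFFF). */
#include <stdio.h>
#include <immintrin.h>

int main(void) {
    unsigned int value;
    for (int i = 0; i < 8; i++) {
        if (!_rdrand32_step(&value)) {   /* 0 means the HW refused outright */
            printf("RDRAND reported failure\n");
            continue;
        }
        printf("RDRAND -> 0x%08X%s\n", value,
               value == 0xFFFFFFFFu ? "  (all ones - looks like the bug)" : "");
    }
    return 0;
}
```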
 
I am seeing a lot of freaking out on the AMD subreddit over this bug. Personally, I don't really care. From what I'm reading, there is a systemd workaround/fix for Linux already, Destiny 2 is the only game using this feature, and AMD had a beta BIOS that fixed it already - but they and the mobo vendors decided not to release it because of other bugs. So a fix is coming.

You buy day one hardware, you takes your chances with stuff like this.
 
I kind of wonder if this isn't why AMD released desktop Zen 2 before Rome ships in volume (I know Rome is sampling). Let the desktop users be unofficial beta testers ;).
 
I have the game and never played it. What’s the issue? I’ll try it out tonight.
It hangs/won't run.
"When i click play on Battlenet it says its launching the game," writes NoPantsoN. "Battlenet minimizes and the game never comes up. It will show up in task manager and stay there for a minute or 2 then its just gone. Battlenet will say at that time that the game is still running."
From this article: https://hothardware.com/news/amd-ryzen-3000-zen-2-destiny-2-bungie-crash
 