5800X3D gaming review

Idk, I don't get the problem some have with this CPU. Wouldn't be the first time a company really dialed in a CPU for a specific workload. If it happens to be the best at gaming, so be it.
Oh I have no problem with it. I think it makes a lot of sense for some.

I'm just not keen on the price when I got my 5900X for less than $400 and, on average, the 5800X3D isn't that much better. Sure, there are specific games that benefit greatly, but most gains seem marginal.

Talking 1440p.
 
The fact that some games seem to explode (despite a slightly lower clock) does seem to show that there is something to it
 
It would be hard to go faster than the cheaper 5900X if your compilation workload is well parallelized (not sure how load-bearing these results are):


Yes, I was more interested in what the bigger cache does for code compilation (at the same core count). Looks like not much. Thanks for the link.
 
The fact that some games seem to explode (despite a slightly lower clock) does seem to show that there is something to it
AMD does not have great memory schedulers on its processors; the larger cache size makes up for them. It is not hard to put a high-core-count AMD processor into a state where a core will spend more time waiting for a memory channel to become available than it does actually transferring the data from RAM to its cache. The increased cache size reduces the number of times the cores have to go back to RAM for more data and spreads those accesses out, easing the pressure on the memory schedulers. I don't know how well this is going to scale up at this stage, but it will be very interesting to see what AMD does with it, for sure.
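A rough back-of-envelope of why that matters (the latencies here are illustrative assumptions, not measured Zen 3 figures): average access time ≈ L3 hit time + miss rate × DRAM penalty. With a ~10 ns L3 hit and a ~70 ns extra penalty per miss, cutting the L3 miss rate from 10% to 5% takes the average from 10 + 0.10 × 70 = 17 ns down to 10 + 0.05 × 70 = 13.5 ns, roughly 20% faster, with half as many requests queued at the memory controller at any given moment.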
 
Considering you will be more GPU-bound at 1440p/4K, it would not show how much more performance a CPU delivers, since all the CPUs would get right around the same framerate.

That's what a GPU bottleneck is.

That's kind of the point, though. For "the best gaming CPU" on the market, benefits are minimal for gamers who are playing new games at modern resolutions. If you want to play GTA5 or older Ubisoft titles at 1080p, you might as well just go play on a last-gen console. It's why it's hard to get excited about any of these reviews.
 
That's kind of the point, though. For "the best gaming CPU" on the market, benefits are minimal for gamers who are playing new games at modern resolutions. If you want to play GTA5 or older Ubisoft titles at 1080p, you might as well just go play on a last-gen console. It's why it's hard to get excited about any of these reviews.
It should still help out in the lows, though. AI and physics calculations are cache intensive, and those are what tend to cause the dips on AMD more often than not: suddenly the CPU has to have two of the cores dump their cache to go fetch a new working set to calculate a crash or some particle effect. So while at 1440p or 4K you are still going to be GPU bound for the most part, you are still going to see smoother performance as a result. But if you are already on a 5000-series CPU, you could see the same uplift with a relatively minor overclock.
 
AMD does not have great memory schedulers on its processors; the larger cache size makes up for them. It is not hard to put a high-core-count AMD processor into a state where a core will spend more time waiting for a memory channel to become available than it does actually transferring the data from RAM to its cache. The increased cache size reduces the number of times the cores have to go back to RAM for more data and spreads those accesses out, easing the pressure on the memory schedulers. I don't know how well this is going to scale up at this stage, but it will be very interesting to see what AMD does with it, for sure.
In the last few days a few articles have come out saying Zen 4 will have a focus on faster RAM access ("memory overclocking" is the phrase used). We'll have to wait to see how that goes, of course.
 
In the last few days a few articles have come out saying Zen 4 will have a focus on faster RAM access ("memory overclocking" is the phrase used). We'll have to wait to see how that goes, of course.
That's good to know, but they need it; with the existing memory setup on Zen 3, DDR5 would be wasted on that platform unless they make significant changes.
 
Yes, I was more interested in what the bigger cache does for code compilation (at the same core count). Looks like not much. Thanks for the link.
Out of curiosity, how much code does one compile that has you looking for more performance than a 5800X provides?
 
That's kind of the point, though. For "the best gaming CPU" on the market, benefits are minimal for gamers who are playing new games at modern resolutions. If you want to play GTA5 or older Ubisoft titles at 1080p, you might as well just go play on a last-gen console. It's why it's hard to get excited about any of these reviews.
Not really. We are still not sitting comfortably in 4K gaming; any recent title can bring even a 3090 Ti to its knees. DLSS/FSR help, but they do the same thing: they reduce the GPU load. When that happens, the CPU starts to matter more.

Then there's also the reality of what this chip is going up against. It's trading blows with a chip that costs $400 more. If you're already on the platform and you're primarily a gamer, it's a decent upgrade.
 
Then there's also the reality of what this chip is going up against. It's trading blows with a chip that costs $400 more. If you're already on the platform and you're primarily a gamer, it's a decent upgrade.
This right here ^^^
Not to mention the platform cost of DDR5 to go with it if you actually want it to perform on par with a 5800X3D.
Makes a 12900K with DDR4 look like a waste of money for gaming...
 
Out of curiosity, how much code does one compile that has you looking for more performance than a 5800X provides?

It isn't so much the total amount of code. Linking, for example, can be very slow (and single-threaded) for a bunch of projects, and it's repeated often. Even if an incremental build avoids most of the source file rebuilds, you almost always have to link. And I like to keep incremental compile times under 7 seconds so that my brain doesn't drop context.

As for the genuinely large codebases I have to build, they include the Linux kernel, FreeBSD `make world`, various Common Lisp compilers (which often have a sequential build), and the big pig in the room: LLVM. Linking LLVM is absolutely brutal if you have debug info on; I bet the large cache helps here.

Having said that, for regular C compilation outside linking I expect the large cache to be near-useless.
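For anyone fighting the same LLVM link pain, here's a rough sketch of the configure step I'd try (LLVM_USE_LINKER and LLVM_PARALLEL_LINK_JOBS are real LLVM CMake options; the generator choice and job count here are just illustrative):

cmake -S llvm -B build -G Ninja -DCMAKE_BUILD_TYPE=RelWithDebInfo -DLLVM_USE_LINKER=lld -DLLVM_PARALLEL_LINK_JOBS=2

lld is much faster than the default linker on these huge debug binaries, and capping the parallel link jobs keeps several multi-gigabyte links from fighting each other for cache and RAM.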
 
I think I'm better off seeing what Unreal 5-based games need for 120+ FPS at 1440p, then building to that spec.
I used to do that, but it really is a moving goalpost.
 
I recant my previous statement, to a point. *For the money and the power consumption*, this is a phenomenal product. However, I don't know if I'll dump my standard 5800X just to get this. But if you're on Ryzen 3000, like a 3600, I'd get this in a heartbeat.
 
The 5900X is down to the $380 range on Amazon now. I wonder how much lower it will go.
 
I used to do that, but it really is a moving goalpost.
Well, yeah, really 90+ fps: my existing setup does 60-90 reliably, but maintaining 90+ would be ideal. 120+ is a bit extreme; I didn't think that through.
 
It should still help out in the lows, though
Indeed, it does. In Hardware Unboxed's tests, all of the games had quite a bit better minimum framerates, and for half of the games tested, the minimum framerates for the 5800X3D are higher than the average framerates for the 5900X and 5950X. Pretty interesting stuff. Equally interesting is that a 12900K with DDR5-6400 had similar results.
 
Well, yeah, really 90+ fps: my existing setup does 60-90 reliably, but maintaining 90+ would be ideal. 120+ is a bit extreme; I didn't think that through.
Agreed, you give a good, realistic range. Having grown up on CRTs, 80 to 90 on them is a good minimum; I'd assume OLED would be similar.
 
Better resale value percentage in 3 years:
This at $450, or the 5900X at $400?
Depends on the workflow. The 5900X is more useful for most productivity workflows. The 5800X3D, in gaming, often gives minimum framerates that are as good as or better than the 5900X's average framerates.
 
Interesting question. The 5900X should be more abundant; the 5800X3D should be rare enough to maybe warrant some kind of collector value. Hmm, never thought I'd view CPUs like game loot drops.
 
Out of curiosity, how much code does one compile that has you looking for more performance than a 5800X provides?
Some types of projects (especially if you have to rebuild all your dependencies from time to time) can get ridiculously large.

Chromium, for example:

That's 52 minutes on a 5950X versus 93 minutes on a 5800X; it's easy to see people going for 16 cores or Threadripper for compilation, especially those who need to compile and then run tests for many platforms (Android, Windows, Ubuntu, etc.) on many virtual machines.

For my biggest solution, going from an Intel 3550K -> 2600X -> 3900X, a clean build went from over 20m -> 8m -> 3.xm, or something like that. Back around 2010-2012, on a Q6600-class CPU, it was getting close to an hour, and building some dependencies was a full-day job for it.

It is one thing that is almost a purely CPU-bound workload, where more cores and more speed scale really well, outside of the linking process, which tends to be single-threaded (though you usually have many different builds to run anyway).

For example, running this command:
vcpkg install opencascade[freeimage]

Launches all of these:
Computing installation plan...
The following packages will be built and installed:
* freeimage[core]:x64-windows -> 3.18.0#21
* jxrlib[core]:x64-windows -> 2019.10.9#3
* lcms[core]:x64-windows -> 2.12#2
* libraw[core]:x64-windows -> 201903#5
opencascade[core,freeimage]:x64-windows -> 7.6.0
* openexr[core]:x64-windows -> 2.5.0#3
* openjpeg[core]:x64-windows -> 2.4.0


It took over 22 minutes; it would have been well under 10 minutes on a bigger workstation system. And if your project uses 8 of those, it adds up.
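One mitigation worth mentioning (a sketch, assuming a reasonably recent vcpkg; the cache directory is just an example path): binary caching, so those 22 minutes are paid once per dependency version rather than on every rebuild:

set VCPKG_DEFAULT_BINARY_CACHE=C:\vcpkg-cache
vcpkg install opencascade[freeimage]

The next install of the same package versions restores prebuilt archives from the cache directory instead of compiling everything from source again.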

There is no need to chase the latest and greatest, and there is no hard breakpoint like never dropping under your VRR lower bound, the way there can be in gaming, where that can make a big difference between two relatively close CPUs. Taking 2 minutes 55 seconds instead of 3 minutes 7 seconds is not a big deal in the same way.

For an example (cache talk around the 9-minute mark, from someone who maybe thought a 5800X3D could have helped):




Better resale value percentage in 3 years:
This at $450, or the 5900X at $400?
For reference, a $109 Core i3-12100F beats a 3700X in almost all games. One could expect the Core i3-14100, or at least the 14400, to beat the 5800X3D in most games in just 2 years, and it would be relatively cheap used once the 15-series is out in 3 years.

https://static.techspot.com/articles-info/2409/bench/Average.png


Meanwhile, the 3900X still holds its own in parallel tasks and still beats a 12600K/5800X in a lot of things.

I have no idea if the market follows that logic, too. It's possible that people for whom heavy multithreaded performance matters go for higher-priced, newer stuff anyway, while gamers on AM4 will want this upgrade. There will be a lot of really nice AM4 platforms going around at good prices, and plenty of gamers will want to push the switch to DDR5 far off, as it will take time to be worthwhile dollar-wise. The 5800X3D could stay relatively good among DDR4 CPUs for a good while if both companies' next CPU launches are the last to support DDR4.
 
I have no idea if the market follows that logic, too. It's possible that people for whom heavy multithreaded performance matters go for higher-priced, newer stuff anyway, while gamers on AM4 will want this upgrade. There will be a lot of really nice AM4 platforms going around at good prices, and plenty of gamers will want to push the switch to DDR5 far off, as it will take time to be worthwhile dollar-wise. The 5800X3D could stay relatively good among DDR4 CPUs for a good while if both companies' next CPU launches are the last to support DDR4.
That part right there. It's not cheap to switch platforms, so until DDR5 pricing comes down I'm going to hold off as long as possible.
 
The first run of a new platform's memory ICs is always a dog latency-wise anyway; historically, it's better to wait for that improvement to occur before adopting a new memory standard.
 
Glad to hear that the 5800X3D seems to be another AMD hit.

See my sig: I have a forlorn 2700X that is just begging to be replaced. Obviously, using the Trickle-Down Upgrade Path (TDUP), I'll drop a new 5800X3D into my 5800X rig, use THAT in one of the 3700X machines, and then that 3700X replaces the 2700X.

One new CPU, three upgrades. At a list price of $450 for the 5800X3D, that means I'm only paying $150 per CPU upgrade. Win, win, win. ;)
 
Glad to hear that the 5800X3D seems to be another AMD hit.

See my sig: I have a forlorn 2700X that is just begging to be replaced. Obviously, using the Trickle-Down Upgrade Path (TDUP), I'll drop a new 5800X3D into my 5800X rig, use THAT in one of the 3700X machines, and then that 3700X replaces the 2700X.

One new CPU, three upgrades. At a list price of $450 for the 5800X3D, that means I'm only paying $150 per CPU upgrade. Win, win, win. ;)
That really is where AMD shines currently; the long-lived AM4 platform allows you to do that.
 
Since 720p/1080p gaming is the talk of the town, grab a 6xxx-series CPU with RDNA2 and save money by not buying a GPU.
 
The 5950X will definitely fill my needs more than this, but god damn do I wish they had a 5950X3D. =(

That said, the 5950X has been about $20 away from my breaking point of "under $500", so I'll likely end up picking one up VERY soon. I was waiting on the 3D reviews before I succumbed.

See my sig: I have a forlorn 2700X that is just begging to be replaced. Obviously, using the Trickle-Down Upgrade Path (TDUP), I'll drop a new 5800X3D into my 5800X rig, use THAT in one of the 3700X machines, and then that 3700X replaces the 2700X.
The 5950X will be replacing my 3900X, my wife will be getting the 3900X to replace her 1700, and that 1700 is gonna be going into a dedicated server box, methinks. I'm not sure where yet, but I'll make it work.
 
More games tested.

This makes it look more like a software support difference than anything to do with the hardware. 1% better at 1080p on average, but with highly non-uniform results. About half of the games show it actually being slower.

At 1440p, the same curiosity is present, but the number of games where the 5800X3D is slower than the 12900K increases. At 4K, still a 1% avg improvement, but many games are equal and there's roughly an equal number of games where the 5800X3D is slower or faster.

So, in summary, AMD has released* the world's fastest** gaming CPU. Its performance improvements*** are focused around dying games being played at dying resolutions at useless framerates****, and it gets this improvement by accelerating certain calculations which are being moved off of CPUs as time goes by. Sick.


* Availability TBD
** Except the 50% of games where it is 3rd or 4th place
*** Where present
**** Valorant at 670fps? R6 at 580fps? lmao. Huge wins there.


It would be interesting to see the average difference for games running under 300fps.
 
This makes it look more like a software support difference than anything to do with the hardware. 1% better at 1080p on average, but with highly non-uniform results. About half of the games show it actually being slower.

At 1440p, the same curiosity is present, but the number of games where the 5800X3D is slower than the 12900K increases. At 4K, still a 1% avg improvement, but many games are equal and there's roughly an equal number of games where the 5800X3D is slower or faster.

So, in summary, AMD has released* the world's fastest** gaming CPU. Its performance improvements*** are focused around dying games being played at dying resolutions at useless framerates****, and it gets this improvement by accelerating certain calculations which are being moved off of CPUs as time goes by. Sick.


* Availability TBD
** Except the 50% of games where it is 3rd or 4th place
*** Where present
**** Valorant at 670fps? R6 at 580fps? lmao. Huge wins there.


It would be interesting to see the average difference for games running under 300fps.
The games where it performed the worst were actually older. The newest ones showed the biggest gains. Anything that wasn't confined to a single core is where you'll see benefit even at 2K.
 