5800X3D gaming review

This makes it look more like a software support difference than anything to do with the hardware. 1% better at 1080p on average, but with highly non-uniform results. About half of the games show it actually being slower.

At 1440p, the same curiosity is present, but the number of games where the 5800X3D is slower than the 12900K increases. At 4K, still a 1% avg improvement, but many games are equal and there's roughly an equal number of games where the 5800X3D is slower or faster.

So, in summary, AMD has released* the world's fastest** gaming CPU. Its performance improvements*** are focused around dying games being played at dying resolutions at useless framerates****, and it gets this improvement by accelerating certain calculations which are being moved off of CPUs as time goes by. Sick.


* Availability TBD
** Except the 50% of games where it is 3rd or 4th place
*** Where present
**** Valorant at 670fps? R6 at 580fps? lmao. Huge wins there.


It would be interesting to see the average difference for games running under 300fps.
so Fortnite, War Thunder, Assetto Corsa, Far Cry 6, Apex, Rainbow Six Siege, etc, etc are all dying games?

and no, OVER 50% were faster, that's why it won. the highest being Assetto Corsa @ 24%

let's not forget this is AMD's current gen vs Intel's next gen on DDR5. how do you think AMD's gonna fare when they drop their next-gen DDR5/PCIe 5 stuff in another 6 months?? and let's not forget this chip is a drop-in replacement for motherboards that came out like 4 years ago.

sounds like someone's a little salty. sorry bro, but they still hit another one out of the park with this one.
 
For a drop-in part on an existing platform it is impressive. I have no plans to buy all-new components for at least a year or two while the new platforms are debugged and optimized. I wouldn't be surprised if many others intend to do the same, so if I were still sporting a 3000-series CPU or older on AM4, this would clearly be my choice over a complete system overhaul!
 
man screw GN he's been doing nothing but dogging amd lately and the last one was because amd released a chip that's TWICE AS FAST as the two garbage chips intel released like 2 weeks before that, and saying they are a garbage company and should have just thrown those (working) cores in a landfill instead of selling them. really?? he is so intel biased it's not even funny. still testing ryzen 5000 on 2x 3200 memory?? man gtfo here. everyone in the know is getting at least 3600 ever since amd recommended 3733 for best ryzen 3000 performance.

oh and not to mention the fact that any of intel's new latest "gaming" cpus with peewee E-cores are getting crap 1% lows vs a comparable Ryzen even if the avg frames are higher. even if i was gonna go intel 12th gen i'd stay far away from the 12900 for a gaming rig. but he just tries to slide past that and cherry-pick numbers that benefit intel. wow.
 
For those who still prefer reading.

Even in the titles where this CPU is not faster than the 12900K, it is still 15%+ faster than the 5950X, especially when looking at 1% lows.

Will be interesting to see the 30-game review short-haired Steve said he is working on.
i can almost guarantee it'll favor intel. (see my previous post)
 
This makes it look more like a software support difference than anything to do with the hardware.
It's not so much software support, it's core dependency. In modern titles utilizing modern core counts you have far more cases where a core is requesting access to a memory channel. There are many cases on an AMD processor where those cores will have to wait a statistically significant time for a memory channel to become available. By increasing cache they reduce the number of times the cores need to stop what they are doing and go back out to memory, and better yet it helps space those requests out, reducing the frequency with which they have to wait for access to a memory channel, or at least reducing their wait time.

Older titles that aren't utilizing larger core counts don't have this problem because they don't need to wait nearly as often for memory access, and titles using simpler physics and particle effects aren't as cache-heavy, reducing the number of times the CPU has to dump the cache and fetch new instructions.

I would be very interested to see results of this CPU in dual vs quad channel memory configurations.
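
To put a rough number on that intuition, here's a back-of-the-envelope sketch using the textbook average-memory-access-time formula. Every figure in it (hit latency, miss rates, DRAM penalty) is made up for illustration; none of these are measured 5800X3D numbers.

```python
# Back-of-the-envelope AMAT (average memory access time) illustration.
# ALL numbers below are hypothetical, not measured 5800X3D figures.

def amat(hit_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Textbook formula: AMAT = hit time + miss rate * miss penalty."""
    return hit_ns + miss_rate * miss_penalty_ns

# Suppose tripling L3 (32 MB -> 96 MB) cuts a cache-hungry game's miss
# rate from 10% to 4%, with a ~70 ns round trip to DRAM on each miss.
small = amat(hit_ns=10, miss_rate=0.10, miss_penalty_ns=70)
big = amat(hit_ns=10, miss_rate=0.04, miss_penalty_ns=70)

print(f"32 MB L3: {small:.1f} ns average access")  # 17.0 ns
print(f"96 MB L3: {big:.1f} ns average access")    # 12.8 ns
print(f"cores wait ~{(1 - big / small) * 100:.0f}% less on memory")  # ~25%
```

Fewer trips to DRAM also means fewer cores queued on the same memory channels at any instant, which is exactly the contention effect described above.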
 
still testing ryzen 5000 on 2x 3200 memory?? man gtfo here. everyone in the know is getting at least 3600 ever since amd recommended 3733 for best ryzen 3000 performance.

Is 3600 that worth it? The CAS 18 stuff isn't too bad price-wise, but the CAS 16 3600 RAM starts getting up there in price, especially for RGB stuff.
 
and no, OVER 50% were faster, that's why it won. the highest being Assetto Corsa @ 24%
For those of us using traditional math:
1080p: 5800X3D was faster in 53% (21/40). 12900K was faster in 45% (18/40)
1440p: 5800X3D was faster in 43% (17/40). 12900K was faster in 53% (21/40)
4K: 5800X3D was faster in 30% (12/40). 12900K was faster in 35% (14/40)

Overall:
5800X3D was faster in 42% (50/120). 12900K was faster in 44% (53/120)
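
For anyone who wants to check the rounding, here's a quick script that reproduces those percentages from the raw win counts (ties are whatever remains of the 40 games at each resolution):

```python
# Win counts per resolution, taken from the tallies above (40 games each).
wins = {
    "1080p": {"5800X3D": 21, "12900K": 18},
    "1440p": {"5800X3D": 17, "12900K": 21},
    "4K":    {"5800X3D": 12, "12900K": 14},
}
GAMES_PER_RES = 40

totals = {"5800X3D": 0, "12900K": 0}
for res, counts in wins.items():
    ties = GAMES_PER_RES - sum(counts.values())
    parts = ", ".join(
        f"{cpu} {n}/{GAMES_PER_RES} ({100 * n / GAMES_PER_RES:.1f}%)"
        for cpu, n in counts.items()
    )
    print(f"{res}: {parts}, ties {ties}")
    for cpu, n in counts.items():
        totals[cpu] += n

total_games = GAMES_PER_RES * len(wins)
for cpu, n in totals.items():
    print(f"Overall {cpu}: {n}/{total_games} ({100 * n / total_games:.1f}%)")
```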

so Fortnite, War Thunder, Assetto Corsa, Far Cry 6, Apex, Rainbow Six Siege, etc, etc are all dying games?

Fortnite was released 5 years ago (2017). Global player counts are high because it will run on literally anything.
War Thunder was released 10 years ago (2012).
Assetto Corsa was released 8 years ago (2014).
Rainbow Six Siege was released in 2015 and runs on an engine released 8 years ago (2014).

So, not exactly new stuff.
 
Yet, it is quite impressive that a CPU slotted into a 4-year-old platform, using half the power budget, can even be considered 1% faster.

Not to mention that platform for platform (in this configuration), it's basically half price to boot.

If you cheap out on the Alder Lake platform, the gap grows from there.

I agree that the price is certainly right relatively speaking. The reviews I looked at showed it beating the 12900k in almost everything if the 12900k was tied to a DDR4 platform. Start adding in a processor that's $100 more expensive, a more expensive motherboard, and more expensive RAM, and the 12900k starts to lose its luster. Although, I don't know that I'd buy the 5800X3D if I had to buy a whole new platform at this point. As a drop in replacement, it makes a lot more sense.

Granted, that's only for gaming. The 12900K seems to be better positioned as an all-around performer in mixed productivity/gaming tasks (as are the 5900X/5950X).

If nothing else, for the first time in what seems like several years there are plenty of good options out there for everyone's nuanced use cases.
 
Fortnite was released 5 years ago (2017). Global player counts are high because it will run on literally anything.

So, not exactly new stuff.
Fortnite Chapter 3 was the first Unreal Engine 5 game. It makes my 6900 XT/3900X cry at 1080p. It's like 5 months old. It stays huge because Epic is constantly re-inventing the game.

Mind you, the other benchmark I care about is DaVinci Resolve.
 
Fortnite Chapter 3 was the first Unreal Engine 5 game. It makes my 6900 XT/3900X cry at 1080p. It's like 5 months old. It stays huge because Epic is constantly re-inventing the game.

Mind you, the other benchmark I care about is DaVinci Resolve.
UE5? Really? That's pretty interesting... although they're the developers of both, so it should come as no surprise...
 
Wow, you all with 5XXX CPUs should try the auto optimizer for Curve Optimizer in the new Ryzen Master utility. Surprised at how much my settings differed from the auto adjust/test values it spit out at the end. The test takes a while depending on how many cores you have, but it appears to work quite well in the couple of hours I've been using the values it gave me after nearly two hours of running it.
 
Wow, you all with 5XXX CPUs should try the auto optimizer for Curve Optimizer in the new Ryzen Master utility. Surprised at how much my settings differed from the auto adjust/test values it spit out at the end. The test takes a while depending on how many cores you have, but it appears to work quite well in the couple of hours I've been using the values it gave me after nearly two hours of running it.
Less heat or higher boost?
 
I've been running my 3700X @ 3600MHz CL18 since Oct 2021 and it's been so stable on the MSI X470 Gaming Plus. It's the OLOy-brand Black Owl RGB kit that cost me $63 for 16GB. I didn't know if I could take the 2019 CPU that high on the older chipset, plus I bought the board from Newegg refurbished for $75 in June 2019 and had to flash it with a 2200G for the 3700X.

It would be so sweet to drop a 5800X3D onto my $75 board and still see an uplift like moving from an RX 5700 to an RTX 3070 showed.
 
They had to OC using BCLK, which isn't really viable for everyday use, and it looks like they stuck with 1.2x volts, which makes me question if that OC was at all stable.
So technically overclockable, I guess, in the sense that anything can be overclocked if the OC'er is brave/determined enough...
Being that he was a pro MSI OC'er, he had access to chips and plenty of mobos that may or may not have had some kind of secret sauce and black magic to pull it off, but it still smells like a suicide run to me.
Especially since none of those HWBOT OCs would last 20s in the real world.
In the end, no matter the means, it still qualifies as an OC.
 
Wouldn't be surprised if AMD overclocks this chip and shrinks it for the next consoles.
Consoles have to hit a price point. Cache is expensive. The PS5's Zen 2-based chip has a lot less cache than its Zen 2 desktop relatives. For the next consoles, cache will still be expensive. They are most likely to keep the same model of relying on core architecture improvements, and otherwise trimming down this or that to keep the price down.
not in gaming, they make it worse (1% lows)
Gamers Nexus is still using Windows 10 for benchmarking. And since MS has decided not to update the thread scheduler for W10, you can get some seemingly random situations where threads are incorrectly scheduled onto the E-cores (you can see in GN's benchmarks where occasionally a 12th-gen Intel part will show numbers which are uncharacteristically lower than they should be).
For gamers, I would recommend using W11 or turning off the E-cores while gaming (and since the cache is shared amongst all the cores, turning the E-cores off means you get more cache per core).
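
If you'd rather not reboot into the BIOS to toggle E-cores, a software-side alternative is pinning the game to the P-cores. Below is a minimal sketch using psutil; it assumes the usual 12900K layout where logical CPUs 0-15 are the 8 hyperthreaded P-cores and 16-23 are the E-cores, so verify your own topology first. Note this does not get you the extra-cache-per-core benefit of actually disabling the E-cores.

```python
# Minimal sketch: keep a game's threads off the E-cores on Windows 10
# by restricting its CPU affinity to the P-cores' logical processors.
# ASSUMPTION: logical CPUs 0-15 = P-cores, 16-23 = E-cores (a 12900K
# with hyper-threading on). Check your own topology before using this.
import psutil

P_CORE_CPUS = list(range(16))  # assumed P-core logical CPUs

def pin_to_pcores(name_fragment: str) -> None:
    """Set affinity to the P-cores for every process matching the name."""
    for proc in psutil.process_iter(["name"]):
        name = proc.info["name"] or ""
        if name_fragment.lower() in name.lower():
            try:
                proc.cpu_affinity(P_CORE_CPUS)
                print(f"pinned {name} (pid {proc.pid})")
            except psutil.AccessDenied:
                print(f"access denied for pid {proc.pid}; run elevated")

pin_to_pcores("game.exe")  # "game.exe" is a placeholder process name
```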
 
Maybe wait a bit longer until AM5 comes out. If AM5 lasts anywhere near as long as AM4 that would be worth it.
 
He's still using a 2500K, I don't think the platform being EOL is going to matter to him. We'll have an AM9 socket by the time he's ready to upgrade again.
The 2500K had a long life because for a long time AMD was so far behind that there was little reason to upgrade to anything even from Intel's side. That is no longer the case and I will still advise against upgrading to an EOL platform.
 
You guys make valid points. It will be hard to get rid of such a well-working setup (all build parts from the original construction still working, swapped video card of course) but I'm clearly at a CPU ceiling. I like upgrading when something truly unique comes along. Yeah, the generations from 3rd gen to today's 12th made obvious advancements, they just aren't wow enough for me. Trying a newer approach of stacking cache with a trade-off of pure clock at least is an attempt at something new. I appreciate that. I got a 2600X for my son, a 10th gen for my HTPC, but my main machine needs something unique. I'm not talking Threadripper or i9 unique though. I know, I'm weird.
 
The 2500K had a long life because for a long time AMD was so far behind that there was little reason to upgrade to anything even from Intel's side. That is no longer the case and I will still advise against upgrading to an EOL platform.

I agree completely. If I was building for a customer I would say the same. I replace when the machine breaks and it isn't financially smart to replace the failed parts, or when my compute experience starts to suffer dramatically. I am at the suffer-dramatically stage, obviously. Therefore tried-and-true AM4 may be better for my upgrade gap than bleeding-edge hope-for-the-best. Same reason I buy games after they've been out for a year; I am not a beta tester.
 
I would advise against upgrading to an end of life platform. This chip really is only a reasonable upgrade for those already on AM4.
Buying into a solid AM4 board, the minimum known upgrade path is a 5950X (not sure if it will ever get some revision). Depending on what you do, by the time that is no longer a really good CPU, will whatever you would have bought into instead, at a much larger price (especially if we're talking going DDR5), be a much more relevant platform?

The least-EOL thing you can buy right now will support next year's release of 13xxx Intel CPUs, and I imagine that's it?
 
I have not seen any reviews with turn-processing-time benchmarks for Civilization or similar turn-based strategy games.

These types of games are pretty much the only reason why I upgrade my CPU at this point.
 
I have not seen any reviews with turn-processing-time benchmarks for Civilization or similar turn-based strategy games.

These types of games are pretty much the only reason why I upgrade my CPU at this point.
Pretty sure the latest HWUB 5800X3D benchmark video has Civ and Factorio benchmarks.
 
You guys make valid points. It will be hard to get rid of such a well-working setup (all build parts from the original construction still working, swapped video card of course) but I'm clearly at a CPU ceiling. I like upgrading when something truly unique comes along. Yeah, the generations from 3rd gen to today's 12th made obvious advancements, they just aren't wow enough for me. Trying a newer approach of stacking cache with a trade-off of pure clock at least is an attempt at something new. I appreciate that. I got a 2600X for my son, a 10th gen for my HTPC, but my main machine needs something unique. I'm not talking Threadripper or i9 unique though. I know, I'm weird.

Honestly, there are a lot of pros for either path. You'll be getting the latest and greatest with either AM5 or Raptor Lake. But you're also getting new-platform teething pains and ancillary costs (DDR4 vs DDR5, motherboards). AM4 brings a more stabilized platform and the benefits of lower ancillary costs (RAM again, and a much larger pool of MBs with a wide range of prices).

In the end, if you're the guy who buys cutting edge and rides it for a decade (the 2500K doesn't seem to indicate that), then waiting for AM5 might be the right choice. If not, you could build a fast, solid system for not a lot of money and have a huge upgrade vs your 2500K right now.
 
Pretty sure the latest HWUB 5800X3D benchmark video has Civ and Factorio benchmarks.
Factorio, but not Civ (unless they forgot to link Civ in both videos they did). Factorio's updates per second were 316 for the 5800X3D vs 203 for the 5800X, and 247/249 for the 12900K with DDR4-3200 and DDR5-6400. (The 12900KS with DDR5-6400 got 250.)
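
Turning those UPS figures into the 5800X3D's relative lead (numbers copied from the HWUB results above):

```python
# Factorio updates-per-second, as reported in the HWUB video cited above.
ups = {
    "5800X3D": 316,
    "5800X": 203,
    "12900K (DDR4-3200)": 247,
    "12900K (DDR5-6400)": 249,
    "12900KS (DDR5-6400)": 250,
}

baseline = ups["5800X3D"]
for cpu, value in ups.items():
    lead = (baseline / value - 1) * 100  # 5800X3D's lead over this CPU
    print(f"{cpu:22s} {value:3d} UPS  (5800X3D lead: {lead:+6.1f}%)")
```

So in this one cache-bound workload the 3D V-Cache part is roughly 56% ahead of its non-stacked twin and about 27% ahead of even the DDR5 12900K.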
 
Honestly, there are a lot of pros for either path. You'll be getting the latest and greatest with either AM5 or Raptor Lake. But you're also getting new-platform teething pains and ancillary costs (DDR4 vs DDR5, motherboards). AM4 brings a more stabilized platform and the benefits of lower ancillary costs (RAM again, and a much larger pool of MBs with a wide range of prices).

In the end, if you're the guy who buys cutting edge and rides it for a decade (the 2500K doesn't seem to indicate that), then waiting for AM5 might be the right choice. If not, you could build a fast, solid system for not a lot of money and have a huge upgrade vs your 2500K right now.
Maybe wait a bit longer until AM5 comes out. If AM5 lasts anywhere near as long as AM4 that would be worth it.

yeah, but remember that's first-gen AM5. that would be like getting an X370/B350, which was the latest at the time but didn't necessarily end up being the best AM4 chipset in the long run. and then you also have to cough up cash for first-wave DDR5.

That_Sound_Guy prob can't go wrong either way. then again, next-gen AMD may be mind-blowing?? guess that's the game we play as computer guys. that said, these chips prob won't be cheap in the future, so if you're gonna get one, i'd do it while you can still get 'em at retail.
 