Anyone “upgraded” from 5900X to 5800X3D for gaming?

Should I get a 5800X3D?


Again. You're gonna have to do your own research. In the above example the answer is a resounding No.

yes I understand but why did you choose to specifically highlight that Factorio game which is obviously a very niche, very GPU light game...of course that will look amazing with the 5800X3D...why not highlight a more mainstream popular game?
 
Because reviewers already do that. I'm just saying that for someone to decide whether the extra V-Cache is a good idea, they need to evaluate what they're going to play, because the mainstream games might not reflect the actual improvement they'd see. Reviewing the same old mainstream lineup in fps just doesn't cut it, really.
 
Another thing I've continually read regarding the 5800X3D is that you have to factor in the 1% lows, not just overall fps. Essentially everyone seems to say that even in games where your average fps is virtually identical to the non-X3D chips, the cache really helps smooth out the 1% dips, so the experience feels smoother in the end.

I chose to try the switch from a 5900X because three of the games I play the most (WoW, EFT, and to a lesser extent Destiny 2) all benefit a decent amount from the cache. I'm definitely interested to see if the whole "smoother" thing pans out in other games, too.
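Since "1% lows" get thrown around a lot in this thread, here's roughly how that figure is derived from a frame-time capture. This is just a minimal sketch of the usual convention (average fps over the slowest 1% of frames), not any particular reviewer's tool:

```python
# Minimal sketch: derive average FPS and the "1% low" from a list of
# frame times in milliseconds (e.g. exported from CapFrameX/PresentMon).

def one_percent_low(frame_times_ms):
    """Average FPS over the slowest 1% of frames."""
    worst = sorted(frame_times_ms, reverse=True)   # slowest frames first
    n = max(1, len(worst) // 100)                  # the worst 1%
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

# 99 smooth frames at ~6.9 ms (~145 fps) plus a single 50 ms hitch:
times = [6.9] * 99 + [50.0]
print(f"average fps: {1000 * len(times) / sum(times):.0f}")   # ~136
print(f"1% low:      {one_percent_low(times):.0f} fps")       # 20
```

The average barely moves, but the 1% low collapses, which is exactly the "feels smoother" effect people describe the X3D fixing.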
 
As far as the 5800X vs. 5800X3D goes, I found The FPS Review's comparison interesting...

When you look at the Ryzen 7 5800X3D as a whole, it does have some misses in several areas, mostly down to its lower clock frequency in comparison to the Ryzen 7 5800X. It’s also limited in its ability to utilize AMD Precision Boost Overdrive (PBO) or overclocking. We need to bring AMD PBO up because the AMD Ryzen 7 5800X fully supports PBO, while the Ryzen 7 5800X3D does not.

With AMD PBO, you can turn this on in the BIOS or with Ryzen software on the Ryzen 7 5800X. This can auto-overclock the Ryzen 7 5800X CPU by as much as 200MHz. When you take this overclock into consideration, we are now looking at a large 400MHz clock difference between the Ryzen 7 5800X and Ryzen 7 5800X3D. A Ryzen 7 5800X with PBO will improve its performance in everything, from applications to productivity, to rendering, to gaming. The Ryzen 7 5800X3D, meanwhile, really has only one claim to fame: potentially better gaming performance.

Even then, it is entirely game dependent whether that extra L3 Cache will actually provide a performance improvement in your game. We found that the game needs to be CPU dependent to begin with, and/or running at a low resolution with a fast GPU. If you are playing games at a high resolution, like 4K, you are going to be GPU bound no matter what. If you aren't running a GeForce RTX 3080 Ti or greater, then you are most likely also going to be always GPU bound at 1440p. That leaves only 1080p where it might benefit performance.

When it comes down to it, the AMD Ryzen 7 5800X is the better all-around CPU, as it provides better performance in everything you do with your PC beyond just gaming. If you want it to go faster, you can enable PBO, and that will increase performance in everything, non-gaming and gaming alike. That PBO increase will also help your gaming performance as the frequency is increased, perhaps even negating some of the impact of the 5800X3D's extra L3 Cache...

https://www.thefpsreview.com/2022/05/19/amd-ryzen-7-5800x3d-vs-amd-ryzen-7-5800x-a-cache-value/10/
As much as I like Brent and miss him on these forums, that review is old and a bit flawed. He's blatantly GPU limited in most of those tests running ultra/high settings (even at 1080p), and he's two models behind the flagship GPU of the time as well. More than half of those game tests tell us more about the 3080 Ti's capabilities than about the CPUs.

You shouldn't be looking anywhere near the 5800x3D if you prioritize non-gaming tasks. His intro leans into that, and sure enough the results validate it. I'm all for more available information and appreciate thorough testing, but I would wager a vast majority of potential 5800x3D buyers aren't looking for that type of benchmarking in a review. It's a very specific chip.

Regardless, in many games the extra cache overwhelmingly outweighs a couple hundred MHz; this has been on display in nearly every other 5800X3D review out there. The image in the post above yours shows that, and not just in averages but in better lows too. There are some mainstream outliers like CSGO that benefit from the 5800X's extra frequency, and I'm sure others exist, but I haven't seen many examples.

PBO2 with the 5800X3D is supported on a lot of boards now with a BIOS update. I've been using Curve Optimizer on mine. Sure, Ryzen Master doesn't work, but I'll hard-pass on any software managing overclocks.

<-- former 5950x user upgrading/downgrading to a 5800x3D
 
You shouldn't be looking anywhere near the 5800x3D if you prioritize non-gaming tasks.

I'm not necessarily targeting your post specifically, but I have seen this general idea thrown around a lot lately. It kind of promotes the idea that if you aren't primarily gaming, then you are somehow benefiting from having tons of cores. There are actually very few tasks, gaming or not, that will use more than 8 cores. You almost have to go out of your way to come up with convoluted scenarios that actually utilize that many: things such as software/CPU video encoding (when in the real world 99% of people simply use the hardware encoder on their video card for that), or having some weird reason to run a dozen VMs at the same time, or artificial benchmarks like Cinebench, which obviously are not representative of most real-world scenarios. For most people the only thing that would actually utilize more than 8 cores would be something like compressing a zip file, and they might do that what, once per day at most?
 
Yeah, if you need more than 8, you already know why. No need to be told you might.
 
You almost have to go out of your way to come up with convoluted scenarios that will actually utilize that many cores.
Or just compile code, or do some high-quality encoding/rendering. Compiling C++ code is not that special or uncommon a scenario; there are what, about 200k programmers in North America, and I'd imagine that's one of the most common occupations among people who work on powerful multicore computers and would worry about high core counts. People who bake/render/encode would be another fairly frequent one. Sure, many only write scripts or have light compile workloads, but it's still quite common, and it's completely out of the box: with nothing to configure, a compile will use 100% of your machine even with 24 threads available; if anything, you have to take steps so it doesn't bog everything down.
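To illustrate that point (my own toy sketch, nothing from this thread): independent translation units parallelize almost perfectly, which is why make -j or ninja will happily saturate however many cores you give them:

```python
# Toy stand-in for a parallel build: N independent CPU-bound "translation
# units" fanned out across worker processes, the way make -j / ninja
# schedule real compile jobs.
import multiprocessing as mp
import time

def compile_unit(n):
    """Pretend to compile one translation unit (pure CPU-bound busy work)."""
    x = 0
    for i in range(2_000_000):
        x += i * i
    return n

if __name__ == "__main__":
    units = list(range(32))                     # 32 "translation units"
    for workers in (1, mp.cpu_count()):
        start = time.perf_counter()
        with mp.Pool(workers) as pool:
            pool.map(compile_unit, units)
        print(f"{workers:2d} worker(s): {time.perf_counter() - start:.1f} s")
```

On a 16-core part the second run finishes several times faster than the first; that's the workload class where a 5950X actually earns its keep.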

...if you aren't primarily gaming, then you are somehow benefiting from having tons of cores
The regular 5800X will tend to be better there as well; it's not just core count but frequency. I imagine some work tasks could go the other way and love the cache, but that's the general picture.
 
I'm not necessarily targeting your post specifically, but I have seen this general idea thrown around a lot lately. [...]
I didn't necessarily say anything about needing more cores with that statement. A 5600X would likely (slightly) beat out a 5800X3D in some single- or lightly-threaded tasks. The 5800X3D exists for gaming, so this is a bit different from the general core-count debate. If you're not prioritizing gaming, the chip is not a great fit; there are better options in either direction on non-gaming performance or price.
 
Pure workstation: 5950X all day long. General mixed-use machine: 5800X3D all day long. Pure gaming: 5800X3D, or a 5600 on a budget.
 
Would never trade my 5950x for something with fewer cores. Software design always catches up, sooner or later, sometimes quicker or slower - but, EOD, it's there.

And... if I had the $$$ and not a lick of debt, I'd move to the 7950X3D / AM5 platform in a heartbeat. Quality CL30 32GB (2x16GB) kits are decently priced, and even the AM5 boards are hardly worse than the "Intel HEDT tax" from years ago. Got a Micro Center close by here in the DFW area... so yeah.
 
Would never trade my 5950x for something with fewer cores. Software design always catches up, sooner or later, sometimes quicker or slower - but, EOD, it's there.

I've been building my systems with more cores than I needed ever since I built my first dual-Xeon system over 20 years ago. Every single time I banked on software or games "catching up" to the hardware, I ended up upgrading again before that ever actually happened. The only times I actually saw the software "catch up" were on systems that got used for 10+ years (office systems mostly), which pretty much excludes most enthusiasts who upgrade every 1-3 years. Sounds like you're already itching to go to your local Microcenter, so I'd say the chances of you keeping your 5950X until 16 cores are commonly utilized are basically zero.
 
You can't reason with core-counters in gaming. The same people were arguing two years ago that the 5600X couldn't keep up with consoles because it only had 6 cores. Just let them buy what they want. The last time this issue was relevant on Hardforum was the Q6600 vs. the similarly priced EoL C2D, which ultimately tailed off in value over time, and people have been living off that outlier ever since.
 
Would never trade my 5950x for something with fewer cores. Software design always catches up, sooner or later, sometimes quicker or slower - but, EOD, it's there.

And... if I had the $$$ and not a lick of debt, I'd move to the 7950X3D / AM5 platform in a heartbeat. Quality CL30 32GB (2x16GB) kits are decently priced, and even the AM5 boards are hardly worse than the "Intel HEDT tax" from years ago. Got a Micro Center close by here in the DFW area... so yeah.
By the time that happens, that system will be completely obsolete for gaming, and the 5800X3D would have been the better choice during the years of relevance anyway.

Also, from all the reviews I'm seeing, the new 3D parts really aren't worth the money over a 5800X3D when it comes to gaming. You're paying out the ass for very minimal gains.
 
I'd absolutely LOVE to see a 5950X with PBO + Curve Optimizer and tweaked memory vs. a 5800X3D benchmark comparison. I feel like the 3D cache makes up for a lot of the downsides of the chiplet design and the memory latency/access penalties that come with it. I spent a lot of time tweaking my setup to get PBO boosting really well, and I tightened every single memory timing as far as possible to cut latency quite a bit. I've run countless benchmarks in games and programs because I enjoy doing that, but part of me is like, hmmm, do I still have performance on the table with this platform?

I suppose the only option at this point might be buying a 5800X3D and figuring it out myself within the return period at Micro Center... lol. Also, gaming at 4K, all of this might be a moot point and a waste of time & money... lol
 
It really depends on the game, even at 4K, from what I've seen (the 5800X, 5900X, and 5950X all have very similar game performance, except in those top-down turn-based games that absolutely love cores) - https://www.techpowerup.com/review/rtx-4090-53-games-ryzen-7-5800x-vs-ryzen-7-5800x3d/

I'm not sure how much you've tuned your ram vs that 5800X test setup, but it still gets spanked by the X3D at 4K in quite a few games.
 
Yeah, I don't know; even that review isn't the best comparison because the memory setups are so far apart.

5800X setup =
Thermaltake TOUGHRAM, 16 GB DDR4
@ 4000 MHz 20-23-23-42 1T
Infinity Fabric @ 2000 MHz (1:1)

5800X3D setup =
2x 16 GB DDR4-3600
14-14-14-34 1T
Infinity Fabric @ 1800 MHz 1:1

That 5800X setup has insanely loose timings and would perform worlds better at 3600 CAS 14 or even 3800 CAS 16.
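To put rough numbers on "insanely loose": first-word CAS latency in nanoseconds is CL divided by the actual memory clock (half the DDR transfer rate). A quick back-of-envelope calc for the kits in question:

```python
# First-word latency in ns = CL cycles / memory clock, where the memory
# clock is half the DDR transfer rate. Figures from the two test setups
# above, plus the suggested alternative.
def cas_ns(transfer_rate_mts, cl):
    return cl / (transfer_rate_mts / 2) * 1000

for label, mts, cl in [("5800X setup   (DDR4-4000 CL20)", 4000, 20),
                       ("5800X3D setup (DDR4-3600 CL14)", 3600, 14),
                       ("suggested     (DDR4-3800 CL16)", 3800, 16)]:
    print(f"{label}: {cas_ns(mts, cl):.2f} ns")
# -> 10.00 ns vs 7.78 ns vs 8.42 ns: the nominally "faster" 4000 MT/s kit
#    is actually over 2 ns slower to first word than the X3D's kit.
```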

That is what has annoyed me so much with these reviews... no thought put into the memory or PBO. Not quite a 1:1 comparison.

I'd probably not gain much on average, but the 1% lows would likely be better... such a hard choice, really. I should just turn off the FPS counter in games and see if I even notice an issue first... lol.
 
I'd absolutely LOVE to see a 5950X with PBO + Curve Optimizer and tweaked memory vs. a 5800X3D benchmark comparison. [...]
Tweaked RAM will still never touch the latency of Vcache.

 
Yeah, I wish they showed the 1% lows at 4K though... but I get that this CPU is aimed mostly at the 1080p and 1440p crowd. Gains seem small at 4K, aside from some very specific titles. The 4090 is pushing pretty high frames even at 4K, so I wonder if that's now a factor in other games too... guess I need to do some more research to see if it's worth it in the games I specifically play.
 
I swapped my 5950X for a 5800X3D today, because the 5950X limits a 4090 a lot even at 3440x1440. Best decision; some big fps gains in a lot of games.
WoW Dragonflight
Nice boost; MMOs scale great with 3D cache.
  • 5950x -> 88-95 fps
  • 5800x3D -> 113-116 fps
New World
running around capital at brimstone
  • 5950x -> 45-55fps
  • 5800x3D -> 60-80fps
Far Cry 6 benchmark
  • 5950x -> avg 97 min 79
  • 5800x3D -> avg 130 min 98
Assassins Creed Odyssey benchmark
  • 5950x -> avg 146 min 36
  • 5800x3D -> avg 157 min 52
Assassins Creed Valhalla
  • 5950x -> avg 169 min 87
  • 5800x3D -> avg 174 min 91
Cyberpunk
DLSS Quality, ray tracing Psycho
  • 5950x -> avg 102.78 min 54.22
  • 5800x3D -> avg 99.51 min 57.01

native, ray tracing Psycho
  • 5950x -> avg 61.57 min 40.03
  • 5800x3D -> avg 60.04 min 45.08
Hogwarts is around +10 fps, BF2042 +20 fps, Witcher 3 around +10 fps, and Sons of the Forest went from 80-100 at my base to 130-140. So it was well worth it; paired with a 4090, this is an awesome CPU.
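For anyone who wants the deltas spelled out, here's a quick bit of arithmetic over the numbers in this post (midpoints taken for the quoted fps ranges; nothing beyond what's written above):

```python
# Percentage uplift going 5950X -> 5800X3D, straight from the figures in
# the post above (midpoints used for the quoted ranges).
results = {
    "WoW Dragonflight":   (91.5, 114.5),   # mid of 88-95 vs 113-116
    "New World":          (50.0, 70.0),    # mid of 45-55 vs 60-80
    "Far Cry 6 (avg)":    (97.0, 130.0),
    "AC Odyssey (min)":   (36.0, 52.0),
    "AC Valhalla (avg)":  (169.0, 174.0),
    "Cyberpunk RT (min)": (40.03, 45.08),  # native, RT Psycho
}
for game, (r5950x, x3d) in results.items():
    print(f"{game:20s} {100 * (x3d / r5950x - 1):+5.1f}%")
# Note Cyberpunk: the averages are flat to slightly down (clock speed
# loss), while the minimums still improve by ~12%.
```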

Would never trade my 5950x for something with fewer cores. Software design always catches up, sooner or later, sometimes quicker or slower - but, EOD, it's there.

If you only game, that's just... pointless. I mean, you can keep the extra cores and wait like 10 years for them to get utilized, fingers crossed for you, meanwhile gaming with much lower fps because the 5950X is not a great gaming CPU.
 
That's fairly impressive... out of curiosity, what are your memory speeds & timings on both setups? This is becoming quite a tempting purchase... I love my 5950X at 4K paired with my 4090, but if I'm leaving performance on the table... hmmmm.....
 
Just basic 32GB 3600 CL16. It's this one, just with XMP, no manual OC or anything:
https://www.amazon.de/gp/product/B083TRXZ9N/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1

Yeah, the 5950X was/is an impressive work+gaming CPU; paired with the previous 3090 there was no need to upgrade, but with the 4090 the CPU limit became a thing even at 4K. A lot of people say you also feel a big improvement in the 1% lows, but I haven't had time to really game on it yet; I just benchmarked and checked whether it was worth it. The only thing I checked in this regard was Jedi: Fallen Order. That game was a notorious stutterer on all my PCs (I thought it had a problem with 3440x1440): with the 9700K/2080 Ti, the 9900K/2080 Ti/3090, and the 5950X/3090/4090 it was always a bit stuttery, but with the 5800X3D it is smooth as hell at 144 fps (cap).
 
I play games at 1440p to 4K on my G7 1440p monitor. The 5900X was struggling; the 5800X3D does it better. Waiting for the 7800X3D to see whether it obliterates Intel's 13900K or not.
 
So after a couple days with the 5800X3D in my rig, I've been extremely impressed. I'm running 1440p UW and a 4090, and even then the difference has been noticeable. My two most played games are WoW and EFT. Both are known to benefit from the cache, and it's nearly a night and day difference with them. WoW fps nearly doubled in Valdrakken (main city of the current expansion), going from 50ish fps to the mid 90s. EFT gains are more map dependent, but I'd say I've seen about a 30% fps increase on average. Even random games I've booted up just to mess with all seem to have at least a small fps bump, so I'm thinking the 5800X3D is reducing the 4090's bottleneck more than I expected. My MSI board has a feature called Kombo Strike specifically for the 5800X3D that applies an undervolt. I have it set to 3, which is apparently a -30 Curve Optimizer offset across all cores. The CPU runs perfectly stable, everything is pegged at 4.45GHz, and it tops out at like 60C under my AIO while gaming. Overall, I'm extremely pleased.

Definitely worth the swap, imo. Took me literally 10 minutes to do, and the gains are noticeable. On a side note, go buy my 5900x in the FS/FT section lol.
 
Need to do more testing, but I disabled CPPC and CPPC Preferred Cores in the BIOS on my 5950X, basically allowing Windows to assign whatever it wants to any core on any CCX, without preference. My theory was somewhat confirmed that Windows was forcing small bullshit tasks onto the best core when gaming, causing the 1% lows to dip and micro-stutter. Just ran some quick tests, and while I'm not boosting to 5.1GHz+ in games, it's a solid 4.6-4.8GHz and the 1% lows are up. Even more interesting, my AIDA64 memory latency is a constant 3 ns lower, which might also be helping. This is going to be fun to play with, and makes me wonder if I should just go for an all-core OC at 4.7GHz.

Just trying everything I can to remove more of the CPU bottleneck on my 4090. Worst case, it's looking like even at 4K people are enjoying the 5800X3D and noticing gains... so I might have to snag one sooner rather than later.
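If anyone wants to eyeball the same scheduling behavior on their own box, here's a crude sketch using the third-party psutil package (my own illustration, not what the poster used, and no substitute for a real profiler): sample per-core load while a game runs and watch whether Windows keeps piling work onto the "preferred" cores:

```python
# Crude scheduling monitor: print which logical cores are busy, once per
# second for ~10 seconds. Requires: pip install psutil
import psutil

for _ in range(10):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # blocks 1 s
    busy = [i for i, load in enumerate(per_core) if load > 50]
    print(f"busy cores: {busy} | peak: {max(per_core):.0f}%")
```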
 
I ran BF2042 at 4K using DLDSR yesterday. It was so smooth that I couldn't tell the difference between 1440p and 4K. I had been playing at 1440p because I needed my 240 fps. Since I didn't have my fps counter on, I have no idea what fps I was getting, just that I was shooting idiots exactly the same as at 1440p, only with crisp graphics this time.

I love the 4090 and am glad that the 5800X3D doesn’t hold it back at any resolution.
 
So after a couple days with the 5800X3D in my rig, I've been extremely impressed. [...]

You're definitely going to see less of a bump at 4K, but it's there nonetheless. Even if you basically break even in most games, when you do run across a game that likes the cache, the difference can be massive.

I have a feeling the 5800X3D is going to go down in the same tier of chips as the i5-2500K/i7-2600K. Legendary tier lol.
 
I ran BF2042 at 4K using DLDSR yesterday. [...]
When you get some FPS numbers on it, let me know! I play that game at 4K, fully maxed settings on my 5950X and 4090. :)
 
As a follow up (for those interested), I moved on to a 4.725GHz ALL CORE OC... not using PBO or any CPPC. So far, all benchmarks are higher and FPS lows are up too. Should have done this years ago vs. PBO! What might also be helping is the L3 cache bandwidth jump and the memory latency drop. The higher bandwidth number is my locked OC vs. PBO, almost a 50% increase in L3 cache bandwidth... insane.
[AIDA64 cache & memory screenshots: all-core OC with CPPC off vs. C-states on, Win11]

Here is another snapshot from Superposition... note the min FPS jump between the two tests. The lower one is PBO and the higher one is all cores locked at 4.725GHz (no PBO).

[Superposition benchmark screenshots: scores 35019 (all-core OC) and 34615 (PBO)]


Part of me wonders if, aside from the cache size, the 5800X3D works so well because it's pretty much frequency-locked, as if it were running an all-core OC... I also kind of want to run a bunch of tests now and compare this configuration to some 5800X3D benchmarks to see how close I can get.
 
Should have done this years ago vs. PBO!
Yup, ever since PBO was introduced with the Ryzen 2000 series, I've never used it for daily settings.
Better to use an all-core overclock, even though single-core performance suffers a bit (~3% max).
The pattern became apparent when I compared a Ryzen 5600X with PBO (single-core boost to 4850MHz) against an all-core OC at 4650MHz: overall performance was better with the all-core OC.

Not to mention, the new batches of Ryzen 5000 have better bins and thus better OC potential (higher boost with lower vcore required).
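Putting numbers on that trade-off, using only the clocks quoted above:

```python
# Single-core clock given up by running the 5600X all-core OC instead of
# letting PBO boost one core (figures from the post above).
pbo_single, allcore = 4850, 4650   # MHz
print(f"peak clock deficit: {100 * (1 - allcore / pbo_single):.1f}%")
# ~4.1% lower peak clock, consistent with the ~3% single-thread hit the
# poster observed, since performance rarely scales 1:1 with frequency.
```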
 
I never used hard clocks. Too much performance loss. On my 5600X and 5900X I used PBO with a -20 all-core offset. Ran like a dream. Single core to 5150, all-core to 4650-4750. Was good enough.

5800X3D I didn’t touch.
 
Yeah, same for almost two years now, until today. I even took the time to do per-core PBO way back and also tweaked all the power settings. I was getting 5.1-5.2GHz on one or two cores and 4.7-4.9GHz all-core when gaming.

While my single-core may be lower now, the multicore is much better, and I can confirm the system feels soooo much smoother gaming. Minimum frames are way up so far in my testing. I only posted some quick AIDA64 and Superposition results above to get my point across, but after gaming tonight, it felt smooth as butter at 4K.

I'll have to do more game tests after work tomorrow and report back for everyone.

The only reason I went down this path was that my 4090 at 4K was starting to hit CPU limits, so I wanted to see what I could tweak further, and I may have found a very viable route that fixes a lot of inherent problems with PBO's algorithm, which may have been nerfing performance in games all along because of latency and scheduling issues.
 
As a follow-up, I tested SOTTR, Forza Horizon 5, CP2077, MSFS 2020, COD: MW2, and BF2042 with my all-core OC at 4.750GHz. This speed on my 5950X requires 1.35V with LLC level 3, so you need some decent cooling, and I'd advise against running Prime95 tests without watching the temperatures... after about 10 minutes I had to stop because temps were hitting almost 90C, but for gaming I never break 65C, which is all I care about for this specific configuration. For production workloads, I just turn PBO back on.

In every single game I tested, minimum FPS was way up... as a matter of fact, when I got to testing BF2042 last night it felt like I had a brand-new CPU in my system. My FPS was pegged (at 4K, max settings, RTAO on) between 120-144 FPS, SOLID. The CPU FPS was always in that range, so it was not holding my 4090 back like it was previously (when it would dip into the 90s).

CP2077, COD: MW2, SOTTR, and FH5 all had minimum FPS up, and it felt stutter-free. Not as big a jump as BF2042, but in those games I always had solid performance anyway.

MSFS 2020 was a mixed bag; overall it felt smoother with fewer FPS drops over NYC (I am using DLSS 3 Frame Generation in this game). However, it was not mind-blowing; minimum dips were still in the 40s when it was loading things into the area (NYC is a punishing CPU test in this game). I also play this game fully maxed at 4K; if I knock the texture and shadow detail down some, it becomes very smooth over NYC.

So my thought on this is: if you are on the fence like me about a 5800X3D and own a 5950X, give the ALL CORE OC a shot first and experiment for yourself. After all this testing, I am now passing on the 5800X3D and will continue with the all-core OC for gaming on my 5950X.

I'm not entirely sure of the root issue, but I suspect PBO affects memory latency and also takes a toll on L3 cache performance. As I stated and showed previously, my memory latency is down a constant 3 ns and my L3 cache bandwidth went up over 50%. That likely helps mitigate the penalties inherent in AMD CPUs with the I/O die and core dies being separate and linked over Infinity Fabric, at DDR4 speeds on this platform.

All in all, I wish I had done this when I got the CPU originally, but hey, now it feels like a new PC... lol.
 
Because I think some of you may like to see the raw data, the FH5 benchmark is a great tool to illustrate what I am talking about (sadly, the game, Windows, and driver all updated between two weeks ago and today; I can rerun with PBO if someone doesn't believe me)... not only are frames higher and stutter down, check out the CPU plot... it is all over the place with PBO. With an all-core OC it's much higher and in a tighter grouping, above the GPU more often, right where you want it...

I don't mean to hijack the original 5800X3D discussion into how to overclock Ryzen 5000 CPUs, but damn, I believe this could save a lot of people money if they already have a 5900X or 5950X. I'd actually love to see someone with a 4090 and 5800X3D who games at 4K run some comparisons with me! :)

[FH5 benchmark result screenshots]
 
Nice dude!

I already sold off all of my AM4, swapped to AM5, and broke even. Never going back. 😎

Once I used the 5800X3D I ditched all the other chips, too. Glad the numbers are good, but for me the 5800X3D had a much better feel. I've never been a tuner on the AMD side. That's always fun!
 
I ran the above bench and scored 98 FPS. However, my stutter count was 1. This is with zero overclocks on graphics or CPU.
Now going to overclock a bit and see what's up.

So I set PBO to -20 all-core, which makes the CPU stick to 4450 all-core. Also clocked up my graphics card a bit. Scored 103 FPS in FH5.

I remember this game not really benefiting from 3D cache, so maybe another game would be a better experiment.
 
For reference, I ran all tests the way I game on my 4090: that's +1250MHz memory and +150MHz core, which nets me a solid 3045MHz core clock when gaming, if you're trying to compare apples to apples.

I believe CP2077 and SOTTR benefit from 3D cache more than FH5, if you want to run those. SOTTR has a great readout like FH5 for comparing lows and CPU vs. GPU render times. That said, your stutter count of 1 is less than my 10, and both are way less than the 30+ I was getting with PBO instead of the all-core OC.

Just let me know if you can compare more games and I can post those up for reference too. I'd love to compare more, as I'm very curious how limiting PBO was vs. this all-core OC in terms of 1% lows, memory, and L3 performance, and also how much the 3D cache can help at 4K with a 4090.
 
Oh, with such a high overclock on the GPU, of course your fps will be high.

I can download other games if that helps.

Let me know which ones. I will download SOTTR overnight. CP2077 I also have. Post settings and I can run these benches.
 