Anyone “upgraded” from 5900X to 5800X3D for gaming?

Should I get a 5800X3D?


Oh, with such a high overclock on the GPU, of course your FPS will be high.

I can download other games if that helps.

Let me know which ones. I will download SOTTR overnight. CP2077 I also have. Post settings and I can run these benches.
Awesome... I'll do some runs in SOTTR and CP2077 tomorrow after work and snag some screenshots and settings. I don't expect this trick to match or beat a 5800X3D in games that specifically take advantage of 3D cache, but I am extremely surprised at how this all-core OC smoothed everything out. Night and day, man; still impressed so far a week later.
 
Oh, with such a high overclock on the GPU, of course your FPS will be high.

I can download other games if that helps.

Let me know which ones. I will download SOTTR overnight. CP2077 I also have. Post settings and I can run these benches.

All settings under "Graphics" were set to the maximum possible, for reference. I have posted PBO vs. all-core OC for comparison. The all-core OC blows PBO out of the water if you look at the minimum and maximum frames rendered. Even the average was much better for the CPU. The PBO run was under the previous NVIDIA driver (I have not run PBO since), but I always ran benchmarks between driver upgrades, so it is recent.

PBO:
1678741854912.png


ALL CORE OC:
1678741679073.png
 
For reference, my score in SOTTR with everything maxed out is 175 fps. This is with the card overclocked to run at 2790 MHz and memory at +1350 MHz. I doubt the overclock on your card is making almost a 15 fps difference. If you want, we can compare stock-to-stock performance as well.
So it seems SOTTR likes the extra cores on your CPU.

Also, for Cyberpunk 2077 I scored 108.36 fps with psycho quality + DLSS quality + frame gen at 4K.
 
For reference, my score in SOTTR with everything maxed out is 175 fps. This is with the card overclocked to run at 2790 MHz and memory at +1350 MHz. I doubt the overclock on your card is making almost a 15 fps difference. If you want, we can compare stock-to-stock performance as well.
So it seems SOTTR likes the extra cores on your CPU.

Also, for Cyberpunk 2077 I scored 108.36 fps with psycho quality + DLSS quality + frame gen at 4K.
I'll run CP2077 this evening for you... I can re-run SOTTR at stock video card settings if you want. +1350 MHz on the memory is a pretty good OC on your end (and memory gave the best FPS boost in my testing), but yeah, not sure my core at 3045 MHz will make a 15 FPS difference; maybe combined with the +1250 MHz memory it does?

If you can, post your graphs from the runs. Those will show the lows, which a lot of the time is what affects how smooth a game feels.
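For anyone new to reading those graphs, here's a minimal sketch of how average FPS and 1% lows are commonly computed from frametimes (the values below are made up; real numbers would come from the benchmark's frametime log). It shows why a single long frame tanks the lows without moving the average much:

```python
# Minimal sketch: average FPS and 1% lows from a frametime log.
# The frametime values are made-up examples, not real benchmark data.

frametimes_ms = [6.9, 7.1, 6.8, 25.0, 7.0, 6.9, 7.2, 6.8, 7.0, 18.5]

# Average FPS: total frames divided by total elapsed time.
avg_fps = len(frametimes_ms) / (sum(frametimes_ms) / 1000.0)

# 1% lows: average FPS over the slowest 1% of frames. One or two long
# frametimes (stutters) drag this way down even when the average
# barely moves, which is why the lows track perceived smoothness.
worst = sorted(frametimes_ms, reverse=True)
slowest = worst[: max(1, len(worst) // 100)]
low_1pct_fps = 1000.0 / (sum(slowest) / len(slowest))

print(f"average: {avg_fps:.1f} fps, 1% low: {low_1pct_fps:.1f} fps")
```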
 
I'll run CP2077 this evening for you... I can re-run SOTTR at stock video card settings if you want. +1350 MHz on the memory is a pretty good OC on your end (and memory gave the best FPS boost in my testing), but yeah, not sure my core at 3045 MHz will make a 15 FPS difference; maybe combined with the +1250 MHz memory it does?

If you can, post your graphs from the runs. Those will show the lows, which a lot of the time is what affects how smooth a game feels.
FWIW, I've seen and read of bigger gains from a VRAM OC than a core clock OC on my 4090. YMMV.
 
For reference my score on SOTTR with everything maxed out is 175 fps. This is with card overclocked to run at 2790 MHz and memory at +1350 MHz. I doubt the overclock on your card is making almost a 15 fps difference. If you want we can compare stock to stock performance as well.
So it seems the SOTTR likes the extra cores on your CPU.

Also for Cyberpunk 2077 I scored 108.36 fps with psycho quality + DLSS quality + frame gen at 4K.
Here is my CP2077 run... I'd be curious what your min FPS is in that game vs. mine... It should be higher than mine if it is a 3D V-Cache game, even if the average is slightly lower.

1678930116452.png
 
Here you go. I don't believe Cyberpunk is a V-Cache game. If you want to try a V-Cache game, it is mostly Ubisoft stuff like Ass Creed or Far Cry 6.
 

Attachments

  • cyberjunk.jpg
Here is SOTTR. Also, it seems you are rendering an additional 2000+ frames at the end, causing your score to spike. Are we running the same benchmark? lol.
You can see my CPU game and CPU render numbers are better than yours (the mins).
 

Attachments

  • SOTTR.jpg
Here is SOTTR. Also, it seems you are rendering an additional 2000+ frames at the end, causing your score to spike. Are we running the same benchmark? lol.
You can see my CPU game and CPU render numbers are better than yours (the mins).
I just ran the benchmark, man; I dunno after that... lol.

All joking aside, I think both your tests show the benefit of the V-Cache, even though it is much smaller at 4K, even with a 4090. Your lows are much higher than mine, even if your average frames are slightly lower.

Think about it; you are running 8 cores at what... 4.4-4.5 GHz? Unless you did a BCLK OC... I am running 16 cores at an all-core OC of 4.725 GHz, with extremely tight timings on my memory at 3800 MHz (using a Samsung B-die kit).

If anything, I think this thread kind of helps both points. I have shown a massive jump in performance from going from PBO to an all-core OC on a 5950X. Not sure what a 5900X would look like, but probably improved as well. However, for A LOT of people who do not want to mess with an all-core OC, or who don't have a good overclocker, a 5800X3D is clearly an easy drop-in that just "works". For those who want to try first, I would 100% say give the all-core OC a shot. It cleared up the occasional stuttering for me in some games.

At the same time, I'd also argue that at 4K, even with a 4090, the difference is fairly small in our short testing so far. I imagine in even newer games we would both be GPU-limited again in most situations, but clearly the 5800X3D (or a ridiculously OC'ed 5950X) is not a limiting factor for playable / stutter-free frames on a 4090.

I can always install FC6 if you want to run more tests (I do own it, haven't played in a long time).
 
Let's do FC6 (I'll need to install it as well, but it's like a 5-minute download lol). I did put my results with a 5900X running PBO on page 2 of this thread, I think. 👍
However, those were at 1440p.
 
Following this thread for later reference. I'm currently using a 3800X that's a strong clocker at 4.5 GHz all-core. I've been eyeballing the 5800X3D for a while, and especially now that it's arguably the last stop on the AM4 upgrade roadmap. I game at 4K, so my concern was how negligible the difference would be. Just on a reference 6800XT currently.
 
Following this thread for later reference. I'm currently using a 3800X that's a strong clocker at 4.5 GHz all-core. I've been eyeballing the 5800X3D for a while, and especially now that it's arguably the last stop on the AM4 upgrade roadmap. I game at 4K, so my concern was how negligible the difference would be. Just on a reference 6800XT currently.

It'll be a big boost just from the much better architecture of Zen 3. Honestly, I would advise anyone on AM4 who wants a cheap upgrade to pop in a 5800X3D at this point. It's that good of a CPU.
 
Following this thread for later reference. I'm currently using a 3800X that's a strong clocker at 4.5 GHz all-core. I've been eyeballing the 5800X3D for a while, and especially now that it's arguably the last stop on the AM4 upgrade roadmap. I game at 4K, so my concern was how negligible the difference would be. Just on a reference 6800XT currently.
Just buy a 5800X3D. For you it's not even a discussion. It doesn't matter which graphics card you are using.
 
It'll be a big boost just from the much better architecture of Zen 3. Honestly, I would advise anyone on AM4 who wants a cheap upgrade to pop in a 5800X3D at this point. It's that good of a CPU.
8/16 is beefy regardless. Still pretty close to a sweet spot even these days.
 
It'll be a big boost just from the much better architecture of Zen 3. Honestly, I would advise anyone on AM4 who wants a cheap upgrade to pop in a 5800X3D at this point. It's that good of a CPU.
I went this route upgrading from a 2700X after deciding to hold out another gen on AM5, and while I expected a nice uplift, I'm really surprised at how much better it is. The smoothness from removing some of the lows is especially appreciated, but beyond that, it completely removed any CPU bottleneck in many games at 120 Hz.

I'm thinking the next upgrade will likely be when they release something more like a 7950X with 3D cache on both CCDs; by then, DDR5 and AM5 motherboard prices should be more reasonable too. Of course, this is subject to change if AMD drops the ball or Intel comes up with something more compelling.
 
As a follow-up to a follow-up... DLSS 3.0 is a game changer for anyone with an NVIDIA RTX 4xxx series card, in games that add it. FH5 just added it today, so I ran a quick test: all settings maxed, DLSS Frame Generation on, and NVIDIA Reflex ON + BOOST. I have a frame cap set at 144 FPS; I imagine the result would be much higher without it, but I'm not a fan of screen tearing, so I kept the cap and ran the test the way I would game.

Either way: 0 stutter count and absolutely smooth as butter... it was pegged at 144 FPS the entire time. Not sure where the GPU min or average on this test is coming from, other than the benchmark may not be built for frame generation yet, as it just came out today. I can assure you it was 100% pegged at 144 FPS the entire time tho... lol.

I guess the point is, this technology is going to give new life to CPUs, as that load is being taken off the CPU itself and handled by the GPU.

1680117259557.png
 
Frame gen will only continue to get better with time. I do see some anomalies here and there but the smoothness and fast fps is worth it to me at least. Will pump FH5 and see what’s up since I have a 240 Hz monitor and I run DLDSR 4K.
 
Frame gen will only continue to get better with time. I do see some anomalies here and there but the smoothness and fast fps is worth it to me at least. Will pump FH5 and see what’s up since I have a 240 Hz monitor and I run DLDSR 4K.
I imagine it will run great for you! I did a test with no frame limits and I got 187 FPS at 4K, fully maxed out... I'd suspect you might be able to top 200+.

To run it "properly", you should turn V-Sync on in the NVCP for just Forza 5, leave frame limits off, and use Reflex ON + Boost in game. NVIDIA Reflex should see V-Sync enabled in the NVCP and automatically cap your frame rate below your max refresh rate to ensure the lowest latency possible with Frame Gen on. In further testing, that gave me the absolute smoothest experience so far, and it is the recommended setup for Frame Gen in DLSS 3.0.
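For the curious, the frame-time math behind capping a bit below refresh is simple. The sketch below uses an illustrative cap value; the exact cap Reflex picks is determined by the driver, not by this formula:

```python
# Rough frame-time budget math for capping below max refresh.
# The cap value is illustrative; Reflex chooses its own.

refresh_hz = 144
budget_ms = 1000.0 / refresh_hz  # ~6.94 ms per refresh cycle
print(f"per-frame budget at {refresh_hz} Hz: {budget_ms:.2f} ms")

# Capping slightly below refresh keeps the GPU from filling the vsync
# queue, which is where the extra latency would otherwise come from.
cap_fps = 138  # hypothetical cap a bit under 144
cap_ms = 1000.0 / cap_fps
print(f"capped at {cap_fps} fps: {cap_ms:.2f} ms per frame, "
      f"{cap_ms - budget_ms:.2f} ms of headroom per frame")
```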
 
Thx III_Slyflyer_III for sharing all of this. You gave me a couple of ideas that really helped me get my system where I needed it.

Following your lead, I've been playing with an all-core OC on my 5950X, trying to optimize mostly for efficiency, since I'll need it to do long simulations that can run for many days and I wanted/needed a silent system for that. After a bit of experimentation I'm able to do 4.4 GHz all-core at 1.15 V with LLC3 and minimal heat being generated, at a higher frequency even than what PBO was giving me at full load.

Like you, I also play on this machine; since I have a Dark Hero, I started using the Dynamic OC Switcher so that PBO still runs most of the time, but with CPPC preferred cores disabled so as to maximize the cache speed when it switches into the all-core OC. From my own testing, it's really disabling CPPC preferred cores that increases the L3 cache speed; even with CPPC enabled (and sometimes even with PBO going, although that's inconsistent), I still see increased L3 speed in that AIDA64 benchmark with only preferred cores disabled. I'm not even sure what CPPC alone does, though.

I'll test out this setup in the next few weeks and see how stuttering feels in games; like you, I really have a feeling that those preferred cores are being overloaded by the scheduler. The next optimization step would probably be to use Process Lasso to do a better job of using those good cores. Regardless, if our feeling is right, the difference between "good" cores and bad ones isn't worth the stutter. I may also give an all-core OC at ~4.6 GHz / ~1.25 V a try eventually, if I can tune my NH-D15S and case fan speeds for lower noise.
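If anyone wants to experiment before reaching for Process Lasso, the mechanism underneath is just CPU affinity. Here's a minimal Python sketch using psutil (the process name and core list are hypothetical; on a 5950X, check your own topology for which logical CPUs map to which CCD):

```python
# Minimal sketch of what Process Lasso does under the hood: pinning a
# process to a chosen set of cores via CPU affinity. The process name
# and core indices below are hypothetical examples.
import psutil

def pin_to_cores(process_name: str, cores: list[int]) -> None:
    """Set CPU affinity for every running process matching the given name."""
    for proc in psutil.process_iter(["name"]):
        try:
            if proc.info["name"] == process_name:
                proc.cpu_affinity(cores)
                print(f"pinned PID {proc.pid} to cores {cores}")
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass  # process exited or we lack permissions; skip it

# Example: keep a game on the first CCD (logical CPUs 0-15 with SMT on
# a 5950X, but verify against your own core layout first).
pin_to_cores("game.exe", list(range(16)))
```

Whether pinning actually helps is exactly what the experiment above would show; Process Lasso just makes the same thing persistent and point-and-click.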
 
Thx III_Slyflyer_III for sharing all of this. You gave me a couple of ideas that really helped me get my system where I needed it.

Following your lead, I've been playing with an all-core OC on my 5950X, trying to optimize mostly for efficiency, since I'll need it to do long simulations that can run for many days and I wanted/needed a silent system for that. After a bit of experimentation I'm able to do 4.4 GHz all-core at 1.15 V with LLC3 and minimal heat being generated, at a higher frequency even than what PBO was giving me at full load.

Like you, I also play on this machine; since I have a Dark Hero, I started using the Dynamic OC Switcher so that PBO still runs most of the time, but with CPPC preferred cores disabled so as to maximize the cache speed when it switches into the all-core OC. From my own testing, it's really disabling CPPC preferred cores that increases the L3 cache speed; even with CPPC enabled (and sometimes even with PBO going, although that's inconsistent), I still see increased L3 speed in that AIDA64 benchmark with only preferred cores disabled. I'm not even sure what CPPC alone does, though.

I'll test out this setup in the next few weeks and see how stuttering feels in games; like you, I really have a feeling that those preferred cores are being overloaded by the scheduler. The next optimization step would probably be to use Process Lasso to do a better job of using those good cores. Regardless, if our feeling is right, the difference between "good" cores and bad ones isn't worth the stutter. I may also give an all-core OC at ~4.6 GHz / ~1.25 V a try eventually, if I can tune my NH-D15S and case fan speeds for lower noise.
Keep us posted, and glad I was able to give you some ideas to help you tune things out! Every single one of my games was smoother (which is why I also posted numbers as proof), so I'd say the all-core OC would hands down be worth it over PBO and single-core boosting for the majority of people. Plus, as you mentioned, the stuttering PBO seems to add is definitely not worth anything a single-core boost can bring to the table.

1.25 V is perfectly safe, so don't be shy about boosting a bit more to 4.5 GHz or 4.6 GHz if those are stable for you. I think most sites say 1.35 V should be your max 24/7 voltage. I am using 1.35 V with LLC level 3 for 4.725 GHz 24/7, but I am also cooling with an H150i. It stays cool in games (low 60s tops), but Prime95 will shoot that puppy up to the upper 80s really quick... lol.
 
Keep us posted, and glad I was able to give you some ideas to help you tune things out! Every single one of my games was smoother (which is why I also posted numbers as proof), so I'd say the all-core OC would hands down be worth it over PBO and single-core boosting for the majority of people. Plus, as you mentioned, the stuttering PBO seems to add is definitely not worth anything a single-core boost can bring to the table.

1.25 V is perfectly safe, so don't be shy about boosting a bit more to 4.5 GHz or 4.6 GHz if those are stable for you. I think most sites say 1.35 V should be your max 24/7 voltage. I am using 1.35 V with LLC level 3 for 4.725 GHz 24/7, but I am also cooling with an H150i. It stays cool in games (low 60s tops), but Prime95 will shoot that puppy up to the upper 80s really quick... lol.
I've been trying to figure out the best settings for that all-core OC 24/7, and here's what I've figured out so far. Reading around the web and observing the various parameters in HWiNFO, it seems like it's better to have a higher vcore alongside low LLC levels. I'm mentally set on not going over 1.35 V, but it seems to me that, the way these CPUs work, idle or light loads could very well support a much higher vcore as long as your heavy-load voltage stays below 1.35 V.

So far, I've been able to run 4.6 GHz at 1.3425 V with LLC on auto; at load the vcore drops to 1.22-1.26 V or so. The CPU package holds at ~79°C on my workload with an acceptable fan curve, noise-wise. Gaming is in the low 60s °C with zero fan noise (besides the GPU). I could probably make 4.7 GHz work, but I'd have to set the vcore past 1.35 V or use higher LLC, which I'd rather avoid. Now starts the long game of making sure it's 100% stable 24/7!

Also, I'm really impressed by the L3 cache speed increase with this all-core OC on the 5950X: it's almost a 50% speed boost compared to using PBO. That's significant, and it may very well cancel out any performance loss from the lower frequency, as the 5800X3D's performance suggests.
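To make the vdroop in those numbers concrete, here's the simple arithmetic (the voltages are taken from the post above; the note about LLC overshoot is a general rule of thumb, not something measured here):

```python
# Sanity-checking the vdroop from the numbers in the post above.
v_set = 1.3425                        # BIOS-set vcore (V)
v_load_low, v_load_high = 1.22, 1.26  # observed vcore under load (V)

droop_min = (v_set - v_load_high) * 1000  # best case, in mV
droop_max = (v_set - v_load_low) * 1000   # worst case, in mV
print(f"vdroop under load: {droop_min:.0f}-{droop_max:.0f} mV")
# Roughly 80-120 mV of droop with LLC on auto. Higher LLC levels
# shrink this gap but tend to cause bigger transient overshoot when
# the load releases, which is why a low LLC level plus a higher set
# voltage is often considered the safer combination.
```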
 
I've been trying to figure out the best settings for that all-core OC 24/7, and here's what I've figured out so far. Reading around the web and observing the various parameters in HWiNFO, it seems like it's better to have a higher vcore alongside low LLC levels. I'm mentally set on not going over 1.35 V, but it seems to me that, the way these CPUs work, idle or light loads could very well support a much higher vcore as long as your heavy-load voltage stays below 1.35 V.

So far, I've been able to run 4.6 GHz at 1.3425 V with LLC on auto; at load the vcore drops to 1.22-1.26 V or so. The CPU package holds at ~79°C on my workload with an acceptable fan curve, noise-wise. Gaming is in the low 60s °C with zero fan noise (besides the GPU). I could probably make 4.7 GHz work, but I'd have to set the vcore past 1.35 V or use higher LLC, which I'd rather avoid. Now starts the long game of making sure it's 100% stable 24/7!

Also, I'm really impressed by the L3 cache speed increase with this all-core OC on the 5950X: it's almost a 50% speed boost compared to using PBO. That's significant, and it may very well cancel out any performance loss from the lower frequency, as the 5800X3D's performance suggests.
It's my theory that the 5800X3D's performance was such a jump for AM4 because it was overcoming issues inherent in the CPU's chiplet design and the separate I/O die, which added memory latency and hurt CPU-bound scenarios. The more L3 cache you have, the more data you can keep next to the cores to work around this.

However, with the all-core OC on the 5950X, you are running the cache at full speed all the time along with the CPU clocks; there is no variance or "lag", if you will, from cache speeds boosting with the CPU (and they were often lower than the CPU speed anyway). Thus, as you have seen, you have increased your L3 cache bandwidth by 50% (or more), so what you lack in L3 cache size on the 5950X vs. the 5800X3D, you have almost fully made up for in L3 bandwidth. So, while you store less data at 32 MB vs. 96 MB, you have faster access to that memory, directly overcoming some of that obstacle and latency.

As for your voltage, I read similar, but if I do not run an LLC of 3, I have to raise the voltage past 1.35 V to remain stable at 4.7 GHz. Auto ended up being LLC3 for me, so it is the same in my specific use case. Under full load my voltage drops to about 1.32-1.33 V, so there is a little vdroop left at least... lol.
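The size-versus-bandwidth trade-off described above can be sketched with the textbook average-memory-access-time (AMAT) model. All numbers below are hypothetical, chosen only to show the shape of the trade-off, not measured on either chip:

```python
# Textbook AMAT sketch of the trade-off described above: a bigger L3
# raises the hit rate, while a faster L3 lowers the hit latency.
# Every number here is hypothetical.

def amat(hit_rate: float, l3_ns: float, dram_ns: float) -> float:
    """Expected access time for requests that reach the L3."""
    return hit_rate * l3_ns + (1 - hit_rate) * dram_ns

# Hypothetical big-cache case (5800X3D-like): higher hit rate.
big_cache = amat(hit_rate=0.90, l3_ns=12.0, dram_ns=70.0)

# Hypothetical fast-cache case (all-core-OC 5950X-like): lower hit
# rate from the smaller cache, but lower latency at a fixed clock.
fast_cache = amat(hit_rate=0.70, l3_ns=9.0, dram_ns=70.0)

print(f"big cache: {big_cache:.1f} ns, fast cache: {fast_cache:.1f} ns")
# Which one wins depends on the workload's working-set size, which is
# why some games love the X3D and others barely notice it.
```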
 
Also, I'm really impressed by the L3 cache speed increase with this all-core OC on the 5950X: it's almost a 50% speed boost compared to using PBO. That's significant, and it may very well cancel out any performance loss from the lower frequency, as the 5800X3D's performance suggests.
Yep, from my last AM4 builds (and also when I built my friends' PCs), I've never used PBO.
I'd rather set a manual OC, for example:
1. Ryzen 3600 @ 4400 MHz, 1.25 V
2. Ryzen 5600X @ 4650 MHz, 1.22 V
3. Ryzen 5600G @ 4522 MHz, 1.4 V

All have a slightly (~1%) lower CBR23 single-core score, but all have better average / 1% low FPS in every game I've played.
 
Nope, I'm more than happy with my 5900X, coming from a Zen 1 TR 1920X. My next CPU upgrade will either be a Zen 5 X3D or a vanilla Zen 6 CPU, assuming Zen 6 will work on AM5.
 
I upgraded from an R7 3700X to the 5800X3D, and everything was fine. However, in March 2023 a new BIOS was released for my Aorus X570 Pro. Performance seemed to dip a little, and I could see higher CPU temps and more fan activity. It looks like AMD and the board manufacturers are finding the X3D chips a bit of a challenge to maintain; I currently have a ticket with Gigabyte that is a couple of months old now, and they claim to be looking into it. Any attempt to undervolt seems to eventually lead to WHEA warnings and random restarts.

I need my machine for dev as well as gaming, so I decided not to wait and recently got a 5900X. It is faster and runs cooler than the 5800X3D in Godlike mode VR (Pico 4) running MSFS, all using default optimal settings.
 
I upgraded from an R7 3700X to the 5800X3D, and everything was fine. However, in March 2023 a new BIOS was released for my Aorus X570 Pro. Performance seemed to dip a little, and I could see higher CPU temps and more fan activity. It looks like AMD and the board manufacturers are finding the X3D chips a bit of a challenge to maintain; I currently have a ticket with Gigabyte that is a couple of months old now, and they claim to be looking into it. Any attempt to undervolt seems to eventually lead to WHEA warnings and random restarts.

I need my machine for dev as well as gaming, so I decided not to wait and recently got a 5900X. It is faster and runs cooler than the 5800X3D in Godlike mode VR (Pico 4) running MSFS, all using default optimal settings.
I have an X570 Master, not a Pro, although I'm sure the BIOSes are very similar. The most recent release added PBO2/CO support for the 5800X3D. It's been fine on my unit. I run -15 on all cores, and everything pegs at 4.45 GHz under full load. It maxes out around 75°C with a 360 mm AIO, but most gaming is in the low 60s. One thing I discovered while tweaking: if you enable virtualization on these mobos, it screws with the BCLK and base frequencies (at least since I switched to the X3D from my 5950X).
 
Did the upgrade to the 5800X3D to get rid of some micro-stutters in gaming. It is definitely smoother than the 5900X when running games, especially in MP games with high framerates and in very demanding games.

It runs a lot hotter than the 5900X. I did a baseline with Cinebench and it gave me around 67-68°C average in steady state, while I would have had 54-55°C on the 5900X under the same conditions, so hotter for less performance in productivity workloads.

It is generally fast enough for productivity, but I do notice a slight drop in performance. It is worth it, as getting rid of the annoying micro-stutters in games was the main goal.

Slotted the 5900X into a B350 ITX motherboard I had lying around, and its VRMs handle it just fine. Getting to use the B350 was part of the calculation when getting the 5800X3D, and that build is going to a friend.
 
Just bit the bullet and swapped my 5800X for a 5800X3D, and what an impressive CPU!!! My max-tuned 5800X was able to do a 13288 Time Spy CPU score and 285 FPS average CPU in the MW2 benchmark; this thing just destroys my 5800X: 285 FPS in MW2 vs. 345-350 FPS, and the Time Spy score is actually 140 points higher! Memory will clock WAY higher too; max on my 5800X was 3733 MHz before WHEA errors etc., so max around 55 read vs. 59 read now! Only in R23 is it a slight bit slower: 15651 multicore vs. my 5800X doing 16284.

I'm blown away by this CPU... sooo much better than I actually imagined, since my lows were already REALLY great with my RAM running 3733 MHz CL14 + subtimings. The 1% lows in COD MW2 are actually around the same, except the max FPS and the mid-run dips are significantly higher! :D

Oh yeah!! Almost forgot... for gaming I only see around 55-65°C as a MAX temp... even in 10 min of R23 I max out around 77°C vs. my 5800X, which was around 82°C.

Maxed-out 5800X did 138 W in R23 = 82°C after 10 min
Maxed-out 5800X3D (4560 MHz all-core) pulling around 112 W = 77°C after 10 min of R23
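For what it's worth, the perf-per-watt math from those R23 numbers is easy to check (scores and wattages taken straight from the post above):

```python
# Quick efficiency math from the R23 numbers above.
cpus = {
    "5800X":   {"r23_multi": 16284, "watts": 138},
    "5800X3D": {"r23_multi": 15651, "watts": 112},
}
for name, d in cpus.items():
    print(f"{name}: {d['r23_multi'] / d['watts']:.0f} points per watt")
# -> ~118 pts/W vs. ~140 pts/W: about 4% less multicore throughput
#    for roughly 19% less power.
```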
 

Attachments

  • timespy score 13100 cpu.jpg
  • cachemem.png
  • cod mw2.jpg
  • 5800x3d test.jpg
Just bit the bullet and swapped my 5800X for a 5800X3D, and what an impressive CPU!!! My max-tuned 5800X was able to do a 13288 Time Spy CPU score and 285 FPS average CPU in the MW2 benchmark; this thing just destroys my 5800X: 285 FPS in MW2 vs. 345-350 FPS, and the Time Spy score is actually 140 points higher! Memory will clock WAY higher too; max on my 5800X was 3733 MHz before WHEA errors etc., so max around 55 read vs. 59 read now! Only in R23 is it a slight bit slower: 15651 multicore vs. my 5800X doing 16284.

I'm blown away by this CPU... sooo much better than I actually imagined, since my lows were already REALLY great with my RAM running 3733 MHz CL14 + subtimings. The 1% lows in COD MW2 are actually around the same, except the max FPS and the mid-run dips are significantly higher! :D

Oh yeah!! Almost forgot... for gaming I only see around 55-65°C as a MAX temp... even in 10 min of R23 I max out around 77°C vs. my 5800X, which was around 82°C.

Maxed-out 5800X did 138 W in R23 = 82°C after 10 min
Maxed-out 5800X3D (4560 MHz all-core) pulling around 112 W = 77°C after 10 min of R23
I am assuming B-die based on those timings... what voltage on the memory? That's impressive...
 
Just bit the bullet and swapped my 5800X for a 5800X3D, and what an impressive CPU!!! My max-tuned 5800X was able to do a 13288 Time Spy CPU score and 285 FPS average CPU in the MW2 benchmark; this thing just destroys my 5800X: 285 FPS in MW2 vs. 345-350 FPS, and the Time Spy score is actually 140 points higher! Memory will clock WAY higher too; max on my 5800X was 3733 MHz before WHEA errors etc., so max around 55 read vs. 59 read now! Only in R23 is it a slight bit slower: 15651 multicore vs. my 5800X doing 16284.

I'm blown away by this CPU... sooo much better than I actually imagined, since my lows were already REALLY great with my RAM running 3733 MHz CL14 + subtimings. The 1% lows in COD MW2 are actually around the same, except the max FPS and the mid-run dips are significantly higher! :D

Oh yeah!! Almost forgot... for gaming I only see around 55-65°C as a MAX temp... even in 10 min of R23 I max out around 77°C vs. my 5800X, which was around 82°C.

Maxed-out 5800X did 138 W in R23 = 82°C after 10 min
Maxed-out 5800X3D (4560 MHz all-core) pulling around 112 W = 77°C after 10 min of R23
Love hearing stories like this - X3D is made for online competitive FPS.
 
I am assuming B-die based on those timings... what voltage on the memory? That's impressive...
Yeah, B-die: G.Skill Trident Z Neo 3600 CL16-16-16-16-36 1T, stock 1.35 V, now running the above timings at 1.56 V. Added a little fan above them, which keeps them at a max of 43-45°C under full load for 2 hours. The screenshot was taken without the fan, but with the GPU fan running at a minimum of 30% (approx. 900 RPM in the front). Toxic Extreme 360 AIO with push-pull.

Here's the full timings etc., strapped as far as I can get them xD
 

Attachments

  • OC 5800x3d.jpg
Love hearing stories like this - X3D is made for online competitive FPS.
True shit... and I even benefit a lot from the higher RAM speed for FPS as well. The timings won't do much for gaming on the X3D, but the higher RAM speed does, so if people can push the RAM speed higher regardless of the timings, they will see FPS gains as well.
 
If you got money to burn sure lol
For the main game I play, upgrading from my 5900X to my 5800X3D was actually a bigger performance boost than upgrading from my RTX 2080 to my RTX 4080. And I sold my 5900X and made most of my money back. For some games, the 3D cache is amazing, and there are enough benchmarks out there that it's not difficult to figure out if it makes sense for you or not.

You say that the upgrade is only for those with "money to burn". From my perspective, giving an existing AM4 rig one last upgrade to a 5800X3D can result in amazing gains for the money. Considering that the only other upgrade path involves a full CPU + motherboard + RAM platform swap, want to talk about burning money?
 
Anyone still on the fence about this "upgrade" must be thick in the skull. There is enough evidence posted online and in this thread to make the move.
 