Anyone make the switch from 5800X3D to 7800X3D?

Move from 5800X3D to 7800X3D?


  • Total voters
    90
Got stuff installed. Added 4 TB Gen 4 SSD space as well. Now for a total of like 8.75 TB.

Some photos for viewing pleasure. Everything booted up fine with my old setup. Even my fans and colors are all the same. Just need to fix ram colors maybe. Will get to it after I play some games.

For now, this is right up there in an upgrade I didn’t need but one I deserved after 3 years of running AM4 with 5900X/5800X3D.

Dope af.
 

Attachments

  • IMG_5503.jpeg (460.8 KB)
  • IMG_5504.jpeg (652.8 KB)
  • IMG_5505.jpeg (522.1 KB)
  • IMG_5506.jpeg (467.8 KB)
Previously I used an Intel i3-12100F (boosted to 4290 MHz) and a Ryzen 3 3100 (4300 MHz) daily.
Just upgraded to a Ryzen 5 7500F (the rest of the parts can be seen in my signature).

Still using an RX 6600, yet I can feel the upgrade in how smooth Windows, benchmarks, and games are.

I will post my results later in a separate thread, sort of a mini review of a "value DDR5 build".
 

Attachments

  • IMG_20230909_130605_028.jpg (329.2 KB)
A couple of quick questions. I can see SOC voltage in HWiNFO stuck at 1.299 V. Is that fine, or should I manually reduce it? And what is the process for manually reducing SOC voltage on this platform? I can't find the setting like on AM4, where the BIOS said point blank "SOC voltage". This is also after updating the BIOS to the latest version from August; prior to that it was at 1.349 V, so something definitely improved.

How do I stop memory training every single time the board boots? Damn, the boot time is very long. It is really pissing me off.

Finally, this CPU definitely runs a bit hotter than the 5800X3D. I saw 75°C while playing Cyberpunk 2077, whereas the 5800X3D would hover around 69-72°C. Cinebench pushed it to 85°C, which is similar to the 5800X3D. So it's too early to say if heat will be an issue.
 
A couple of quick questions. I can see SOC voltage in HWiNFO stuck at 1.299 V. Is that fine, or should I manually reduce it? And what is the process for manually reducing SOC voltage on this platform? I can't find the setting like on AM4, where the BIOS said point blank "SOC voltage". This is also after updating the BIOS to the latest version from August; prior to that it was at 1.349 V, so something definitely improved.
1. For EXPO speeds under 6800, 1.2-1.25 V will be enough. I use 1.22 V for 7600 38-47-47 and it's very stable.
2. On ASRock boards, the SoC voltage setting is "SoC/Uncore Voltage".
How do I stop memory training every single time the board boots? Damn, the boot time is very long. It is really pissing me off.
Enable MCR (Memory Context Restore) in the BIOS.
Finally, this CPU definitely runs a bit hotter than the 5800X3D. I saw 75°C while playing Cyberpunk 2077, whereas the 5800X3D would hover around 69-72°C. Cinebench pushed it to 85°C, which is similar to the 5800X3D. So it's too early to say if heat will be an issue.
Ryzen 7000 is known to run right up to its temperature limit in order to maintain its boost clock.
You can always set a thermal limit for the CPU in the BIOS, as well as a PBO curve.
 
Ok will try this stuff out. Honestly, 85 C is not a problem and I am not going to be monitoring temps if they stay 75 C or so.
 
Ok will try this stuff out. Honestly, 85 C is not a problem and I am not going to be monitoring temps if they stay 75 C or so.

I would not worry about the temps; these chips push themselves right up to the limit and should live well beyond the next time you upgrade your system. Or you could make it an excuse to build a custom water cooling setup to drop the temps by about 10°C.
 
lol, custom water cooling. I forgot how to remove the top 360 AIO in the Lian Li case. I no longer have the motivation to do this stuff. This build was just to see if I can still do it. Surprisingly, it worked out, though I half expected to have a bad day today. I was done with the whole thing, updates and all, in less than 2 hours, so not bad.
 
A couple of quick questions. I can see SOC voltage in HWiNFO stuck at 1.299 V. Is that fine, or should I manually reduce it? And what is the process for manually reducing SOC voltage on this platform? I can't find the setting like on AM4, where the BIOS said point blank "SOC voltage". This is also after updating the BIOS to the latest version from August; prior to that it was at 1.349 V, so something definitely improved.

I decided to reduce mine to 1.25 V. I haven't had any stability issues at all whatsoever, so I would suggest doing it. The BIOS option should be pretty straightforward; it should literally be called SOC, you just have to dig it out of the right submenu. You set it to "AMD Overclocking" instead of "Auto", and then in the option that appears afterwards you just dial the value in using the number row or numpad (there's no GUI number selection or indication that you're actually changing the value, you just have to start typing it in, which had me a little confused for half a minute).

How do I stop memory training every single time the board boots? Damn, the boot time is very long. It is really pissing me off.

You don't, as far as I know. To get optimal POSTing stability with any kit that's at ~6000, most people have said you just have to eat that boot time...

Finally, this CPU definitely runs a bit hotter than the 5800X3D. I saw 75°C while playing Cyberpunk 2077, whereas the 5800X3D would hover around 69-72°C. Cinebench pushed it to 85°C, which is similar to the 5800X3D. So it's too early to say if heat will be an issue.

Wait, it ran that hot just playing Cyberpunk? That's kind of odd. I fired up Cyberpunk and my temperature is around 61°C on the CPU, and I'm using a much weaker cooler: https://pcpartpicker.com/product/DMjG3C/noctua-nh-u14s-8252-cfm-cpu-cooler-nh-u14s

Granted, I'm using a 3080 Ti as my GPU. I don't know if CPU usage scales with GPU usage.
Some photos for viewing pleasure. Everything booted up fine with my old setup. Even my fans and colors are all the same. Just need to fix ram colors maybe. Will get to it after I play some games.

I'm sure your GPU has been fine for now, but I would be careful with that adapter the way you have it. I remember watching some videos that said the most dangerous thing you can do with that connector long term is apply any sort of lateral (i.e. side-to-side) pressure on it, and you have it going down at a slight diagonal. Honestly, despite reviewers claiming it's user error, I'm still on the fence about getting a 4090, and a large part of that is the connector. I've been hearing stories of people's cards burning down because of that thing, and Buildzoid pointed out how stupid it was to have that much power flowing through something that tiny.
 
I had high temps until about a week later. The thermal paste probably needed some time to settle or something. Here are my temps and power draw while gaming at high(ish) frame rates. This thing just sips power and delivers ridiculous performance per watt, something Intel has failed to do for many generations now.
 

Attachments

  • 20230729_005143.jpg (541.3 KB)
With regards to thermal paste, what I do with my CPUs these days is I just literally spread it out with my finger. I put a blob on and manually make sure that every portion has a nice even coat. I think that may also help with these CPUs considering they have an odd shape, though I kind of doubt they put any actual chips in the outlier areas. Either way, it works for me, so I keep doing it. Shrug. Some people scoff at the idea and just put a blob on and let the heatsink do its work, but I figure if I'm only doing this once, what's the point of leaving anything to chance?
 
So I found the SOC voltage setting and set it to 1.25 V.
Set Memory Context Restore to enabled.
Set PBO to -10 all-core (now the all-core boost is 4875 MHz instead of 4850, big whoop).

Temps are the same: 85°C in Cinebench R20, 80°C in the CPU-Z bench, and 75°C gaming.

The CPU-Z bench did gain points, with the multi score jumping from 6750 to about 7250 and the single-core score up from 635 to about 665.

I think 1.25 V, if it remains stable, is definitely peace of mind. Skipping memory training is another one. Temps I am not bothered about, tbh. My 5900X used to run at about 80°C gaming. Since I moved to AMD 3 years ago, high temps seem to be normal; the 5800X3D was a real exception as it used to run 68-72°C, which was dope af.

Overall I only ran one bench of Cyberpunk 2077 with path tracing and frame gen, and got the same result as the 5800X3D. I am downloading about 600 GB of games to bench, since I had uninstalled all of them, so it will take some time to post comparisons.
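For anyone skimming, here's a quick back-of-the-envelope on those CPU-Z numbers. It's just the percentage deltas computed from the approximate before/after scores quoted above, nothing re-measured:

# Percentage uplift from the approximate CPU-Z scores quoted above
# (7800X3D before vs. after the BIOS tweaks: SOC 1.25 V, MCR on, PBO -10).
scores = {
    "multi":  (6750, 7250),  # before, after (approximate)
    "single": (635, 665),    # before, after (approximate)
}

for test, (before, after) in scores.items():
    uplift = (after - before) / before * 100
    print(f"{test}: {before} -> {after} ({uplift:+.1f}%)")

# multi:  6750 -> 7250 (+7.4%)
# single: 635 -> 665 (+4.7%)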
 
Wow, this is quite a worthless upgrade. I mean I knew this going in but hot damn.

For 4K there is literally no difference in anything thus far. If anything I am 1-2 fps lower because I haven’t tuned memory or done all those tricks to get those 1-2 fps that I used to get with 5800X3D. Only issue is I never tried Starfield so I don’t know what to expect but from my previous results in this thread…

Cyberpunk 2077 is 71 fps same as 5800X3D
SOTTR is 172 fps 2 less than 5800X3D
FH5 is 171 fps 4 fps higher than 5800X3D
FC6 is 110 fps 3 fps lower than 5800X3D
WDL is 101 fps 7 fps faster than 5800X3D

This is borderline hilarious. Well at least now the owners of 5800X3D know. 👀
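To put rough numbers on "no difference", here is the same list with the 5800X3D figures reconstructed from the stated deltas (all approximate, 4K, same GPU and settings):

# 4K averages from the post above: 5800X3D fps reconstructed from the stated deltas.
results = {
    "Cyberpunk 2077": (71, 71),    # same
    "SOTTR":          (174, 172),  # 2 fps lower
    "FH5":            (167, 171),  # 4 fps higher
    "FC6":            (113, 110),  # 3 fps lower
    "WDL":            (94, 101),   # 7 fps higher
}

changes = []
for game, (old_fps, new_fps) in results.items():
    pct = (new_fps - old_fps) / old_fps * 100
    changes.append(pct)
    print(f"{game}: {old_fps} -> {new_fps} fps ({pct:+.1f}%)")

print(f"average change: {sum(changes) / len(changes):+.1f}%")  # roughly +1%, i.e. GPU-bound noise at 4K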
 
Wow, this is quite a worthless upgrade. I mean I knew this going in but hot damn.

For 4K there is literally no difference in anything thus far. If anything I am 1-2 fps lower because I haven’t tuned memory or done all those tricks to get those 1-2 fps that I used to get with 5800X3D. Only issue is I never tried Starfield so I don’t know what to expect but from my previous results in this thread…

Cyberpunk 2077 is 71 fps same as 5800X3D
SOTTR is 172 fps 2 less than 5800X3D
FH5 is 171 fps 4 fps higher than 5800X3D
FC6 is 110 fps 3 fps lower than 5800X3D
WDL is 101 fps 7 fps faster than 5800X3D

This is borderline hilarious. Well at least now the owners of 5800X3D know. 👀

I mean, we told you that it highly depends on what you play and that going from a 5800X3D to a 7800X3D is most likely not worth it, so I'm not sure what there is to be surprised about. Also, all the games you posted are GPU limited at 4K even on a 4090, and you didn't include the differences in 1% lows. But seriously, if you were hoping to get a massive performance gain in GPU-bound games at 4K, then I don't know what to tell you.
 
I wasn't expecting anything. Maybe a diff in at least something I am playing. Oh well, I did get 4 TB of extra space out of this and am on the current platform which hopefully sticks around. So it's not all bad. Rest is really a wash.

1% lows and mins are very similar as well in the games I have tried.
 
Looking at the difference in Starfield, maybe there will be a bunch of games for which it will matter.
 
Maybe. But Starfield is also just a badly optimized game made on a 20-year-old engine, so I don't think we should seriously use it as an indication of future performance.

Starfield is questionable (although I think relevant; it's probably not going to be the last game that runs like that), but I think Hogwarts Legacy might be a better indication of future games. It has probably the best looking Raytracing implementation I've seen (as in, I instantly noticed it when I turned it off), and just overall looks pretty good, but has crap optimization and takes a reasonably large hit with Raytracing on, when going to an older generation CPU.
Cyberpunk 2077 is 71 fps same as 5800X3D
SOTTR is 172 fps 2 less than 5800X3D
FH5 is 171 fps 4 fps higher than 5800X3D
FC6 is 110 fps 3 fps lower than 5800X3D
WDL is 101 fps 7 fps faster than 5800X3D

I think basically every game you tested (granted, I don't know some of those abbreviations) is one that doesn't really care about the CPU at all, especially when scaling up to 4K.

Your minmaxed RAM sticks from the previous gen are also actually quite relevant, as some games appear to be highly memory sensitive. I think you should try some newer games and, if you want, go ahead and try to minmax the subtimings on your DDR5 sticks. Apparently the open-box RAM kit that I got has Hynix M-die, so I could get pretty tight timings on it if I wanted. I'll try that some other time, but for now I'm satisfied with the uplift over my 5950X regardless.
 
Yes, I will need to look up RAM timings. My sticks are already 30-38-38-96, so pretty decent compared to other stuff I have seen around.
5950X to 7800X3D would be a similar uplift to what I saw going from the 5900X to the 5800X3D, hence the satisfaction :).
 
Starfield is questionable (although I think relevant; it's probably not going to be the last game that runs like that), but I think Hogwarts Legacy might be a better indication of future games. It has probably the best looking Raytracing implementation I've seen (as in, I instantly noticed it when I turned it off), and just overall looks pretty good, but has crap optimization and takes a reasonably large hit with Raytracing on, when going to an older generation CPU.

Part of the reason why Hogwarts Legacy looks so good with RT on is because the developers simply didn't bother trying to make the game actually look decent with RT off, otherwise the difference would've been more subtle just like it is in most games that don't go full on path tracing. I agree that future games are probably going to be just as horribly optimized when it comes to the CPU side of things.
 
Only game I tried that actually benefitted from the new PC seems to be AC Valhalla where *I think* my fps went from 95/96 to now about 110 at 4K. That is massive. It seems Starfield would've been the same and also Hogwarts Legacy.
 
Part of the reason why Hogwarts Legacy looks so good with RT on is because the developers simply didn't bother trying to make the game actually look decent with RT off, otherwise the difference would've been more subtle just like it is in most games that don't go full on path tracing. I agree that future games are probably going to be just as horribly optimized when it comes to the CPU side of things.
Yeah, but I think that's kind of the beauty of ray tracing to begin with, correct me if I'm wrong: it takes much less effort for developers to add ray tracing to a game versus all the standard tricks they have to employ for rasterization. If spending less time on those graphics tweaks leaves them with more time to work on the world and the rest of the immersion, I'm all for it. The problem is that even the 4090 on the best CPUs struggles with it, especially at 1440p or below. I really do think it's a sign of things to come. Granted, DLSS exists anyway.

Only game I tried that actually benefitted from the new PC seems to be AC Valhalla where *I think* my fps went from 95/96 to now about 110 at 4K. That is massive. It seems Starfield would've been the same and also Hogwarts Legacy.

I think you're going to keep noticing that in a lot of other games, especially going forward. That's my gut instinct. That's why I returned my 4090 back then. I knew I couldn't make it do its thing on a 5950x. Now I'm kind of halfheartedly looking to climb back up to a 4090, but this 3080 Ti with this new CPU has been working quite marvelously as well, especially with DLSS on.
 
Kit Link
Does anyone have good timings for the memory kit I am using?

Thus far I have set PBO to -15. I have noticed that anything lower than -20 reduces stability and scores in benches, so -15 is doing well for me.
Also set tRC from 134 down to 120. Haven't touched the core timings. Would be nice if someone could share their timings so I can just set it and forget it.
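Side note on that tRC change, as a rule-of-thumb sanity check rather than anything authoritative: tRC is conventionally at least tRAS + tRP, which is exactly where the 134 default comes from with the 30-38-38-96 primaries mentioned earlier in the thread, so dropping only tRC usually implies tRAS should come down with it.

# Rule of thumb: tRC >= tRAS + tRP (row cycle = active time + precharge time).
# Primaries for the kit mentioned earlier in the thread: CL30-38-38-96.
tCL, tRCD, tRP, tRAS = 30, 38, 38, 96

trc_floor = tRAS + tRP
print(f"tRC floor = tRAS + tRP = {tRAS} + {tRP} = {trc_floor}")     # 134, the stock value

target_trc = 120
print(f"tRC {target_trc} implies tRAS around {target_trc - tRP}")   # ~82, so tRAS would need lowering too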
 
Kit Link
Does anyone have good timings for the memory kit I am using.

Thus far I set PBO to -15. I have noticed anything lower than -20 reduces stability and scores in benches. So -15 is doing well for me.
Also set TRC from 134 down to 120. Haven't touched core timings. Would be nice if someone can share their timings so I can just set it and forget it.

If the manufacturer is Hynix, you could just use this:
https://www.patreon.com/posts/low-effort-rank-77403831

I tried it on my open box Hynix M die dual rank kit (2x32) yesterday (although I left the voltage at 1.39 just in case since it's dual rank), and (knock on wood) haven't had any oddities. Had Starfield running all day and various browsers, media players, etc all running. Did an hour of prime95 with no issues. Definitely noticed some difference in snappiness in the machine, too, and I don't think it's my imagination. Since yours is single rank, you shouldn't even have to worry about my concerns. Just check if it's Hynix via CPU-Z SPD tab first.
 
Only game I tried that actually benefitted from the new PC seems to be AC Valhalla where *I think* my fps went from 95/96 to now about 110 at 4K. That is massive. It seems Starfield would've been the same and also Hogwarts Legacy.
I for one am shocked that your new cpu isn’t making a difference in GPU bound games.
 
Kit Link
Does anyone have good timings for the memory kit I am using.

Thus far I set PBO to -15. I have noticed anything lower than -20 reduces stability and scores in benches. So -15 is doing well for me.
Also set TRC from 134 down to 120. Haven't touched core timings. Would be nice if someone can share their timings so I can just set it and forget it.
This is my friend's setup using a Ryzen 5 7600 + ASRock B650M PG Riptide:
[Screenshot of timings: 1695002596795.png]

Voltage settings:
• VDDIO 1.32 V
• VDD 1.37 V
• VDDQ 1.37 V
• VSoC 1.2 V (he was on an older AGESA so it needed 1.3 V; my PC running at 7600 MT/s only needs 1.22 V)
• FCLK set to 2067, max 2100, as this is usually the upper limit for Ryzen 7000 (rough clock-ratio sketch below)
• Power Down set to disabled

If you want to play with termination:
• ProcODT set to 43.6 or 48
• GDM disabled
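For context on how that FCLK number relates to the memory speed, here is a rough sketch of the Zen 4 clock relationships as I understand them; the ~6400 MT/s crossover for dropping UCLK to 1:2 is an approximation and varies by CPU and board:

# Rough Zen 4 DDR5 clock relationships (illustrative rule of thumb, not a spec).
def zen4_clocks(ddr_rate_mts, fclk_mhz=2000):
    mclk = ddr_rate_mts / 2                       # e.g. DDR5-6000 -> 3000 MHz memory clock
    # 1:1 UCLK:MCLK is usually attainable up to roughly 6400 MT/s;
    # faster kits like the 7600 setup above typically run 1:2.
    ratio = "1:1" if ddr_rate_mts <= 6400 else "1:2"
    uclk = mclk if ratio == "1:1" else mclk / 2
    return {"MCLK": mclk, "UCLK": uclk, "FCLK": fclk_mhz, "UCLK:MCLK": ratio}

print(zen4_clocks(6000))                   # MCLK 3000, UCLK 3000 (1:1), FCLK 2000
print(zen4_clocks(7600, fclk_mhz=2067))    # MCLK 3800, UCLK 1900 (1:2), FCLK 2067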
 
Kit Link
Does anyone have good timings for the memory kit I am using.

Thus far I set PBO to -15. I have noticed anything lower than -20 reduces stability and scores in benches. So -15 is doing well for me.
Also set TRC from 134 down to 120. Haven't touched core timings. Would be nice if someone can share their timings so I can just set it and forget it.
Check out the Hardware Unboxed Zen 4 memory video, featuring timings from Buildzoid.



https://youtu.be/MOatIQuQo3s?si=v_q8c75WRNXXPJF4
 
How is it going? Was just sitting around and thinking maybe I should order 7800X3D, GSkill 6000 C30 and Asus Strix X670E-A motherboard.

Do I need it? No.
Do I want it? Yes.
Does any game need extra frames? Not really unless I replay Witcher 3 and Cyberpunk RT overdrive which I won’t.
Will it matter at 4K? Probably not.
Will I notice a difference in 4K DLDSR or 1440P 240 Hz gaming? Probably not.

So what are other people's experiences with this switch?
Migrating from a 5800X to a 7950X with G.Skill 6000 C30 on an ASUS Strix X670E-E; no real data yet as it isn't my primary system...
 
Ok will try this stuff out. Honestly, 85 C is not a problem and I am not going to be monitoring temps if they stay 75 C or so.
You can simply set a max thermal-throttle temperature in the BIOS, then use Curve Optimizer to get your maximum clocks at the lowest stable voltage, and forget about it.
 
Got stuff installed. Added 4 TB Gen 4 SSD space as well. Now for a total of like 8.75 TB.

Some photos for viewing pleasure. Everything booted up fine with my old setup. Even my fans and colors are all the same. Just need to fix ram colors maybe. Will get to it after I play some games.

For now, this is right up there in an upgrade I didn’t need but one I deserved after 3 years of running AM4 with 5900X/5800X3D.

Dope af.
Changed fans and stuff.
lolwtf
 

Attachments

  • IMG_5870.jpeg (366.7 KB)
  • IMG_5875.jpeg (373.6 KB)
  • IMG_5874.jpeg (327.2 KB)
I think I may or may not have said that I would post my completed build here. If not, whatever. I think I finally finished upgrading everything I'm going to on this thing, so I posted it on PCP and now here:
https://pcpartpicker.com/b/Dcz7YJ
Overall pretty happy with how everything turned out. I think I posted it at the wrong time on PCP and didn't have enough RGB in it so it certainly isn't much of a looker though. Oh well.
 
Kind of an old bones reply, but to answer the OP's original poll question, I am skipping this gen...

Currently on an X470 system that initially housed a 2700X, and now a 5900X. I don't see the huge performance jump needed to justify a new PC yet, so even though I was planning a 5-year upgrade, I think I'll hold off for another 18-24 months... by that time I guess 9000 series will be out!
 
Kind of an old bones reply, but to answer the OP's original poll question, I am skipping this gen...

Currently on an X470 system that initially housed a 2700X, and now a 5900X. I don't see the huge performance jump needed to justify a new PC yet, so even though I was planning a 5-year upgrade, I think I'll hold off for another 18-24 months... by that time I guess 9000 series will be out!
I doubled my core count going from a 5800X to a 7950X, for no good reason other than to build something. My daughter got my old build...
 
Currently on a 5800x. I'd like to go to a 7800x3d but if I score a good used 7700x I'll take it.
 