Who else is waiting on Zen 4 x3D before upgrading?

To be honest, if you have enough disposable income to buy an extremely overpriced 4090, I don't think people will stop at the small price delta of the X3D
Wasn't the 4090 only a $100 increase over the 3090? It's the 4080, with its increase from $699 (3080) to $1199 (4080) MSRP, that shows which card is truly overpriced.
 
Wasn't the 4090 only a $100 increase over the 3090? It's the 4080, with its increase from $699 (3080) to $1199 (4080) MSRP, that shows which card is truly overpriced.
I paid the same amount of money for my 4090 as I did for my 3090... about 5~6 months after release. So, no increase in cost for me
 
Who buys a 5800X3D for productivity? Also, who buys a 13900K for gaming? That's like buying the higher-end 7950X just for gaming. The 13700K should be the one you'd go for on that dead-end platform - save some money for when Intel releases a new socket.
You mean 13600K
 
Having used a 5800X and 5800X3D, and now a 7700X (soon a 7800X3D), I just can't agree with you here, but as always time will tell. Yes, how much improvement we see may again depend on the game's engine, but all the DX11 VR flight sims I run will surely benefit from the larger cache, just as they did with the 5800X3D, especially in the 1% lows, which make or break immersion in VR.
Curious what your experience is with the 5800X3D vs. the 7700X in VR?
 
I've been holding out on the 7950X3D for my upgrade. My Pro Tools sessions which are surround mixes for post production utilize all 32 threads of my 5950X and I could even use a bit more juice in that department, and I also do gaming on the same machine. Plus, the 5950X apparently is a bottleneck when paired with a 4090 even at 4K. I also feel like my 5950X's memory controller has not been doing so well with my 64GB 3600MHz CL14 DDR4 kit. So for my use case, it's perfect. I'm also going to have a ton of radiator surface area for it in my custom loop, which should help with the thermal constraints somewhat.
 
I'm also going to have a ton of radiator surface area for it in my custom loop, which should help with the thermal constraints somewhat.
Ryzen 7000 actually isn't thermally constrained: due to its high TDP, it will always try to maximize boost clocks using any headroom left by your cooling capacity. It will boost until it reaches 95°C.

Exception: the Ryzen 7000 non-X parts, because their default TDP is only 65 W and they boost a bit more conservatively compared to their X variants.
 
Ryzen 7000 actually isn't thermally constrained: due to its high TDP, it will always try to maximize boost clocks using any headroom left by your cooling capacity. It will boost until it reaches 95°C.

Exception: the Ryzen 7000 non-X parts, because their default TDP is only 65 W and they boost a bit more conservatively compared to their X variants.
More cooling means it takes more boosting to hit that max temp, no?
 
2023 and we're still judging $700 CPUs on 1080p benchmarks?

If the X3D chips were crushing Raptor Lake, AMD would be yelling it from the rooftops by now.

I'll wait to see the 4k benchmarks, especially the lows, but I have a feeling the entire package may be a bit underwhelming.
This old argument...

1080p performance on yesterday's cards correlates pretty closely with 4k performance in that same software on today's cards.

So it only makes sense to extrapolate that 1080p performance in today's software on today's cards represents how well performance will hold up when more powerful cards come to market.

When I bought my 2950X, it got just about the same performance at 4K as an 8700K... mostly because 4K in the software of the time, using the cards available at the time, was largely GPU-bottlenecked. You'd get the most out of your 1080 Ti with either CPU at 4K. However, once you upgrade to a 3080, suddenly the 8700K is actually a TON faster in those same games... and do you know what information predicted that change in the performance dichotomy between those two CPUs long before the GPU horsepower existed to exhibit it?

1080p benchmarks. Yep.

Because CPU choice doesn't matter much for 4K on TODAY's cards in TODAY's software. But when you suddenly bump up your GPU horsepower, the CPU now matters more in that same software.

I don't understand how people are still confused about this.
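
To put rough numbers on that argument, here's a toy sketch (my own illustration with made-up figures, not data from any review) of the simple "slowest component wins" bottleneck model the extrapolation relies on:

```python
# Toy bottleneck model (hypothetical numbers, purely illustrative):
# the frame rate you actually see is capped by whichever of the CPU or GPU is slower.

def observed_fps(cpu_fps_cap: float, gpu_fps_cap: float) -> float:
    """The displayed FPS is limited by the slower of the two components."""
    return min(cpu_fps_cap, gpu_fps_cap)

cpu_old = 90    # assumed ceiling of an older CPU in some game (what 1080p tests reveal)
cpu_new = 160   # assumed ceiling of a faster CPU in the same game

gpu_today_4k = 70     # today's GPU at 4K: both CPUs look identical (GPU-bound)
gpu_future_4k = 140   # a future, faster GPU at 4K: the CPU gap reappears

for label, gpu_cap in (("today's GPU", gpu_today_4k), ("future GPU", gpu_future_4k)):
    print(f"{label}: {observed_fps(cpu_old, gpu_cap)} fps vs {observed_fps(cpu_new, gpu_cap)} fps")
# today's GPU: 70 fps vs 70 fps   -> the CPUs look the same at 4K
# future GPU:  90 fps vs 140 fps  -> the gap the 1080p test predicted shows up
```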
 
I've been holding out on the 7950X3D for my upgrade. My Pro Tools sessions which are surround mixes for post production utilize all 32 threads of my 5950X and I could even use a bit more juice in that department, and I also do gaming on the same machine. Plus, the 5950X apparently is a bottleneck when paired with a 4090 even at 4K. I also feel like my 5950X's memory controller has not been doing so well with my 64GB 3600MHz CL14 DDR4 kit. So for my use case, it's perfect. I'm also going to have a ton of radiator surface area for it in my custom loop, which should help with the thermal constraints somewhat.
I somewhat feel you there, although the bottleneck is really not noticeable outside benchmarks. At least with the games I play, I'd never know on my 4090 at 4K without staring at the FPS counter and 1% lows the entire game. Everything still runs smooth as butter.
 
ITS ALIVEEEEE

tl;dr - the 7950X3D is doing god's work in gaming. They also simulated a 7800X3D, and it's just as good in gaming but costs $250 less. Also (again, a simulation) it only uses 44 W of power.

Still, since 1440p@100 at medium/high settings is my performance target, I don't see a reason to upgrade from my 7700X. I'll wait until the next X3D or so.

7950x3d (real):
https://www.techpowerup.com/review/amd-ryzen-9-7950x3d/

7800x3d (simulated):
https://www.techpowerup.com/review/ryzen-7800x3d-performance-preview/
 
Wow, that TPU review was beyond disappointing. It's like getting four-month-old performance today. Thank you, AMD.
Need to see more reviews to understand if the performance will increase from here on out.
 
Can someone explain to me wtf is going on with Borderlands 3? I am noticing basically the exact same FPS at 1080p, 1440p, and 4K. How is this possible for this range of processors? The only common variable is the 3080 GPU used. So does it mean the game is CPU-limited to such a huge degree that GPU and resolution don't make a difference?

[Chart: borderlands-3-1920-1080.png]

[Chart: borderlands-3-2560-1440.png]

[Chart: borderlands-3-3840-2160.png]
 

All guesses from skeptics came true:

1. Scheduling is a pain in the franken-CPUs and single CCD is still going to dominate
2. Depending on the game, performance increase from 3D cache on AM5 is generally less than that on AM4 (this means DDR5 is working, not necessarily bad news)
3. 7800X3D likely delayed because it's going to match the more expensive CPUs in gaming
4. 5800X3D still a beast

Now we have to hope the 7800X3D isn't gimped somehow to protect the premium tier.
 
Here is the main point from that review

The intent of the AMD Ryzen 9 7950X3D is a CPU to bridge the gap between wanting to game with the best performance and having 16-cores/32-threads for productivity. Literally, this CPU is a “have your cake and eat it too” CPU. While it certainly fulfills this goal perfectly, we did not experience any advantages in using the Ryzen 9 7950X3D over the Ryzen 9 7950X for gaming. With a GeForce RTX 4090, we experienced no gameplay advantage at 4K, or with Ray Tracing. If you game at a high resolution or have a less powerful graphics card than the most expensive one possible, the advantages are not there.

TL;DR: Not worth the price and software hassle for productivity OR gaming, ESPECIALLY at 4K.
 
Meh, I would get a non-X 7900 and put the rest of the money toward a video card if you're a gamer.
 
I'd say it falls about where I assumed it would. It'll be interesting to see how many outliers there are in the future, like Tomb Raider, where it hits a home run. Conversely, the same applies to games where this configuration suffers. Ditto for long-term support for this configuration.
 
The 7800X3D seems like the best CPU in terms of gaming price/performance, which is of course why AMD delayed its release by 5 weeks...they want people to buy the more expensive 7950X3D. Overall it's not as impressive as the 5800X3D was at launch, but it's still a decent upgrade.

for my needs the 5800X3D would be the better upgrade...I also wouldn't need to upgrade my motherboard and memory
 
Hard pass on these latest X3D CPUs. The non-homogeneous CCD design is simply too complex for thread scheduling to have any semblance of reliability and consistency. In order to get the best gaming performance, half your cores have to be "parked" - but what if you're gaming + streaming?

Even in situations where the 7950X3D is outperforming a 13900K in average FPS, it's oftentimes behind in the 1% and 0.1% lows, meaning Intel would have the more consistent frame times more often than not.
 
Even in situations where the 7950X3D is outperforming a 13900K in average FPS, it's oftentimes behind in the 1% and 0.1% lows, meaning Intel would have the more consistent frame times more often than not.
Where are you getting that info?
 
Hard pass on these latest X3D CPUs. The non-homogeneous CCD design is simply too complex for thread scheduling to have any semblance of reliability and consistency. In order to get the best gaming performance, half your cores have to be "parked" - but what if you're gaming + streaming?

Even in situations where the 7950X3D is outperforming a 13900K in average FPS, it's oftentimes behind in the 1% and 0.1% lows, meaning Intel would have the more consistent frame times more often than not.
I have a different take. I'm actually quite pleased, because I actively do want a 'best of both worlds' CPU. I do use my personal machine for work occasionally (software development), but even when gaming I have an inordinate number of other apps, browser tabs, and videos going on another monitor. That, in conjunction with the efficiency win, makes this a big win for me. Bear in mind I am talking from the perspective of purchasing an entire new system, not a piecemeal upgrade from an existing system. YMMV if you are coming more from a partial upgrade perspective.

In terms of CCD allocation, it appears that Windows and the AMD drivers generally do a pretty good job of getting it right, but for those cases where they don't, AMD has clearly provided a brute-force mechanism to manually direct the application. By default this is exposed via the Windows Xbox Game Bar. Perhaps not super elegant, but at least an available option, and the hooks are obviously there that could be taken advantage of by other software (it appears to just be some kind of whitelisting mechanism, as the community guessed).

Regarding the 'parking'... I think there is some confusion here. As far as I am aware, I don't think you literally have 8 cores disabled and essentially non-functional when gaming. That would be silly. I believe it just means your gaming process is walled off from, and won't access, any of the cores on the other CCD. This seems to be corroborated by the Hardware Unboxed video, where actually physically disabling the non-cache CCD *did* show a small but measurable benefit for some games as well. If the non-cache CCD were already disabled by 'parking', this wouldn't be the case.
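
For anyone who'd rather skip the Game Bar route entirely, here's a minimal sketch of the brute-force approach (my own, not AMD's tooling; the core numbering is an assumption - I'm treating the V-cache CCD as logical CPUs 0-15, so verify your own topology first) that pins a process to one CCD using Python's psutil:

```python
# Hypothetical sketch: manually pin a game's process to the V-cache CCD instead of
# relying on the Game Bar whitelist. The core numbering is an ASSUMPTION - here the
# cache CCD is taken to be logical CPUs 0-15; check your own topology before using.
import psutil

VCACHE_CPUS = list(range(16))  # assumed: CCD0 (with 3D V-Cache) plus its SMT siblings

def pin_to_vcache_ccd(process_name: str) -> None:
    """Restrict every process matching `process_name` to the V-cache CCD's logical CPUs."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == process_name.lower():
            proc.cpu_affinity(VCACHE_CPUS)  # the other CCD stays free for everything else
            print(f"Pinned PID {proc.pid} to CPUs {VCACHE_CPUS}")

# Example usage (hypothetical executable name):
# pin_to_vcache_ccd("game.exe")
```

Roughly the same effect the 'parking' behaviour described above is going for, just applied by hand to a single process.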
 
I've been waiting patiently for the 7800X3D to force me into the AM5 hierarchy. I've been running an 1800X since release and I'm finally ready to upgrade. New PSU and GPU are already in; I just need RAM, a mobo, and the CPU, and then I'll probably jump over to M.2 drives and be good to go for another 6 or 7 years.
 
I have a different take. I'm actually quite pleased, because I actively do want a 'best of both worlds' CPU. I do use my personal machine for work occasionally (software development), but even when gaming I have an inordinate number of other apps, browser tabs, and videos going on another monitor. That, in conjunction with the efficiency win, makes this a big win for me. Bear in mind I am talking from the perspective of purchasing an entire new system, not a piecemeal upgrade from an existing system. YMMV if you are coming more from a partial upgrade perspective.

In terms of CCD allocation, it appears that Windows and the AMD drivers generally do a pretty good job of getting it right, but for those cases where they don't, AMD has clearly provided a brute-force mechanism to manually direct the application. Perhaps not super elegant, but at least an available option.

The problem with this scenario is that the 7950X exists and is considerably cheaper. It's just as efficient, but you need to manually dial that in - AMD just did it out of the box with the newer non-X and 3D versions.
 
I've been waiting patiently for the 7800X3D to force me into the AM5 hierarchy. I've been running an 1800X since release and I'm finally ready to upgrade. New PSU and GPU are already in; I just need RAM, a mobo, and the CPU, and then I'll probably jump over to M.2 drives and be good to go for another 6 or 7 years.
Personally, I would save the money and go buy a 5800X3D.
 
I have a different take. I'm actually quite pleased, because I actively do want a 'best of both worlds' CPU. I do use my personal machine for work occasionally (software development), but even when gaming I have an inordinate number of other apps, browser tabs, and videos going on another monitor. That, in conjunction with the efficiency win, makes this a big win for me. Bear in mind I am talking from the perspective of purchasing an entire new system, not a piecemeal upgrade from an existing system. YMMV if you are coming more from a partial upgrade perspective.

In terms of CCD allocation, it appears that Windows and the AMD drivers generally do a pretty good job of getting it right, but for those cases where they don't, AMD has clearly provided a brute-force mechanism to manually direct the application. By default this is exposed via the Windows Xbox Game Bar. Perhaps not super elegant, but at least an available option, and the hooks are obviously there that could be taken advantage of by other software (it appears to just be some kind of whitelisting mechanism, as the community guessed).

Regarding the 'parking'... I think there is some confusion here. As far as I am aware, I don't think you literally have 8 cores disabled and essentially non-functional when gaming. That would be silly. I believe it just means your gaming process is walled off from, and won't access, any of the cores on the other CCD. This seems to be corroborated by the Hardware Unboxed video, where actually physically disabling the non-cache CCD *did* show a small but measurable benefit for some games as well. If the non-cache CCD were already disabled by 'parking', this wouldn't be the case.
It's not a best-of-both-worlds CPU though. If you enable both CCDs, your gaming performance suffers significantly. If you enable half of them, your productivity/multitasking suffers. That's not the best of anything, that's a compromise.
 
In short, I'll be skipping this generation: after reading some reviews with game benchmarks, my 4090 is barely bottlenecked at 4K. Maybe I'll snag a 14900K or Ryzen 8000 when they come out... lol.

If I get the itch, maybe I'll snag a 5800X3D for fun, but it's barely better than the 5950X at 4K, and these benchmarks don't have the 5950X running with the tweaks I've made to mine, so it would most likely be a side-grade at best.
 
I have a different take. I'm actually quite pleased, because I actively do want a 'best of both worlds' CPU. I do use my personal machine for work occasionally (software development), but even when gaming I have an inordinate number of other apps, browser tabs, and videos going on another monitor. That, in conjunction with the efficiency win, makes this a big win for me. Bear in mind I am talking from the perspective of purchasing an entire new system, not a piecemeal upgrade from an existing system. YMMV if you are coming more from a partial upgrade perspective.

Isn't the whole point of these 3D cache CPUs gaming?... It's not meant to be an all-purpose CPU.
 
for my needs the 5800X3D would be the better upgrade...I also wouldn't need to upgrade my motherboard and memory
Dude, I'm at a steady 200+ fps in World of Warcraft at 1440p@144 with mine. I've never experienced an upgrade like this since I went from onboard graphics to a 256 MB card for Half-Life 1. It's unreal.
 