AMD Ryzen 9 7950X3D CPU Review & Benchmarks: $700 Gaming Flagship

The CPU itself is not a scam. It's meant for people who want no compromise: with one machine, they can do all of their best gaming and all of their multicore work.

The 'scam' is that AMD is making us wait over a month for the 7800X3D.

You can put the same power/TDP limits on the 7950X. MSI has single-click profiles to choose from in their BIOS. I don't know about other brands; you may have to enter the limits manually.
Or you can install Ryzen Master and use "Eco Mode".
But so far everything is indicating that to get any consistent use out of the cache, you must disable the non-cache half of the CPU. So either you're paying more for cache you can barely utilize, or you're paying more for half a CPU you're not using. Unless the 7950X3D can reach higher clocks with one half disabled than the 7800X3D can with the whole CPU enabled, the 7800X3D is essentially the same no-compromise chip without the extra cost. Unless the no-compromise users are willing to restart the PC and enable/disable one half in the BIOS every time they switch between cache and non-cache workloads.
 
I wonder when review sites will move on from 1080p as a standard for benchmarking CPU performance / scaling in games. I know why 1080p is used, it’s just funny to see graphs scaling out to 1000 FPS for some games.

“When looking at the $2,000 AMD 11950x5d in 1080p, you can see the 7322.8 FPS for Far Cry 12 is 11% faster than prior generation.”
 
I wonder when review sites will move on from 1080p as a standard for benchmarking CPU performance / scaling in games. I know why 1080p is used, it’s just funny to see graphs scaling out to 1000 FPS for some games.

“When looking at the $2,000 AMD 11950x5d in 1080p, you can see the 7322.8 FPS for Far Cry 12 is 11% faster than prior generation.”
Minecraft and CS:GO always get me. Isn't Minecraft like an 8-bit game?
 
Actually, I was referring to the 7950X for productivity and productivity+gaming, and the 7700X/13600K for gaming, or the 7800X3D for top gaming performance.

My biggest annoyance with modern CPUs is that if you want the best clocks, you need to buy more cores than you want.

For a gaming machine, 8 cores is plenty, but I'd want the 5.7GHz boost clocks of the 16-core models.

It's really frustrating that they are trying to force more cores on you than you need.

Want higher clocks? You have to buy more cores.

Want more PCIe lanes? You have to buy a different platform which comes with a minimum of 24 cores.

It's just dumb. Give me 8 cores, the highest clocks the architecture can handle, and the PCIe lanes of a Threadripper.

Enough already with the cores! I don't need or want more than 8, but I do want more PCIe lanes and higher clocks!

Throwing cores at the problem wasn't the solution in 2011 when Bulldozer came out, and it still isn't the solution today.

I'd say they just want to sell more silicon, but that doesn't even make sense, in a market where silicon manufacturing fabs are the limiting factor...
 
Yeah, always knew the 7900X3D was going to be odd. 6+6 CCDs that specialize at different tasks for a $600 price tag... no. Same price as the normal 7900X combo I got.
My biggest annoyance with modern CPUs is that if you want the best clocks, you need to buy more cores than you want.
Well, yeah. The higher-binned parts are going to go into the higher-priced products. No way around that. You're not going to get super-binned, lower-core parts unless they made some expensive lower-core CPU... but why? Just go buy the higher-core part and call it a day.
 
Yeah, always knew the 7900X3D was going to be odd. 6+6 CCDs that specialize at different tasks for a $600 price tag... no. Same price as the normal 7900X combo I got.

Well, yeah. The higher-binned parts are going to go into the higher-priced products. No way around that. You're not going to get super-binned, lower-core parts unless they made some expensive lower-core CPU... but why? Just go buy the higher-core part and call it a day.

I guess that's a side effect of shrinking dies.

It used to be that the more cores you had, the lower the clocks, due to the thermal envelope. But now that we are down to 5nm, seemingly the thermal envelope isn't really a limiting factor anymore, and they can just go hog wild with the cores.

Still, it's annoying to have to buy and pay for things you don't need or want to get what you want. The PC market used to be about hyper-customization, building exactly what you need with little else, but on several fronts that has been under assault for 20 years now, and it really drives me up a wall.
 
Minecraft and CS:GO always get me. Isn't Minecraft like an 8-bit game?
Minecraft is pretty much infinitely scalable, and part of the reason I went for a 16-core CPU.
With the right mods it can become the most taxing game out there. I actually get worse framerates in Minecraft than I do in Cyberpunk, because I have my simulation and ray-tracing distances set absolutely ridiculously far, and because of the number of mods I run.
 
Minecraft is pretty much infinitely scalable, and part of the reason I went for a 16-core CPU.
With the right mods it can become the most taxing game out there. I actually get worse framerates in Minecraft than I do in Cyberpunk, because I have my simulation and ray-tracing distances set absolutely ridiculously far, and because of the number of mods I run.
...



We talking about the same Minecraft? I've never actually researched the game. XD
 
what the....

that just screwed with my mind. how can a pixelated game look...smooth?

huh. that's so weird. alright, carry on. :D
 
Yep. Though it doesn't quite look like that anymore.



Ah, thanks for that.

I was a little puzzled as well. My only run-in with Minecraft was when my stepson used to play it, but he pretty much abandoned Minecraft for Roblox in ~2016 when he was 8, and then migrated from that to Fortnite when it launched in 2017, so I haven't seen it in a while.

I remembered it running fairly OK on an old parts bin 768MB GTX460 I stuck in his first build, before I upgraded him to my old 6GB Titan (years after it was something to brag about). Now he has a more modern system, but he seems to barely play games anymore.
 
Unless the no-compromise users are willing to restart the PC and enable/disable one half in the BIOS every time they switch between cache and non-cache workloads.
This is not the case like it was with first-gen Threadripper; parking cores doesn't disable them until a reboot.

Edit: I read this elsewhere and I can't find that article again, but this one says "Technically, the PPM driver can park either CCD. But in practice it's going to virtually always be the vanilla CCD, pushing games on to the V-Cache CCD. Though should a game ask for more CPU cores (and technically, threads) than a single CCD can provide, then the PPM will allow the other CCD to go active. Parking the CCD doesn't prevent its use – so all 16 CPU cores are available – it just discourages using more than 1 CCD (8 cores) when at all possible."
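The quoted preference is easy to picture as a toy policy (purely illustrative; the actual PPM driver logic isn't public):

```python
def active_ccds(requested_threads: int, ccd_cores: int = 8) -> list:
    """Toy model of the parking preference described in the quote:
    keep work on the V-Cache CCD (CCD 0) and only wake the vanilla
    CCD (CCD 1) when a workload wants more threads than one CCD has."""
    if requested_threads <= ccd_cores:
        return [0]      # vanilla CCD stays parked, but still available
    return [0, 1]       # parking is a preference, not a hard limit
```

So a 6-thread game stays on the cache CCD, while a 16-thread render wakes both.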
 
Well, yeah. The higher-binned parts are going to go into the higher-priced products. No way around that. You're not going to get super-binned, lower-core parts unless they made some expensive lower-core CPU... but why? Just go buy the higher-core part and call it a day.
The frustrating thing about this--and Intel does it too--is that I'd pay extra for something like a 7800X that ran at the speeds of a 7950X. 16 cores are more than I need, but the extra speed would be great.
 
Great to hear its working well for an AM5 build for you!



Huh... I wonder if there are any more rigorous comparisons beyond Amazon reviews? If it's really that big a step over the Arctic LF2, I'd be curious as to why. Isn't the EK Nucleus one of the Asetek variants?
No. EK has their own unique pump designs.
Actually, I was referring to the 7950X for productivity and productivity+gaming, and the 7700X/13600K for gaming, or the 7800X3D for top gaming performance.
The 7950X3D is only a little bit worse than a 7950X in all-core workloads; it's only about 2,000 points in Cinebench, which doesn't translate to a really meaningful difference at the performance levels we are talking about.
But so far everything is indicating that to get any consistent use out of the cache, you must disable the non-cache half of the CPU. So either you're paying more for cache you can barely utilize, or you're paying more for half a CPU you're not using. Unless the 7950X3D can reach higher clocks with one half disabled than the 7800X3D can with the whole CPU enabled, the 7800X3D is essentially the same no-compromise chip without the extra cost. Unless the no-compromise users are willing to restart the PC and enable/disable one half in the BIOS every time they switch between cache and non-cache workloads.
I guess you don't know what core parking is.

It doesn't disable half the CPU. It simply puts that half in an idle (but ready) state. You can, at any time (for example), open up OBS and encode a stream of your game on the other "parked" half of the CPU. At which point it will immediately and automatically unpark, and run your stream.

There are likely some kinks to work out. And I have to think that medium term, they will utilize a totally different method from the current one, where they rely on the Xbox Game Bar to tell them a game is running. We'll see.

At worst, you can use Process Lasso to manage it manually yourself.
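Process Lasso is the usual Windows tool for this; the same idea on Linux is a few lines with `os.sched_setaffinity`. A minimal sketch, assuming logical CPUs 0-7 plus their SMT siblings 16-23 sit on the V-Cache CCD (that mapping is an assumption and varies; check `lscpu` first):

```python
import os

# Assumed V-Cache CCD mapping for a 7950X3D under Linux: physical cores
# 0-7 and their SMT siblings 16-23. Verify on your own system first.
VCACHE_CORES = {0, 1, 2, 3, 4, 5, 6, 7, 16, 17, 18, 19, 20, 21, 22, 23}

def pin_to_vcache(pid: int = 0) -> set:
    """Restrict `pid` (0 = the current process) to the V-Cache CCD,
    intersected with whatever CPUs are actually online/allowed."""
    allowed = VCACHE_CORES & os.sched_getaffinity(pid)
    if allowed:                      # never pin to an empty set
        os.sched_setaffinity(pid, allowed)
    return os.sched_getaffinity(pid)
```

On Windows, setting `ProcessorAffinity` on a `System.Diagnostics.Process` object from PowerShell does roughly the same thing, which is essentially what Process Lasso automates.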
 
The 7950X3D is only a little bit worse than a 7950X in all-core workloads; it's only about 2,000 points in Cinebench, which doesn't translate to a really meaningful difference at the performance levels we are talking about.
It is a bit bizarre according to Tom's Hardware:
https://cdn.mos.cms.futurecdn.net/TqjnVZu5V2FRitdieKpQ2M-970-80.png.webp

It can be quite significant (too much not to be an issue with the test, maybe?): the LLVM compile takes 20% more time on the 7950X3D versus the regular one (525 seconds vs 435); it is closer to the 12-core 7900X than the 7950X.

And according to Igor's Lab it would translate into an actual meaningful rendering speed difference, with some Blender scenes taking 25% more time:
https://www.igorslab.de/amd-ryzen-9...den-core-i9-13900ks-im-alder-lake-versenkt/9/

While it does extremely well (97% of the regular 7950X's performance) in the Phoronix review (using way less power doing so):
https://www.phoronix.com/review/amd-ryzen9-7950x3d-linux/13
From the nearly 400 benchmarks, when taking the geo mean the 7950X3D was at 97% the performance of the Ryzen 9 7950X while on average being at 60% the power consumption rate
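That 97% figure is a geometric mean of per-benchmark performance ratios, which is the standard way to average ratios across a big suite. A quick sketch, with made-up ratios:

```python
import math

def geo_mean(ratios):
    """Geometric mean of per-benchmark performance ratios."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical 7950X3D / 7950X ratios for four benchmarks:
print(round(geo_mean([1.00, 0.95, 0.80, 1.15]), 3))  # prints 0.967
```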
 
It is a bit bizarre according to Tom's Hardware:
https://cdn.mos.cms.futurecdn.net/TqjnVZu5V2FRitdieKpQ2M-970-80.png.webp

It can be quite significant (too much not to be an issue with the test, maybe?): the LLVM compile takes 20% more time on the 7950X3D versus the regular one (525 seconds vs 435); it is closer to the 12-core 7900X than the 7950X.

And according to Igor's Lab it would translate into an actual meaningful difference, with some Blender scenes taking 25% more time:
https://www.igorslab.de/amd-ryzen-9...den-core-i9-13900ks-im-alder-lake-versenkt/9/

While it does extremely well (97% of the regular 7950X's performance) in the Phoronix review (using way less power doing so):
https://www.phoronix.com/review/amd-ryzen9-7950x3d-linux/13
From the nearly 400 benchmarks, when taking the geo mean the 7950X3D was at 97% the performance of the Ryzen 9 7950X while on average being at 60% the power consumption rate
Maybe there are some outlier situations where it has issues with scheduling threads or something? I guess we would need to see a deeper look at these cases.

TechPowerUp runs a large suite of code compiling, rendering, app performance, etc. and didn't find large differences.
 
The 7950X3D is only a little bit worse than a 7950X in all-core workloads; it's only about 2,000 points in Cinebench, which doesn't translate to a really meaningful difference at the performance levels we are talking about.
Yes, but it costs almost 20% more. What is its market compared to a 7950X? People who care about top productivity performance while being willing to accept a slight reduction for a higher price, and who at the same time care about top gaming performance but are not GPU-bound in most games.
 
Yes, but it costs almost 20% more. What is its market compared to a 7950X? People who care about top productivity performance while being willing to accept a slight reduction for a higher price, and who at the same time care about top gaming performance but are not GPU-bound in most games.
I'm not sure what you are trying to figure out here?

The 7950X3D is meant to be a no-compromise solution. People who had a 5900X or 5950X for multicore work no longer have to decide between an 8-core gaming super-CPU (5800X3D) and a multicore work CPU. You can have it all in one system.
 
Yes, but it costs almost 20% more. What is its market compared to a 7950X? People who care about top productivity performance while being willing to accept a slight reduction for a higher price, and who at the same time care about top gaming performance but are not GPU-bound in most games.
Yes. This, precisely this.
Why would I build two computers when I can just build one computer that does all of it? I literally use my Do-It-All computer for everything, as gaming and productivity use 99% of the same resources, and I'll take a 5% hit to my rendering performance for a 25% gain to my gaming performance, which isn't exactly a slouch with the non-3D Ryzen parts.
I do stuff on my computer regularly that uses more than 16 threads, so going down to an 8 core part would be a gigantic hit for me both gaming and productivity.
GPUs are used for more than just gaming, but it's rare that I need to do GPU rendering and gaming work simultaneously, so why have two, especially given the cost of them at the moment?

The 7950X3D is *exactly* the CPU I've been dreaming of. However, my 5950X is more than adequate for what I do (game at 1440p or 4K, render and transcode at 4K), so the cost of upgrading isn't there this generation for me. However, if I hadn't upgraded to a 5950X and 3080 last year, I'd be budgeting for a 7950X.

You know what's more expensive than $200? An entire second high-specced computer. Hell, just a mid-level GPU costs way more than $200 these days.

*Note: productivity here is all hobby rendering, transcoding, video editing, etc. It's not a job; I'm not getting reimbursed or paid for any of it.
 
Yes. This, precisely this.
Why would I build two computers when I can just build one computer that does all of it? I literally use my Do-It-All computer for everything, as gaming and productivity use 99% of the same resources, and I'll take a 5% hit to my rendering performance for a 25% gain to my gaming performance, which isn't exactly a slouch with the non-3D Ryzen parts.
You mean 4% gain in gaming at 4K, or 12% at 1440p, and that only if you have a 4090.

I'm not sure what you are trying to figure out here?

The 7950X3D is meant to be a no-compromise solution. People who had a 5900X or 5950X for multicore work no longer have to decide between an 8-core gaming super-CPU (5800X3D) and a multicore work CPU. You can have it all in one system.
You could already have it with a 7950X or a 13900K for cheaper.
 
That is when running Windows, right? Not Linux. I wonder whether Microsoft's scheduler screws up in this case.
Tom's Hardware seems to have been Windows 11 only, yes, with Secure Boot, virtualization support, and fTPM/PTT disabled; same for Igor's Lab.
 
I think the point of testing at 1080p, aside from utilizing those 480Hz panels, is simply to extrapolate what might happen with a GPU upgrade down the road. Sure, there's merit to testing 4K too, as it shows how low a CPU you can go at present without being CPU-bottlenecked at that resolution, but 1080p is clearly the way to go if you want to compare CPU performance.

Same goes for your 12% gain at 1440p thing. Sure, that might be the case at present, but that gap will widen when the 5090 or 8900 XTX or whatever comes next arrives.

Looking at the reviews all around, it seems that the 7950X3D helps people who actually need CPU performance for gaming. They should really have had a slide with some sim games where the 13900K is destroyed (as is the case with most of them), calling it the fastest CPU for games that actually need a fast CPU.
MSFS shows gains where the 7950X3D is anywhere between 10-30% faster in game. And no, it's not 600 fps versus 700, but rather 140 versus 120. I can see why some would want a CPU upgrade for that.
 
Only with factual data.
It seems like you are trying to make the value argument, which isn't the point here.
The 7950X is a very good gaming CPU, fact. It is strictly worse, fact. And it is sometimes a lot worse. The 5950X was in the same spot. Worse doesn't mean bad. However, it's not the best, second best, or third best. The 7950X3D gives you more or less no compromise.

Compared to the 7950X: the 7950X3D is no worse in gaming, is often better, and is sometimes a lot better. In A Plague Tale: Requiem, the 7950X3D has lows as good as the average FPS of the 7950X and 7700X. Its average FPS was then 30 fps better than the 7950X and 21 fps better than the 7700X in Hardware Unboxed's testing. With only the X3D CCD active, its average FPS was 27 fps better than the 7700X. Those numbers could get even better with a future, faster video card, or an improved driver from Nvidia that isn't as CPU intensive.

It's a top gaming CPU with some unique benefits in specific games. And as far as I have seen, it's only a little worse in multicore work.

Tom's Hardware seems to have produced some strangely low productivity numbers in a test or two, and I hope they give that a proper investigation. Until then, I would have to say it may be an issue with core scheduling, because that kind of difference otherwise doesn't make any sense with this hardware. And that issue could probably be worked around for now by manually scheduling for that app.
 
Actually, I was referring to the 7950X for productivity and productivity+gaming, and the 7700X/13600K for gaming, or the 7800X3D for top gaming performance.
So then you are saying it is not mostly useless? It's top 5 in all of those categories.

You mean 4% gain in gaming at 4K, or 12% at 1440p, and that only if you have a 4090.


You could already have it with a 7950X or a 13900K for cheaper.

The 7950X doesn't really compare that closely, so I guess my option would be the 13900K. But then I am stuck with a dead platform. Do you see why the 7950X3D is not "useless"?
 
Switched to prefer cache in the BIOS and Fortnite is a lot smoother now on a consistent basis. Just going to "cheat" this way until further revisions of the core parking software.
 
Switched to prefer cache in the BIOS and Fortnite is a lot smoother now on a consistent basis. Just going to "cheat" this way until further revisions of the core parking software.
Could also use Process Lasso to keep it on the V-Cache CCD.
 
I'm very surprised at this point how the efficiency numbers of the 7X3D parts are simply "not a factor" in media reviews, given the parlay of absolute performance, especially against the Intel 13th-gen power/performance scheme.

I guess the consumer base is indeed that self-servingly fickle and hypocritical.
 
I'm very surprised at this point how the efficiency numbers of the 7X3D parts are simply "not a factor" in media reviews, given the parlay of absolute performance
It is surprising, but in most reviews I would imagine it is absolutely a factor.

https://www.tomshardware.com/reviews/amd-ryzen-9-7950x3d-cpu-review
  • +Low power consumption, excellent efficiency

https://www.techradar.com/reviews/amd-ryzen-9-7950x3d
  • +Outstanding performance-per-watt

https://www.pcmag.com/reviews/amd-ryzen-9-7950x3d
Energy efficient

https://www.pcgamer.com/amd-ryzen-9-7950x3d-review-benchmarks/
  • Remarkable efficiency

Are there many reviews that do not point out that CPU efficiency?

https://www.phoronix.com/review/amd-ryzen9-7950x3d-linux/15
From the nearly 400 benchmarks, when taking the geo mean the 7950X3D was at 97% the performance of the Ryzen 9 7950X while on average being at 60% the power consumption rate. The Ryzen 9 7950X3D in these non-gaming workloads was 11% faster than the Intel Core i9 13900K and at around 60% the power.

Overall the AMD Ryzen 9 7950X3D performed very well on Linux for gaming and workloads where the large L3 cache via AMD 3D V-Cache really paid off. But with the lower TDP than the Ryzen 9 7950X and possibly some issues around optimal task placement between CCDs, for some workloads the Ryzen 9 7950X3D was less impressive. Again, see the nearly 400 benchmark results in full to help evaluate whether the 7950X or 7950X3D is most practical for your needs. If power efficiency is a driving factor, the Ryzen 9 7950X3D easily stands as the champion and also as a compelling processor for Linux gamers.


TechSpot:
When it comes to productivity, the issue with the 13900K is power consumption, and while you can enforce power limits, that will reduce performance, extending the 7950X's lead further. So where the 13900K/KS pushed total system consumption to just shy of 500 watts, the 7950X3D ran comfortably under 300 watts, using less power than even the 13600K which is super impressive.
 
It is surprising, but in most reviews I would imagine it is absolutely a factor.

https://www.tomshardware.com/reviews/amd-ryzen-9-7950x3d-cpu-review
  • +Low power consumption, excellent efficiency

https://www.techradar.com/reviews/amd-ryzen-9-7950x3d
  • +Outstanding performance-per-watt

https://www.pcmag.com/reviews/amd-ryzen-9-7950x3d
Energy efficient

https://www.pcgamer.com/amd-ryzen-9-7950x3d-review-benchmarks/
  • Remarkable efficiency

Are there many reviews that do not point out that CPU efficiency?

https://www.phoronix.com/review/amd-ryzen9-7950x3d-linux/15
From the nearly 400 benchmarks, when taking the geo mean the 7950X3D was at 97% the performance of the Ryzen 9 7950X while on average being at 60% the power consumption rate. The Ryzen 9 7950X3D in these non-gaming workloads was 11% faster than the Intel Core i9 13900K and at around 60% the power.

Overall the AMD Ryzen 9 7950X3D performed very well on Linux for gaming and workloads where the large L3 cache via AMD 3D V-Cache really paid off. But with the lower TDP than the Ryzen 9 7950X and possibly some issues around optimal task placement between CCDs, for some workloads the Ryzen 9 7950X3D was less impressive. Again, see the nearly 400 benchmark results in full to help evaluate whether the 7950X or 7950X3D is most practical for your needs. If power efficiency is a driving factor, the Ryzen 9 7950X3D easily stands as the champion and also as a compelling processor for Linux gamers.


TechSpot:
When it comes to productivity, the issue with the 13900K is power consumption, and while you can enforce power limits, that will reduce performance, extending the 7950X's lead further. So where the 13900K/KS pushed total system consumption to just shy of 500 watts, the 7950X3D ran comfortably under 300 watts, using less power than even the 13600K which is super impressive.

Sure, it's pointed out - that much is absolute - but is it a driving point in the general scheme of media reviews? In my opinion and feels, no.

All I see is... "meh". Versus the 7xxx non-3D parts and Intel 13xxx stuff.


I'll also admit, perhaps too much GN influence. They're definitely on the fence, and perhaps too much kool-aid bandwagon sippin' on my side.
 
All I see is... "meh". Versus the 7xxx non-3D parts and Intel 13xxx stuff.
I think some of the biggest "meh" is versus the 5800X3D and the upcoming 7800X3D; the crowd willing to pay a good amount over a 13900K or 7950X, a 7800X3D, or (if you are on AM4) the 5800X3D could be legitimately small.
 
Sure, it's pointed out - that much is absolute - but is it a driving point in the general scheme of media reviews? In my opinion and feels, no.

All I see is... "meh". Versus the 7xxx non-3D parts and Intel 13xxx stuff.


I'll also admit, perhaps too much GN influence. They're definitely on the fence, and perhaps too much kool-aid bandwagon sippin' on my side.
It's a flagship part - no one cares about efficiency at that level.
 
It's a flagship part - no one cares about efficiency at that level.
When it is by that much... that is still probably why we see it compared against the 13900K more than the KS version; even if the KS is much closer in price, it is just too much juice.

Noise, cooling budget, and simplicity are still nice pluses, I would imagine. People ready to spend good money will have a nice cooling and case solution that keeps noise reasonable, so they care less, but when it is something like a 2:1 ratio it starts to be a significant plus.
 
When it is by that much... that is still probably why we see it compared against the 13900K more than the KS version; even if the KS is much closer in price, it is just too much juice.

Noise, cooling budget, and simplicity are still nice pluses, I would imagine. People ready to spend good money will have a nice cooling and case solution that keeps noise reasonable, so they care less, but when it is something like a 2:1 ratio it starts to be a significant plus.
All that - OK cool. But - AM5 and 3 years of upgrades - that’s what I see.
 