5800X3D gaming review

Imagine a 5950X3D....
They would have to bin the living crap out of it though..
There wouldn't really be any gains. The cache in a 5950X is split between CCXs, and it's likely they would also have to do two separate cache stacks, one for each CCX.
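For anyone who wants to see that split for themselves, here's a rough sketch that groups logical CPUs by the L3 slice they share, using the Linux sysfs cache topology (a 5950X should show two slices, a 5800X one; paths can vary by kernel, so treat it as illustrative):

```python
# Sketch: group logical CPUs by the L3 slice they share, via Linux sysfs.
# On a 5950X you'd expect two groups (one per CCX/CCD); on a 5800X, one.
# Paths and index numbers can vary by kernel/CPU, so treat this as illustrative.
import glob
import os
from collections import defaultdict

l3_slices = defaultdict(set)
for cache_dir in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cache/index*"):
    try:
        with open(os.path.join(cache_dir, "level")) as f:
            if f.read().strip() != "3":
                continue  # only interested in L3
        with open(os.path.join(cache_dir, "shared_cpu_list")) as f:
            shared = f.read().strip()
        with open(os.path.join(cache_dir, "size")) as f:
            size = f.read().strip()
    except FileNotFoundError:
        continue
    l3_slices[(shared, size)].add(cache_dir)

for (shared, size) in sorted(l3_slices):
    print(f"L3 slice of {size} shared by CPUs {shared}")
```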
 
Idk, I don't get the problem some have with this CPU. Wouldn't be the first time a company really dialed in a CPU for a specific workload. If it happens to be the best at gaming, so be it.
It's not often that there is a bad product from those types of companies, only a bad price. With the 5800X at $339, the 5900X at $394, and the 5950X at $536, the $450 5800X3D is not that interesting to some.

At $390, all of a sudden no one would have anything bad to say, I imagine.

Could be more to it too, but pricing is often what shifts perception.
 
It's not often that there is a bad product from those types of companies, only a bad price. With the 5800X at $339, the 5900X at $394, and the 5950X at $536, the $450 5800X3D is not that interesting to some.

At $390, all of a sudden no one would have anything bad to say, I imagine.

Could be more to it too, but pricing is often what shifts perception.
Heh. Where's all the people who will pay $500 more for 2% more performance in games if it's a video card?
 
It's not often that there is a bad product from those types of companies, only a bad price. With the 5800X at $339, the 5900X at $394, and the 5950X at $536, the $450 5800X3D is not that interesting to some.

At $390, all of a sudden no one would have anything bad to say, I imagine.

Could be more to it too, but pricing is often what shifts perception.
Yeah, I mean I get that. It's pricey. But if it's "the best", then I know certain people will pay a premium for it.
 
Heh. Where's all the people who will pay $500 more for 2% more performance in games if it's a video card?

Hello, 911. There's been a murder.
 
Milan-X proves the increased cache even on multiple CCXs pays dividends. Just not with gaming. Random I/O workloads like virtualization will see it.
 
Milan-X proves the increased cache even on multiple CCXs pays dividends. Just not with gaming. Random I/O workloads like virtualization will see it.
The larger cache in a server with many VMs will see big improvements; I'm holding off on an upgrade specifically for them.
 
Ya know, the extra e cores of the 12700k will pay dividends as well...

Sure, but different ones. They didn't help on this test--which, remember, used the i9, which has 4 more e cores than the i7. But this particular CPU was always aimed at gamers.
 
But these are power consumption numbers for Prime95; where's the chart comparing power consumption when gaming?
Don't have one. They have SuperPi, Prime95, and Cinebench. Oh, and system idle power, which is just a bit lower for the X3D.
Computerbase did measure 12900KS power consumption during gaming, and it was quite high (no 5800X3D numbers yet as they stick to NDA).
While the 12900K consumed around 100W during gaming, the 12900KS often consumes 140W and more.
https://www.computerbase.de/2022-04...est/3/#abschnitt_leistungsaufnahme_in_spielen
 
The 12700k is just as efficient as the 5800x3d.
He's not saying it's not efficient, he's saying the "efficiency" cores are only good for pulling heat away from the p-cores. Part tongue in cheek, I'm sure, but they do do that. They also take load from lower priority processes off of the p-cores, allowing them to sleep more, improving efficiency...when they're being used as intended, anyway. They don't directly help framerate or frametime, but can help indirectly when there are background processes that would otherwise get in the way.
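If you'd rather not leave that to the scheduler, you can force it by hand. A minimal sketch, assuming Linux, psutil, and a 12700K-style layout where the four e-cores are logical CPUs 16-19 (verify with lscpu first; the process name here is made up):

```python
# Minimal sketch (Linux): pin a low-priority background process to the e-cores
# so the p-cores can stay asleep. Assumes a 12700K-style layout where the four
# e-cores are logical CPUs 16-19 (p-cores with HT occupy 0-15); check your own
# topology before using real IDs. The process name is hypothetical.
import psutil

E_CORES = [16, 17, 18, 19]  # assumed e-core logical CPU IDs

def pin_to_ecores(pid: int) -> None:
    proc = psutil.Process(pid)
    proc.cpu_affinity(E_CORES)  # restrict scheduling to the e-cores
    proc.nice(10)               # and lower its priority a bit

if __name__ == "__main__":
    for p in psutil.process_iter(["name"]):
        if p.info["name"] == "backup_agent":  # hypothetical background task
            pin_to_ecores(p.pid)
```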
 
First review for the 5800x3D out. A big meh from me.


Yeah, unless it's first-rev firmware issues, it's underwhelming. I was going to upgrade from my 5800X for the heck of it, but it looks like it wouldn't be worth the effort. If you're on Ryzen 3000, though, it seems like a wicked upgrade.
 
If it were at the $400 point, and widely available, yes. However, I think this thing is going to be impossible to get for the short run of its production life until AM5, and the 5900X is dropping further below $400 every day.
 
Yeah, I don't see the point. I was thinking of downgrading to the 5800X3D from my 5950X, but it seems pointless.
 
Seems nice if you have an older CPU and are looking for something specifically targeted at 1080p gaming. Yet if you aren't, I don't really see the point. Especially if you're gaming at higher resolutions, which more and more people are. I get why the reviews are the way they are (they have to compare one CPU vs. another), but they have to jump through so many hoops to do it. What I always take from CPU reviews for gaming is that the CPU rarely matters. I don't care about 800fps vs. 725fps at 1080p in some game that runs on a potato. In 4K with a target of 60+ fps in modern games, it's always just a pile of graphs showing that the CPU barely matters if at all.
 
Heh. Where's all the people who will pay $500 more for 2% more performance in games if it's a video card?
Where's the video card that is a $500 upgrade but offers only 2% more performance? In EVGA's lineup, for example, $510 takes you from a $489 3060 Ti FTW3 to a $999 3080 FTW3. That's a 50% jump in performance using real games at real resolutions. Maybe the gap is smaller when using 32x32 pixel AMD benchmarks.
 
He's not saying it's not efficient, he's saying the "efficiency" cores are only good for pulling heat away from the p-cores. Part tongue in cheek, I'm sure
Der8auer did a video a while back, playing with just the e cores, and said they're actually pretty impressive for what they are:
 
Where's the video card that is a $500 upgrade but offers only 2% more performance? In EVGA's lineup, for example, $510 takes you from a $489 3060 Ti FTW3 to a $999 3080 FTW3. That's a 50% jump in performance using real games at real resolutions. Maybe the gap is smaller when using 32x32 pixel AMD benchmarks.

He's probably referring to the 3090 and 3080Ti "real world" prices vs. MSRP. Some of those price jumps were huge for minimal gains. Things are normalizing, but a lot of that had to do with simply buying whatever you could get your hands on.
 
LTT's review is out.
Spoilers:
As expected, real world performance does not align with AMD's benchmarks except that it is much faster than AMD's previous best. And this second place gaming effort comes with a substantial cost to everything other than gaming.
 
Where's the video card that is a $500 upgrade but offers only 2% more performance? In EVGA's lineup, for example, $510 takes you from a $489 3060 Ti FTW3 to a $999 3080 FTW3. That's a 50% jump in performance using real games at real resolutions. Maybe the gap is smaller when using 32x32 pixel AMD benchmarks.
https://www.tomshardware.com/news/nvidia-geforce-rtx-3080-ti-review

3080ti 20% better than 3080 for $500 more, 3-5% less than 3090 (which is ~$200 more), prices at time of review. Anyway, they're all more than I'm willing to pay anymore, both AMD and nvidia.
 
He's probably referring to the 3090 and 3080Ti "real world" prices vs. MSRP. Some of those price jumps were huge for minimal gains. Things are normalizing, but a lot of that had to do with simply buying whatever you could get your hands on.
Anyone not buying at MSRP today is a sucker. The shortage is over. EVGA was showing both cards I mentioned as in stock when I posted. There is so much inventory available that the 3060 Ti is even on sale at a discount direct from EVGA.
 
He's probably referring to the 3090 and 3080Ti "real world" prices vs. MSRP.
Well, I kinda just made up numbers. But there are certainly people here who will buy the top card because it's the fastest one out there, without any regard for any kind of price-to-performance ratio (and that's fine), so for anyone like that to complain about the price-to-performance of the 5800X3D would be silly, assuming there was anyone who met all those criteria.
 
LTT's review is out.
Spoilers:
As expected, real world performance does not align with AMD's benchmarks except that it is much faster than AMD's previous best. And this second place gaming effort comes with a substantial cost to everything other than gaming.

They couldn't even 'shop an "over 9000" reference? Weaksauce.
 
Well, I kinda just made up numbers. But there are certainly people here who will buy the top card because it's the fastest one out there, without any regard for any kind of price-to-performance ratio (and that's fine), so for anyone like that to complain about the price-to-performance of the 5800X3D would be silly, assuming there was anyone who met all those criteria.
Think about it this way:
The $450 5800X3D offers a performance boost only at 720p and below. The performance jump there is also only if the user already has a high end GPU. Anyone with a high end GPU isn't playing at 720p. Thus, for someone with a high end GPU, the 5800X3D offers exactly 0% improvement and might even net out to lower performance compared to what the user already has. This is terrible price to performance.

Additionally, anyone gaming at 720p by choice is running Pascal or older. The $450 5800X3D will get those users a few extra fps at 720p but will do nothing at 1080p and up. Bad price to performance. By contrast, the $489 3060 Ti FTW3 will take their 720p fps and quadruple it while running double the resolution at ultra settings. This is excellent price to performance.

So, which users does this make sense for? Someone who already has a high end GPU? Nope. Someone who currently has a low end GPU? Also no. A retrogamer playing on a CRT? There's our winner.
 
The $450 5800X3D offers a performance boost only at 720p and below.
Ah, no. "The AMD Ryzen 7 5800X3D primarily focuses its marketing on gaming advancements, which is because the additional L3 Cache layer on the X3D will mostly be visible in gaming scenarios. Impact to certain production applications, like rendering in Cycles (Blender) or Adobe Premiere, will be limited more by frequency and core count than by cache. That said, gaming often benefits from extra cache, and we see that here. The R7 5800X3D puts the brand new Intel i9-12900KS to shame for value, and although the 5800X3D can’t be overclocked, it also doesn’t really need it. Memory tuning is still available, as is Infinity Fabric tuning, and that’s more important for AMD anyway."



It trades blows with the 12900KS at 1080p as well--sometimes ahead, sometimes behind. It does a bit better against the 12900K. It's not as good at non-gaming tasks, but it was never marketed that way, either. Lest anyone think Steve's shilling for AMD, he savages some of their new chips (the 4500, for example) as being a waste of sand.
 
It really only makes sense for those with older AM4 CPUs.

The 12700K is currently $300 at MC. The rest of the platform is more expensive, but this is the last hurrah for AM4, so you're buying into a dead platform if you build AM4. If you're building a new system, you might as well go with Alder Lake or wait for AM5.
 
Wow, after seeing the HUB and Gamers Nexus reviews, the chip really is great for gaming. There are some impressive results. I was shocked at how well it did vs. a 12900K/KS with DDR5-6400 memory.

Looking forward to seeing the next-gen CPUs :)
 
Yeah, I'm thinking AM5 parts will spank. A 6900X or whatever they call it, with this much (or more) available L3 cache, will be incredible.
 
Glancing at the various video reviews and most print reviews, everyone keeps showing 1080p.
At least Guru3D shows some numbers for higher resolutions. Not sure their game choices are the best, but at least it's something.
https://www.guru3d.com/articles_pages/amd_ryzen_7_5800x3d_review,24.html
Considering you will be more GPU bound at 1440p/4K, it would not show how much more performance a CPU offers, since all the CPUs would get right around the same framerate.

That's what a GPU bottleneck is.
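A toy model makes it concrete: the frame rate is capped by whichever of the CPU or GPU takes longer per frame, so once GPU time dominates, every CPU lands on the same number. The frame times below are invented purely for illustration:

```python
# Toy model: frame rate is limited by whichever of the CPU or GPU takes longer
# per frame. All frame times below are made up purely to illustrate the point.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

cpus = {"faster CPU": 4.0, "slower CPU": 6.0}   # hypothetical ms of CPU work per frame
gpu_load = {"720p": 3.0, "4K": 14.0}            # hypothetical ms of GPU work per frame

for res, gpu_ms in gpu_load.items():
    for name, cpu_ms in cpus.items():
        print(f"{res:>4} + {name}: {fps(cpu_ms, gpu_ms):6.1f} fps")
# At 720p the two CPUs separate (250 vs ~167 fps); at 4K both land on ~71 fps.
```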
 
I'm more interested in how this thing does with code compilation, and in garbage-collected languages at runtime. I don't expect a big jump, but I haven't seen any load-bearing numbers yet.
 
He's not saying it's not efficient, he's saying the "efficiency" cores are only good for pulling heat away from the p-cores. Part tongue in cheek, I'm sure, but they do do that. They also take load from lower priority processes off of the p-cores, allowing them to sleep more, improving efficiency...when they're being used as intended, anyway. They don't directly help framerate or frametime, but can help indirectly when there are background processes that would otherwise get in the way.
Basically this! There are some specific things you can assign them, but that requires the direct intervention of the developers, which will take a few years to work in. Hopefully AMD gets their own big.LITTLE designs into the field, but that's not terribly likely, and depending on what they choose to do for their future designs it may not be needed as much. AMD doesn't have nearly the same presence in the OEM space as Intel, so the EU and California regulations don't impact them to nearly the same degree, and in the laptop space their chips are already efficient enough for now, at least.
 
I'm more interested in how this thing does with code compilation, and in garbage-collected languages at runtime. I don't expect a big jump, but I haven't seen any load-bearing numbers yet.
Would be hard to go faster than the cheaper 5900 if your compilation workload is quite parallelized (not sure if these count as load-bearing):

[attached charts: Code.png, compiler.png]
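If anyone wants to check that scaling on their own codebase, here's a quick-and-dirty sketch; it assumes a make-based project and just times a clean parallel build at a few job counts:

```python
# Quick-and-dirty compile-scaling check: time a clean parallel build at a few
# job counts. Assumes a make-based project in the current directory; swap in
# your own build commands (ninja, cargo, msbuild, ...) as needed.
import subprocess
import time

def timed_build(jobs: int) -> float:
    subprocess.run(["make", "clean"], check=True, capture_output=True)
    start = time.perf_counter()
    subprocess.run(["make", f"-j{jobs}"], check=True, capture_output=True)
    return time.perf_counter() - start

for jobs in (1, 4, 8, 16):
    print(f"make -j{jobs}: {timed_build(jobs):.1f} s")
```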
 