Ryzen 7 5800X3D Beats Core i9-12900KS By 16% In Shadow of the Tomb Raider

That makes perfect sense, but I also want to see real world performance gains. Yes it might be 15% faster when CPU bound, but how does that translate to actual game performance? It is a useless thing to measure because no one actually games at those resolutions with settings turned down. I don't mind including it, but they should always include 1080/1440/1440 ultra wide and 4K at max settings. Of course CPUs tend not to make much of a difference there, but it would be nice to see how much or how little the gain would be from an older CPU to inform a purchasing decision.
Exactly, and with this particular part (5800X3D), I'd venture to say the majority of purchases are people upgrading that are already on AM4. I don't imagine many buying an AM4 board right now to use this thing on. They'll either go Intel DDR5, or wait for AM5 w/ DDR5 if buying new.
 
It makes perfect sense to lower resolution and graphics settings as much as possible in order to isolate the CPU performance.
I disagree. It makes perfect sense to lower the graphics settings only to the point where the GPU is not creating a bottleneck. On an old title with a 3090, that's quite possibly as high as 1440p ultra although 1080p ultra is likely more reliable.

It is just barely on the right side of lying to lower the graphics settings to the barest possible level that the software allows.

If the goal is to show a difference in CPU performance that is entirely disconnected from real world performance, why even draw frames at all?

It's useful data for a long term builder. 720p results today show you the difference when you upgrade to RTX 5000 series later on.

I will keep this in mind the next time I place an order for a CPU and ask them to hold onto it for six years before sending it to me.
 
I disagree. It makes perfect sense to lower the graphics settings only to the point where the GPU is not creating a bottleneck. On an old title with a 3090, that's quite possibly as high as 1440p ultra although 1080p ultra is likely more reliable.
If lowering the resolution changes the performance at all, it's safe to assume the GPU was creating a bottleneck, no?

I imagine they draw frames at all because they have to in order to get FPS numbers? If there were an option not to render them, those kinds of people would maybe choose it.
 
Both high-res and low-res testing are important when you're looking at what CPU to buy, but if you're trying to focus on gaming potential, which could be useful with a more demanding game or a future GPU upgrade, you need to remove the usual GPU bottleneck, and the best way to do that is to test at as low a resolution as possible until you see the GPU no longer having an effect. High-res testing is more important when you're trying to figure out how much performance you need for current titles and how much headroom you want for the future, but for pure CPU gaming performance, low-res testing matters more.

My issue with this is that they only tested one game, and it's one that I seem to recall being a bit of an outlier. It's still useful as a data point even if it is, but that would mean it's not representative of overall gaming performance.
 
If lowering the resolution changes the performance at all, it's safe to assume the GPU was creating a bottleneck, no?

I imagine they draw frames at all because they have to in order to get FPS numbers? If there were an option not to render them, those kinds of people would maybe choose it.

That is not a safe assumption. If the frame itself and the act of rendering it were entirely independent of CPU performance, then they wouldn't bother drawing frames at all.

The fact is, they improved only a single feature in the CPU. So, they concocted a completely unrealistic test which utilizes that single feature to the exclusion of all others. This would be like Ford claiming their cargo van is the world's fastest race car because it shifts from Park to Drive slightly more quickly than a Corvette. Technically true, realistically worthless.
 
That is not a safe assumption. If the frame itself and the act of rendering it were entirely independent of CPU performance, then they wouldn't bother drawing frames at all.

The fact is, they improved only a single feature in the CPU. So, they concocted a completely unrealistic test which utilizes that single feature to the exclusion of all others. This would be like Ford claiming their cargo van is the world's fastest race car because it shifts from Park to Drive slightly more quickly than a Corvette. Technically true, realistically worthless.
They didn't concoct anything; this is a leak of a benchmark in a known game from the same people that leaked non-gaming benchmarks yesterday showing it being worse than the original 5800X and the 12900KS. I would have liked to see more games tested, but at the very least we've confirmed that non-gaming performance will be worse and that there are at least some benefits for gaming performance, like they claimed.
 
I don't know anything about Shadow of the Tomb Raider. Is it a representative benchmark? Or is it biased like all those sham AotS benchmarks that have been pushed in the past?

Either way, I am glad to see the repeated leapfrogging keep up, and I hope it continues. This benefits the consumer.
Their 6 selected titles (Tomb Raider being one of them) are a little AMD-biased, but AMD's biggest issue in games is coordinating RAM access to the cores. A larger cache cuts down on the number of times the cores need to access RAM and eases off on the schedulers. Regardless, it should have a noticeably positive impact on gaming. It won't always edge out the 12900K in gaming, but it will certainly do better than the existing AMD lineup in games, and there are a number of media creation or other heavy workloads where the clock speed decrease will be noticeable.
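A rough way to see why the extra cache should help is average memory access time; the sketch below uses purely illustrative latencies and hit rates (nothing here is a measured figure for either chip):

```python
# Illustrative only: hypothetical latencies and hit rates, not measured
# values for the 5800X3D or 12900KS.
L3_HIT_NS = 10.0   # assumed L3 hit latency
RAM_NS = 75.0      # assumed DRAM access latency

def amat(l3_hit_rate):
    """Average memory access time: hits served by L3, misses go out to RAM."""
    return l3_hit_rate * L3_HIT_NS + (1.0 - l3_hit_rate) * RAM_NS

# A bigger L3 mostly shows up as a higher hit rate on cache-hungry game data.
print(f"32 MB L3, ~70% hit rate: {amat(0.70):.1f} ns")  # ~29.5 ns
print(f"96 MB L3, ~85% hit rate: {amat(0.85):.1f} ns")  # ~19.8 ns
```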
 
That makes perfect sense, but I also want to see real world performance gains. Yes it might be 15% faster when CPU bound, but how does that translate to actual game performance? It is a useless thing to measure because no one actually games at those resolutions with settings turned down. I don't mind including it, but they should always include 1080/1440/1440 ultra wide and 4K at max settings. Of course CPUs tend not to make much of a difference there, but it would be nice to see how much or how little the gain would be from an older CPU to inform a purchasing decision.

That's why you look at both CPU and GPU benchmarks, and pick the lowest number of the two. As long as you minimize all graphics settings in the CPU test, you can separate the two this way.
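As a minimal sketch of that "pick the lowest number" idea, with made-up frame rates:

```python
# All numbers are hypothetical, just to show the idea.
cpu_bound_fps = 180.0  # what the CPU can feed, from a low-res CPU-limited test
gpu_bound_fps = 120.0  # what the GPU can render at your resolution/settings

# The game runs at roughly whichever of the two is the bottleneck.
expected_fps = min(cpu_bound_fps, gpu_bound_fps)
print(f"Expected in-game frame rate: ~{expected_fps:.0f} fps")  # GPU-limited here

# A faster CPU only shows up once the GPU ceiling rises above the CPU ceiling,
# e.g. after a future GPU upgrade pushes gpu_bound_fps past 180.
```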
 
CL40 4800 DDR5 is super weak. I would like to see a much better kit paired here. 6400 CL32 can have uplifts of 20-25 percent in some titles compared to the JEDEC-spec 4800.
And the AMD chip would get better performance with low-latency 3800MHz. If you're gonna change one, then change the other. Nobody is running Ryzen on cheap 3200 memory!!?
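For what it's worth, first-word latency is roughly CAS latency divided by the memory clock (half the transfer rate), which is part of why JEDEC DDR5-4800 CL40 looks weak on paper; bandwidth differences are a separate story:

```python
def first_word_latency_ns(transfer_rate_mts, cas_latency):
    """CAS latency in cycles divided by the memory clock (MHz); the memory
    clock is half the MT/s transfer rate for DDR."""
    memory_clock_mhz = transfer_rate_mts / 2
    return cas_latency / memory_clock_mhz * 1000

print(first_word_latency_ns(4800, 40))  # ~16.7 ns (JEDEC DDR5-4800 CL40)
print(first_word_latency_ns(6400, 32))  # ~10.0 ns (DDR5-6400 CL32)
print(first_word_latency_ns(3200, 16))  # ~10.0 ns (DDR4-3200 CL16)
print(first_word_latency_ns(3800, 16))  #  ~8.4 ns (DDR4-3800 CL16)
```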
 
Their 6 selected titles (Tomb Raider being one of them) are a little AMD-biased, but AMD's biggest issue in games is coordinating RAM access to the cores. A larger cache cuts down on the number of times the cores need to access RAM and eases off on the schedulers. Regardless, it should have a noticeably positive impact on gaming. It won't always edge out the 12900K in gaming, but it will certainly do better than the existing AMD lineup in games, and there are a number of media creation or other heavy workloads where the clock speed decrease will be noticeable.
I'm curious where you're seeing 6 games tested, the TH article linked here as well as the one they're quoting only mention the TR title.
 
That is not a safe assumption. If the frame itself and the act of rendering it were entirely independent of CPU performance, then they wouldn't bother drawing frames at all.
If I tried to benchmark Tomb Raider in frames per second right now, without rendering any frames at all, how would I do that?
 
I'm curious where you're seeing 6 games tested, the TH article linked here as well as the one they're quoting only mention the TR title.
AMD used 6 games in their slides when they announced the chip. Tomb Raider was one of those 6.
 
That's why you look at both CPU and GPU benchmarks, and pick the lowest number of the two. As long as you minimize all graphics settings in the CPU test, you can separate the two this way.
Depends on what you're trying to do, or the most important thing to you about your PC. If it's gaming performance, well, this is the one to get IMO, even if it is slightly slower in production/synthetic benches because of a 200MHz clock difference. Plus they were using 3200MHz memory; the regular 5800X numbers could be coming from a system that was using the recommended memory speed? Everyone knows that's the best way to boost performance on Ryzen, especially on one with the 3D chip and its OC lock.

Guess we'll find out soon enough. Anyone know when the review embargo is up?
 
And the AMD chip would get better performance with low-latency 3800MHz. If you're gonna change one, then change the other. Nobody is running Ryzen on cheap 3200 memory!!?
Been a couple tests with this. When looking at gaming performance, an average 3200MHz kit on AM4 is just fine. You can spend a bunch of money/time on something better, and there isn't even a measurable difference when it comes to gaming. People assume benchmark increases with better memory translate to real-world performance; they don't. The reality is that the 3200MHz kit is already a decent increase over the base spec of 2133MHz.

On AM4/Ryzen you actually get more gains by making sure your kit is dual-rank, than you do getting anything over 3200.

TLDR - As long as you're 3200 & dual-rank, that's really all that matters with Ryzen.
 
Been a couple tests with this. When looking at gaming performance, an average 3200MHz kit on AM4 is just fine. You can spend a bunch of money/time on something better, and there isn't even a measurable difference when it comes to gaming. People assume benchmark increases with better memory translate to real-world performance; they don't. The reality is that the 3200MHz kit is already a decent increase over the base spec of 2133MHz.

On AM4/Ryzen you actually get more gains by making sure your kit is dual-rank, than you do getting anything over 3200.

TLDR - As long as you're 3200 & dual-rank, that's really all that matters with Ryzen.
The cache increase compensates for the memory controller. AMD and their many-core approach works great for numerous small workloads but struggles when it needs to coordinate all those cores accessing RAM for larger ones. This is an interesting midterm step and I treat it as such.

AMD needs more memory channels and faster speeds. This will scale very well with DDR5 and 4+ channels with some tweaks, and I think Intel's 13th gen is going to be pressed. The potential that this cache tech opens up looks to me like it's going to put Intel on its back foot.
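For context on the channels-and-speed point, a back-of-envelope peak bandwidth sketch (nominal figures per 64-bit channel, ignoring real-world efficiency):

```python
def peak_bandwidth_gbs(transfer_rate_mts, channels):
    """Nominal peak bandwidth: MT/s x 8 bytes per 64-bit channel."""
    return transfer_rate_mts * 8 * channels / 1000

print(peak_bandwidth_gbs(3200, 2))  # DDR4-3200, dual channel:  ~51.2 GB/s
print(peak_bandwidth_gbs(6400, 2))  # DDR5-6400, dual channel: ~102.4 GB/s
print(peak_bandwidth_gbs(6400, 4))  # DDR5-6400, quad channel: ~204.8 GB/s
```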

Note:
This is being written from the bar, between pitchers, as my players debate their plan of action (D&D night); they are way overthinking this and they are gonna die….
 
The CPU isn't even out for another, what, two weeks?
I know some places are already selling them in different countries? I also know that reviewers have them in their hands now.
 
I feel like I took a wrong turn in the hallway and entered retaHARDed forums by mistake. WTF peeps it is a benchmark that was set up to isolate CPU's in gaming performance. Take the data as you will. If you need to know how it performs against a 5900X or 5800X then just compare how those CPU's perform in a gaming benchmark to a 12900K.
 
I feel like I took a wrong turn in the hallway and entered retaHARDed forums by mistake. WTF peeps it is a benchmark that was set up to isolate CPU's in gaming performance. Take the data as you will. If you need to know how it performs against a 5900X or 5800X then just compare how those CPU's perform in a gaming benchmark to a 12900K.
I don't know what they are thinking. The only thing I can think of is some people spent a lot of money upgrading their rig, and most wouldn't be too happy if the platform they were on before all of a sudden beat their current one.

Other than that yes it's nonsensical to test for CPU performance in a GPU limited scenario.
 
Been a couple tests with this. When looking at gaming performance, an average 3200MHz kit on AM4 is just fine. You can spend a bunch of money/time on something better, and there isn't even a measurable difference when it comes to gaming. People assume benchmark increases with better memory translate to real-world performance; they don't. The reality is that the 3200MHz kit is already a decent increase over the base spec of 2133MHz.

On AM4/Ryzen you actually get more gains by making sure your kit is dual-rank, than you do getting anything over 3200.

TLDR - As long as you're 3200 & dual-rank, that's really all that matters with Ryzen.
Guess it depends on who you ask.

These are from a 5600X, some of which can actually hold 1:1 up to 3800MHz.

Edit: and I thought it was just that 4 single-rank sticks were better than 2 dual-rank sticks?

[Attached benchmark screenshots: Screenshot_1.jpg, Screenshot_2.jpg, Screenshot_3.jpg]
 
3600 is the bang-for-buck and easily obtainable sweet spot for Ryzen 5000. You can get 3600 C16 kits for not much more than 3200, and just set XMP and go. Any gains over this are squeezed out by tuning and overclocking, and while there are some to be had, most users don't go ham on it. Especially newer builders watching build videos, which tend to advise "just set XMP or DOCP in BIOS and you're good."
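The 3600 sweet spot follows from the 1:1 Infinity Fabric coupling; a quick sketch of the ratios (the ~1800-1900 MHz FCLK ceiling is the commonly quoted typical range, not a guarantee for any given chip):

```python
def fclk_for_1to1(transfer_rate_mts):
    """DDR transfer rate is double the memory clock; 1:1 mode runs the
    Infinity Fabric (FCLK) at the same clock as the memory."""
    return transfer_rate_mts / 2  # MHz

for kit in (3200, 3600, 3800, 4000):
    print(f"DDR4-{kit}: needs FCLK {fclk_for_1to1(kit):.0f} MHz for 1:1")
# Most Ryzen 5000 samples top out somewhere around 1800-1900 MHz FCLK,
# which is why 3600-3800 is usually the practical ceiling for 1:1 mode.
```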

I'm sure when the embargo is up we will see some useful benchmarks on the 3D chip at 1080p up to 4k and some head to head stuff such as the fastest ram on AM4 vs ADL DDR4 and DDR5 also with fast ram and so on. The article posted here is just early teaser clickbait really.
 
Another thing to note: running at low res is a good thing, but low settings isn't. Why? You also eliminate tons of particle simulations, draw calls, physics, etc. that the CPU is normally responsible for, skewing what the actual minimums would be.
 
Another thing to note: running at low res is a good thing, but low settings isn't. Why? You also eliminate tons of particle simulations, draw calls, physics, etc. that the CPU is normally responsible for, skewing what the actual minimums would be.
The person who ran the tests linked to this post as an explanation of their testing methods, though it's worth noting that these tests were done at custom low settings too. The short version of the linked explanation is that resolution is the most reliable way to remove the GPU from the equation without also potentially having an impact on CPU usage due to overhead. If that isn't enough to fully remove the bottleneck, then start lowering the most GPU-intensive settings until the bottleneck is gone.
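A minimal sketch of that logic, with made-up average-fps numbers (none of these are the leaker's results):

```python
# Hypothetical measurements: average fps after each step of stripping GPU load.
steps = [
    ("1080p, custom low", 165.0),
    ("720p, custom low", 214.0),
    ("720p, shadows off", 218.0),  # <3% gain: GPU is no longer the bottleneck
    ("720p, AO off", 219.0),
]

def first_cpu_limited_step(steps, threshold=1.03):
    """Walk the steps in order; once a further cut in GPU load stops buying a
    meaningful fps gain, treat the run as CPU-limited from that point on."""
    prev_fps = None
    for name, fps in steps:
        if prev_fps is not None and fps < prev_fps * threshold:
            return name
        prev_fps = fps
    return steps[-1][0]

print(first_cpu_limited_step(steps))  # "720p, shadows off"
```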
 
... I still have an issue with using different video cards in each system...
They could probably swap the GPUs between the test systems and get results within half a percent of the previous results. But yeah, if they want to test properly, the GPUs should match.
720p tests are purely academic. It's not actually useful information, because nobody games at 720p. The marketing of this CPU is that it's a solid boost for gaming. Actual gaming.
It might be useful for someone still gaming on, say, an i7-920, or something 10 years old. But come to think of it, the more useful set of data would have been to run this test across a range of CPUs going back as far as that i7-920, ensuring each system had fast RAM for its platform, and just use the same GPU in all runs. It would take a while, but it might show off that CPU cache a little better. In fact, they need to add a 5800X to this test vs the 5800X3D. This data, plus data from older CPUs (up through the newest Intel as well, of course), would paint an interesting picture.

I suppose they made their point, an argument for the X3D, but I would like to see even more tests. They should add one more game that isn't as memory-speed dependent as Shadow of the Tomb Raider, to really show the impact of the CPU cache.

Who is left that could do that suite of tests? LTT probably.
 
Here's an idea: put up some benchmarks not related to gaming if you are just trying to compare CPU performance. Benchmarks like these are useless clickbait.
Tom's Hardware doesn't know this? That's pretty bad. Or at least have more benchmarks - comparing one game doesn't say much.
 