What happened to Zen 3+?

I would expect this to be mostly a test run to mature the V-Cache process and to steal some of Intel's glory with regard to who has the fastest gaming CPU.

The 5800X is well into diminishing returns at 142 W: in my 5800X system the performance difference between 130 W and 142 W is fairly small, but the temperature differs by a significant margin. Looking at the rated clocks, it seems they either ran out of power or are having trouble getting the 5800X3D to run as fast as the 5800X. For the 5900X the ideal power draw seems to be in the 160-170 W range, so if power draw is the limitation, two sets of V-Cache might push all-core clocks down too low within 142 W. A lot of boards can't go much above the 142 W limit due to insufficient power stages/VRM cooling, so increasing the "stock" power draw is most likely not an option at this point.

Other possibilities are that the V-Cache mostly scales in games, or that it doesn't scale well when the load is distributed across two CCDs and two V-Caches. If it doesn't do much to increase productivity performance, or even lowers it due to limitations caused by V-Cache (clocks, power draw, etc.), then it doesn't make sense for the 5900X or 5950X to get the upgrade. IMO the 5900X and 5950X don't make sense for games alone, as their advantages over the 5800X mainly show up in productivity tasks.

My guess is that the V-Cache is quite expensive and complex to manufacture compared to the stock CPUs until the process matures, but they may have very few "broken" CCDs. The 5600X and 5900X rely on CCDs with at least one core that is non-functional; otherwise they could make a 5800X/5950X instead. There could also be a high failure rate of the V-Cache bonding, causing a high price per functional CCD. If almost all working CCDs have 8 functional cores, then a 5800X3D and a 5950X3D make the most sense. AMD does have limited production capacity though, so they can make two 5800X3Ds for each 5950X3D. Going 5800X3D-only could double the number of CPUs that they can bring to customers, and customers love "in stock".
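To make that supply tradeoff concrete, here is a back-of-envelope sketch (all numbers are invented for illustration, not actual AMD figures): each 5950X3D would consume two V-Cache CCDs while a 5800X3D consumes one, so a 5800X3D-only lineup ships twice as many boxes from the same CCD budget.

```python
# Hypothetical illustration of the CCD-budget argument above.
# Numbers are made up; the point is only the 1-vs-2 CCDs-per-CPU ratio.

def units_shipped(vcache_ccds: int, share_to_5950x3d: float) -> dict:
    """Split a fixed stock of V-Cache CCDs between 5800X3D (1 CCD) and 5950X3D (2 CCDs)."""
    ccds_for_5950 = int(vcache_ccds * share_to_5950x3d)
    ccds_for_5800 = vcache_ccds - ccds_for_5950
    return {"5950X3D": ccds_for_5950 // 2, "5800X3D": ccds_for_5800}

budget = 100_000  # pretend 100k good V-Cache CCDs
print(units_shipped(budget, 0.0))  # {'5950X3D': 0, '5800X3D': 100000} -> 100k CPUs
print(units_shipped(budget, 0.5))  # {'5950X3D': 25000, '5800X3D': 50000} -> 75k CPUs
```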
Yep, just saw this today too: basically Gordon also brought up the fact that the chips that use multiple CCDs may have increased latency in the cache, so they may not get as much of a benefit as the 5800X, and like you said, power draw is probably too high on the 5900/5950, which would mean lowering clocks too much, also negating the benefits?
 


This may be true in an all-core load, but a 5950X is basically two 5800Xs, so an 8-core-or-less load can simply be scheduled to run entirely on one CCD.
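For anyone who wants to force that today, here is a rough sketch using psutil; it assumes the first CCD of a 5950X shows up as logical CPUs 0-15, which is the usual but not guaranteed mapping, and the PID is just a placeholder.

```python
# Sketch: pin a game process to the first CCD of a 5950X so an 8-core/16-thread
# load never crosses the CCD boundary. Assumes logical CPUs 0-15 belong to CCD0.
import psutil

def pin_to_first_ccd(pid: int) -> None:
    psutil.Process(pid).cpu_affinity(list(range(16)))  # restrict to logical CPUs 0..15

# pin_to_first_ccd(12345)  # hypothetical game PID
```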
 
What is the difference between the 5800X3D and the 5800X in terms of multithreaded applications? Is it the same, or are there some changes?
 
Probably answered already, but can I get a 5800X3D to replace my 5800X and just drop it in? Maybe needing a BIOS update at most?
 
What is the difference between the 5800X3D and the 5800X in terms of multithreaded applications? Is it the same, or are there some changes?
It depends on what the multithreaded application is and how sensitive it is to latency and bandwidth, but the gains generally won't be as big as the performance gains in gaming.
 
If two CCDs basically limit cache access to Infinity Fabric speeds, then build the 5950X3D with one chiplet with V-Cache and the other without, for great multithreading performance alongside the best gaming performance. Scheduling the first 8 cores/16 threads to the V-Cache CCD first, or assigning games to it, would allow the best gaming performance as well as great multithreading performance for those who do more than just game on their computer and need that extra performance. Do this with TR as well, which I doubt they will do. Great single-thread performance with very low latency combined with great multithreading performance. AMD's decision may have to do with how constrained they actually are in getting V-Cache chiplets manufactured.
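If a mixed part like that ever shipped, one crude way to find the big-L3 CCD on Linux would be to group logical CPUs by the L3 size they report in sysfs and then pin games to the larger group. This is only a sketch: it assumes L3 is exposed as cache index3 and that a hypothetical asymmetric 5950X3D would actually report different sizes per CCD.

```python
# Sketch: group logical CPUs by reported L3 size to locate a (hypothetical)
# V-Cache CCD on Linux. Assumes the L3 cache is exposed as cache/index3.
from collections import defaultdict
from pathlib import Path

def cpus_by_l3_size() -> dict:
    groups = defaultdict(list)
    for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
        size_file = cpu_dir / "cache" / "index3" / "size"
        if size_file.exists():
            groups[size_file.read_text().strip()].append(cpu_dir.name)
    return dict(groups)

print(cpus_by_l3_size())
# On a mixed-CCD part this would print two groups; pin games to the larger-L3 group.
```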
 

It may also be due to how close they are to the next-gen chips coming down. No point in retooling etc. for 3D V-Cache chips if they will move to Zen 4 within a year.
 
Since Zen 4 is on a different process (5 nm, more supply-strapped than 7 nm), I would think AMD will keep AM4 alive for a while for production numbers and market share. Basically, V-Cache can extend the selling period of Zen 3+ for all those with AM4 motherboards that will accept it.
 
V-Cache would most likely have to offer a 15% or so improvement to get 5000-series owners to upgrade. People with older boards will most likely go with a 5800X3D if they mostly play games; if they need productivity, it will still be cheaper for them to go with a 5900X or 5950X rather than switching to Intel. I doubt AMD loses much market share by just releasing the 5800X3D to get the gaming crown back (marketing) and leaving the rest as is. People who demand the best will most likely jump on the next-gen Intel or AMD this fall rather than purchasing a new CPU for the AM4 socket just before AM5 comes out. Their current strategy makes sense from a business perspective, IMO.
 
15% in something like games would be quite amazing, right? That would be significantly more than the last two generational upgrades.

[Chart: relative performance in games, 1920x1080]
 
There are lots of areas other than games where performance matters. Also, if you are looking at games, at least use charts with a current-gen high-end GPU and situations that are actually measuring CPU performance. According to TechPowerUp's reviews with a 3080, the jump from the 3800XT to the 5800X is actually approx. 25% at 720p and 17% at 1080p. Not sure if 3000 series to 5000 series counts as 2 generations, but feel free to cherry-pick graphs. (A quick sketch of how those percentages fall out of the charts is below the links.)

720p: https://www.techpowerup.com/review/intel-core-i7-11700kf/16.html
1080p: https://www.techpowerup.com/review/intel-core-i7-11700kf/17.html
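For what it's worth, those percentages are just relative-performance arithmetic; here is a quick sketch with placeholder scores (not the actual TechPowerUp values):

```python
# How a "25% jump" falls out of a relative-performance chart.
# Scores below are placeholders, not the real chart values.

def gain_pct(new_score: float, old_score: float) -> float:
    return (new_score / old_score - 1) * 100

print(f"{gain_pct(100.0, 80.0):.1f}%")   # 25.0% -> e.g. the CPU-bound 720p case
print(f"{gain_pct(100.0, 85.5):.1f}%")   # ~17%  -> a more GPU-bound 1080p case
```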
 
Not sure if 3000 series to 5000 series counts as 2 generations
By generation I meant the last two releases, 2000 to 3000 and 3000 to 5000; some people made that jump while seeing less than a 15% gain on the highest-end video card of the time (well, except the 720p and/or low-quality-settings players chasing very high fps).
 
So what happens when 4K is mainstream on $200 cards? Do CPUs for gaming basically become irrelevant at that point?
 
No, because in a scenario where cheap cards are good for 4K60, higher-end cards would be capable of 4K120 or 4K240, and then a fast CPU becomes relevant again.
It's not that high resolutions are intrinsically less CPU-bound; it's that until recently, GPUs weren't capable of going full high-refresh at 1440p or 2160p, so the need for a fast processor to avoid ending up CPU-bound at high framerates was predominantly seen with 1080p setups.
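The per-frame time budget makes that concrete (plain arithmetic, nothing assumed beyond the refresh rates):

```python
# Frame-time budget at each refresh rate: to stay un-CPU-bound at that framerate,
# the CPU side of every frame has to fit inside this window.
for hz in (60, 120, 144, 240, 360):
    print(f"{hz:>3} Hz -> {1000 / hz:.2f} ms per frame")
# 60 Hz leaves 16.67 ms of headroom per frame; 240 Hz leaves only 4.17 ms,
# which is why fast CPUs matter again once GPUs can push 4K at high refresh.
```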
 
So what happens when 4K is mainstream on $200 cards? Do CPUs for gaming basically become irrelevant at that point?

The other way around, I imagine: if a $200 card is able to do high-quality 4K at 60 fps, the $800 version will be able to do 120-140 fps in the big demanding titles and over 200 fps in mid-range or older ones, and 4K will be what 1080p is today, or something like that? Then we will see huge swings from CPU to CPU at 4K like we can see at lower resolutions, and by then the difference between 6 and 8-10 cores could become larger as well.

As mentioned above, it is about target FPS more than resolution, at least on current systems, and that could change over time.
 
Exactly. We're already starting to see a situation developing where 1440 will be the new 1080, high-refresh 4K takes up the segment that high-refresh 1440 has been in, 8K will be the new 4K, weird ultrawides will continue to be a thing, and 1080 will be relegated to very high refresh (like 240-360hz+) or extreme budget. It's already happening- I'm seeing 1440P165 prices dropping, more 4K120 panels coming out, and 4K60 is actually affordable and reasonable to use now.
 
So what happens when 4K is mainstream on $200 cards? Do CPUs for gaming basically become irrelevant at that point?
It is just a matter of how powerful the GPU and CPU are. Even with a 3080, the 12900K has a slight advantage at 4K over anything AMD, so in 1-2 generations of GPUs that difference will probably be significant. You of course have the difference between good enough and best performance, and people buying a $200 card are looking for good enough.

Games will start to use more CPU once the PS4/Xbox One generation becomes legacy systems. Currently it is mainly Borderlands 3 that shows differences at higher resolutions (the rest are typically within a few fps), but that will increase as games use more of the current-gen consoles' power. The last generation of consoles had a very weak CPU, so developers had to design their CPU work around hitting a fixed 30 or 60 fps on it.
 
No, because in a scenario where cheap cards are good for 4K60, higher-end cards would be capable of 4K120 or 4K240, and then a fast CPU becomes relevant again.
It's not that high resolutions are intrinsically less CPU-bound; it's that until recently, GPUs weren't capable of going full high-refresh at 1440p or 2160p, so the need for a fast processor to avoid ending up CPU-bound at high framerates was predominantly seen with 1080p setups.
Fascinating theory -- but how well does it hold up? I am a big, big fan of TechPowerUp and the relative GPU/CPU pages in their reviews. I went to the most recent CPU review (Intel 12400) and looked to see how it fared against the competition in 4K gaming. Your point that CPUs at 4K60 won't be a big deal is true; there is a ~5% difference there. But your point about 4K120-240 made me see if there was any data available.

Surprisingly enough there is. Doom Eternal and CSGO. They both support and detract from your argument. (GPU is 3080).

In Doom Eternal, you see 4K120, with the range being 146-150 fps, from worst to best CPU. Even at high refresh rates, the CPU is completely marginalized. Even the 5+ year old 6700K is in the dead thick of things.

[Chart: Doom Eternal, 3840x2160]


In CSGO, you finally start to see CPU separation, but it takes a 5+ year old CPU, the 6700K (funny enough, that's what I game on, but OC'd at 4.7 GHz), to show it. Even then, the range is only 55 frames, 311-366. Yes, 55 is a lot, but is anyone going to notice/care at that level?

[Chart: CSGO, 3840x2160]
 
I'm not surprised by the Doom Eternal results; that game doesn't seem to use the CPU much. I think we're just not at the point yet where 4K frequently gets CPU-bound; I did say the situation is "developing", not that it's already here. It wasn't that long ago that 1440p charts were pretty flat like 4K charts; now they mostly have a curve like the 1080p charts.
I genuinely think we will start to see more CPU relevance at 4K high refresh going forward. Not in every game, but in some. As a current example, Cyberpunk 2077 comes to mind. I've been trying to find some more recent CPU benchmarks because the review benches are now somewhat invalidated by the release-day versions having serious performance issues, but piecing it together from the old benchmarks plus my own new testing, I estimate that on a 3090 at 4K there may be a 15% spread in framerate from 2017 CPUs to 2021 CPUs. That's not huge, but a 3090 also isn't much faster than a 3080. It's all splitting hairs at the top end. I think if we wait another year and compare an RTX 4090 with various CPUs in a game like Cyber'77 that is both CPU- and GPU-heavy, and benefits from both high IPC and multithreading, there will be growing differences between CPUs as framerate at 4K gets closer to 100.
 
It may also be due to how close they are to the next-gen chips coming down. No point in retooling etc. for 3D V-Cache chips if they will move to Zen 4 within a year.
But they have had the 3D V-Cache tooling ready since Zen 2; they just never had the reason, I guess, to implement it till now. It's probably more down to a cost/benefit analysis than anything. I would never have thought of that idea, noko, that would be a very smart approach to this tech. I thought for sure they were adding V-Cache to TR/EPYC; guess we'll see if they release a Zen 3+ based TR.
 
But they have had the 3D V-Cache tooling ready since Zen 2; they just never had the reason, I guess?

They only just got it working I think last December. They are launching it as fast as they can.
 
Can only hope this first product is enough of a hit that they decide to bring a 5950X3D out in the future.
 
Show me the lows in CSGO and you'll see a totally different picture. The thing is, nobody competitive plays CSGO at 4K.
Even at 1080p with a 360 Hz monitor you want to stay above 360 fps with no drops.
 
That's a good point. I'm not sure TPU provides them.

Has anyone seen these 240/360 Hz monitors side by side with a 120 Hz one? Is there a huge difference? I myself can't tell the difference between 100 and 120, much less the 144 that my monitor can do. There have to be diminishing returns somewhere.
 
You need a pretty good CPU to maintain that fps in CSGO.

I don't play at 360 Hz, but I do have a 270 Hz 1440p monitor, and there is a huge difference for me gaming at 240/270 Hz vs 144/165 Hz.
New monitors with 480 Hz are coming out in 2023 as well, and I'm guessing we will see close to 1000 Hz in less than a decade. Yes, there are diminishing returns for sure, but if you are a competitive player you'll notice a difference.
I don't think it's worth playing single-player games at more than 144 Hz, though.
 
This is all anecdotal, but I do notice a slight difference on my laptop (1080p 240 Hz panel) between 120 and 240 the few times I've done a comparison. But that's in FPS games with mouse/controller sensitivity maxed out. From experience with that panel and a 144 Hz one I had, I think the gap has to be around 2x to be really noticeable. So 60 -> 75 or 100 -> 120 is negligible, but 60 -> 120 or 120 -> 240 will be noticeable for many people in some situations. And there are cases where I've noticed a positive change in visual smoothness with a 1.5x increase, like in Civ VI where there's not a lot of fast action but there may be a lot of fast scrolling and panning, and even going from 60 to 90 Hz there made it look nicer to me in motion.

Another benefit of super-mega-high-refresh rate panels that doesn't get talked about much is their ability to handle lower framerates well without VRR and the potential complications that brings. 55fps on a fixed 240hz panel with some form of VSync has less stutter and tearing than 55fps on a 60hz panel.
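A quick sketch of why (plain arithmetic, ignoring the exact VSync flavour): on a fixed-refresh panel each rendered frame is held for a whole number of refresh cycles, so the possible on-screen frame times can only differ by one refresh interval.

```python
# Why 55 fps with VSync stutters less on a fixed 240 Hz panel than on a 60 Hz one:
# frames are held for an integer number of refresh cycles, and the gap between the
# two possible hold times is one refresh period.
import math

def hold_times_ms(fps: float, refresh_hz: float) -> tuple:
    refresh_ms = 1000 / refresh_hz
    cycles = (1000 / fps) / refresh_ms
    return (math.floor(cycles) * refresh_ms, math.ceil(cycles) * refresh_ms)

print(hold_times_ms(55, 60))   # (~16.7, ~33.3) ms -> frames alternate 1 or 2 cycles, visible judder
print(hold_times_ms(55, 240))  # (~16.7, ~20.8) ms -> steps of only 4.17 ms, far smoother pacing
```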
 
They also do a better job with 24fps/25fps/30fps video content.
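That one is mostly divisibility, as a quick sketch shows: 240 divides evenly by 24 and 30, and even 25 fps only needs the panel to alternate between 9 and 10 refresh cycles per frame, a much smaller cadence error than the 3:2 pulldown a 60 Hz panel forces on 24 fps film.

```python
# Video cadence on a fixed-refresh panel: refresh cycles per video frame.
for panel_hz in (60, 240):
    for fps in (24, 25, 30):
        cycles = panel_hz / fps
        note = "even cadence" if cycles.is_integer() else f"alternates {int(cycles)}/{int(cycles) + 1} cycles"
        print(f"{fps} fps on {panel_hz} Hz panel: {cycles:.1f} cycles -> {note}")
# 24 fps on 60 Hz is the classic 2.5 -> 3:2 pulldown judder; on 240 Hz it's an even 10 cycles.
```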
 
This.
I will say that fast laptop panels usually have a little higher input lag than most monitors.
Obviously not everybody needs a 240 Hz or faster monitor, but if we are talking VR headsets, I think we could all benefit from faster panels, especially when we reach the realm of 1000 Hz with low input lag, which would be much closer to the 1:1 feel we should strive for.
 
My 10+ year old Sony KDL-46Z5100 1080p TV (one step below the XBR at the time) is a 240 Hz panel. I guess they knew what they were doing back then.
Wild... I didn't know 240 Hz LCDs existed back then; I only knew about the high-refresh plasmas. I bet 24 fps content looks fantastic on that TV!
 
I guess I need to try playing something at 24fps ;-) The PS5 is capable of that, but it will just be at 1080p since that's all the TV supports of course.
 