Is it true that 7950X3D suffers from stutter on games?

People need to stop saying "best of both worlds"; it isn't. It's a compromise. A 7950X3D is still slower than a 7950X in most non-gaming scenarios. Best of both worlds means just that: BEST. It would have to be faster than a 7950X in non-gaming workloads and faster than a 7800X3D in gaming workloads.

It's faster than anything but the 7950X in non-gaming workloads, and faster than anything in gaming workloads. If it had V-Cache on both dies, it would be even slower in non-gaming workloads and no faster in gaming workloads. So it is the best compromise possible with current technology: 2-3% faster in gaming than anything else, and 2-3% slower in non-gaming than the absolute best. Happy?
 
Here is a stupid question for everyone: Why can't AMD put the vcache on the other CCD as well?
They can, but you'd run into the same issue: latency between the dies when they need to access information on the other one.
 
As far as CCDs go, I don't really want two CCDs at all. I'd like one 16-core CCD.
In an ideal world, yields would not be a problem with such a big die. Unfortunately we don't live in an ideal world.
 
In an ideal world, yields would not be a problem with such a big die. Unfortunately we don't live in an ideal world.
No, we don't. But we do live in a world where technology improves, and what isn't feasible or economical one year may not stay that way, which is why I'm waiting to see what Zen 5 brings.
 
The main benefit of having >8 cores for gaming is if you do some form of multi-tasking while gaming. I play Warzone often; both my i7 9700K (8 cores) and R9 5900X (12 cores) can run it fine playing solo. However, when I squad up and play quads while on a Skype video call, there's a significant performance loss on the 9700K; the 5900X doesn't care. This is one of the reasons I personally don't subscribe to "you don't need more than 8 cores for gaming." I do quite a lot with my PC, and I personally will benefit from >8 cores while gaming even if the game, strictly speaking, may not.

This is obviously anecdotal. The 7800X3D is far more powerful than a 9700K. I'm CPU-limited in WZ on the 9700K even with nothing else going on. It also doesn't have SMT ("hyperthreading") to help it out like the 7800X3D does.

Hardware acceleration is whatever the default setting is, which I'm almost certain is enabled. It still uses CPU cycles; not really a problem on modern CPUs, but on older 8-core chips with no SMT that are already pegged, you notice it.

As far as CCDs go, I don't really want two CCDs at all. I'd like one 16-core CCD.
Indeed, it's likely due to the 9700K being an older architecture, as well as having no Hyperthreading.


View: https://www.youtube.com/watch?v=Nd9-OtzzFxs
They can, but you'd run into the same issue: latency between the dies when they need to access information on the other one.
Again I have to say this: show us some data which shows that latency meaningfully affects game performance.

I just made this post in this thread, on Friday:
https://hardforum.com/threads/is-it...utter-on-games.2030731/page-6#post-1045862505
 
Again I have to say this: show us some data which shows that latency meaningfully affects game performance.
Very easy for me; at the moment I have a 7700X at 6200 RAM and a 7900X at 6600 RAM, both tuned.
The 7900X is most of the time a few average FPS below the 7700X, but the 7900X can be better in 0.1% low FPS.
The same applies to the X3D variants, and you can see it in TPU reviews (or anywhere); this is because of Infinity Fabric latency.

Or do you think AMD parks the non-cache cores during gaming for nothing?
 
Very easy for me; at the moment I have a 7700X at 6200 RAM and a 7900X at 6600 RAM, both tuned.
The 7900X is most of the time a few average FPS below the 7700X, but the 7900X can be better in 0.1% low FPS.
The same applies to the X3D variants, and you can see it in TPU reviews (or anywhere); this is because of Infinity Fabric latency.

Or do you think AMD parks the non-cache cores during gaming for nothing?
meaningfully affecting and show it ;)

(And if your DDR5 6600 isn't at a 1:1 ratio, then it will likely perform worse in some games than 6000 at 1:1.)

From my post, which you seemingly didn't read, but modified for the 7900X:

Here is TechPowerUp's 1% lows page, from their 14900KS review:
https://www.techpowerup.com/review/intel-core-i9-14900ks/21.html

Click the toggle to see individual games. It's only the super-high-framerate games that have over a 10 FPS difference at 1080p. And at those high framerates, it doesn't matter for 99% of gamers.
Some other games were closer, within a handful of frames or less. And then there are games where the dual-CCD chips are faster.

And in the all-games average for minimum frames, the 7900X is only 4.4 frames slower than the 7700X.
For average performance at 1080p, the 7900X is 0.9% behind the 7700X.
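(For anyone wondering how those "1% low" numbers are typically derived from raw frame times, here's a rough sketch; the exact method TPU uses may differ, and the numbers below are made up for illustration.)

```python
# Rough sketch: average FPS and "1% low" FPS from a list of frame times (ms).
# Assumes the common approach of averaging the slowest 1% of frames;
# review sites may use a slightly different method.
def fps_summary(frame_times_ms):
    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    slowest_first = sorted(frame_times_ms, reverse=True)
    slowest_1pct = slowest_first[:max(1, len(slowest_first) // 100)]
    low_1pct_fps = 1000.0 / (sum(slowest_1pct) / len(slowest_1pct))
    return avg_fps, low_1pct_fps

# Made-up example: mostly 7 ms frames with a few 15 ms spikes
times = [7.0] * 990 + [15.0] * 10
avg, low = fps_summary(times)
print(f"avg: {avg:.0f} FPS, 1% low: {low:.0f} FPS")  # ~141 FPS avg, ~67 FPS 1% low
```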

*The end point is that AMD's dual-CCD CPUs are great gaming CPUs.
But also, AMD's product stack is very good. If you want to focus on gaming, you don't have to buy the most expensive CPUs to get their best gaming CPUs, the 7700X and 7800X3D.
If you need productivity, you still get very similar gaming performance from their 12- and 16-core options.
If you want their best multicore performance, you don't have to buy their most expensive CPU.
You only have to buy their most expensive CPU if you want (very near) the best of both. And that's how it should be.
 
[Screenshots: Horizon Zero Dawn FHD benchmark runs; AIDA64 memory latency 54.4 ns and 55 ns]


Is that OK?
 
So that's what I mean: a 4 FPS difference in average framerate. And less in minimums.

So, while there is technically some latency when communicating between CCDs, it's very difficult to find gaming scenarios where it really means something. For most games, it is a non-issue with Zen 4. (Zen 3 also didn't have a real issue there. And Zen 2, while a bit worse, wasn't that bad either.)
 
Yeah, it depends on the game, and most games are like that, okay.
The 7900X is significantly worse in Metro Last Light: 20+ fewer average FPS, and the 0.1% lows are crap. It seems to run "flawlessly", but the frame-time graphs look bad.
So you can find games that do have the dual-CCD problem, and it's a communication issue, but overall the situation isn't that bad.

But when you get to the 7900X3D/7950X3D, the cache can make things a little worse because you have more cached information. That's why AMD parks the non-cache cores.
 
meaningfully affecting and show it ;)

(And if your DDR5 6600 isn't at a 1:1 ratio, then it will likely perform worse in some games than 6000 at 1:1.)

From my post, which you seemingly didn't read, but modified for the 7900X:

Here is TechPowerUp's 1% lows page, from their 14900KS review:
https://www.techpowerup.com/review/intel-core-i9-14900ks/21.html

Click the toggle to see individual games. It's only the super-high-framerate games that have over a 10 FPS difference at 1080p. And at those high framerates, it doesn't matter for 99% of gamers.
Some other games were closer, within a handful of frames or less. And then there are games where the dual-CCD chips are faster.

And in the all-games average for minimum frames, the 7900X is only 4.4 frames slower than the 7700X.
For average performance at 1080p, the 7900X is 0.9% behind the 7700X.

*The end point is that AMD's dual-CCD CPUs are great gaming CPUs.
But also, AMD's product stack is very good. If you want to focus on gaming, you don't have to buy the most expensive CPUs to get their best gaming CPUs, the 7700X and 7800X3D.
If you need productivity, you still get very similar gaming performance from their 12- and 16-core options.
If you want their best multicore performance, you don't have to buy their most expensive CPU.
You only have to buy their most expensive CPU if you want (very near) the best of both. And that's how it should be.
There is significant Reddit activity on Process Lasso helping minimums in certain games with the 5900X in 2023. This does show that if data is forced to cross between CCDs, there is significant latency in doing so. If data isn't crossing CCDs (the latest Windows/AMD/Game Bar drivers help mitigate that), performance penalties are minimal, and extra cores can sometimes help with background tasks.

Basically, there is a latency hit in crossing CCDs. How big a hit depends on the data crossing over and what needs to be processed; e.g. a small auxiliary sound thread will see little to no performance penalty vs the main game thread.
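To make the Process Lasso point concrete, here is a minimal sketch of the same idea using Python's psutil: pin a game process to the logical CPUs of one CCD so its threads never hop across. The process name and core numbering are assumptions (on a 5900X with SMT, logical CPUs 0-11 would typically be the first CCD); check your own topology before trying anything like this.

```python
# Sketch of what Process Lasso effectively does: restrict a game's threads to
# one CCD so they never pay the cross-CCD latency hop.
# GAME_EXE and the core list are assumptions, not a universal mapping.
import psutil

GAME_EXE = "game.exe"            # hypothetical process name
CCD0_CPUS = list(range(0, 12))   # assumed logical CPUs of CCD0 (6 cores x 2 threads)

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == GAME_EXE:
        proc.cpu_affinity(CCD0_CPUS)   # pin the whole process to CCD0
        print(f"Pinned PID {proc.pid} to CCD0: {CCD0_CPUS}")
```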
 
Does the extra cache on the 7900X vs the 7700X actually mean anything, when the only reason the 7900X has more cache is that it has two CCDs instead of one, each with the same amount of cache? I mean, is there any advantage to more cache that only exists because of an extra CCD?
Not if the game runs entirely on the same CCD (I think). Since a 7900X has two CCDs with 6 cores each, a game that uses 7-9 cores will use both CCDs and both caches.
 
Again I have to say this: show us some data which shows that latency meaningfully affects game performance.
Have you tried dual CCD before? I'm curious.

Also, we're end users; not everyone has the ability to pull data from their experiences or even knows how to describe what they feel. Even the people doing reviews are finding new limitations in what they do, which leaves gaps, so don't be quick to discredit experiences without "data".
 
So that's what I mean: a 4 FPS difference in average framerate. And less in minimums.

So, while there is technically some latency when communicating between CCDs, it's very difficult to find gaming scenarios where it really means something. For most games, it is a non-issue with Zen 4. (Zen 3 also didn't have a real issue there. And Zen 2, while a bit worse, wasn't that bad either.)


It can be an issue if game threads do cross CCDs. Fortunately they usually do not, hence your statement that dual-CCD CPUs are still great for gaming. But when they do, there's a big dip in 0.1% and 1% lows in most games, which is not good.

Though, why doesn't AMD offer more than 8 cores on a CCD as an option? That would be the best-of-both-worlds compromise for gaming and productivity. No need for 16; how about 10-12? Same for Intel P-cores on a single ring.
 
I don't have a 7950X3D, but since the discussion is now about dual CCDs generally, I thought I might be able to muddy the waters a bit with some actual data. The following chart is a 1-second sample of frame times from the Mount & Blade 2 built-in benchmark running on my 5950X. The different thread counts were forced using CPU affinity, so up to 16 threads everything is on one CCD, and beyond that it spills onto the other CCD.

[Chart: 1-second frame-time sample at increasing thread counts]


Not only does the frame time continue to improve above 16 threads, but the variance improves too. In other words there is less of what might be perceived as stutter. Mainstream reviewers tend to use a selection of games that don't scale this well with core count, but certainly there are games that benefit from extra cores more than they are harmed by inter-CCD latency.
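For reference, the post-processing behind a chart like that is simple enough to sketch. Here's a rough example of summarizing per-run frame-time logs into a mean and standard deviation (lower deviation means less frame-to-frame variance, i.e. less perceived stutter); the file names and layout are made up for illustration, not an actual capture format.

```python
# Sketch: summarize frame-time logs captured at different forced thread counts.
# File layout is hypothetical: one frame time in milliseconds per line.
import statistics

def summarize(path):
    with open(path) as f:
        times = [float(line) for line in f if line.strip()]
    return statistics.mean(times), statistics.stdev(times)

for threads in (8, 16, 24, 32):
    mean_ms, stdev_ms = summarize(f"mb2_bench_{threads}_threads.log")
    # lower stdev = steadier frame times = less perceived stutter
    print(f"{threads:>2} threads: {mean_ms:.2f} ms avg, {stdev_ms:.2f} ms stdev")
```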
 
Have you tried dual CCD before? I'm curious.

Also, we're end users; not everyone has the ability to pull data from their experiences or even knows how to describe what they feel. Even the people doing reviews are finding new limitations in what they do, which leaves gaps, so don't be quick to discredit experiences without "data".
I've had both a 5950X and a 3900X. I had the 5950X for a few months and then downgraded to a 3900X, which I had for over a year.

I played around with a bunch of CPUs the past few years.

Starting in 2020, I upgraded from an i5-6600K to:

10700f
11700f
5950x
3900x
12700k
13600k
7700x (only used for a couple of days, Zen 4 launch was messy).
7800X3D
 
The 7900X is significantly worse in Metro Last Light: 20+ fewer average FPS, and the 0.1% lows are crap. It seems to run "flawlessly", but the frame-time graphs look bad.
So you can find games that do have the dual-CCD problem, and it's a communication issue, but overall the situation isn't that bad.
Regarding Metro Last Light:

Is that something you played recently?

The only thing I could find on the internet about that game and Zen 4, is this post from around Zen 4 Launch:
https://twitter.com/CapFrameX/status/1581306086417580032/photo/1
 
Regarding Metro Last Light:

Is that something you played recently?

The only thing I could find on the internet about that game and Zen 4, is this post from around Zen 4 Launch:
https://twitter.com/CapFrameX/status/1581306086417580032/photo/1
It's the game's built-in benchmark, and I use it for exactly that purpose, as a benchmark, because it shows extremely well how the frames jump up and down; if there's a latency issue, it visualizes it very well.
Yes, it works similarly to CapFrameX and shows similar graphs.

In Metro Exodus, the benchmark isn't as sensitive, or the game code isn't affected as much by latency.

* Ryzen is very sensitive when you start tuning via Curve Optimizer per core; with dual-CCD chips it can get very messy :D
 
People say this a lot. But... show me some data on some scenarios where that latency actually matters enough that it performs notably worse? Because I haven't seen it.


It's likely Intel WOULD have 10 P-cores on their CPUs if they didn't use so much power and generate so much heat. But they haven't been able to fix that for 4 generations...
Dual CCDs cause no latency penalty if the CCDs are an 8+8 core configuration... it's explained in the video I posted.
All the people saying the opposite don't have a 7950X3D and have no idea on the matter.
 
It's the game's built-in benchmark, and I use it for exactly that purpose, as a benchmark, because it shows extremely well how the frames jump up and down; if there's a latency issue, it visualizes it very well.
Yes, it works similarly to CapFrameX and shows similar graphs.

In Metro Exodus, the benchmark isn't as sensitive, or the game code isn't affected as much by latency.

* Ryzen is very sensitive when you start tuning via Curve Optimizer per core; with dual-CCD chips it can get very messy :D
Do you have it set as a game in Xbox Game Bar? (My understanding is that this can benefit all of Zen 4, not only the X3D.)
 
Do you have it set as a game in Xbox Game Bar? (My understanding is that this can benefit all of Zen 4, not only the X3D.)
That thing may benefit every CPU, because on X3D it forces the game threads onto the 3D cache cores;
on non-X3D chips it simply raises the priority of the game threads, resulting in a more "snappy" response on those threads over the background ones.

It's important to note that even if Xbox Game Bar forces game threads onto the 3D cache cores, the game still has access to all the threads if needed;
in fact, in Horizon Forbidden West for example, when the game compiles shaders it has access to all the threads (32 on my 7950X3D).
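If you want to check this yourself, a quick and dirty way is to poll per-core load while the shader compilation runs. Here's a rough sketch using psutil (the 5% "busy" threshold is just an arbitrary cutoff picked for illustration):

```python
# Sketch: poll per-logical-CPU utilization while a game compiles shaders,
# to see how many threads are actually busy. The 5% threshold is arbitrary.
import psutil

for _ in range(30):                                    # ~30 one-second samples
    loads = psutil.cpu_percent(interval=1, percpu=True)
    busy = sum(1 for pct in loads if pct > 5)
    print(f"{busy}/{len(loads)} logical CPUs busy, max {max(loads):.0f}%")
```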
 
Dual CCDs cause no latency penalty if the CCDs are an 8+8 core configuration... it's explained in the video I posted.
All the people saying the opposite don't have a 7950X3D and have no idea on the matter.
Techspot's/Hardware Unboxed's review tells a different story. In their review, Cyberpunk 2077, A Plague Tale: Requiem and Watch Dogs: Legion all performed worse on the dual-CCD CPUs compared to single-CCD ones (including non-X3D parts). That's 25% of the games they tested, though admittedly a small sample size of 12 games. On the flip side, in four games (33%) the 7900X3D does beat out the 7800X3D, but only by single-digit percentages.

AVATARAT's results refute your claims for Horizon Zero Dawn: the ability to use 32 threads does not directly translate to better performance if the cores aren't fully utilized. If you're just talking about compiling, that only affects load times and not actual game performance. Mr Evil provided the best proof so far, but even his proof demonstrated strongly diminishing gains above 12 threads.
 
Techspot's/Hardware Unboxed's review tells a different story. In their review, Cyberpunk 2077, A Plague Tale: Requiem and Watch Dogs: Legion all performed worse on the dual-CCD CPUs compared to single-CCD ones (including non-X3D parts). That's 25% of the games they tested, though admittedly a small sample size of 12 games. On the flip side, in four games (33%) the 7900X3D does beat out the 7800X3D, but only by single-digit percentages.
Yeah, and in Cyberpunk and Watch Dogs: Legion the difference is very small. And that's the point. In the majority of the data I have seen, the supposed latency penalty for dual CCDs is small for most games. Small enough that it doesn't really matter.

Plague Tale is an anomaly; it probably needs a driver/scheduling adjustment from AMD.
 
Yeah, and in Cyberpunk and Watch Dogs: Legion the difference is very small. And that's the point. In the majority of the data I have seen, the supposed latency penalty for dual CCDs is small for most games. Small enough that it doesn't really matter.

Plague Tale is an anomaly; it probably needs a driver/scheduling adjustment from AMD.
Indeed the difference is small. As small as the performance benefits that more cores provide in the games that do use them when comparing the 7800X3D to 7950X3D.

Basically, the same argument for more cores providing the best performance can also be applied to restricting to only one CCD. Some games perform better with one CCD, some games perform better with more cores, but the differences are so small that once you have 8 V-Cache cores, you pretty much have the best performance you're going to get in 99+% of games. 10 V-Cache cores on a single CCD might do better, but extrapolating from the data we currently have suggests the gains would be minimal and certainly not linear, i.e. 25% more cores (8 -> 10) might net 10% performance gains in a limited selection of games.
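To put rough numbers on that diminishing-returns intuition, you can plug guesses into Amdahl's law. The 70% parallel fraction below is purely an illustrative assumption, not measured game data:

```python
# Amdahl's law sketch: speedup = 1 / ((1 - p) + p / n), where p is the
# fraction of the frame's work that scales with cores. p = 0.70 is a guess.
def speedup(p, cores):
    return 1.0 / ((1.0 - p) + p / cores)

p = 0.70
s8, s10 = speedup(p, 8), speedup(p, 10)
print(f"8 cores: {s8:.2f}x  10 cores: {s10:.2f}x  gain: {(s10 / s8 - 1) * 100:.1f}%")
# With p = 0.70, 25% more cores (8 -> 10) nets only ~5% -- far from linear.
```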
 
Indeed the difference is small. As small as the performance benefits that more cores provide in the games that do use them when comparing the 7800X3D to 7950X3D.

Basically, the same argument for more cores providing the best performance can also be applied to restricting to only one CCD. Some games perform better with one CCD, some games perform better with more cores, but the differences are so small that once you have 8 V-Cache cores, you pretty much have the best performance you're going to get in 99+% of games. 10 V-Cache cores on a single CCD might do better, but extrapolating from the data we currently have suggests the gains would be minimal and certainly not linear, i.e. 25% more cores (8 -> 10) might net 10% performance gains in a limited selection of games.
And that brings us back to one of the good things about AMD right now, which I have said a few times in various places: their CPU segmentation is quite good. You don't have to pay their max price for their best gaming performance (7700X, 7800X3D). Their higher-core-count CPUs have price options and don't suck at gaming. And their max-price CPU is only there if you want max gaming performance AND max productivity performance.
 
Yeah, and in Cyberpunk and Watch Dogs: Legion the difference is very small. And that's the point. In the majority of the data I have seen, the supposed latency penalty for dual CCDs is small for most games. Small enough that it doesn't really matter.

Plague Tale is an anomaly; it probably needs a driver/scheduling adjustment from AMD.
From TPU's review, Doom Eternal shows the same:

[Chart: TPU Doom Eternal benchmark results]

_____
Do you have it set as a game in Xbox Game Bar? (My understanding is that this can benefit all of Zen 4, not only the X3D.)

Yes, it was marked as a game.
 
And that brings us back to one of the good things about AMD right now, which I have said a few times in various places: their CPU segmentation is quite good. You don't have to pay their max price for their best gaming performance (7700X, 7800X3D). Their higher-core-count CPUs have price options and don't suck at gaming. And their max-price CPU is only there if you want max gaming performance AND max productivity performance.
And better yet, no random errors!

https://hardforum.com/threads/13900...-shader-compilation-in-two-ue5-games.2031265/
 
Well, AMD recently had to crack down on board partners using dangerous voltages in their BIOS settings, which were killing CPUs. And it's still not totally fixed for some boards, so you still have to manually set those voltages.


At least the CPU was killed right away, and you can RMA or return it, then set the SoC voltage lower on a new CPU. With Intel CPUs, they may work, then BSOD or throw WHEAs down the road, and you wonder WTF is going on. Well, degradation and random errors, and who knows what voltage, if any, actually fixes it.

I experienced the same thing with a 13900K and shader compilation: it was stable, then a few weeks later it was not. It also passed all other stress tests, then boom, a WHEA error a few weeks later during shader compilation. Never have these issues with any Zen 4 CPU on auto and EXPO enabled.
 
At least the CPU was killed right away, and you can RMA or return it, then set the SoC voltage lower on a new CPU. With Intel CPUs, they may work, then BSOD or throw WHEAs down the road, and you wonder WTF is going on. Well, degradation and random errors, and who knows what voltage, if any, actually fixes it.

I highly doubt Intel won't be able to fix this via microcode and/or BIOS changes. In the end, this will end up being a rough spot which they move on from, like the AMD situation (except for the motherboards which still don't actually apply the changes).

Never have these issues with any Zen 4 CPU on auto and EXPO enabled.

AMD absolutely had an issue, and some people ended up with degraded or dead CPUs. Anyone who didn't got lucky, either by not happening to have an affected board, or by having an affected board and REALLY getting lucky until their BIOS was updated with the correct changes.
 
I highly doubt Intel won't be able to fix this via microcode and/or BIOS changes. In the end, this will end up being a rough spot which they move on from, like the AMD situation (except for the motherboards which still don't actually apply the changes).

AMD absolutely had an issue, and some people ended up with degraded or dead CPUs. Anyone who didn't got lucky, either by not happening to have an affected board, or by having an affected board and REALLY getting lucky until their BIOS was updated with the correct changes.

I'm not so sure about Intel fixing it with a microcode update on 13th Gen. I have even manually underclocked, run a slight undervolt, and applied different voltages with a 13700K, and still experienced weirdness and instability even with enough voltage. Yes, I think it is a blip and they will move on from it, but with Arrow Lake and their new node. Raptor Lake is crap, and these issues are very real at 5GHz or above. Alder Lake is actually pretty good (except the DDR5 IMC) and more reliable, because it does not run above 5GHz out of the box unlike Raptor Lake; per the overclock.net forums, you can find comments from experts saying these chips are truly not designed to run above 5GHz. Though the Alder Lake DDR5 IMC is absolute trash. The Raptor Lake IMC is much better but still inconsistent. But that is not the real issue, as the IMC is stable with an MSI 2-DIMM board on most Raptor Lake chips; it's the clock speed, the ring, and degradation of other things, not really the IMC. I also think Alder Lake may be more reliable as it is in fact a bigger die for only 8+8 cores, compared to 8+16 on the exact same 10nm process node, so it's less fragile. Alder Lake is more reliable, especially with DDR4, in my experience.

And yeah, the AMD IMC degraded from the too-high SoC voltage. But that's an easy fix: always set it at 1.25V or lower and it runs 6000 easily with no other issues. And no clock speed issues or BSODs unrelated to that, unlike Intel Raptor Lake.

I think Intel can be better overall than AMD, and I have generally liked Intel better, but from experience there are some serious issues with their 10nm node, and with the Raptor Lake i7 and i9 parts especially.

I believe, and hope, that the 20A process node, or TSMC, or whatever Arrow Lake and Lunar Lake end up being on, will be much better and more reliable.
 
I'm not so sure about Intel fixing it with a microcode update on 13th Gen. I have even manually underclocked, run a slight undervolt, and applied different voltages with a 13700K, and still experienced weirdness and instability even with enough voltage. Yes, I think it is a blip and they will move on from it, but with Arrow Lake and their new node. Raptor Lake is crap, and these issues are very real at 5GHz or above. Alder Lake is actually pretty good (except the DDR5 IMC) and more reliable, because it does not run above 5GHz out of the box unlike Raptor Lake; per the overclock.net forums, you can find comments from experts saying these chips are truly not designed to run above 5GHz. Though the Alder Lake DDR5 IMC is absolute trash. The Raptor Lake IMC is much better but still inconsistent. But that is not the real issue, as the IMC is stable with an MSI 2-DIMM board on most Raptor Lake chips; it's the clock speed, the ring, and degradation of other things, not really the IMC. I also think Alder Lake may be more reliable as it is in fact a bigger die for only 8+8 cores, compared to 8+16 on the exact same 10nm process node, so it's less fragile. Alder Lake is more reliable, especially with DDR4, in my experience.

And yeah, the AMD IMC degraded from the too-high SoC voltage. But that's an easy fix: always set it at 1.25V or lower and it runs 6000 easily with no other issues. And no clock speed issues or BSODs unrelated to that, unlike Intel Raptor Lake.

I think Intel can be better overall than AMD, and I have generally liked Intel better, but from experience there are some serious issues with their 10nm node, and with the Raptor Lake i7 and i9 parts especially.

I believe, and hope, that the 20A process node, or TSMC, or whatever Arrow Lake and Lunar Lake end up being on, will be much better and more reliable.
Due to the descriptions of the issue, I don't think a user arbitrarily changing clock speed and voltage in the BIOS is actually doing anything close to addressing the actual issue. It's probably, accidentally, a partial mitigation, and that's all.

Raptor Lake has been out for over a year. I owned one for a while. I haven't seen much complaint about them regarding issues. This seems like a recent thing, and that feels to me like something 'changed': recent BIOS and microcode updates made changes which resulted in these problems, etc.

These CPUs are very complex, and the issue is going to be fixed by an Intel engineer using their tools to see finite data points. Based on reports, there seem to be certain usage scenarios which are highlighting the issue. It's going to be some very complex combo of voltage and power curve, along with core logic and utilization, and other things I have no idea about, which Intel does.

The eventual fix may result in less flexibility for overclocking and playing with voltages for the user. Maybe even slightly less clock stability in certain usage scenarios. But overall, I don't think the fix will be a real performance hit to these CPUs at stock.
 
Due to the descriptions of the issue, I don't think a user arbitrarily changing clock speed and voltage in the BIOS is actually doing anything close to addressing the actual issue. It's probably, accidentally, a partial mitigation, and that's all.

Raptor Lake has been out for over a year. I owned one for a while. I haven't seen much complaint about them regarding issues. This seems like a recent thing, and that feels to me like something 'changed': recent BIOS and microcode updates made changes which resulted in these problems, etc.

These CPUs are very complex, and the issue is going to be fixed by an Intel engineer using their tools to see finite data points. Based on reports, there seem to be certain usage scenarios which are highlighting the issue. It's going to be some very complex combo of voltage and power curve, along with core logic and utilization, and other things I have no idea about, which Intel does.

The eventual fix may result in less flexibility for overclocking and playing with voltages for the user. Maybe even slightly less clock stability in certain usage scenarios. But overall, I don't think the fix will be a real performance hit to these CPUs at stock.


Do you water cool or air cool Raptor Lake? Which one do you own, and what RAM? What are your max temps?

Just a gut feeling, but trying to air cool these things is probably a bad idea. It's probably best to keep temps at 65C or lower at all times under full load with an excellent water cooler. Which also means no quiet, sound-dampened case either, which is something I like.

Maybe it would have been better on auto a year ago, and now, with this recent thing, auto is also bad. But manual voltage has been bad the whole time, because only recently did I try auto and of course still had problems with the latest BIOS. There's still too much heat dumped into the case with an RTX 4090 for an air cooler.

Also, with an all-air-cooled build you can afford one power-hungry heat source inside the case, like the RTX 4090, but not two, and a Raptor Lake i9 under full load is another one, with its heat generated from a tiny area, unlike an RTX 4090, which is a big card that can spread the heat dissipation over a large area. I wonder if such high power draw on a small node is part of the problem, and not just temps? Though to be fair, it's not as bad as an RTX 4090, but it's still bad over a tiny LGA 1700 or AM5-sized chip, compared to the bigger RTX 4090 die and the huge PCB that goes with it.
 