AMD at CES 2023: 7950X3D, 7900X3D, 7800X3D - VCache Party In Here

This is what bothered me. Ever since I read about this limitation with the 5800X3D (probably more than once from you discussing it in previous threads), the idea was that they didn't launch the 7000-series 3D parts until they had a fix that wouldn't require the compromises of the previous generation. I'd hoped they would find a compound stable enough to make it equivalent to the standard 7000 series in wattage, allow the same OC and frequency, and offer V-Cache on both CCDs. I guess the questions here are A) how much of a compromise is this, really, and B) is this something that could have been solved with another few months of development, or is it an admission that this is the best we get for this generation, try again in the fall? Is there a chance that in a few months we see some sort of 7990X3D: a 170 W, 4.5-5.7 GHz standard boost, overclockable chip with 3D V-Cache on both CCDs?

Of course, it's likely that even as it stands the 7950X3D is going to be an absolute monster, but I have to wonder what problems this will create and whether it could have been resolved with a bit more time. What will the overclock potential be, and how hard will things be locked down? What will the frequencies of the 3D CCD be versus the standard one? Will users be able to pick which one they want applications to run on, and how good will the default heuristics be? Will users be able to easily override any 'default' decisions without having to use proprietary manual thread-pinning applications? On the subject of overclocking, Asus was noteworthy for bringing the "dynamic OC" feature from the 5000-era Dark Hero to all their X670E boards (or at least all their ROG-branded ones). It lets users get the best of PBO2-style auto-to-threshold single/few-core performance and, when needed, swap over to manually set many/all-core turbo OCs. I wonder how this will be affected by a chip with both CCDs populated but heterogeneous V-Cache and potentially different max frequencies? Given the quote above that Riev90 cites, that the "bare chiplet can access stacked cache in the adjacent chiplet but this isn't optional and will be rare", more info on that situation will be interesting.

Ultimately we'll get more info in the lead-up to and after launch, but it will be nice to see how it behaves in edge cases and whether this design decision was a best-of-all-worlds or a necessitated compromise.
Fundamentally, the 3D stacking tech was developed with mobile phone SoCs in mind, where these temperatures and voltages would never be a thing, so that has to be remembered first.
The main reason for the delay was less about the voltage restrictions and more about the physical act of lining up the silicon; the mechanical process that does this was problematic and failed too often for commercial viability, and that is why it was delayed.
Ultimately there isn't really a means by which they can run the stacked cache at full voltages without risking a meltdown: the melting point of the adhesive must be lower than the maximum temperature the silicon can tolerate, otherwise just applying it would damage the silicon. They then have to further limit the operation of the CPU so that it can't run itself into a situation where, under some circumstances, it could get hot enough to re-melt that adhesion layer.
I doubt we'll see much overclocking potential for the X3D parts, and I trust AMD to have gotten the timings and frequencies to a point where there wouldn't be many benefits in doing so; RAM speeds, as always, are likely going to be the better option.

Seeing this design, what I want to know is: when the non-stacked CCD accesses the cache on the stacked side, is it still faster than if the extra cache weren't there at all?
If it is still faster, what are the chances that down the road AMD leans on Infinity Fabric more, like the MCDs on RDNA 3, and puts the extra cache external to the CCD? It may not be as fast, but if it's a net win, less complicated, and lets them use a cheaper process, wouldn't that be better overall, at least to a degree? This would be similar in concept to what Intel has decided to do with the new Xeons, putting RAM on the package to speed up operations.

Either way, the chips are cool even if they are a stepping stone, and I really look forward to reviews. The very fact that Intel and AMD have to innovate and go to such lengths to one-up each other is proof that there is strong competition, and it's resulting in advancements that the geek side of me is going crazy for.
 
The 13th gen didn't exactly blow away Zen 4. It's a different approach: lower wattage, high performance versus high wattage, high performance. X3D is the cherry on top to push far past what Intel can do.
You're right, it didn't blow away Zen 4. But Zen 4 doesn't blow away Raptor Lake either.

In my opinion 13th gen made AMD look like they pulled up short.

We will see what this cherry on top does for them, but if it's anything like the 5800X3D, it will be underwhelming in many areas. I suspect the 7900X3D and 7950X3D are there to compensate for what the frequency-limited 7800X3D lacks in some workloads.

I just keep thinking they're trying too hard with the stacked cache. Or it's just too much of a PITA. I will be interested to see what the non-V-Cache CCD's penalty is for accessing the V-Cache die. As I said before, maybe they could just make a mega cache die that sits "between" the CCDs, and both could access it via Fabric or something.

Any engineers on here who can speak to the feasibility of that?
 
The 13th gen didn't exactly blow away Zen 4. It's a different approach: lower wattage, high performance versus high wattage, high performance. X3D is the cherry on top to push far past what Intel can do.
I don't know about "far past"; it will be interesting to see how they do at 1440p and 4K.
Ultimately, at those resolutions you are going to run into more cases where you are GPU bound.
I highly doubt many would take a 7950X3D, pair it with a 7900 XTX or 4090, and then play at 1080p; that doesn't sound like a realistic use case to me. I'm sure it exists, but I'd assume it's an edge case, not the norm.
Yes, in tests that intentionally CPU-bind the workload, AMD shows a solid 20% lead over Intel in gaming, but it has been a long time since I played a game I was CPU bound in, outside single-player Civilization and Battletech, because computer AI is a hungry devil.

Benchmarks will tell all, as they do, but for normal usage I don't expect them to be as far above Intel as the slides claim.
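The GPU-bound argument above can be sketched as a toy model: the delivered frame rate is roughly the minimum of what the CPU and the GPU can each sustain, so a faster CPU only shows up where the GPU ceiling sits above the CPU's. All the numbers below are hypothetical, purely for illustration:

```python
# Toy model: delivered FPS is bounded by the slower of CPU and GPU.
# All figures are invented for illustration, not benchmark data.

CPU_FPS = {"CPU A": 240, "CPU B": 200}

# GPU throughput falls as resolution rises (illustrative values).
GPU_FPS = {"1080p": 300, "1440p": 180, "4K": 90}

def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    """The pipeline runs at the pace of its slower stage."""
    return min(cpu_fps, gpu_fps)

for res, gpu in GPU_FPS.items():
    for cpu_name, cpu in CPU_FPS.items():
        print(f"{res:>6} {cpu_name}: {delivered_fps(cpu, gpu):.0f} FPS")
```

At the hypothetical 1080p ceiling the two CPUs separate (240 vs 200 FPS); at 1440p and 4K both hit the same GPU wall, which is exactly why high-resolution reviews tend to flatten CPU differences.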
 
You're right, it didn't blow away Zen 4. But Zen 4 doesn't blow away Raptor Lake either.

In my opinion 13th gen made AMD look like they pulled up short.

We will see what this cherry on top does for them, but if it's anything like the 5800X3D, it will be underwhelming in many areas. I suspect the 7900X3D and 7950X3D are there to compensate for what the frequency-limited 7800X3D lacks in some workloads.

I just keep thinking they're trying too hard with the stacked cache. Or it's just too much of a PITA. I will be interested to see what the non-V-Cache CCD's penalty is for accessing the V-Cache die. As I said before, maybe they could just make a mega cache die that sits "between" the CCDs, and both could access it via Fabric or something.

Any engineers on here who can speak to the feasibility of that?
They made it work for RDNA 3, so it stands to reason they could make it work for Zen. Who knows, there's a lot to unpack here.
 
I don't know about "far past"; it will be interesting to see how they do at 1440p and 4K.
Ultimately, at those resolutions you are going to run into more cases where you are GPU bound.
I highly doubt many would take a 7950X3D, pair it with a 7900 XTX or 4090, and then play at 1080p; that doesn't sound like a realistic use case to me. I'm sure it exists, but I'd assume it's an edge case, not the norm.
Yes, in tests that intentionally CPU-bind the workload, AMD shows a solid 20% lead over Intel in gaming, but it has been a long time since I played a game I was CPU bound in, outside single-player Civilization and Battletech, because computer AI is a hungry devil.

Benchmarks will tell all, as they do, but for normal usage I don't expect them to be as far above Intel as the slides claim.
I think it's a circular argument. AMD fell short, but by very little. Now they will gain ground: by very little in general, but by a lot in edge cases (such as coming from a 5800X3D). The same gains that are trumpeted by 13900K owners over Zen 4 are coming to X3D.

The forgotten element of all of this is when you have the AMD platform you have several generations of this tit for tat that you can enjoy - even within your own ecosystem.

For anyone to claim that AMD didn't bring enough competition is crazy. Look at Intel's 11th gen - enough said. lol.
 
I don't know about "far past"; it will be interesting to see how they do at 1440p and 4K.
Ultimately, at those resolutions you are going to run into more cases where you are GPU bound.
I highly doubt many would take a 7950X3D, pair it with a 7900 XTX or 4090, and then play at 1080p; that doesn't sound like a realistic use case to me. I'm sure it exists, but I'd assume it's an edge case, not the norm.
Yes, in tests that intentionally CPU-bind the workload, AMD shows a solid 20% lead over Intel in gaming, but it has been a long time since I played a game I was CPU bound in, outside single-player Civilization and Battletech, because computer AI is a hungry devil.

Benchmarks will tell all, as they do, but for normal usage I don't expect them to be as far above Intel as the slides claim.

The various reviewers almost always showcase games specifically chosen to highlight the differences between processors. They obviously don't get any clicks if they show that a high-end video card and 4K resolution make those differences minimal, mostly just evening out spikes. It's a much more clickable thumbnail to show that a processor is "33% faster!" even if that 33% is the difference between 600 FPS and 700 FPS at 1080p in a 15-year-old game.
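That "33% faster" framing looks different in frame-time terms: going from 600 to 700 FPS saves well under a quarter of a millisecond per frame. A quick back-of-the-envelope check, not tied to any specific review:

```python
def frame_time_ms(fps: float) -> float:
    """Time budget per frame in milliseconds."""
    return 1000.0 / fps

# The headline-grabbing 600 -> 700 FPS jump, expressed as frame time.
saving = frame_time_ms(600) - frame_time_ms(700)
print(f"600 FPS -> {frame_time_ms(600):.3f} ms/frame")
print(f"700 FPS -> {frame_time_ms(700):.3f} ms/frame")
print(f"Per-frame saving: {saving:.3f} ms")  # ~0.24 ms
```

Compare that ~0.24 ms to the ~16.7 ms budget of a 60 FPS target, and it's clear why the same percentage lead evaporates in GPU-bound play.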
 
I think the biggest gain from the V-Cache chips is in minimum frame rates (the lows) rather than the maximums. I know I noticed the difference when I moved from a 5900X to a 5800X3D. I'll be interested to see how the higher-core-count chips perform with V-Cache; they may be the best of both worlds.
 
I think the biggest gain from the V-Cache chips is in minimum frame rates (the lows) rather than the maximums. I know I noticed the difference when I moved from a 5900X to a 5800X3D. I'll be interested to see how the higher-core-count chips perform with V-Cache; they may be the best of both worlds.
As am I.
Absolutely. So much of performance optimization in computing is about "not wasting time". The biggest time waste in many cases - waiting for data to arrive. That's often dominated by storage class transitions, which increased caching can mitigate.
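The "not wasting time" point is the textbook average memory access time (AMAT) relationship: AMAT = hit time + miss rate × miss penalty. A bigger L3 mainly cuts the miss rate, so fewer accesses pay the trip to DRAM. The latencies and miss rates below are illustrative assumptions, not measured figures for any of these chips:

```python
def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time: hits are cheap, misses pay the DRAM trip."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Illustrative figures: ~10 ns effective L3 hit, ~70 ns extra DRAM penalty.
baseline = amat(10.0, 0.10, 70.0)   # 10% of accesses miss L3
bigger_l3 = amat(10.0, 0.05, 70.0)  # assume the larger L3 halves the miss rate
print(f"baseline : {baseline:.1f} ns")   # 17.0 ns
print(f"bigger L3: {bigger_l3:.1f} ns")  # 13.5 ns
```

Halving an already-small miss rate shaves ~20% off average access time in this sketch, which is why cache-hungry games respond so strongly while workloads that fit in the normal L3 see little change.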
 
I think it's a circular argument. AMD fell short, but by very little. Now they will gain ground: by very little in general, but by a lot in edge cases (such as coming from a 5800X3D). The same gains that are trumpeted by 13900K owners over Zen 4 are coming to X3D.

The forgotten element of all of this is when you have the AMD platform you have several generations of this tit for tat that you can enjoy - even within your own ecosystem.

For anyone to claim that AMD didn't bring enough competition is crazy. Look at Intel's 11th gen - enough said. lol.
I don't know if "fell short" is the right way to think about it. If Intel or AMD were way ahead or way behind, that would undoubtedly be the case, but if they are both throwing us their best and landing nearly neck and neck, that means they are both pushing the silicon about as hard as they can. The fact that each thought they would smoke the other and they ended up in about the same place shows there is some healthy competition going on.
I like that AMD provides backward compatibility with their older platforms, but I don't plan around it. Look at how they burned us with Threadripper; if they did it once, they can certainly choose to again. But while it's there and available, I'll try to make the best of it.
 
I really feel that with the 7000 series AMD pulled up a bit short. They didn't fight hard enough for the highest performance, and they underestimated Intel's ability to squeeze performance out of an inferior node. How you do that, I have no idea; Intel has proven they can extract every damn iota of performance out of 14nm.

I kinda figured the 7000X3D chips would be the Gaming Shiznit. But if they're only good at extracting maximum 1080p FPS, they're yesterday's news. They must compete with the top of Intel's product stack at resolutions beyond that, which is something the 13900K has done, pulling some incredible FPS gains at 4K compared to every other processor out there.

So, when the reviews drop I will be impressed if these chips just murder Intel at all resolutions.

We need real competition and it needs to be a gawdamn war every generation, delivering amazing performance and forcing the competition to be competitive, forever.
It isn't really as black and white as you present it.
Averaged out, the 13900K does seem to take the overall win.

But

It depends heavily on the specific game you are playing. There are games where Zen CPUs are the absolute winners (Horizon Zero Dawn is an example),
and there are plenty of games where AMD's and Intel's best were within 10 FPS of each other.

What's more interesting to me, is what AMD and Intel have done, with their lower products.

The 7600X, averaged out, is less than 10 FPS behind AMD's best; any Zen 4 part gets you more or less the best gaming performance. You can buy a 7600X for $240 right now.

There is still a small gaming penalty in some games for the CPUs with two CCDs, so the 7700X beats the 7900X and 7950X in some titles. The trend with AMD, then, is that if you are focused on gaming you can actually skip the 7900X and 7950X, which is a good thing. Those CPUs should be for creators and work PCs; gamers can put that money into a GPU or other parts of the system, like $70 controllers and mice.

Intel's product stack is different. The 13600KF at $280 is amazing value for a PC user who wants great gaming (it matches or beats the 12900K) plus some creation or work capability without much compromise in either (it trades with the 12700K in creation/work). However, the $240 7600X does edge it out in gaming.

The 13700K/KF is, IMO, Intel's smart-buy middle ground for people who want near-top gaming and creation/work capability. The KF is $370 right now.

The 13900K is a true halo product. But it brings me back to my final point on AMD: they have seemingly thrown gamers a bone, letting them get top gaming performance from the low and middle of the stack. Intel's stack has more balance, but that also drives up the price for a really focused gamer.
Now the X3D parts will come in and push up the price of AMD's absolute top gaming. But that's also the market right now: if you want to pay more, you can get more.

Or, you can buy a 7600x or 13600k/f and match or beat 12th gen's best and be happy with that?
 
Not the guy you're responding to, but don't diss my 2.1 Cambridge Soundworks setup! It's 25 years old and Will. Not. Die.
I still use the 5.1 version of that (Creative Labs MegaWorks 5.1 THX 550)
Purchased ~2003, still going strong
 
Or, you can buy a 7600x or 13600k/f and match or beat 12th gen's best and be happy with that?
All while putting more money into the GPU and playing at resolutions where all the latest gen chips perform about the same anyways. Lots of text but well put. We’re hyped up about all this 1080p 600fps stuff, but I’m not sure why anymore, honestly. Grab a budget cpu and game at higher res with the money you save. If you’re gaming, don’t bother with the 13900 or 7900+, get the better gpu.
 
Will try to watch the video after work, but what was mentioned in particular about the boards?
Also - wonder how they plan to do that?
Easy. They essentially dialed back the PPT limit on the X3D and non-X parts from the insane values set for the 7700X / 7900X / 7950X.
Mobos can now be cost-reduced (smaller VRM stages, smaller VRM heatsinks).
(E.g., the 7900X in my sig runs at ~138 W package power vs. ~185 W stock under full MT load, while delivering the same ST/MT performance.)

That, and further cutting features (e.g. no PCIe 5.0 M.2 slots).
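The efficiency math on that 7900X example, taking the same-performance claim at face value: the same work at 138 W instead of 185 W is roughly a quarter less power, or about a third better performance per watt.

```python
# Package power figures quoted in the post above.
stock_w, tuned_w = 185.0, 138.0

power_cut = 1 - tuned_w / stock_w           # fraction of power saved
perf_per_watt_gain = stock_w / tuned_w - 1  # same performance, less power

print(f"Power reduction : {power_cut:.0%}")           # ~25%
print(f"Perf/W increase : {perf_per_watt_gain:.0%}")  # ~34%
```

Which is the whole argument for lower PPT defaults on the X3D parts: the last few percent of stock performance were costing disproportionate power, VRM capacity, and cooling.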
 
AMD also announced their laptop chips and took a jab at Apple, claiming to be 30% faster than the M1 Pro while also offering 30+ hours of video playback.
https://www.macrumors.com/2023/01/05/amd-new-chips-against-m1-pro/

While that's great and all, by the time we see laptops with these chips the M1 will be almost three years old. Granted, the M1 only does a little over 16 hours, so 30+ is impressive; I just find it odd that the M1 has been the mobile chip they've compared themselves against for the last two years.
 
Honestly, I'm not super impressed by the idea of getting one of these over a 5800X3D; it just doesn't seem worth the platform upgrade cost at this point. I'll wait another generation and hopefully an 8800X3D will be worth it.
 
Honestly, I'm not super impressed by the idea of getting one of these over a 5800X3D; it just doesn't seem worth the platform upgrade cost at this point. I'll wait another generation and hopefully an 8800X3D will be worth it.
What did you expect from a one generation jump?
 
The fact that only one CCD gets the extra cache on the 7900X and 7950X is cause for concern.

Some see it as the best of both worlds, where the non-V-Cache CCD can clock much higher for workloads that can benefit from that.

I see a CPU scheduling disaster waiting to happen, speaking as someone who recently got a 12700K and now has to use Process Lasso to keep programs off the E-cores, sometimes off of the SMT side of each P-core too. (My CPU frametimes in DCS drastically increased from that alone, but it's worth noting that it's still bound to one main thread and a second smaller thread for audio.)

I'm banking on the 7800X3D being the sweet spot for this alone - one fully enabled CCD with the cache, no cross-CCD latency issues or mismatched cache sizes to worry about, and lower price with only 8 cores enabled.
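For the scheduling worry above, the manual fallback that tools like Process Lasso automate boils down to CPU affinity. A minimal Linux sketch (Windows would use `SetProcessAffinityMask` instead), assuming, purely hypothetically, that the V-Cache CCD maps to logical CPUs 0-15:

```python
import os

# Hypothetical mapping: the V-Cache CCD appears as logical CPUs 0-15
# (8 cores + SMT). Check the real topology (e.g. lscpu) before pinning.
VCACHE_CPUS = set(range(16))

def pin_to_vcache(pid: int = 0) -> set:
    """Restrict a process (0 = the current one) to the cache-heavy CCD,
    intersected with the CPUs this machine actually has."""
    target = VCACHE_CPUS & os.sched_getaffinity(pid)
    os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)

print(sorted(pin_to_vcache()))
```

The point being: if the default scheduling turns out wrong for a given game, users aren't stuck, but having to do this per-process is exactly the hassle the 7800X3D's single-CCD layout avoids.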
I asked them about the scheduling issues as well as Windows 10. I was told that the core preference stuff is not as complex as the Alder/Raptor Lake setup and that the groundwork for it has been under construction for a few years (i.e. other Zen 1/2/3 optimizations and core preference things). Win 10/11 performance will allegedly be the same. Of course, we'll see how this actually pans out when the chips arrive.

I'm not sure it's a concern unless executed poorly - in theory, you get top end gaming performance and top end productivity performance out of the same chip, with your main trade off being a full 12/16 core load (i.e. Cinebench) will be slightly slower than the non-3D parts, but still faster than if you had all 3D cores.

If you want all 3D cores, then you can get a 7800X3D -> those are a single CCD and all cores will have 3D Cache.

So I wonder how the 12-core 7900X3D is going to work. Will it be 6 + 6 w/cache, or maybe a 4 + 8 w/cache configuration? I would think a 6 + 6 w/cache configuration could see some performance hits in gaming compared to the straight 8 w/cache 7800X3D. I haven't found this discussed anywhere yet.
7900X3D will be a 6+6 configuration (source: my meeting with them yesterday).
 
While that's great and all, by the time we see laptops with these chips the M1 will be almost three years old. Granted, the M1 only does a little over 16 hours, so 30+ is impressive; I just find it odd that the M1 has been the mobile chip they've compared themselves against for the last two years.
Right now Apple has no M2 Pro, and therefore the M1 Pro is a valid comparison. Even when Apple releases the M2 Pro, how would that compare to the new AMD chips? If AMD is correct then x86 has already matched and surpassed the performance and battery capabilities of Apple's Silicon. Most likely when AMD releases their chips, Apple will also release their M2 Pro. If the M2 Pro is anything like the M2, Apple will gain performance at the sacrifice of power consumption.
 
I asked them about the scheduling issues as well as Windows 10. I was told that the core preference stuff is not as complex as the Alder/Raptor Lake setup and that the groundwork for it has been under construction for a few years (i.e. other Zen 1/2/3 optimizations and core preference things). Win 10/11 performance will allegedly be the same. Of course, we'll see how this actually pans out when the chips arrive.
Relying on Microsoft and AMD to get the scheduler right rarely turned out great.
 
Right now Apple has no M2 Pro, and therefore the M1 Pro is a valid comparison. Even when Apple releases the M2 Pro, how would that compare to the new AMD chips? If AMD is correct then x86 has already matched and surpassed the performance and battery capabilities of Apple's Silicon. Most likely when AMD releases their chips, Apple will also release their M2 Pro. If the M2 Pro is anything like the M2, Apple will gain performance at the sacrifice of power consumption.
Yeah, but the point is AMD made the same claims with their 5000 and 6000 series mobile lineups; the 6000 series launch claimed over 20 hours of battery life, and none of it happened.
It's a fair comparison until Apple gets the new silicon out (which they claim was pushed back because of TSMC delays), but it's an old comparison they've used for a while and never actually delivered on.
So AMD can put up all the slides they want, but I want benchmarks that back it up and products on the shelves that actually deliver those theoretical numbers.
 
I asked them about the scheduling issues as well as Windows 10. I was told that the core preference stuff is not as complex as the Alder/Raptor Lake setup and that the groundwork for it has been under construction for a few years (i.e. other Zen 1/2/3 optimizations and core preference things). Win 10/11 performance will allegedly be the same. Of course, we'll see how this actually pans out when the chips arrive.

I'm not sure it's a concern unless executed poorly - in theory, you get top end gaming performance and top end productivity performance out of the same chip, with your main trade off being a full 12/16 core load (i.e. Cinebench) will be slightly slower than the non-3D parts, but still faster than if you had all 3D cores.

If you want all 3D cores, then you can get a 7800X3D -> those are a single CCD and all cores will have 3D Cache.


7900X3D will be a 6+6 configuration (source: my meeting with them yesterday).
The reviews should be interesting; I think I would have liked a 4 + 8 w/cache configuration for a gaming PC. I'm sure the 7800X3D will do amazingly well as an upgrade from my 9700K.
 
I find myself caring less and less about the desktop SKUs; grab a high-endish model of either brand and they'll be much the same. Mobile and server SKUs interest me more (yay for the Sapphire Rapids launch party on the 10th; I'm pretty excited for it, even though I can already access systems running them, and intrigued by how Intel is going to market it), since the mobile side shows a lot more differentiation. I like how AMD is pumping up the graphics on their APUs, and the built-in AI stuff is interesting, though Intel has something similar coming.

Desktop is just kind of boring these days, probably because of the high TDPs that no one seems to care about anymore (looking at you, GPUs).
 
I find myself caring less and less about the desktop SKUs; grab a high-endish model of either brand and they'll be much the same. Mobile and server SKUs interest me more (yay for the Sapphire Rapids launch party on the 10th; I'm pretty excited for it, even though I can already access systems running them, and intrigued by how Intel is going to market it), since the mobile side shows a lot more differentiation. I like how AMD is pumping up the graphics on their APUs, and the built-in AI stuff is interesting, though Intel has something similar coming.

Desktop is just kind of boring these days, probably because of the high TDPs that no one seems to care about anymore (looking at you, GPUs).
I only buy desktop parts for my gaming system, and maybe the VR box. I'm ready for Sapphire Rapids too.
 
I asked them about the scheduling issues as well as Windows 10. I was told that the core preference stuff is not as complex as the Alder/Raptor Lake setup and that the groundwork for it has been under construction for a few years (i.e. other Zen 1/2/3 optimizations and core preference things). Win 10/11 performance will allegedly be the same. Of course, we'll see how this actually pans out when the chips arrive.

I'm not sure it's a concern unless executed poorly - in theory, you get top end gaming performance and top end productivity performance out of the same chip, with your main trade off being a full 12/16 core load (i.e. Cinebench) will be slightly slower than the non-3D parts, but still faster than if you had all 3D cores.

If you want all 3D cores, then you can get a 7800X3D -> those are a single CCD and all cores will have 3D Cache.


7900X3D will be a 6+6 configuration (source: my meeting with them yesterday).
Much as I'd like to believe that, theory has a way of being unseated by reality.

I'll keep Process Lasso at the ready, just in case the default scheduling behavior doesn't work out.

What I really want to know is how this will affect overclocking, since now we'd have to overclock per CCD, with the V-Cached one hitting limits sooner. I'm not sure the UEFI on AM5 boards is set up for that.

If the 7900X3D is 6+6, then that definitely pushes me toward either the 7800X3D (8+0, only the cached CCD active) or the 7950X3D (8+8) were I to do any Zen 4 builds. Eight homogeneous, performance-oriented cores is my minimum nowadays.
 
Much as I'd like to believe that, theory has a way of being unseated by reality.

Indeed. All we have now is a Tommy Boy style guarantee :).

What I really want to know is how this will affect overclocking, since now we'll have to overclock per CCD due to the V-Cached one hitting limits sooner. I'm not sure if the UEFI on AM5 boards is set up for that.

Curve Optimizer will be available; however, overclocking will not be supported, and maximum Vcore will be 1.4 V. The main use case they discussed for Curve Optimizer was pulling down power use for a given frequency/workload rather than boosting performance. I don't expect there will be much you can get out of these chips beyond the factory configuration.

I'll try to sum up the way they described the limitations:
  • Gaming/lightly threaded applications
    • Regular CCD - should see performance comparable to the non-X3D parts.
    • X3D CCD - should see lower performance (due to lower frequency), except in gaming/cache-hungry apps.
    • Note that some games will still do better on the regular CCD.
  • Fully loaded system (i.e. Cinebench)
    • 7800X3D - should see a gap versus the 7700X comparable to the 5800X3D's versus the 5800X.
    • 7900X3D/7950X3D - the overall gap versus the non-X3D parts will be smaller than the 7800X3D's, but they will still be slower, since the X3D CCD runs at a lower frequency and power limit than on the regular parts.
Hopefully that makes sense; I have nothing to back this up other than what AMD told me. Once the chips arrive next month, we'll have a better idea of how accurate it is.
 
This heterogeneous-cores-on-a-single-chip trend for desktop/workstation CPUs is ridiculous. The scheduling complexity of Intel's P+E config is bad enough (think of the amount of workload migration and heuristic nonsense the kernel has to do to properly "balance" across P and E cores), and now different logical cores have different performance characteristics? WTF. Have these companies not learned anything from how hard it is to properly optimize for NUMA/multi-socket/multi-node systems? I suspect they just don't give a shit, and once again (now in hardware architecture/design too): https://tonsky.me/blog/disenchantment/.

I wonder how much performance is lost due to all the overhead of managing this garbage on a general purpose OS and the general software free-for-all (so not something largely vertically integrated like iOS/Android phones).

I also wonder how many more silly-named vulnerabilities lurk in this mountain of complexity.
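To make that complexity concrete, here is a toy version of the kind of heuristic a scheduler (or a vendor driver) has to get right on a split-cache part: classify each thread and steer it to the matching core set. The core numbering, thread names, and miss-rate threshold are all invented for illustration:

```python
from dataclasses import dataclass

# Invented core sets for a hypothetical 8+8 split-cache part.
VCACHE_CORES = set(range(0, 8))   # lower clocks, big L3
FREQ_CORES = set(range(8, 16))    # higher clocks, normal L3

@dataclass
class Thread:
    name: str
    l3_miss_rate: float  # misses per access, e.g. from perf counters

def place(t: Thread) -> set:
    """Toy heuristic: cache-hungry threads go to the V-Cache CCD,
    everything else chases frequency. Real schedulers juggle far more
    (migration cost, load balance, power limits, thermal headroom)."""
    return VCACHE_CORES if t.l3_miss_rate > 0.05 else FREQ_CORES

game = Thread("game_render", l3_miss_rate=0.12)
encoder = Thread("encode_worker", l3_miss_rate=0.02)
print(place(game) == VCACHE_CORES)
print(place(encoder) == FREQ_CORES)
```

Even this one-line policy needs accurate per-thread counters and a sensible threshold; get either wrong and a game lands on the frequency CCD, which is exactly the failure mode the post above is worried about.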
 
They made it work for RDNA 3, so it stands to reason they could make it work for Zen. Who knows, there's a lot to unpack here.
Yeah, other than the fact that I paid a grand for some shit that lights on fire because an engineer dropped the ball... RDNA3 is amazing. Perhaps you should unpack the rest of my fudge pack, lol
 
It isn't really as black and white as you present it.
Averaged out, the 13900K does seem to take the overall win.

But

It depends heavily on the specific game you are playing. There are games where Zen CPUs are the absolute winners (Horizon Zero Dawn is an example),
and there are plenty of games where AMD's and Intel's best were within 10 FPS of each other.

What's more interesting to me, is what AMD and Intel have done, with their lower products.

The 7600X, averaged out, is less than 10 FPS behind AMD's best; any Zen 4 part gets you more or less the best gaming performance. You can buy a 7600X for $240 right now.

There is still a small gaming penalty in some games for the CPUs with two CCDs, so the 7700X beats the 7900X and 7950X in some titles. The trend with AMD, then, is that if you are focused on gaming you can actually skip the 7900X and 7950X, which is a good thing. Those CPUs should be for creators and work PCs; gamers can put that money into a GPU or other parts of the system, like $70 controllers and mice.

Intel's product stack is different. The 13600KF at $280 is amazing value for a PC user who wants great gaming (it matches or beats the 12900K) plus some creation or work capability without much compromise in either (it trades with the 12700K in creation/work). However, the $240 7600X does edge it out in gaming.

The 13700K/KF is, IMO, Intel's smart-buy middle ground for people who want near-top gaming and creation/work capability. The KF is $370 right now.

The 13900K is a true halo product. But it brings me back to my final point on AMD: they have seemingly thrown gamers a bone, letting them get top gaming performance from the low and middle of the stack. Intel's stack has more balance, but that also drives up the price for a really focused gamer.
Now the X3D parts will come in and push up the price of AMD's absolute top gaming. But that's also the market right now: if you want to pay more, you can get more.

Or, you can buy a 7600x or 13600k/f and match or beat 12th gen's best and be happy with that?
That is one helluva good breakdown.
 
Yeah, other than the fact that I paid a grand for some shit that lights on fire because an engineer dropped the ball... RDNA3 is amazing. Perhaps you should unpack the rest of my fudge pack, lol
It's more like some 12-year-old in a Chinese sweatshop had a faulty pressure gauge, so they couldn't correctly fill a vapour chamber.
But yes.
 