7600X3D is here! ... sort of.

OKC Yeakey Trentadue | [H]ard|Gawd | Joined Nov 24, 2021 | Messages: 1,442
So AMD is releasing the 7600X3D, but as with the 5600X3D, it will be exclusive to Microcenter. Furthermore, you will need to purchase it as a bundle with 32 GB of RAM and an ASUS B650 motherboard, all for $450.

For most, this makes it inaccessible, since few people live near a Microcenter, and undesirable for many who already have the RAM and motherboard or don't like the included ones.

That said, $450 is only slightly more than the standalone 7800X3D, so if you are lucky enough that it checks all the boxes, it could be a nice savings of $200 or so while staying relatively close in gaming performance.

https://www.techpowerup.com/326111/...center-exclusive-for-usd-300-part-of-a-bundle
 
That's 300 MHz down from the 7800X3D, which is a bit more than the 100 MHz drop the 5600X3D saw. The 8-core 5700X3D saw a 400 MHz drop from the 5800X3D.

Still, it should be faster than any non-X3D CPU from AMD.
 
Not as big a savings as it appears to be. The current Microcenter 7800X3D deal is $550, so a $100 savings. It regularly goes down to $500, so really only $50 for patient buyers. The cheapest 7800X3D Microcenter deal was $465, and it has dropped to $470 on a few occasions this year. I would expect to see another $470 deal by the holidays, which would make this $450 7600X3D deal look less appealing. It needs to drop to $400 to be a good buy.
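The savings math above works out like this (a quick back-of-the-envelope; the prices are just the Microcenter deals quoted in this post):

```python
# Back-of-the-envelope savings vs. the 7800X3D deals mentioned above.
bundle_7600x3d = 450          # 7600X3D + 32 GB RAM + ASUS B650 bundle
deal_7800x3d_current = 550    # current Microcenter 7800X3D deal
deal_7800x3d_typical = 500    # price it regularly drops to
deal_7800x3d_best = 465       # cheapest deal seen

print(deal_7800x3d_current - bundle_7600x3d)  # 100
print(deal_7800x3d_typical - bundle_7600x3d)  # 50
print(deal_7800x3d_best - bundle_7600x3d)     # 15
```

At the best historical 7800X3D price, the bundle's edge shrinks to $15, which is the whole argument for waiting.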
 
Exactly my observation. There have been so many 7800x3d bundles within $50 of this. I’d wait it out or spend the extra coin on the 7800x3d.
 
Tom's was the first I saw to check out the 7600x3d:
[Attached screenshot: Tom's Hardware gaming benchmark chart]

The 7800X3D was about 9% faster. If AMD ever releases a 7700X3D, the gap would likely be even smaller, as a couple of games were punishing for the 6-core part.

The real story is that the 7600X3D still manages to edge out the 14900K even with the new updates. Arrow Lake now has a real uphill battle if it wants to beat the 7800X3D or the upcoming 9800X3D.

https://www.tomshardware.com/pc-com...-tested-ryzen-5-7600x3d-gaming-benchmarks-too
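For scale, here is what that roughly 9% average gap means in frame-rate terms. The fps baseline below is made up purely for illustration; only the ~9% figure comes from the review:

```python
# Illustrative only: apply the ~9% average gap from the Tom's Hardware
# review to a hypothetical 100 fps baseline for the 7600X3D.
fps_7600x3d = 100.0           # hypothetical baseline, not a measured number
gap = 0.09                    # ~9% from the review averages
fps_7800x3d = fps_7600x3d * (1 + gap)
print(round(fps_7800x3d, 1))  # 109.0
```

In other words, at 100 fps you are looking at single-digit fps between the two chips.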
 
If they went all out and bought TSMC 3nm for Arrow Lake's compute tile, and were the first to push high wattage on it, it should perform well but would be really hard to make cheap. Both would be hard if they went with Intel 20A for compute.

Going from a boosted 2021 CPU on Intel 10 to a new architecture on TSMC 3nm, it would be a bit sad if they did not manage to beat a 7800X3D.

Of note: for the Tom's Hardware benchmarks above, were they run with DDR5-4800 for stock Intel and 5600 for overclocked Intel, versus 6000 with EXPO on for AMD, like these?
https://www.tomshardware.com/pc-components/cpus/amd-ryzen-5-9600x-cpu-review/4

The 14900K is not that far behind with realistic RAM speeds.
 
DDR5-6000 with EXPO for AMD; not sure about Intel. The point of their article, though, is that other reviewers understated Zen 5's improvements, because DDR5-5200 should have been used for Zen 4 and 5600 for Zen 5, as those are the rated speeds.
 
Also, node shrinks are hardly indicative of performance. Given the same architecture, I have often seen more gained from node refinements than from node shrinks.
 
I couldn't tell you the nuance, but the last time Intel had a significant gain (12900K) was when they upgraded their node.

9900K: Intel 14nm
10900K: Intel 14nm
11900K: Intel 14nm
12900K: Intel 10nm
13900K: Intel 10nm
14900K: Intel 10nm


If my quick glance isn't off, there are exceptions, I imagine, but it must be heavily correlated. And here we could be talking about one of the biggest jumps in modern times, on the level of Nvidia going from Samsung 8nm (Ampere) to its custom TSMC 5nm node? Or am I underrating where Intel's foundry was in 2021?

And here it would be quite different architecture-wise (which makes it far more likely that performance doesn't increase at all, and impossible to predict; they could have gone all-in on efficiency, integrated GPU/NPU performance, AVX, and so on). If the Lunar Lake embargo lifts, maybe there will be something to extrapolate from. Intel could keep which node made the compute tile secret until the last minute, but if it weren't embarrassing and they had managed to make Intel 20A work, it would be a strange decision not to advertise it.
 

That 10nm was a LONG time coming, and I suspect early 10nm would not have helped Coffee Lake much and would have been far worse than 14nm for the original Skylake.

22nm Ivy Bridge wasn't much of an improvement over 32nm Sandy Bridge.

Going back further, the FSB bump of 65nm Conroe (Core 2 E6600 to E6750) was a bigger performance improvement than going to 45nm Wolfdale.

I think the RX 590 was a die shrink too, but that did nothing over the RX 580.

Zen+ was probably the biggest boost to performance from just a die shrink that I can remember.

Obviously die shrinks are needed if we want to advance, but they are typically less effective than architecture improvements, or even stepping changes (silicon quality), when talking performance, or sometimes even power savings.
 
There are some exciting rumors about Arrow Lake beyond raw performance. I am hoping they try something like what they did with the 5775C and its L4 cache. That CPU's gaming capability wasn't fully realized until years after its release.

Crazy to think that 14nm lasted from Broadwell to Rocket Lake.
 
22nm Ivy Bridge wasn't much of an improvement over 32 nm Sandy Bridge.
A 3770K had a 160 mm² die versus 216 mm² for the 2600K (if TPU is not wrong), so it could have been a massive improvement, just not one passed on to the customer.

I don't mean that a die shrink is a performance improvement in itself; I mean a better node pays off if you actually use it to make the die smaller.
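The die-area gap being pointed at works out to roughly a quarter of the area (figures are the TPU numbers quoted above):

```python
# Die areas (mm^2) per TechPowerUp, as cited in the post above.
die_2600k = 216   # Sandy Bridge i7-2600K, 32nm
die_3770k = 160   # Ivy Bridge i7-3770K, 22nm

reduction = (die_2600k - die_3770k) / die_2600k
print(f"{reduction:.1%}")  # 25.9%
```

A ~26% smaller die is real cost savings for Intel, which is the point: the node gain went to margins rather than to customer-visible performance.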
 
Looks a bit like a prehistoric X3D CPU? Extra cache shining in things like games.

Pretty much, and I don't think games in 2015 leveraged the cache as much as they did by 2020. Combined with the fact that Skylake was so successful, Intel's 5th gen was overshadowed and its L4 potential was forgotten as we got refined FinFET 14nm, which helped deliver faster 6-, 8-, and even 10-core versions of the newer architecture.

Would be interesting to see how a 5775C does against a 6700K in today's games.
 