AMD announces Ryzen 7000 Zen 4 CPUs

The most intriguing thing here is whether they used their new auto memory overclocking tool, because the presentation used DDR5-6000 @ CL30, which is not a common RAM kit (nor the official spec).
That alone could account for a slice of that 15% gain too.
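Rough back-of-the-envelope on why that kit matters; this is just my own arithmetic, and the DDR5-4800 CL40 baseline is my assumption, not something from the presentation:

```python
# First-word CAS latency in ns: CL is counted in memory-clock cycles,
# which run at half the transfer rate, so latency_ns = CL * 2000 / MT/s.
def cas_latency_ns(cl: int, mts: int) -> float:
    return cl * 2000 / mts

kits = [
    ("DDR5-6000 CL30 (kit used in the presentation)", 30, 6000),
    ("DDR5-4800 CL40 (assumed 'common' baseline kit)", 40, 4800),
]

for name, cl, mts in kits:
    print(f"{name}: ~{cas_latency_ns(cl, mts):.1f} ns")
# ~10.0 ns vs ~16.7 ns -- a noticeably quicker kit than what most builds run,
# so it could plausibly contribute to that 15% figure.
```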
 
Maybe that's the real reason for an HFboards member to like it (I would add longer CPU life as a server one day; it's also handy to have a small iGPU to plug a monitor into if you need it during initial setup), but not necessarily the real reason AMD added them.

I meant real reason to desire it on a CPU. I think for most practical cases, it will be used when someone doesn't have a dedicated GPU available.
 
The number of times I've sold an Intel system instead of an AMD system because the i9s come with integrated video and the Ryzen 9s do not...

Sometimes you need number-crunching Excel workhorses without any need for a dedicated GPU. And the extra money for even a basic GPU can make the AMD system no longer price competitive.
 
I meant real reason to desire it on a CPU. I think for most practical cases, it will be used when someone doesn't have a dedicated GPU available.
Again, that's the real reason to desire it on a CPU from the point of view of an HFboards member, maybe. For an enterprise, or for people whose workload doesn't need 3D graphics at all, it means never having to buy a graphics card, not even a 730/1030-type card. That can easily tip the offer towards Intel versus AMD plus a graphics card; in enterprise it usually makes the choice between the two a no-brainer.
 
One ought to be able to use the same type of interconnects that are used with the stacked 3D design and just mate them on the side instead.

Also, another solution would be to put the hottest layer on top closest to the heat spreader. I can't imagine the cache generates a lot of heat, so stick that on the bottom and allow the hot parts to mate directly to the IHS.
Unfortunately there are no delids out in the wild, but per the marketing it is stacked on top, which does cause thermal issues. The voltage range was the 'official' reason for the lack of OC; I'd say it could contribute (HBM was also pretty sensitive to heat/voltage), and also they didn't want to shit on the incoming Zen 4 too badly.
The reason there is no Zen 4 V-Cache yet is that the interconnect vias are troublesome on the smaller process; they need more time to perfect it and reduce defect rates. I'd say it's also a way to sell more CPUs later without needing a tock-style process change.

The reason it's stacked on top is the same reason there are no 16c CCXs: latency, fabrication and interconnect complexity get extremely high that way (butter donut and all the other weird topology names they use), because each core has to chat to each core. 8 is already pushing it; 16 means a vastly higher number of interconnects if you look at the math for it. It's why the high-core-count Intel stuff in prior generations had poor core-to-core latency for the higher/further cores, and what killed them on MT. AMD isolated the cores to 8 per CCX and destroyed Intel in MT this way; the latency of the Zen 2/3 octa-core stuff was about as good as the best Intel quads using this method, unless it had to go cross-CCD, where the latency of course spiked, from memory to around 120 ns or so (don't have my PC with the latency charts, but you can still find them online I think).
Basically latency with cache is critical for all cores, or you are wasting cycles on cache misses.
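To put rough numbers on the "each core has to chat to each core" point above (a fully connected mesh is an idealisation; real topologies trade direct links for extra hops, which is exactly the latency problem):

```python
# Direct point-to-point links needed if every core in a CCX talks to
# every other core: n * (n - 1) / 2 (fully connected assumption).
def pairwise_links(cores: int) -> int:
    return cores * (cores - 1) // 2

for n in (4, 8, 16):
    print(f"{n:2d} cores -> {pairwise_links(n):3d} direct links")
# 4 -> 6, 8 -> 28, 16 -> 120: doubling the CCX roughly quadruples the wiring,
# which is why bigger clusters fall back on rings/meshes and eat the latency.
```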
AMD stuck with chiplets for this reason as well (yield, and the analogue I/O process needing a larger node, aside), so going to a longer interconnect length and side-mounting (i.e. not vertically stacked, but alongside as you suggest) would be problematic not only for packaging (there are only so many vias they can run underneath on the PCB/chiplet), but the physical length of the trace would make the cache behave more like external, if much closer, RAM. Stacking this way [vertically] is basically the only way they can gain from it in most workloads. It's like the LG CX 42" vs the Alienware OLEDs right now: although the AW does 175 vs 120 Hz, the input lag it takes to do it makes the CX actually more responsive even at lower FPS in gaming. People with both say it's noticeable even on the desktop.
So for example, a side cache might allow higher clock speeds, but the advantage is mooted by the noticeably higher cache latency. I'd guess something around 20-30 ns or more vs ~13 ns for the current X3D, which is only 2 ns higher than the 5800X [see image attached: 5800X3D on top, 5800X below]. I would personally look at a new socket with a thermal path underneath in the centre (e.g. a dual-sided heat spreader) and stack underneath, allowing for higher core speeds [edit: this would be a nightmare with the I/O chip and routing either way]. And some exotic heat spreaders to boot ;) MLCCs/caps would be tricky jammed all around and would cause their own set of weird path/latency/resonance/capacitance-in-trace issues. I pity CPU designers with these later designs.
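To put those guesses in core-clock terms (my own quick conversion; the ~25 ns side-cache number is pure speculation for illustration, only the ~13 ns X3D figure comes from the measurements discussed here):

```python
# Convert L3 latency into core cycles at a given clock: cycles = ns * GHz.
def ns_to_cycles(latency_ns: float, clock_ghz: float) -> float:
    return latency_ns * clock_ghz

CLOCK_GHZ = 4.5  # assumed boost clock, for illustration only
for label, lat_ns in (("stacked X3D L3 (~13 ns)", 13.0),
                      ("hypothetical side cache (~25 ns)", 25.0)):
    print(f"{label}: ~{ns_to_cycles(lat_ns, CLOCK_GHZ):.0f} cycles at {CLOCK_GHZ} GHz")
# ~59 vs ~113 cycles per L3 hit -- a few hundred MHz of extra clock
# wouldn't come close to covering that gap on cache-heavy workloads.
```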

Another thing to consider: sure, they could add more layers to the lithographic process to try to address some of this (e.g. 16 cores, side caches surrounding the chiplet, etc.). However, that means every die takes x hours longer to make, reducing output and costing more, and the interposer also needs to be much more complex; that added complexity in turn means a higher chance of yield defects. So overall, going vertical is the least-hassle, most cost- and time-effective way for them to do it.

[attached image: 5800x3d-cache-mem.jpg]
okay fuck .webp (wtf is that bullshit anyway, jpg works fine ffs)
> 5800X3D on top, 5800X below - credit to Hot Hardware

Even with a slight latency bump it still impacts some workloads. I'd love to see a 4.5 GHz 5800X vs 5800X3D benchie session; I think one exists somewhere. It would give a better understanding of this cache difference. You can also see slightly higher latency for the other caches; I'd guess it's something to do with the via/interconnect routing causing slightly longer pathing on-die for L1/L2. They have to be a slightly different chiplet design (routing) to achieve this, then.

The number of times I've sold an Intel system instead of an AMD system because the i9s come with integrated video and the Ryzen 9s do not...

Sometimes you need number-crunching Excel workhorses without any need for a dedicated GPU. And the extra money for even a basic GPU can make the AMD system no longer price competitive.
Same here; at least 2-3 times AMD lost out on a sale because a GPU was useless for the person I was building for, so Intel had better bang for buck and thus a faster rig as a result. And no, a GT 730/930 or whatever isn't a GPU. It's e-waste.
 
Is it me, or is PCIe 5.0 on the Zen 4s too early? It seems like there will be very few things available that will support it at the time these launch.
 
Is it me, or is PCIe 5.0 on the Zen 4s too early? It seems like there will be very few things available that will support it at the time these launch.
Yeah, too early; it's hard even finding PCIe 5 in enterprise right now. But if they want this new platform to have longevity similar to the AM4 platform, it's best to have it in now rather than try to shoehorn it in later like they did with 4.0.
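For context on what the 5.0 checkbox actually buys, rough per-lane numbers from the raw transfer rates (my own arithmetic, ignoring protocol overhead, so real-world figures land a bit lower):

```python
# Usable bandwidth per PCIe generation: GT/s per lane * 128/130 encoding / 8 bits.
GENS = {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0": 32}  # GT/s per lane

def lane_gb_s(gt_s: int) -> float:
    return gt_s * (128 / 130) / 8  # GB/s per lane

for gen, gt in GENS.items():
    per_lane = lane_gb_s(gt)
    print(f"{gen}: ~{per_lane:.2f} GB/s per lane, "
          f"x4 NVMe ~{per_lane * 4:.1f} GB/s, x16 slot ~{per_lane * 16:.0f} GB/s")
# A 5.0 x4 drive slot tops out around ~15.8 GB/s -- double a 4.0 drive --
# even though nothing on the consumer side saturates that yet.
```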
 
Marketing bullet points? Sounds like there are some PCIe 5 SSDs on their way though.
I would assume that it's the controller that is PCIe 5, not the drives those SAS connectors go to.
Yeah, PCIe 5 NVMe will eventually show up,
but I don't know of a single add-in card right now...
 
The main advantage I see is if you still have your rig in 3-5...10 yrs when they release hardware that's compatible, you can just drop it in. Your CPU will probably still be sufficient, although other components may have left you in the past.
 
Is it me, or is PCIe 5.0 on the Zen 4s too early? It seems like there will be very few things available that will support it at the time these launch.
That seems like a very short-sighted thing to say. It's backwards compatible so it's not like you have to use a 5.0 device but it will be there for your future upgrades.
 
That seems like a very short-sighted thing to say. It's backwards compatible so it's not like you have to use a 5.0 device but it will be there for your future upgrades.
I understand where you're coming from. My thought when I said that was there seems to be little reason to go for 5 now. I'd give it a year or so when there's more momentum for it, but hey who knows. Maybe we'll get lucky and the upcoming gen of GPUs could support it, but I'm not holding my breath for it. I fully agree it will be highly worth it down the road.
 
Is it me, or is PCIe 5.0 on the Zen 4s too early? It seems like there will be very few things available that will support it at the time these launch.
Not if the next generation of AMD video cards uses PCIe 5.0 (and that's something they would have known about for a long time, if it is the case).

And PCIe 5.0 x4 NVMe could already be out by 2023 for regular users:
https://en.overclocking.com/as2280f5-twsg5-a-first-pcie-5-0-ssd-from-apacer-and-zadak/
https://www.storagereview.com/review/samsung-pm1743-pcie-gen5-ssd-first-take-review
https://www.fierceelectronics.com/e...s-strorage-controller-supporting-pcie-50-spec

Maybe they could have skipped the first generation of AM5 chipsets/CPUs, but Intel would have had it for a while.
Maybe a later 2023 refresh would have been enough and saved some headache (apparently PCIe 5.0 is quite hard on motherboards).

I think AMD has dual chipset chips on the motherboard now to make the routing possible. Arguably, if you want to launch a new chipset tier (X670 Extreme), having full PCIe 5.0 and the dual chipset is a good way to do it, and a chipset generation change is a good time to introduce a new tier.
 
Again, that's the real reason to desire it on a CPU from the point of view of an HFboards member, maybe. For an enterprise, or for people whose workload doesn't need 3D graphics at all, it means never having to buy a graphics card, not even a 730/1030-type card. That can easily tip the offer towards Intel versus AMD plus a graphics card; in enterprise it usually makes the choice between the two a no-brainer.

Yeah, but in general those people aren't going to use a regular consumer/gamer-oriented CPU. The typical $200-400 CPUs are generally used by gamers or people who would have a GPU. Once you get into really high-end uses, people tend to use the CPUs designed for that stuff: Threadripper and that type of thing.
 
Not if the next generation of AMD video cards uses PCIe 5.0 (and that's something they would have known about for a long time, if it is the case).
I'm hoping this is the case. Like I mentioned, hopefully we'll get a pleasant surprise with the RX 7000 series. Maybe Nvidia also has this up their sleeve.
And PCIe 5.0 x4 NVMe could already be out by 2023 for regular users:
https://en.overclocking.com/as2280f5-twsg5-a-first-pcie-5-0-ssd-from-apacer-and-zadak/
https://www.storagereview.com/review/samsung-pm1743-pcie-gen5-ssd-first-take-review
https://www.fierceelectronics.com/e...s-strorage-controller-supporting-pcie-50-spec

Maybe they could have skipped the first generation of AM5 chipsets/CPUs, but Intel would have had it for a while.
Maybe a later 2023 refresh would have been enough and saved some headache (apparently PCIe 5.0 is quite hard on motherboards).

I think AMD has dual chipset chips on the motherboard now to make the routing possible. Arguably, if you want to launch a new chipset tier (X670 Extreme), having full PCIe 5.0 and the dual chipset is a good way to do it, and a chipset generation change is a good time to introduce a new tier.
Good points. Thanks for the info.
 
Yeah, but in general those people aren't going to use a regular consumer/gamer-oriented CPU. The typical $200-400 CPUs are generally used by gamers or people who would have a GPU. Once you get into really high-end uses, people tend to use the CPUs designed for that stuff: Threadripper and that type of thing.
I could see an advantage of this in a multi-monitor setup, where someone uses one screen for gaming and one for other things. For example, I use a second screen for Discord, web browsing, etc. while I'm in-game, like many others most likely. The advantage would come from being able to use the GPU exclusively for the gaming screen and let the CPU handle the secondary monitor. The savings might not be significant, but unless the CPU is running really hard, I don't see it being counterproductive.
 
Yeah, but in general those people aren't going to use a regular consumer/gamer-oriented CPU. The typical $200-400 CPUs are generally used by gamers or people who would have a GPU. Once you get into really high-end uses, people tend to use the CPUs designed for that stuff: Threadripper and that type of thing.
I am talking about the type of people that work on something like this:
https://www.dell.com/en-ca/shop/des...op/spd/inspiron-3910-desktop/di3910_sb_s6018e

At work I have had many regular Intel CPUs with iGPUs over the years since they became popular. A lot of developers do not need much GPU, can use as much CPU as they can get for reasonable money, and Threadripper is a bit too much/pricey (you do not use many PCIe lanes). The same goes for the heavy Excel worksheet type.
 
Yes, and some games run slower on it in some scenarios than on a 5800X, and even at 1080p it has a 0% effect on some really big titles. Had AMD used the worst-case scenario for the 5800X3D, they would have said 0% gain at 1080p; they obviously never do that.
I would guess in games that prefer the 5800X if and when it clocks higher? So a micro bump then?
And as far as the 1080p comparison showing a 0% gain, it has been my understanding that it is hard for one CPU to excel over another when they are both GPU-bound. I don't think any of your points detract from the 5800X3D's true benefits.
 
Is it me, or is PCIe 5.0 on the Zen 4s too early? It seems like there will be very few things available that will support it at the time these launch.
When your competitor has a check mark and you don't? Competitive suicide when you are in a marketing segment that lives for check marks whether practical or not. Even more so when you are touting being the "latest and greatest" which is expected from Zen4
 
When your competitor has a check mark and you don't?
Yep. I didn't know before I made the post that Intel already had it. In my mind, that means they jumped the gun even earlier than AMD has. But it's future-proofed too, which I get.
Competitive suicide when you are in a marketing segment that lives for check marks whether practical or not.
This is a profound statement about the industry as a whole. It baffles me how some components can command a hefty premium because they can do something special, or faster than everything else, even if there is no practical real-world benefit to it. And people will buy it while hardly batting an eye. And I'm not judging; I think all this stuff is cool and interesting, but I personally don't need it.
Even more so when you are touting being the "latest and greatest" which is expected from Zen4
Fair. I agree the most with this.
 
Yep. I didn't know before I made the post that Intel already had it. In my mind, that means they jumped the gun even earlier than AMD has. But it's future-proofed too, which I get.

This is a profound statement about the industry as a whole. It baffles me how some components can command a hefty premium because they can do something special, or faster than everything else, even if there is no practical real-world benefit to it. And people will buy it while hardly batting an eye. And I'm not judging; I think all this stuff is cool and interesting, but I personally don't need it.

Fair. I agree the most with this.
As an example: I currently work on an over-a-decade-old laptop which originally came with an, at the time, ludicrously fast SATA 3 connection, when the HDD it came with wouldn't even have stretched a SATA 1 link (heh). Thanks to SSDs becoming affordable, that connection now makes the difference between a pile of crap and a laptop I can work on, however ancient it may be.

Yesterday's premium feature became today's money-saver.
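Quick numbers behind that SATA point above; the link caps follow from the line rates with 8b/10b encoding, and the drive throughputs are typical figures I'm assuming, not measurements from that laptop:

```python
# Approximate usable link bandwidth per SATA generation (MB/s), compared
# with typical sequential throughput for the drives involved.
SATA_MB_S = {"SATA 1": 150, "SATA 2": 300, "SATA 3": 600}  # ~line rate * 8/10 / 8

drives = {"old laptop HDD (typical)": 90, "SATA SSD (typical)": 540}  # assumed MB/s

for name, mb_s in drives.items():
    enough = next(gen for gen, cap in SATA_MB_S.items() if mb_s <= cap)
    print(f"{name}: ~{mb_s} MB/s -- already fits within {enough}")
# The original HDD never needed more than SATA 1; the SSD is the first thing
# that actually makes that SATA 3 link worth having.
```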
 
I would guess in games that prefer the 5800X if and when it clocks higher? So a micro bump then?
And as far as the 1080p comparison showing a 0% gain, it has been my understanding that it is hard for one CPU to excel over another when they are both GPU-bound. I don't think any of your points detract from the 5800X3D's true benefits.
I feel like I lost the plot. Some games do not take much advantage of the larger cache and show a 0% gain (probably because of the lower clock) at 1080p on a 6900 XT, where you are not fully GPU-bound yet.

I am not trying to say the 5800X3D does not have a benefit in gaming over a 5800X.

I am saying that if AMD's marketing/presentation had used the worst-case scenario to talk about the benefit of the 5800X3D, they would have said a 0% gain. Using only the worst-case scenario would have been misleading to the consumer and a terrible marketing strategy; I don't think they have ever done that.

To rewind the conversation

Someone claimed AMD used the lower range of the worst-case scenario.
Me: I doubt they did that; they have never done such a strange thing, no company ever has, and I gave a list of recent examples of AMD definitely not doing that.
 
I've been waiting for Zen 4 for quite some time, and even a 15% per-core uplift would be nice (though it may end up more than that, taking everything into account), but I am almost more interested in some of the other factors that make it a better chip/platform. Others have mentioned the first new I/O die in a long time (and a die shrink from 14/12 nm down to 6 nm!), with RDNA2 graphics inside it allowing for some integrated GPU use. On the core spec side, adding DDR5, PCIe 5.0, and finally (hopefully) USB4 40 Gbps+ (Thunderbolt 4 compatible) will be a big step forward in areas where Intel has been eating AMD's lunch for a little while, even if AMD has overall been the gaming / all-around performance / value champion in the past generation with Ryzen 5000 (and only kinda sorta finally eclipsed by Intel 12th-gen Alder Lake, often at a huge price premium).

As others have said, AMD has basically under-promised, perhaps strategically, so I will wait to see how it compares to the 12th and, depending on launch timing, 13th generation of the Intel platform on both performance/features and price. Intel 12th gen, for instance, has been priced insanely high (the top-tier Z690 mobo, the Asus Maximus Extreme, is currently $1100-1200, and the Glacial version with the waterblock is $2000! Meanwhile the top-tier AMD X570 Crosshair Dark Hero MSRP was in the $300-500 range over its lifetime. Even in previous eras, I was a frequent buyer of Rampage Intel HEDT boards and they were never priced that way, much less the mainstream ones. This is to say nothing of the cost of the CPU or DDR5 RAM, the latter admittedly new). There's a certain amount of "new tech is more expensive for a time" in this, but things are well beyond that now for a variety of reasons.

The next big thing is when we will actually see Zen 4 physically launch, and whether they will NOT do something stupid like hold back certain SKUs (especially the top-end ones) for CPUs, mobos, etc. I'd be in for either a new Zen 4 or possibly Gen 13 build (I lean towards AMD but I remain open to Intel), but I really don't want to see this delayed until very late in the year, especially if AMD is also going to be launching RDNA3 GPUs this year (and NV the 4000 series). Please don't let them drag their feet and miss the big shopping and gift seasons (summer, end of summer, start of the school year, holidays) by waiting until fucking November 30th or something dumb, then only having 3 items in stock which get grabbed up by scalpers, yet they can say "they launched". Avoid that, ship a good-performing and full-featured Zen 4 platform, price it competitively, and hopefully AMD will continue to thrive in both top-tier performance and value, with Intel paying close attention and/or trading blows.
 