AMD announces Ryzen 7000 Zen 4 CPUs

Fair. But I can hope. No one makes really good Threadripper pro boards yet and the OEM exclusivity sucks ass.
 
frankly, i am most excited about the rudimentary igpu in every package.
Good god why? If I wanted integrated video, I’d buy an APU. Take those same transistors and, if nothing else, make them cache. I hate buying junk I have to turn off, and an iGPU is junk I have to turn off.
 
I wouldn't mind having a basic igpu just for those times that I might be in between video cards. Plus it makes sense for people that don't give a crap about playing games. What I don't want them to focus on is making a faster igpu. All it should be on these CPUs is something very basic just so you can have some video without having to fool with a dedicated card.
 
Diagnosing problems with your video card, apparently. Not a *great* reason, but *a* reason.
This makes the lineup OEM friendly; getting a basic iGPU across the lineup is important if they want to make some inroads there.

It’s also cheap: the die shrink from GloFo 12 nm to TSMC 6 nm saves a lot of real estate, so tossing in a bare-bones iGPU for the basics takes up the difference and costs the same as or less than their previous I/O die.

I’m happier about the options this gets me for office systems.
 
gaming really doesn't need any of this. current stuff is way more than ready to do it.
 
Good god why? If I wanted integrated video, I’d buy an APU. Take those same transistors and, if nothing else, make them cache. I hate buying junk I have to turn off, and an iGPU is junk I have to turn off.
It will be nice not to have to pick between current-generation CPU cores or an APU (when the 4000 series was OEM-only, you were picking between a Zen 3 CPU or a Zen 1 APU). Overall, the new I/O die looks to be around the same size as the old one, so it's probably not a very big GPU, but it will help people with (S)oft requirements like me (yeah, I know I'm in the wrong forums :)
 
Well, there are cases where the 5800X3D is well over 15% faster than the standard 5800X. There are a few cases where it's even 20-25% faster, so again it appears there will be cases where the 5800X3D could be faster than Zen 4 in some games. And having more than 16 cores would be irrelevant for gaming.
Basically, I think AMD is saying don't worry, Zen 4 is at least as good as Alder Lake. And later, they will reveal what makes it truly better.
If AMD isn't stacking cache for Zen 4 as a default spec, there may indeed be a couple of games that totally love cache, allowing the 5800X3D to win. However, with IPC + clock speed + DDR5 + cache increases + improvements to cache management, many games will do better than on the 5800X3D. The particularly cache-dependent ones may be more or less even, as we already see with Alder Lake.
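Those factors compound multiplicatively rather than additively, which is why several modest gains can add up to a large one. A quick sketch with purely hypothetical percentages (none of these are AMD figures):

```python
# Hypothetical per-factor uplifts, for illustration only (not AMD's numbers).
ipc_gain = 0.08    # 8% IPC improvement
clock_gain = 0.10  # 10% higher sustained clocks
mem_gain = 0.03    # 3% from DDR5 and cache-management improvements

# Generational gains compound multiplicatively, not additively.
combined = (1 + ipc_gain) * (1 + clock_gain) * (1 + mem_gain) - 1
print(f"combined uplift: {combined:.1%}")  # combined uplift: 22.4%
```

Even if no single factor beats the 5800X3D's extra cache on its own, the product of several of them can.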
 
After watching the event, I think AMD is not mentioning a lot in preparation for the new Intel CPUs.

They do look damn impressive. But I might hold on to my 5900x until Zen4D comes out. You know that will be the next logical step.
 
Why would a presentation talk of 15% if that's at least the figure in most of the worst cases?
AMD has exceeded its CPU marketing claims in recent years, i.e. sandbagged every time, usually by a few percent, sometimes more.
Especially this far out, they don't want to give the game away so Intel can perfectly stratify their lineup.

Honestly not expecting much over the 5800X3D in quite a few gaming loads till V-Cache next year. The rumored 16-core CCD at 5+ GHz will be interesting though, if latency is good for all-round tasks. Add V-Cache to that core count and I have a logical upgrade reason. Right now, the X3D is going to be plenty fast enough for my non-gaming loads after a 2600K lol.
 
AMD has exceeded its CPU marketing claims in recent years, i.e. sandbagged every time, usually by a few percent, sometimes more.
Especially this far out, they don't want to give the game away so Intel can perfectly stratify their lineup.

Honestly not expecting much over the 5800X3D in quite a few gaming loads till V-Cache next year. The rumored 16-core CCD at 5+ GHz will be interesting though, if latency is good for all-round tasks. Add V-Cache to that core count and I have a logical upgrade reason. Right now, the X3D is going to be plenty fast enough for my non-gaming loads after a 2600K lol.

From what I can gather, the CCDs are still going to be 8-core.
 
AMD has exceeded its CPU marketing claims in recent years, i.e. sandbagged every time, usually by a few percent, sometimes more.
Even if we're talking the minimum in the worst case?

Going by the worst-case minimum, they would have announced the 5900X as a 5-6% upgrade over the 3900X, the 5800X3D as a downgrade versus the regular one, and so on.

AMD is probably not cherry-picking a particularly juicy scenario, and the track record is good in that regard, but I am not sure they have ever used the minimum of the worst case in a presentation before.

I also welcome the small GPU on every CPU: easier for a second life as a server, easier when your GPU dies; especially in this era, having an extra GPU automatically lying around is a big plus to me.
 
The point about the core counts is that AMD seems to be obscuring their hand. And I'm willing to bet core counts might be included in what they are hiding.

A part with more than 16 cores could exist, but it would probably be a slower gaming chip, no?

Not sure I follow the conversation; someone is just speculating that, for a list of titles, the 5800X3D has a shot at being as fast as the 7xxx at launch, if not faster. A higher core count, worth it or not given the 35% increase in cost over the 5800X (where I live at least)... changes nothing about that statement.
"16 cores is the maximum, AMD confirmed that the design features two chiplets each with up to eight CPU cores, no 24 core chip here"

https://www.techspot.com/news/94678-amd-talks-next-gen-zen-4-cpus-ryzen.html
 
Just like how AMD locked down overclocking on the 5950X because it runs hotter.
It’s not a matter of heat, it’s a voltage issue; TSMC’s specifications on the 3D cache stacking are very tight. The limited applications and tight requirements are why you don’t see it on more systems. But yes, applying sufficient cooling to the stuff under it is troublesome.
 
What? 5950x is not locked down.
Pretty sure that was an attempt at sarcasm, because the 5950x is hot and overclockable.

But heat is not the issue with the 5800X3D, it’s voltage. Easier to lock it down than to tank a class action when people try to OC it using the same methods as the 5800X and burn it out, then blame AMD.
 
Does a PCI-E 5.0 graphics card need more than an x4 slot? Does NVMe need more than x2? (For now.)
 
Does a PCI-E 5.0 graphics card need more than an x4 slot? Does NVMe need more than x2? (For now.)
Depends, since no card is even PCI-E 5.0 yet. We have no idea how much bandwidth a 4090 or RX 7900 will need.

Heck, even the 4090 is PCI-E 4.0 if the rumors are correct. We will find out this year.
 
I mean, you people can do math, right?
Well, based on that well-spoken statement, do we all think that AMD isn't lowballing the numbers? Are they really launching a slower product?
This whole thread has me shaking my head, [H]'ers!
 
I'll just wait till it's released before drawing a conclusion. That said, I am leaning more towards Intel (but I'm never set in stone till the day I buy).


Also, I like the inclusion of integrated graphics. I know my next rig will have DDR5, so the DDR5-only limitation will not hurt me.

The chipsets look as expected.
 
What? 5950x is not locked down.
Exactly.

It’s not a matter of heat, it’s a voltage issue; TSMC’s specifications on the 3D cache stacking are very tight. The limited applications and tight requirements are why you don’t see it on more systems. But yes, applying sufficient cooling to the stuff under it is troublesome.
Yes, this is the Official™ and Sanctioned© reason AMD gave. Just like how the Official™ and Sanctioned© reason Intel gave for adding more cores to their processors had nothing to do with AMD at all.
 
Exactly.


Yes, this is the Official™ and Sanctioned© reason AMD gave. Just like how the Official™ and Sanctioned© reason Intel gave for adding more cores to their processors had nothing to do with AMD at all.
Well, the design constraints are listed by TSMC, and they impose the same limitations on any other customers who want to use the technology. They are also the reason it isn’t available on 5 nm yet; TSMC’s words, not AMD’s. So unless TSMC has taken to fudging their design documents for AMD’s sake, I’m going to trust them on this one.

Both Intel and IBM have experimented with 3D stacking in the same way but were always caught up on the same problems; their designs call for tiny liquid “pipes” to dissipate heat and function as voltage barriers. But the technology is exceedingly expensive to use, and they shelved it. Now, with TSMC’s success, I am sure they will attempt to replicate the work, which will lead to good things all around. But yeah, TSMC, Intel, and IBM all hitting the same problem leads me to believe AMD and TSMC on this one, so I'm not going to dive into the conspiracies here; I think it’s just a coincidence.

I’m thinking the initial 7000 series is going to feel a lot like Intel’s 11th Gen: a lot of new tech that isn’t quite fleshed out. AMD has clarified that they are continuing production of the 5000 series for the foreseeable future, so I suspect they feel the same way.
 
That's great and all, but as long as it doesn't have 40+ PCIe lanes, it falls in the "never buy" category for me.

Also, I bet it will require Windows 11, or something stupid like that.
 
Exactly.


Yes, this is the Official™ and Sanctioned© reason AMD gave. Just like how the Official™ and Sanctioned© reason Intel gave for adding more cores to their processors had nothing to do with AMD at all.

I don't understand why they can't just place the cache side by side. There is plenty of space under the IHS in consumer applications.
 
Good god why? If I wanted integrated video, I’d buy an APU. Take those same transistors and, if nothing else, make them cache. I hate buying junk I have to turn off, and an iGPU is junk I have to turn off.
Take a longer view: once iGPUs are ubiquitous, there will be a significant number of systems out there with more than one GPU. Then it will begin to make sense for games to offload things onto the second GPU. Maybe just small things at first, but from there it's a small step to implementing full multi-GPU support.
 
Even if we're talking the minimum in the worst case?

Going by the worst-case minimum, they would have announced the 5900X as a 5-6% upgrade over the 3900X, the 5800X3D as a downgrade versus the regular one, and so on.

AMD is probably not cherry-picking a particularly juicy scenario, and the track record is good in that regard, but I am not sure they have ever used the minimum of the worst case in a presentation before.

I also welcome the small GPU on every CPU: easier for a second life as a server, easier when your GPU dies; especially in this era, having an extra GPU automatically lying around is a big plus to me.
The X3D they only marketed for gaming, and they were accurate on the games listed. It's not an all-round CPU. As for the 5900X, I'm pretty sure they used the IPC claim for Zen 3, which turned out accurate in most tasks the CPU is designed for.
Note I said marketing claims; you said minimums.

From what I can gather, the CCDs are still going to be 8-core.
"16 cores is the maximum, AMD confirmed that the design features two chiplets each with up to eight CPU cores, no 24 core chip here"

https://www.techspot.com/news/94678-amd-talks-next-gen-zen-4-cpus-ryzen.html
Yup, sounds like you are right. Was wondering how they'd magically solved the latency issue if they were going to 16.


I don't understand why they can't just place the cache side by side. There is plenty of space under the IHS in consumer applications.
Because of latency: otherwise the delay becomes too great and the gains are effectively nullified outside of an even smaller set of very specific tasks. With HBM it's fine; there was a decent latency reduction over GDDR if you look at trace length.
 
Depends, since no card is even PCI-E 4.0. We have no idea how much bandwidth a 4090 or 7900RX will need.

Heck even the 4090 is PCI-E 4.0 if the rumors are correct. We will find out this year.
What? Aren't all recent GPUs PCIe 4.0, from the Radeon 5000 and NVIDIA 3000 series up?
 
The X3D they only marketed for gaming, and they were accurate on the games listed.
Yes, and some games run slower on it in some scenarios than on a 5800X, and even at 1080p it has a 0% effect on some really big titles. Had AMD used the worst-case scenario for the 5800X3D, they would have said 0% gain at 1080p; they obviously never do that.

The debate was someone saying that AMD is using +15% and that it is probably the low end of the worst case.
 
The X3D they only marketed for gaming, and they were accurate on the games listed. It's not an all-round CPU. As for the 5900X, I'm pretty sure they used the IPC claim for Zen 3, which turned out accurate in most tasks the CPU is designed for.
Note I said marketing claims; you said minimums.



Yup, sounds like you are right. Was wondering how they'd magically solved the latency issue if they were going to 16.



Because of latency: otherwise the delay becomes too great and the gains are effectively nullified outside of an even smaller set of very specific tasks. With HBM it's fine; there was a decent latency reduction over GDDR if you look at trace length.

One ought to be able to use the same type of interconnects that are used with the stacked 3D design and just mate them on the side instead.

Also, another solution would be to put the hottest layer on top, closest to the heat spreader. I can't imagine the cache generates a lot of heat, so stick that on the bottom and let the hot parts mate directly to the IHS.
 
I wouldn't mind having a basic igpu just for those times that I might be in between video cards. Plus it makes sense for people that don't give a crap about playing games. What I don't want them to focus on is making a faster igpu. All it should be on these CPUs is something very basic just so you can have some video without having to fool with a dedicated card.

frankly, i am most excited about the rudimentary igpu in every package.


For Ryzen 7000, AMD is also introducing a new 6 nm I/O die (IOD), which replaces the 14 nm IOD used in previous Zen 3 designs. Marking a first for AMD, the new IOD is incorporating an iGPU, in this case based on AMD's RDNA2 architecture. So with the Ryzen 7000 generation, all of AMD's CPUs will technically be APUs as well, as graphics is a basic part of the chip's construction.

https://www.anandtech.com/show/17399/amd-ryzen-7000-announced-zen4-pcie5-ddr5-am5-coming-fall

Ever since it launched Zen 2, AMD has built 7nm chiplets at TSMC and purchased its I/O dies from GlobalFoundries. Those dies were built on 14nm using 12nm design rules and they don’t seem to have changed much with the launch of Zen 3. With Zen 4, AMD’s I/O die will shift from 14nm to 6nm, move from GlobalFoundries to TSMC, and it’ll incorporate an RDNA2-based GPU for the first time.

https://www.extremetech.com/extreme...wered-ryzen-7000-family-upcoming-am5-chipsets
 
As interested as I am in getting an AM5 setup, it will almost certainly not be using a Gigabyte motherboard. Techpowerup has pictures of four upcoming X670E boards, and all of them lack reset buttons along with full 7.1 or even 5.1-channel audio out.

Hate how so many features are increasingly relegated to the highest-end boards. Gatekeeping power and reset buttons from sub-$400 boards is ridiculous enough to begin with, but Gigabyte apparently can't even be bothered to splurge on a reset button when providing the now seemingly ultra-premium feature of a power button. I also wonder at this point if there's any personnel overlap between Gigabyte's PSU design team and motherboard design team.
 
frankly, i am most excited about the rudimentary igpu in every package.
I also like this feature. Not every computer needs a dedicated video card, and with video card prices being what they are, it sucks spending at least $150 extra just to get a desktop image. Plus, with an iGPU you are free to use the full-lane PCIe slot for something like an NVMe adapter. My CPU purchases often depend on whether an iGPU is included.
 
If I understand correctly that it will be DDR5 only (did I understand that right?), that sounded like a big issue to me, but pricing could be somewhat OK by launch time. Is there a good 32 GB DDR5 kit under $250 USD with tax?

The total package cost could lose quite a bit to a DDR4 next-gen Intel CPU in the low end/midrange.

24 PCIe 5.0 lanes sounds like a lot; that's the bandwidth of 48 PCIe 4.0 lanes or 96 PCIe 3.0 lanes, which sounds like the bandwidth of a not-so-long-ago HEDT platform, I think, and 55% of a Threadripper 3960X.

We'll see how long it takes for anything PCIe 5.0 aimed at regular users to change anything versus 4.0, at least bandwidth-wise.
The Kingston website has DDR5 at decent prices.
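For what it's worth, the lane arithmetic a few posts up checks out. A quick sketch using approximate effective per-lane throughput (PCIe bandwidth roughly doubles each generation; the 88-lane figure for the 3960X is my assumption for the comparison):

```python
# Approximate effective per-lane throughput in GB/s (x1, after encoding overhead).
per_lane = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

total = 24 * per_lane["5.0"]       # 24 PCIe 5.0 lanes
print(round(total, 1))             # ~94.5 GB/s

# Equivalent lane counts at older generations:
print(round(total / per_lane["4.0"]))  # ~48 lanes of PCIe 4.0
print(round(total / per_lane["3.0"]))  # ~96 lanes of PCIe 3.0

# Versus a Threadripper 3960X (88 PCIe 4.0 lanes assumed):
print(f"{total / (88 * per_lane['4.0']):.0%}")  # ~55%
```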
 
I don't understand why they can't just place the cache side by side. There is plenty of space under the IHS in consumer applications.
Speed: you basically double the access times for every inch you add. If they had put it elsewhere, you would encounter significant performance swings depending on which set of cache you were accessing. It would have completely crippled the chip for gaming, as the performance variance would have been too great.
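A back-of-envelope check on the distance penalty (the ~0.5c signal velocity and 4.5 GHz clock below are my assumptions, and real interconnects add repeaters and arbitration on top of raw wire delay):

```python
# Rough round-trip wire delay for one extra inch of trace.
c = 3.0e8            # speed of light in m/s
v = 0.5 * c          # assumed on-package signal velocity (~0.5c)
inch = 0.0254        # meters

round_trip_ns = (2 * inch / v) * 1e9
print(f"{round_trip_ns:.2f} ns")  # ~0.34 ns per extra inch, round trip

# At an assumed 4.5 GHz, that's extra cycles of latency on every access:
cycle_ns = 1 / 4.5
print(f"{round_trip_ns / cycle_ns:.1f} cycles")  # ~1.5 cycles
```

Even this raw wire delay is a meaningful fraction of an L3 hit (~10 ns), before counting the repeaters and routing logic a side-by-side layout would need.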
 
I wouldn't mind having a basic igpu just for those times that I might be in between video cards. Plus it makes sense for people that don't give a crap about playing games. What I don't want them to focus on is making a faster igpu. All it should be on these CPUs is something very basic just so you can have some video without having to fool with a dedicated card.

This is the real reason, IMO. Sometimes troubleshooting a GPU is something you have to do, and not everyone has a second GPU.
 
One ought to be able to use the same type of interconnects that are used with the stacked 3D design and just mate them on the side instead.

Also, another solution would be to put the hottest layer on top closest to the heat spreader. I can't imagine the cache generates a lot of heat, so stick that on the bottom and allow the hot parts to mate directly to the IHS.
The cache on an AMD chip consumes roughly a third of the power and is accordingly responsible for roughly a third of the heat; the X3D chip even more so. Cache runs hot.
 
This is the real reason, IMO. Sometimes troubleshooting a GPU is something you have to do, and not everyone has a second GPU.
The real reason for an HFboards member to like it, maybe (I would add the CPU's longer life as a server one day; it is fun to have a small iGPU to plug a monitor into if you need it during initial setup), but not necessarily the real reason AMD added them.

Opening the door to some sector of the OEM/enterprise market feels more like it to me. What percentage of desktop PCs sold worldwide ship without a dGPU?

In 2016:
https://www.edacafe.com/nbc/article...tal-GPU-Shipments-Remain-Flat-Quarter-Quarter
Discrete GPUs were in 35.92% of PCs

2021:
Over the next five years, the penetration of discrete GPUs (dGPU) in the PC will grow to reach a level of 25%.

I could imagine those numbers include laptops? Not sure, but that leaves a large market entirely to Intel: enterprise PCs that want an iGPU with some codec ability and 2-3 monitor support but not more (not enough to want a 5600G type). As AMD becomes a large competitor again, and if they keep an efficiency edge on the high end, they could want to get back into that enterprise market.
 