> frankly, i am most excited about the rudimentary igpu in every package.

Good god why? If I wanted integrated video, I’d buy an APU. Take those same transistors and, if nothing else, make them cache. I hate buying junk I have to turn off, and an iGPU is junk I have to turn off.
> Good god why? If I wanted integrated video, I’d buy an APU. Take those same transistors and, if nothing else, make them cache. I hate buying junk I have to turn off, and an iGPU is junk I have to turn off.

Diagnosing problems with your video card, apparently. Not a *great* reason, but *a* reason.
> Diagnosing problems with your video card, apparently. Not a *great* reason, but *a* reason.

This makes the lineup OEM-friendly; getting a basic iGPU into the lineup is important if they want to make some inroads there.
> Good god why? If I wanted integrated video, I’d buy an APU. Take those same transistors and, if nothing else, make them cache. I hate buying junk I have to turn off, and an iGPU is junk I have to turn off.

It will be nice not to have to pick between a current-generation CPU core and an APU (when the 4000 series were OEM-only, you were picking between a Zen 3 CPU or a Zen 1 APU). Overall, the new I/O die looks to be around the same size as the old one, so it's probably not a very big GPU, but it will help people with (S)oft requirements like me (yeah, I know I'm in the wrong forums).
> Well, there are cases where the 5800X3D is well over 15% faster than the standard 5800X. There are a few cases where it's even over 20 to 25% faster, so again it appears there will be cases where the 5800X3D could be faster than Zen 4 in some games. And having more than 16 cores would be irrelevant for gaming.

Basically, I think AMD is saying don't worry, Zen 4 is at least as good as Alder Lake. And later, they will reveal what makes it truly better.
> Why would a presentation talk of 15% if it is at least that in most of the worst cases?

AMD has exceeded CPU marketing claims in recent years, i.e. sandbagged every time, usually by a few percent, sometimes more.
Especially this far out, they don't want to give the game away and let Intel perfectly stratify their lineup.
Honestly not expecting much over the 5800X3D in quite a few gaming loads till V-Cache next year. The supposed 16-core CCD at 5+ GHz will be interesting for all-round tasks, though, if latency is good. Add V-Cache with that core count and I have a logical upgrade reason. Right now, the X3D is going to be plenty fast enough for my non-gaming loads after a 2600K lol.
> AMD has exceeded CPU marketing claims in recent years, i.e. sandbagged every time, even a few percent usually, sometimes more.

Even if we're talking about the minimum in the worst case?
The point about the core counts is that AMD seems to be obscuring their hand. And I'm willing to bet core counts might be included in what they are hiding.
"16 cores is the maximum, AMD confirmed that the design features two chiplets each with up to eight CPU cores, no 24 core chip here"A higher than 16 core could exist, but would probably be a slower gaming chips no ?
Not sure I follow the conversation; someone is just speculating that, for a list of titles, the 5800X3D has a shot at being as fast as a 7xxx part at launch, if not faster. Higher core count or not, whether it's worth the 35% increase in cost over the 5800X (where I live, at least)... that changes nothing about the statement.
> That 5800X3D does run hotter. So..... it seems you people are saying that AMD designed it to run hot, so as to justify locking it down?

Just like how AMD locked down overclocking on the 5950X because it runs hotter.
> Just like how AMD locked down overclocking on the 5950X because it runs hotter.

It's not a matter of heat; it's a voltage issue. TSMC's specifications on the 3D cache stacking are very tight. The limited applications and tight requirements are why you don't see it on more systems. But yes, applying sufficient cooling to the stuff under it is troublesome.
> Just like how AMD locked down overclocking on the 5950X because it runs hotter.

What? The 5950X is not locked down.
> What? The 5950X is not locked down.

Pretty sure that was an attempt at sarcasm, because the 5950X is hot and overclockable.
> Does a PCI-E 5 graphics card need more than an x4 slot? Does NVMe need more than x2? (For now.)

Depends, since no card is even PCI-E 5.0. We have no idea how much bandwidth a 4090 or 7900RX will need.
> I mean you people can do math right?

Well, based on that well-spoken statement, do we all think that AMD isn't lowballing the numbers? Are they really launching a slower product?
> What? 5950x is not locked down.

Exactly.
> It’s not a matter of heat it’s a voltage issue, TSMC’s specifications on the 3D caching are very tight. The limited applications and tight requirements are why you don’t see it on more systems. But yes applying sufficient cooling to the stuff under it is troublesome.

Yes, this is the Official™ and Sanctioned© reason AMD gave. Just like how the Official™ and Sanctioned© reason Intel started adding more cores to their processors had nothing to do with AMD at all.
> Exactly.

Well, the design constraints are listed by TSMC, and they impose the same limitations on any other customers who want to use them. They are also the reason it isn't available on 5nm yet; TSMC's words, not AMD's. So unless TSMC has taken to fudging their design documents for AMD's sake, I'm going to trust them on this one.
> Good god why? If I wanted integrated video, I’d buy an APU. Take those same transistors and, if nothing else, make them cache. I hate buying junk I have to turn off, and an iGPU is junk I have to turn off.

Take a longer view: once iGPUs are ubiquitous, there will be a significant number of systems out there with more than one GPU. Then it will begin to make sense for games to offload things onto the second GPU. Maybe just small things at first, but from there it's a small step to implementing full multi-GPU support.
> Even if we talk minimum in the worst case?

The X3D they only marketed for gaming, and they were accurate on the games listed. It's not an all-round CPU. As for the 5900X, I'm pretty sure they used IPC for Zen 3, which turned out accurate in most tasks the CPU is designed for.
Going with the worst-case minimum, they would have announced the 5900X as a 5-6% upgrade over the 3900X, the 5800X3D as a downgrade over the regular 5800X, and so on.
AMD are probably not cherry-picking a particularly juicy scenario, and their track record is good in that regard, but I am not sure they have ever used the worst-case minimum in a presentation before.
I also welcome a small GPU on every CPU: it makes a second life as a server easier, and makes life easier when your GPU dies. Especially in this era, automatically having an extra GPU lying around is a big plus to me.
From what I can gather, the CCDs are still going to be 8-core.
> "16 cores is the maximum, AMD confirmed that the design features two chiplets each with up to eight CPU cores, no 24 core chip here"

Yup, sounds like you are right. Was wondering how they'd magically solved the latency issue if going to 16.
https://www.techspot.com/news/94678-amd-talks-next-gen-zen-4-cpus-ryzen.html
> I don't understand why they can't just place the cache side by side. There is plenty of space under the IHS in consumer applications.

Because of latency. Otherwise there's too much delay, and the gains are effectively nullified outside of even fewer, very specific tasks. With HBM it's fine; there was a decent latency reduction over GDDR if you look at trace length.
> Depends, since no card is even PCI-E 4.0. We have no idea how much bandwidth a 4090 or 7900RX will need.

What? Aren't all recent GPUs PCIe 4, from Radeon 5000 up and the NVIDIA 3000 series?
Heck even the 4090 is PCI-E 4.0 if the rumors are correct. We will find out this year.
> X3d they only marketed for gaming and were accurate on the games listed.

Yes, and some games run slower on it in some scenarios than on a 5800X, and even at 1080p it has a 0% effect on some really big titles. Had AMD used the worst-case scenario for the 5800X3D, they would have said 0% gain at 1080p; they obviously never do that.
> X3d they only marketed for gaming and were accurate on the games listed. It's not an all-round cpu. As for the 5900x I'm pretty sure they used IPC for Zen 3 which turned out accurate in most tasks the CPU is designed for.

Note I said marketing claims; you said minimums.
I wouldn't mind having a basic iGPU just for those times when I might be between video cards. Plus it makes sense for people who don't give a crap about playing games. What I don't want them to focus on is making a faster iGPU. All it should be on these CPUs is something very basic, just so you can have some video without having to fool with a dedicated card.
frankly, i am most excited about the rudimentary igpu in every package.
> Does a PCI-E 5 Graphics Card need more than an x4 Slot? NVME need more that x2? (For now)

Yes, not for bandwidth, but for structural support. With some of these coolers…
> frankly, i am most excited about the rudimentary igpu in every package.

I also like this feature. Not every computer needs a dedicated video card. And with video card prices being what they are, it sucks spending at least $150 extra just to get a desktop image. Plus, with an iGPU you are free to use the full-lane PCIe slot for something like an NVMe adapter. My CPU purchases often depend on whether an iGPU is included.
> If I understand correctly that it will be DDR5-only (did I understand that correctly?), that sounds to me like a big issue, but pricing could be somewhat OK by launch time. Is there a good 32-gig DDR5 kit under $250 USD with tax?

Kingston website.
Total package cost could lose quite a bit to a DDR4 next-gen Intel CPU in the low end/mid range.
24 PCIe 5.0 lanes sounds like a lot; that's the bandwidth of 48 PCIe 4.0 lanes or 96 PCIe 3.0 lanes, which sounds like the bandwidth of a not-so-long-ago HEDT-level platform, I think, and about 55% of a Threadripper 3960X.

We'll see how long it takes for PCIe 5.0 to change anything for regular users over 4.0, at least bandwidth-wise.
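The lane math above can be sanity-checked with a quick sketch. The per-lane throughput figures are approximate usable rates after encoding overhead, and the 88-lane figure for the Threadripper 3960X is an assumption taken from AMD's spec sheet, so treat the output as back-of-the-envelope:

```python
# Approximate usable per-lane throughput in GB/s (after encoding overhead).
per_lane = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

lanes_gen5 = 24
total = lanes_gen5 * per_lane["5.0"]   # total bandwidth of 24 Gen5 lanes

equiv_gen4 = total / per_lane["4.0"]   # same bandwidth expressed in Gen4 lanes
equiv_gen3 = total / per_lane["3.0"]   # ...and in Gen3 lanes

# Assumed: Threadripper 3960X exposes 88 PCIe 4.0 lanes from the CPU.
ratio = equiv_gen4 / 88

print(f"{total:.1f} GB/s total; "
      f"~{equiv_gen4:.0f} Gen4 lanes, ~{equiv_gen3:.0f} Gen3 lanes; "
      f"{100 * ratio:.0f}% of a 3960X")
```

Which lines up with the 48 / 96 / ~55% figures quoted above.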
> I don't understand why they can't just place the cache side by side. There is plenty of space under the IHS in consumer applications.

Speed: you basically double the access times for every inch you add. If they had put it elsewhere, you would encounter significant performance swings depending on which set of cache you were accessing. It would have completely crippled the chip for gaming, as the performance variance would have been too great.
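As a rough back-of-the-envelope on why distance matters, here is a sketch of the pure signal flight time added per inch. The 0.5c on-package signal velocity and the ~10 ns L3 hit latency are illustrative assumptions, not measured values; note that flight time alone is small, and the larger penalties the post describes would come from drivers, retiming, and synchronization overhead on a longer link:

```python
# Back-of-the-envelope: extra round-trip flight time per inch of cache distance.
# The 0.5c signal velocity and 10 ns L3 latency are illustrative assumptions.
C_CM_PER_NS = 29.98            # speed of light, cm per nanosecond
velocity = 0.5 * C_CM_PER_NS   # assumed on-package signal velocity (~15 cm/ns)
INCH_CM = 2.54

round_trip_ns = 2 * INCH_CM / velocity  # ns added per inch (there and back)
l3_hit_ns = 10.0                        # assumed baseline L3 hit latency

print(f"~{round_trip_ns:.2f} ns extra round trip per inch, "
      f"~{100 * round_trip_ns / l3_hit_ns:.0f}% of a {l3_hit_ns:.0f} ns L3 hit")
```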
> One ought to be able to use the same type of interconnects that are used with the stacked 3D design and just mate them on the side instead.

The cache on an AMD chip consumes roughly a third of the power and is consequently responsible for a third of the heat; the X3D chip even more so. Cache runs hot.
Another solution would be to put the hottest layer on top, closest to the heat spreader. I can't imagine the cache generates a lot of heat, so stick that on the bottom and let the hot parts mate directly with the IHS.
> This is the real reason, IMO. Sometimes troubleshooting a GPU is something you have to do, and not everyone has a second GPU.

A real reason for an HFboards member to like it, maybe (I would add the CPU's longer life as a server one day; it's handy to have a small iGPU to plug a monitor into during initial setup), but not necessarily the real reason AMD added them.