It's negated almost completely by a superior cache design, yes. Is going off-die worse on paper? Sure. But it can be compensated for in a design that properly feeds those cores math by not forcing cache flushes as often. Perhaps quadrupling the cache is a brute-force fix... but it's a fix.
Clever algorithms that use spare cache capacity to mask the increased latency between dies still don't negate the fact that the latency is there. Again, this is basic physics. (Does anyone remember Cray supercomputers? Does anyone remember why they were built in a cylindrical shape? Interconnect latency, that's why. This isn't theoretical shit at all. This is fucking reality.)
For the vast majority of workloads, I'd bet there is no detectable difference. There's absolutely no difference if you can keep the workload within 8c/16t (with SMT enabled) or 8c/8t (with SMT disabled) on a single chiplet, and most 'normal' workloads fit within those parameters. Again, no difference should be detectable if the entire workload can be contained on a single chiplet.
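To make the "keep it on one chiplet" point concrete, here's a minimal Linux-only sketch that pins a process to one chiplet's cores so its threads never take the cross-die hop. The assumption that CCD0 exposes logical CPUs 0-7 is hypothetical; check your actual topology with `lscpu -e` before pinning.

```python
import os

# Assumption (hypothetical layout): the first chiplet (CCD0) exposes
# logical CPUs 0-7. Verify with `lscpu -e` on your own machine.
CHIPLET0_CPUS = set(range(8))

def pin_to_chiplet(pid: int = 0, cpus: set = CHIPLET0_CPUS) -> set:
    """Restrict a process (pid 0 = the caller) to one chiplet so its
    working set stays in that die's cache hierarchy."""
    # Only request CPUs that actually exist on this machine.
    target = cpus & os.sched_getaffinity(pid)
    if target:
        os.sched_setaffinity(pid, target)  # Linux-only syscall wrapper
    return os.sched_getaffinity(pid)

if __name__ == "__main__":
    print("running on CPUs:", sorted(pin_to_chiplet()))
```

The same effect is available without code via `taskset -c 0-7 ./your_app` on Linux.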
Talking about HT and SMT is just a distraction from this very basic problem.
I thought [H] was a hardware-enthusiast site that appreciated new PC technology. Way too many fanboys (including Nvidia fanboys) here.
Enthusiasts are very appreciative that AMD has met or exceeded Intel on almost every front. I certainly would like to build a Team Red system, but financial priorities intervene (not to mention that with two bad eyes I don't game much at all). Acknowledging weaknesses in a design is not the antithesis of enthusiasm; it is intrinsic to the basic nature of an enthusiast.