Intel Readies Atom "Grand Ridge" 24-core Processor, Features PCIe 4.0 and DDR5

Weird chip design. Tiled chip layouts are inefficient for general workloads (memory scheduling becomes a problem), and these seem too weak for HPC work.

I run a bunch of the Xeon Phi 7210s (64 cores / 256 threads @ 1.3 GHz on 14 nm) and they are VERY specific as far as workload goes.

Additionally, the cache kind of rings alarm bells. The Phis have 16 GB of MCDRAM on-package that they use as a shared L3; it seems weird that nothing about on-chip memory is mentioned in the marketing.

This looks like old news. Perhaps just an old false rumor?
I can find nothing about these chips past their 2020 announcement that they were coming sometime in 2021 or 2022.

I can see these being very nice if packaged and sold as a complete SoC solution; it would be a pleasant upgrade over my EPYC 3000 embedded boxes, which are good, but the drivers/BIOS from Supermicro are entertaining, to say the least.
 
I had been considering shopping for these.

What kind of problems have there been?
 
For me, it's been driver issues with Server 2019 Standard relating back to the Hyper-V instances I run on them. There is a series of updates I can't install because they make the hosts unstable, and the errors in the logs all point back to chipset and memory-transfer issues. On my EPYC 7000s these were fixed with a BIOS update released in 2020, but the most recent BIOS from Supermicro for my 3000s is from 2018. I have a support ticket going with them, as I am hoping they have one that just isn't posted to the website; otherwise, I may have to look at rebuilding the boxes on Server 2016. The VMs shouldn't care, but it is a PITA.
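
As a side note, a minimal sketch of checking which firmware a box is actually on (assuming a Windows host with the third-party wmi Python package and pywin32 installed; nothing here is Supermicro-specific):

import wmi

# Read the SMBIOS data via WMI's Win32_BIOS class and print vendor, version
# string, and release date -- enough to spot a host still on 2018-era firmware.
c = wmi.WMI()
for bios in c.Win32_BIOS():
    # ReleaseDate comes back in WMI's CIM_DATETIME format,
    # e.g. "20180419000000.000000+000"
    print(bios.Manufacturer, bios.SMBIOSBIOSVersion, bios.ReleaseDate)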
 

Interesting, and somewhat alarming. Have you run them under Linux at all? I wonder if they are stable there.
 
Probably good??? I haven't run them with VMware because my VMs are all imaged for Hyper-V and transitioning from one to the other is more painful than it's worth.
The Windows patches in question specifically address security issues with VMs, buffer overflows, blah blah blah, that could let an attacker into the memory of adjacent VMs on the same server. So it's probably only related to Windows and how it handles VMs, maybe VMware too, but I can't say for sure.

But if you were considering one, make sure it's cheap. These are, after all, chips that went into production in 2018 (Zen 1), very little has been done with the platform since, we're not getting a refresh until mid-2023, and the CPUs are soldered in place, so they are completely non-upgradable on that front. Unless it is offering you something you can't get on a different platform, I would look at other platforms first.
 
For a dedicated NAS I think it could be great. I mean, 24 cores may be a bit overkill, but filesystem loads generally spread quite nicely over many cores, and don't really need big core performance.
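
As a rough illustration of that point (a sketch only, assuming a Python environment; the path and worker count are made up): checksumming a directory tree, the kind of scrub/verify work a NAS does, splits into one independent task per file, so lots of slow cores work about as well as a few fast ones.

import hashlib
import os
from concurrent.futures import ProcessPoolExecutor

def sha256_of(path):
    # Hash one file in 1 MiB chunks so memory use stays flat.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return path, h.hexdigest()

def scrub(root, workers=24):
    files = [os.path.join(d, name)
             for d, _, names in os.walk(root) for name in names]
    # Each file is an independent task -- embarrassingly parallel,
    # no need for big single-thread performance.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(sha256_of, files))

if __name__ == "__main__":
    checksums = scrub("/srv/nas/share")  # hypothetical mount point
    print(f"hashed {len(checksums)} files")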

That said, my NAS is also my VM server, which hosts several VMs and containers for me, and for those applications I think I'd like big cores.


Yeah, that seems like a serious limitation.

On the one hand, who knows, these modern Atom cores may be as fast as or faster than my old dual Ivy Bridge-era Xeon E5-2650 v2s, but on the other hand I am using quite a lot of PCIe lanes on that thing: 8 lanes for the SAS controller, 8 lanes for the dual 10-gig Ethernet, and 56 lanes for the 14 separate M.2 NVMe drives, and that's just the ones I've manually added and doesn't include the onboard devices on the Supermicro X9DRI-F.

As small cores get more and more powerful, I imagine they will eventually be more than sufficient for my NAS/VM/Container needs, but I don't think I'll ever get away from wanting massive quantities of PCIe lanes.
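
For what it's worth, the lane math from that build spelled out (device counts taken from the post above; the snippet itself is purely illustrative):

# Tally the add-in PCIe lane usage described above; onboard devices on the
# X9DRI-F are not counted here.
addin_lanes = {
    "SAS controller (x8)": 8,
    "dual 10GbE NIC (x8)": 8,
    "14x M.2 NVMe at x4 each": 14 * 4,  # 56 lanes
}

total = sum(addin_lanes.values())
for device, lanes in addin_lanes.items():
    print(f"{device}: {lanes} lanes")
print(f"total add-in lanes: {total}")  # 8 + 8 + 56 = 72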


My NAS also runs VMs - but very basic ones that don't require high performance.

I would love double the IPC on my NAS while staying at 35 W or so. Denverton isn't the most amazing CPU out there, despite the initial rave reviews it got.
Must admit 24 cores is a bit much; honestly I'd be fine with 8. I just want my 35 W, with ECC and IPMI.

The AMD EPYC 3000 low-power server series sounded interesting, but unfortunately it's very uncommon in shipping systems, and it's now so old that they'll be up to the 3004 series next; perhaps that one will get traction.
 