Up to 64 core EPYC 2 CPU

juanrga

https://hothardware.com/news/amd-epyc-2-64-cores-128-threads-and-256mb-l3-cache

It seems my prediction of a 6-core CCX for Zen2 has just been killed. Either AMD is going for 8-core CCX modules, or it will keep the same 4-core configuration and put four CCX modules per die.

I think 8-core CCX modules with two modules per die is the better technological solution. It would also mean that mainstream Zen2 APUs could go up to 8 cores.

The quadrupling of L3 cache per CCX could help mitigate the latency problems when accessing main RAM, but on the other hand it seems to imply that the Zen2 core size will be similar to Zen's. I wonder if most of the IPC gains for Zen2 will come from the larger L3.
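
To make that concrete, here is a rough average-memory-access-time sketch of how a bigger L3 pays off. The hit rates and latencies are made-up placeholder numbers, not real Zen or Zen2 figures.

Code:
# Rough average-memory-access-time (AMAT) sketch for the last-level cache.
# All numbers are illustrative placeholders, NOT real Zen/Zen2 figures.
L3_HIT_NS = 10.0   # assumed L3 hit latency
DRAM_NS   = 90.0   # assumed latency of a miss that goes out to main RAM

def amat(l3_hit_rate):
    """Average latency seen past the L2 for a given L3 hit rate."""
    return l3_hit_rate * L3_HIT_NS + (1.0 - l3_hit_rate) * DRAM_NS

# Suppose quadrupling the L3 lifts the hit rate from 60% to 80% (a pure guess):
print(amat(0.60))   # 42.0 ns with the smaller L3
print(amat(0.80))   # 26.0 ns with the larger L3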
 
I am not sure how impressed I am, because the PCIe lane count is not increasing. Of course, of course, the 128-lane 1P configuration is awesome, but could we get like 256 lanes :p?

There are already 2U servers with 24 x4 NVMe hot-swap drives; creating a database server with a RAID 0 of 12 pairs of RAID 1 NVMe SSDs is just obscene, performance-wise.
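
Quick lane math on that kind of box, with the caveat that real backplanes may put some bays behind PCIe switches, and the per-drive throughput below is just a round placeholder number:

Code:
# PCIe lane budget for a 1P EPYC box with 24 x4 NVMe bays (illustrative only).
total_lanes   = 128
nvme_drives   = 24
lanes_per_ssd = 4

nvme_lanes = nvme_drives * lanes_per_ssd     # 96 lanes for storage
leftover   = total_lanes - nvme_lanes        # 32 lanes left for NICs, HBAs, etc.
print(nvme_lanes, leftover)

# Rough sequential numbers for the RAID 0 of 12 RAID 1 pairs, assuming an
# arbitrary ~3 GB/s per drive:
per_drive_gbs = 3
print(12 * 2 * per_drive_gbs)   # ~72 GB/s reads (both mirror halves can serve reads)
print(12 * per_drive_gbs)       # ~36 GB/s writes (each write hits both halves)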
 
I don't know if I buy that they will implement a larger L3 or not; that would put more strain on the cache coherency bus, and the benefit from a larger eviction cache would be limited. Perhaps they plan on putting more GMI links on each die and wiring the four dies in a mesh instead of a ring, and maybe updating the cache to something smarter as they increase its size per core?
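
For what it's worth, here's a toy hop-count comparison of wiring four dies as a ring versus all-to-all; the numbers are just my own illustration of the trade-off, not anything AMD has said:

Code:
# Toy comparison: 4 dies on a ring vs. fully connected (all-to-all).
# Purely illustrative; it says nothing about what Zen2 actually does.
from itertools import combinations

N = 4
pairs = list(combinations(range(N), 2))

def ring_hops(a, b, n=N):
    """Shortest distance between two dies on an n-node ring."""
    d = abs(a - b)
    return min(d, n - d)

ring_links = N                                        # one GMI link per ring edge
ring_worst = max(ring_hops(a, b) for a, b in pairs)   # 2 hops worst case
full_links = len(pairs)                               # 4*3/2 = 6 GMI links
full_worst = 1                                        # every die is one hop away
print(ring_links, ring_worst, full_links, full_worst)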


There are enough pins that they could add a few more lanes, but make the socket any larger and it will start to resemble the chip interfaces (they're not really sockets, per se) used on IBM zSeries mainframes!
 

We're getting to the point that essentially any higher level of throughput is going to have to come from GPUs. That, or add-in CPU cards using something entirely new that doesn't use PCI Express. Even Intel's 64-core / 256-thread monster of a datacenter CPU can't come within a tiny fraction of a percent of the power of a GTX 1080, non-Ti.
 

You can't scare me with your floating point execution units. The power of mere processing elements is nothing compared to the ability to share data between processing elements in a useful fashion!

That is to say, you missed the point of my argument entirely.
 

It is almost confirmed to be an 8-core CCX configuration. My original argument used the non-linear increase in complexity of the IF interconnect with the number of nodes, but now it has leaked that EPYC 2 has the same 4x2 memory channel structure as EPYC. So it is almost confirmed that EPYC 2 is a four-die chip with 16 cores per die, each die having a pair of 8-core CCXs.

The Zen2 CCX diagram would be:

Code:
core L3 L3 core
core L3 L3 core
core L3 L3 core
core L3 L3 core
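
To put a number on that "non-linear increase of complexity" point: if every core in a CCX keeps a direct path to every other core, the link count grows as n(n-1)/2, so an 8-core CCX needs far more than twice the wiring of a 4-core one. This assumes Zen2 keeps an all-to-all intra-CCX arrangement, which is just my guess.

Code:
# Link count for an all-to-all CCX of n cores: n*(n-1)/2.
# Assumes a fully connected intra-CCX fabric, which Zen2 may or may not keep.
def ccx_links(n_cores):
    return n_cores * (n_cores - 1) // 2

print(ccx_links(4))   # 6 links  (Zen-style 4-core CCX)
print(ccx_links(6))   # 15 links (the 6-core CCX idea)
print(ccx_links(8))   # 28 links (an 8-core CCX)

# Either layout still lands on 64 cores for a four-die package:
print(2 * 8 * 4)   # two 8-core CCXs per die, four dies -> 64
print(4 * 4 * 4)   # four 4-core CCXs per die, four dies -> 64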
 
No way AMD gives up the 4-core CCX.

Anyone who has followed AMD for the past half century knows that they do not give up on a design easily. They will milk the 4-core CCX cow for years to come.

IMO they are likely looking to place the Threadripper model on one die, i.e. double the CCX paths locally and maintain the single paths globally.
 
Guys, I can't find any information regarding Windows 7 compatibility.
I know this sounds weird, but I need to know.
Considering that the information feed on AMD's website is incoherent shit, I found nothing on the topic.
Any leads?
 
Likely a hard find as most only consider server OSs.

https://www.servethehome.com/amd-epyc-supported-os-hypervisor-compatibility-matrix-launch/

Beyond this, we have started testing OSes. Thus far we have verified that Ubuntu 17.04, Windows Server 2012 R2 and Windows Server 2016 work out-of-the-box with AMD EPYC.

Best I could find in 30 sec.
 

JustReason, aha, I've seen that page. No W7 mention, as expected.
Hmm... Windows Server 2012, as far as I know, is based on modified Windows 8 code... so maybe we have some chance.

Looking at the past: Windows 7 was not on the Ryzen compatibility list either, but I am currently using Windows 7 on a Ryzen platform without any restrictions.
 