EPYC vs. Xeon: AMD, Intel Could Be Doubling Core Counts in 2019

Megalith · Staff member · Joined Aug 20, 2006
If recent speculation from AdoredTV is correct, AMD and Intel will be releasing new server chips next year with double the core count. The channel claims that Rome, AMD’s successor to their current Naples-based EPYC chips, will feature a 9-die design comprising 8 CPU dies and 1 I/O die for a total of 64 cores. Intel will be countering with their 3-die Cooper Lake, comprising 2 CPU dies and 1 I/O die for a total of 56 cores.

AMD's Naples-based EPYC and Intel's Skylake-SP-based Xeon have maximum core counts of 32 and 28, respectively. If these rumors are true, then the maximum core count for both vendors will double in 2019. While AMD has already bet on multi-chip designs, Intel has not yet thrown its hat in the ring (except perhaps with Kaby Lake-G, which packages one CPU die, one GPU die, and HBM2 together).
 
Sweet jebus.

Hurray for per-socket licences.

BOOOOOOOO per-core licences. I'm looking at you, Microsoft.

Honestly, I'd prefer per core licenses.

It's pissing me off that my older dual socket Xeon (2x hexacore L5640's, for a total of 12c/24T) costs twice as much to license as a brand new 32 core monster.

Per core licensing is fundamentally more fair.

Per socket licensing just hurts home users who often rely on older hardware due to it being cheaper.
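To put rough numbers on the complaint above, here is a minimal sketch of the two licensing models. The prices are entirely made up for illustration; only the core/socket counts come from the post (2x hexacore L5640s vs. a 32-core single socket).

```python
# Hypothetical per-socket vs. per-core license pricing (invented numbers,
# purely to show how the two models treat old low-core-count hardware).
PER_SOCKET_PRICE = 500  # cost per occupied socket (hypothetical)
PER_CORE_PRICE = 40     # cost per physical core (hypothetical)

def socket_cost(sockets):
    return sockets * PER_SOCKET_PRICE

def core_cost(cores):
    return cores * PER_CORE_PRICE

# Old dual-socket Xeon: 2 sockets x 6 cores = 12 cores total
print(socket_cost(2), core_cost(12))  # 1000 vs 480

# New single-socket 32-core part: 1 socket x 32 cores
print(socket_cost(1), core_cost(32))  # 500 vs 1280
```

Under these assumed prices, per-socket licensing charges the old 12-core box twice what it charges the new 32-core one, which is exactly the complaint.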
 
The funkiest thing in the video was that the AMD CPU cores are 7nm while the non-core "block" is 14nm. Funny to see that, for cost reasons, 14nm is still useful...
 
Instead of 20 core chips, can't we get some 4 core chips at 10ghz?

I know the answer, I'm just trolling... and wishing.
 
Instead of 20 core chips, can't we get some 4 core chips at 10ghz?

I know the answer, I'm just trolling... and wishing.
Thing is, that wouldn't make it faster per se anyway. The industry is moving heavily into simultaneous processing. At the end of the day there IS a penalty for trying to stuff more work into fewer, faster cores instead of distributing it across more of them; latency should improve when work is done at the -same- time instead of waiting on cycles.
 
I can tell you right now from working with HP's Gen10 Epyc based systems that they are much better overall systems than the Intel based systems. LGA 3647 IMO is a bad product.
 
Honestly, I'd prefer per core licenses.

It's pissing me off that my older dual socket Xeon (2x hexacore L5640's, for a total of 12c/24T) costs twice as much to license as a brand new 32 core monster.

Per core licensing is fundamentally more fair.

Per socket licensing just hurts home users who often rely on older hardware due to it being cheaper.

For what software? IIRC, MS licences a minimum of 8c per processor, 2 processor minimum, so you're into a 2up by default.
 
I can tell you right now from working with HP's Gen10 Epyc based systems that they are much better overall systems than the Intel based systems. LGA 3647 IMO is a bad product.

I haven't had a chance to mess with EPYC, but I'm really liking both my 3647-based Xeons and my 3647 72-core Phis.
 
Will be interesting to see what they have to do to cool those things.

Plus, they're gonna have to be clocked really low.
 
Funny, I had made comments on Twitter to AMD that a 9-die design would make a ton of sense the very day that guy published that article. Basically, with 9 dies you can look at it like the 1-9 keys on a keyboard number pad. You have a ton of different possible Precision Boost configurations that could adapt to maximize performance for workload conditions. It's easy to speculate that a 9-die design is something they've thought about and considered internally.
 
For what software? IIRC, MS licences a minimum of 8c per processor, 2 processor minimum, so you're into a 2up by default.

I don't use MS software.

Currently, the only software I have that is licensed in this manner is my Proxmox install. I would be much happier with a per core license, as I am definitely overpaying for my dual socket hexacores.
 
BOOOOOOOO per-core licences. I'm looking at you, Microsoft.

Microsoft is one of the last to the per-core party. Anyone else on per-socket probably won't remain there for long before also switching to per-core.

The whole charging by MIPS from the mainframes isn't looking so wonky now...
 
The industry is moving heavily into simultaneous processing

Ha! Multi-threading/parallelization has been pushed for over a decade now, but it's way harder than people thought, and thus we still have lots of apps that are heavily dependent on single-thread performance. And I don't see that ending any time soon either.

Since single-thread performance depends on clock speed more than anything else, and we are pretty much at the end of where we can go with that, I wish more cores had more of an impact than they do. But no one has figured out how to make more than a few specialized things scale well with parallelization yet.
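The "more cores have less impact than I'd like" point is basically Amdahl's law: if a fraction p of the work parallelizes, the serial remainder caps the speedup no matter how many cores you add. A quick sketch:

```python
# Amdahl's law: ideal speedup from n cores when a fraction p of the
# work is parallelizable. The serial fraction (1 - p) sets a hard cap.
def amdahl_speedup(p, n):
    """p: parallelizable fraction in [0, 1]; n: core count."""
    return 1.0 / ((1.0 - p) + p / n)

for cores in (4, 16, 64):
    # Even with 90% of the work parallel, 64 cores give under 9x
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

With p = 0.9 the speedup tops out below 10x even with infinite cores, which is why single-thread performance still matters so much.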
 
Ha! Multi-threading/parallelization has been pushed for over a decade now, but it's way harder than people thought, and thus we still have lots of apps that are heavily dependent on single-thread performance. And I don't see that ending any time soon either.

Since single-thread performance depends on clock speed more than anything else, and we are pretty much at the end of where we can go with that, I wish more cores had more of an impact than they do. But no one has figured out how to make more than a few specialized things scale well with parallelization yet.

I am not a systems programmer. Any idea why it is so hard when latency is not an issue? It's not like every app out there is a game trying to push frames as fast as it can.
 
I am not a systems programmer. Any idea why it is so hard when latency is not an issue? It's not like every app out there is a game trying to push frames as fast as it can.

When what is so hard? Multithreading? It's hard because of concurrency issues. There aren't as many opportunities as parallelization would require to work on different aspects of a problem independently and then bring them back together.

If you have to work through a problem sequentially it can't be done in parallel!

Image processing, compressing MP3s - those are examples of things where multiple chunks can be worked on independently and then brought back together since there aren't interdependencies - but it turns out there aren't a lot of processes or algorithms we rely on that are like that.

Unfortunately.
 
When what is so hard? Multithreading? It's hard because of concurrency issues. There aren't as many opportunities as parallelization would require to work on different aspects of a problem independently and then bring them back together.

If you have to work through a problem sequentially it can't be done in parallel!

Image processing, compressing MP3s - those are examples of things where multiple chunks can be worked on independently and then brought back together since there aren't interdependencies - but it turns out there aren't a lot of processes or algorithms we rely on that are like that.

Unfortunately.

So, a lot of what's going on behind the scenes is single threaded and absolutely cannot be split? What kind of problems are we talking about here?
 
What if the 9th chip is actually an active interposer at 14nm?
 