AMD Hopes to Break Intel Server Dominance with New 32-Core Naples Chip

I ran an incredible amount of stuff on a 2x8/16 64GB Dell rack at my prior job... I can't even comprehend the amount of junk you could stuff into a fully loaded dual-socket Naples system.
 
Please, no less than a 2.8GHz base clock... and a TDP no higher than 130W.

32 cores, roughly:
8 cores @ 2GHz = 30W
4 x 30W = 120W
2.5GHz = 140W, maybe
2.8GHz = 175W, maybe

I don't think 32 cores will fit in 130W at a 2.8GHz base, but even the numbers above are crazy for 14nm. We don't know for sure, but I'm certain they can fit 32 cores in 120W at 2GHz.
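Putting that back-of-envelope math into a quick script (a sketch only: the 30W-per-8-cores-at-2GHz baseline is the guess above, and linear clock scaling is optimistic since voltage usually has to rise with frequency):

```python
# Back-of-envelope power estimate for a 32-core Naples-style part, scaled
# from an assumed 8-core block at 2.0 GHz / 30 W (the figure guessed above).
# Power is taken as linear in core count and in clock, which is optimistic
# because voltage typically climbs along with frequency.

BASE_CORES, BASE_CLOCK_GHZ, BASE_POWER_W = 8, 2.0, 30.0

def estimated_power(cores: int, clock_ghz: float) -> float:
    """Linear-in-cores, linear-in-clock power estimate (no voltage scaling)."""
    return (cores / BASE_CORES) * BASE_POWER_W * (clock_ghz / BASE_CLOCK_GHZ)

for clock in (2.0, 2.5, 2.8):
    print(f"32 cores @ {clock:.1f} GHz ~ {estimated_power(32, clock):.0f} W")
# -> 120 W, 150 W, 168 W: the same ballpark as the 120/140/175 W guesses
#    above, and comfortably past a 130 W TDP once the base clock climbs.
```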
 
I don't see 32 cores at a clock greater than, say, 2.4GHz. I really think that would be the top end for a server TDP, especially if you want 2-4 of these in a single server.
 
You're all probably right. I knew not to expect magic when the clock speeds are protected by NDA.
 
Tons of cores are fine, unless the software you are using is licensed by the core.
With something like SQL Server, it can make more sense to pay more for higher clocks and fewer cores, because of the high software licensing costs (rough numbers in the sketch below).
Yeah, but I specifically mentioned Hyper-V, which is the exact use case this would be awesome for. Pretty sure we don't pay per core or per socket, just per number of installs, and the price is really cheap compared to the ROI.
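To put rough, made-up numbers on the cores-versus-clock licensing trade-off mentioned above (nothing below is real SQL Server pricing; the figures are invented just to show the shape of the math):

```python
# Hypothetical comparison of two boxes doing the same total work under
# per-core licensing. All prices are invented for illustration only.

PER_CORE_LICENSE = 7000.0  # $/core, made-up figure

def total_cost(cores: int, hardware_cost: float) -> float:
    """Hardware plus per-core licensing for one server."""
    return hardware_cost + cores * PER_CORE_LICENSE

# Option A: many slower cores; Option B: fewer, higher-clocked cores.
many_slow = total_cost(cores=32, hardware_cost=12_000)
few_fast  = total_cost(cores=16, hardware_cost=15_000)

print(f"32 slower cores: ${many_slow:,.0f}")   # $236,000
print(f"16 faster cores: ${few_fast:,.0f}")    # $127,000
# Once the per-core license dwarfs the hardware price, the box with fewer,
# faster cores wins even if the hardware itself costs more.
```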
 
Well, with Server 2016 you're going to be paying per-core licensing. With VMware and Veeam it's still per processor. We're torn between holding onto Server 2012R2 (boo, because a shared desktop looks like Win 8) and moving to 2016 (boo, per-core licensing).

I wonder if there are good, tested desktop mods to make a hosted shared desktop from a 2012R2 box look anything like either 7 or 10...
 
While it isn't my team doing the work, we use virtual desktops here... but they do look like Windows 7. A toggle to Windows 10 would be nice.

And holy crap... a per-core license cost would EAT OUR FREAKING LUNCH. (Depending on cost, of course.)

Microsoft is going to hit a damn upgrade wall with that BS.
 
Threaten to move to Linux and see what your account manager says. Granted, only production SQL Servers need to be paid for; dev, QA, and other non-prod can run Developer Edition, which is full-blown SQL, just not for prod.
 
Well, if they did switch to a per-core license fee (which honestly makes a lot more sense than a per-socket license fee), I'd imagine the cost per core would be a lot lower than the cost per socket.

Right now I have 12 cores in my server: 2 sockets of 6 cores each (older Westmere-EP Xeons). If instead of $10 per month per socket I paid, say, $1.67 per core per month, my licensing costs would be just about the same, and they wouldn't be giving "unfair" discounts to newer, higher-core single-socket systems over older, lower-core multi-socket systems.

Per-socket licensing just makes absolutely no sense at all.
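Spelling out the break-even math from that example (the $10/socket and $1.67/core rates are the hypothetical figures above, not real prices):

```python
# Hypothetical per-socket vs per-core monthly licensing cost for the
# 2-socket, 6-core-per-socket Westmere-EP box described above.

SOCKETS, CORES_PER_SOCKET = 2, 6

per_socket_rate = 10.00  # $/socket/month (hypothetical)
per_core_rate   = 1.67   # $/core/month   (hypothetical)

per_socket_monthly = SOCKETS * per_socket_rate
per_core_monthly   = SOCKETS * CORES_PER_SOCKET * per_core_rate

print(f"Per-socket: ${per_socket_monthly:.2f}/month")  # $20.00
print(f"Per-core:   ${per_core_monthly:.2f}/month")    # $20.04
# Roughly a wash at 12 cores -- but a single-socket 32-core Naples box
# would jump from $10 to about $53 per month under the same per-core rate.
```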
 
I understand what you are saying, but those of us with 16-core CPUs running across 2 or more sockets in a server are going to feel it... depending on the per-core cost, of course.

Finding that data was too irksome for me at the moment, since it's all based on negotiated contracts and we're not doing ours until August.

But from reading the online information, per-core licensing will be the way it is... minimum 8 core licenses per server. Hopefully we can divide that up in a virtual space, because a lot of our task-specific servers don't need 8 cores; 2 runs just fine.
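A quick sketch of how that 8-core floor bites on small hosts (based only on the "minimum 8 cores" rule as described above; the actual Windows Server 2016 terms have more caveats than this):

```python
# Core licenses required per host given an 8-core minimum, per the rule
# described in the post above (real Microsoft licensing has extra caveats).

MIN_CORES_LICENSED = 8

def licenses_needed(physical_cores: int) -> int:
    """All physical cores must be licensed, with an 8-core floor."""
    return max(MIN_CORES_LICENSED, physical_cores)

for host_cores in (2, 4, 8, 12, 32):
    print(f"{host_cores:2d}-core host -> pay for {licenses_needed(host_cores)} cores")
# A 2-core task-specific box still pays for 8 cores, which is why packing
# those small workloads onto one bigger, already-licensed host is tempting.
```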
 
Please, no less than a 2.8GHz base clock... and a TDP no higher than 130W.


Personally, for what I do, I don't even need that kind of base clock today. My dual Xeon L5640 system has a base clock of 2.2 and turbos to 2.8.

I've actually been toying around with replacing it with an Atom C3000-based system, as long as the C3000s support enough RAM and have enough PCIe lanes. I need lots of cores for VMs, but I don't need lots of horsepower for what I do. I need lots of I/O for my storage and networking, and I definitely need lots of RAM. (Currently 192GB, and I don't want to downgrade.)

If one of those 16-core Atom C3000s supports 256GB of RAM and has enough I/O, it may just be my next server. Even Atoms today have higher IPC than my old Westmere Xeons, meaning I'd probably only need a 1.6GHz base Atom C3000 to keep up with where I am now at a 2.2GHz base / 2.8 turbo. It would use less power too.

In order to make this happen it can't be priced too outrageously, though, and I have to be able to find a good deal on DDR4 ECC RAM. If I do a server upgrade, most of my upgrade cost is going to be in RAM...
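That 1.6GHz figure falls out of the usual rough rule that single-thread performance is about IPC times clock (the IPC ratio below is an assumption for illustration, not a benchmark result):

```python
# Rough single-thread equivalence: performance ~ IPC * clock.
# The IPC advantage assumed for a modern small core over Westmere is a
# guess for illustration, not a measured number.

westmere_clock_ghz = 2.2   # Xeon L5640 base clock
assumed_ipc_ratio  = 1.4   # guessed: modern Atom-class core vs Westmere

equivalent_clock = westmere_clock_ghz / assumed_ipc_ratio
print(f"Atom clock needed to match the Westmere base: {equivalent_clock:.2f} GHz")
# ~1.57 GHz with these assumptions, which is where the "a 1.6 GHz C3000
# would keep up with my 2.2 GHz base" estimate above comes from.
```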
 
That sounds promising. However, I've gone the small-core route before. It's usually not I/O that's the problem but lack of single-thread power within one or more VMs. If you are doing a NAS appliance, you shouldn't have any problems. But in my case my all-in-one drives the entire house, triple video cards and all: one video card for the bedroom, one for downstairs, and one for the guest room. As it stands right now my 6-core Ivy is just enough to power it. I would LOVE to move to something like Naples; I could add a fourth or fifth room if I wanted. I've been experimenting with PCIe expanders at work and they are a godsend.
 
Interesting.

I tried going that route about a year ago, but no matter what I tried I could not get GPU passthrough to work, even when I went with Quadro 2000s, which officially support it.

I wound up backing off and using HTPCs in each room instead.

My server still runs a lot of VMs, though, and the current Westmere-EP is more than powerful enough for what I do; by all accounts, current "small cores" are bigger than my old "big cores".

[screenshot: list of the VMs running on the server]
 
Nice. Yeah, it looks like those are mostly network services; even MythTV isn't that CPU-heavy. With that workload I think you are good to go, and going low-power in your case makes a ton of sense. It's a little different in my case: you can game in any of the three rooms. Two people can game simultaneously without anyone noticing; a third person starts to eat into PCIe bandwidth, but games are still totally playable. I also do video encoding in one of the VMs, hence the 6-core Ivy.
 
Yep. If I look at the max use over a monthly time span, this server, even with its relatively small cores by modern standards, is pretty much overkill for me:

[screenshot: monthly CPU/load usage for the server]


What is not visible in this list is that I also run my NAS off of this rig. I have two ZFS pools: one small mirror on dual SSDs for the boot/VM storage of the system, and one large 12x4TB hard drive pool (two vdevs of 6 drives each in RAIDz2, so RAID60-equivalent) with two 512GB SSDs striped as a cache and two SSDs mirrored as a SLOG/ZIL device. This does use some CPU.

The CPU use peaked at 22.54% and the load average peaked at 12.99. That's actually higher than I expected, but still not terrible considering it was a short peak, most of the time it is much lower, and it is a 12C/24T setup.

What I do use tons of, though, is RAM. (That's not evident at the top of the image, as the system had just rebooted due to a power loss.)
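For reference, the usable space of that pool layout works out roughly like this (ignoring ZFS metadata and slop overhead, so real numbers land a bit lower):

```python
# Usable capacity of the pool described above: two 6-drive RAIDz2 vdevs of
# 4 TB drives (RAID60-equivalent). Ignores ZFS metadata/slop overhead.

DRIVE_TB        = 4
DRIVES_PER_VDEV = 6
PARITY_PER_VDEV = 2   # RAIDz2
VDEVS           = 2

data_drives = VDEVS * (DRIVES_PER_VDEV - PARITY_PER_VDEV)
usable_tb   = data_drives * DRIVE_TB

print(f"Raw:    {VDEVS * DRIVES_PER_VDEV * DRIVE_TB} TB")  # 48 TB
print(f"Usable: {usable_tb} TB (before ZFS overhead)")     # 32 TB
# The striped 512 GB SSDs serve as L2ARC read cache and the mirrored pair
# as SLOG, so neither adds to pool capacity.
```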
 
Zarathustra[H] I would worry about your CPU charts not showing the full story. Can you look at your I/O saturation for the same time frame and see if you are plateauing on physical I/O?

I doubt it, these being home systems constrained by consumer-level network connectivity, but I'm just curious.

Making a gaming VM host with one of the new generation of video cards coming from AMD should be badass: put a pair of those in with some decently fast storage and a buttload of RAM, and you could host 4 or more actively gaming VMs no problem. (Depending on how well AMD's virtualized GPU performance is accepted, of course.)
 
Not quite sure how I would get the I/O use data.

It's an older Supermicro X8DTE dual-socket Xeon board. (I was going to post a link, but the Supermicro site seems to be down, which is odd.)

Anyway, it has quite a few PCIe lanes, but they are mostly PCI-E 2.0:
  • 4 (x8) PCI-E 2.0 slots
  • 1 (x4) PCI-E 2.0 slot
  • 1 (x4) PCI-E slot
I do have a 10Gbit NIC in one of the 8x slots.
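On the I/O question above, one way to pull rough per-disk utilization numbers is sketched below (assumes a Linux host with the psutil package installed; Linux is where psutil exposes a busy_time counter, which approximates what iostat reports as %util):

```python
# Sample per-disk busy time over an interval to approximate utilization.
# Assumes Linux, where psutil's disk counters include busy_time (ms spent
# doing I/O); on other platforms the field may be missing.

import time
import psutil

INTERVAL_S = 5.0

before = psutil.disk_io_counters(perdisk=True)
time.sleep(INTERVAL_S)
after = psutil.disk_io_counters(perdisk=True)

for disk, end in after.items():
    start = before.get(disk)
    if start is None or not hasattr(end, "busy_time"):
        continue  # disk appeared mid-sample, or platform lacks busy_time
    busy_ms = end.busy_time - start.busy_time
    util_pct = 100.0 * (busy_ms / 1000.0) / INTERVAL_S
    print(f"{disk:>8}: {util_pct:5.1f}% busy")
# Disks pinned near 100% during peak load would point at storage, not the
# CPUs, as the bottleneck the CPU chart can't show.
```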
 