Intel Xeon Scalable Processors worth it?!

After seeing a wicked dual-Xeon system here, I began to wonder: are Intel Xeon Scalable Processors really worth it? I mean, as opposed to, say, a dual-socket-2066 system? (Provided I can find one, that is.)

Surprisingly, if you google for 'Scalable Processors', most of what you get is promo-talk, and little about what it actually does. :) (And how it can benefit your system.) Like, will UPI (Intel Ultra Path Interconnect) make your system see them as 1 CPU?!
 
Socket 2066 is not available in 2P form; the socket is a variant of the Xeon-SP socket with reduced memory channels to lower board costs.
The "Xeon Scalable" nonsense is mostly marketing - the platform is really no different from any of its predecessors. The only real change this generation is the unification of the 2P, 4P, and 8P sockets - all high end Xeons are now 8P capable. Previously, the HCC Xeons were available in 2P ("EP") and 4/8P ("EX") packaging, with the EX processors costing substantially more.

Is it worth it? A dual-Platinum build is not that expensive if you are clever - probably around $6K for dual Platinum 8136s, 192GB, a Supermicro board of your choice, and a single consumer GPU. The problem is that $6K gets you worse performance than a 9900K in almost all applications; not that many workstation programs scale past 8 cores, and those that do likely scale well enough that you can come up with a more cost-effective solution involving a small cluster or a bunch of Threadrippers. If you're in the business of building $6,000 workstations, the elusive W-3175X is likely the right choice if you can get your hands on one. Overall price will be slightly higher because the Dominus Extreme is $1,500, but single-threaded performance and worst-case latency will be much better.
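
To put rough numbers on that cores-vs-clocks tradeoff, here's a minimal Amdahl's-law sketch. The 95% parallel fraction and the clock figures are illustrative assumptions, not benchmarks - plug in your own workload's numbers:

```python
# Back-of-the-envelope Amdahl's-law comparison (illustrative numbers only -
# the parallel fraction and clocks are assumptions, not measurements) of
# why 2x 28 slower cores can lose to 8 fast ones.

def throughput(cores: int, clock_ghz: float, parallel: float) -> float:
    """Relative throughput: Amdahl speedup scaled by per-core clock."""
    speedup = 1.0 / ((1.0 - parallel) + parallel / cores)
    return speedup * clock_ghz

P = 0.95  # assumed parallelizable fraction of the workload

# ~2.0 GHz all-core assumed for a Platinum pair vs ~4.7 GHz for a 9900K
# (rough figures - check Intel ARK for the parts you actually care about).
print(f"2x 28C @ ~2.0 GHz: {throughput(56, 2.0, P):.1f}")
print(f"1x  8C @ ~4.7 GHz: {throughput(8, 4.7, P):.1f}")
# With only 95% of the work parallel, the 56-core box barely edges out
# the 8-core chip - and at a 90% parallel fraction it loses outright.
```

The point: unless your workload is almost perfectly parallel, the per-core clock deficit eats most of the core-count advantage.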
 


Thanks for your reply.

'Worse performance than a 9900K' pretty much seals (or closes, rather) the deal. Even though I could probably afford 2x Xeon 8180s, I'm not entirely sure I want to (that's like the price of a mid-size car). I think building a system for like $6,000-$10,000 feels more reasonable. And the W-3175X is probably what I should go for -- even though availability is a bit of an issue right now. I do a lot of video rendering (x264, mainly), so however many cores I can get my hands on will scale. And maybe GIGABYTE's C621 Aorus Xtreme board will be available sooner.
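
In case it helps anyone, here's the rough sketch I'd use to sanity-check that x264 scaling before spending the money - just timing the same encode at increasing thread counts. It assumes ffmpeg built with libx264 is on the PATH, and "sample.mp4" is a placeholder for whatever test clip you use:

```python
# Time the same x264 encode at increasing thread counts to see where
# scaling flattens out. Assumes ffmpeg (with libx264) is installed;
# "sample.mp4" is a hypothetical test clip.
import subprocess
import time

SOURCE = "sample.mp4"  # placeholder input file

for threads in (4, 8, 16, 28, 56):
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE,
         "-c:v", "libx264", "-preset", "slow",
         "-threads", str(threads),
         "-f", "null", "-"],          # discard output, keep the encode work
        check=True,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    elapsed = time.perf_counter() - start
    print(f"{threads:2d} threads: {elapsed:6.1f} s")
# If the times stop dropping well before the thread count maxes out,
# the extra sockets/cores won't pay for themselves on this workload.
```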
 
The "Xeon Scalable" nonsense is mostly marketing - the platform is really no different from any of its predecessors. The only real change this generation is the unification of the 2P, 4P, and 8P sockets - all high end Xeons are now 8P capable. Previously, the HCC Xeons were available in 2P ("EP") and 4/8P ("EX") packaging, with the EX processors costing substantially more.

Actually, no.

The 'Xeon Scalable' refers to the fact that most if not all Xeons are available in 'F' variants, the F standing for 'Fabric'.
F-variant Xeons have a secondary 'sideband' connector designed to allow direct CPU linking via 100Gb Omni-Path interconnects... it's to make large farms easier to connect.

-----

Is it worth it?

I sure like mine, but it was $$... I'm sure it's a case-by-case thing.
You mention video rendering.
I built mine for H.265 (HEVC) 4K video editing/rendering.
The Skylake Xeons do onboard hardware HEVC encode/decode... and they absolutely crush it.

(y)
 


This 'Omni-Path interconnects' thingy wouldn't be relevant on a single mobo with dual sockets, though, right?! But yeah, dual-socket boards are what make Xeons tempting, after all. If they had dual-socket boards for 2066... but I haven't found any.

EDIT: For example, I was looking at https://www.supermicro.com/products/motherboard/Xeon/C620/X11DAi-N.cfm Put 2x 8180 Xeons in there, and I'd have one major kick-*ss system! :)
 

I'm curious - what's your video editing workflow? I haven't had the best scaling on many-core systems with Adobe products, for example.
I think the 'Scalable' is just a brand - it replaces the old 'E5 v3' type naming conventions. Omni-Path is a vaguely related fruit of the Intel-QLogic acquisition, where something akin to 100Gb InfiniBand is integrated onto the CPU for easier HPC deployments, but it's still fundamentally no different from any other network interface - there are no provisions for cache coherency or other such nonsense, for example.
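
To make that concrete (and to answer the OP's earlier 'will it see them as 1 CPU' question): a 2P box boots as a single machine, but the OS exposes each socket as its own NUMA node. A minimal Linux-only sketch that just reads sysfs:

```python
# Linux-only sketch: list NUMA nodes and which logical CPUs belong to
# each. On a 2P Xeon-SP board you'd expect node0 and node1, one per
# socket - one system image, but two separate memory domains.
from pathlib import Path

nodes = sorted(Path("/sys/devices/system/node").glob("node[0-9]*"))
for node in nodes:
    cpus = (node / "cpulist").read_text().strip()
    mem = (node / "meminfo").read_text().splitlines()[0].split()[-2:]
    print(f"{node.name}: CPUs {cpus}, MemTotal {' '.join(mem)}")
```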
 
Thanks for your reply.

'Worse performance than a 9900K' pretty much seals (or closes, rather) the deal. Even though I could probably afford 2x Xeon 8180s, I'm not entirely sure I want to (that's like the price of a mid-size car). I think building a system for like $6,000-$10,000 feels more reasonable. And the W-3175X is probably what I should go for -- even though availability is a bit of an issue right now. I do a lot of video rendering (x264, mainly), so however many cores I can get my hands on will scale. And maybe GIGABYTE's C621 Aorus Xtreme board will be available sooner.

Both are in stock if you're feeling froggy.

https://www.newegg.com/Product/Prod...5X &cm_re=Xeon_W-3175X-_-19-118-010-_-Product

https://www.newegg.com/Product/Product.aspx?Item=N82E16813119192
 

Yes, I noticed. But Asetek still isn't shipping their 690LX-PN liquid cooler outside the US. (And I don't want to build a custom loop, as I simply suck at that stuff, and risk losing $4,000 to a leak.) I fired off an annoyed email at Asetek about it, though.

EDIT: P.S. Is that an NB fan I see on there?! Oh, I hate those! (They tend to get very noisy after a while.) The Gigabyte one seems to be fanless.
 

Yeah, I do believe the GB is passive.
On the cooler tip...
If you're not willing to go water and you're not OC'ing, then this HSF would make a great stand-in till you can get your AIO.

https://www.servethehome.com/noctua-nh-u14s-dx-3647-intel-xeon-scalable-lga3647-cooler-review/

https://www.amazon.com/Noctua-NH-U1...3647&qid=1553705857&s=gateway&sr=8-2-fkmrnull
 
I ended up going with a Supermicro X11DPi-NT... it has the cool bits like dual 10Gb Ethernet, OCuLink PCIe, etc.


I'm curious - what's your video editing workflow? I haven't had the best scaling on many-core systems with Adobe products, for example.
I think the 'Scalable' is just a brand - it replaces the old 'E5 v3' type naming conventions. Omni-Path is a vaguely related fruit of the Intel-QLogic acquisition, where something akin to 100Gb InfiniBand is integrated onto the CPU for easier HPC deployments, but it's still fundamentally no different from any other network interface - there are no provisions for cache coherency or other such nonsense, for example.

Yes, I think that's valid... the Intel Xeon Scalable 'family'.
My X11DPi-NT has the motherboard connections for F-type processors. Obviously I didn't use F-type procs (I wish I had a farm for 'em, but...).

My workflow? Adobe Premiere Pro and HandBrake (often at the same time).
Back when I was researching this hardware, it was suggested to me that 12-16 cores was the best you were gonna get as far as scaling for Adobe... hence my 2x Xeon Gold 6144s (8C/16T per CPU).
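
Since I run Premiere and HandBrake at the same time, pinning the encode to one socket helps keep them out of each other's way. A minimal Linux-only sketch using os.sched_setaffinity - the PID and the core numbering are placeholders for a 2x 8C/16T layout (check lscpu for the real mapping on your board):

```python
# Linux-only sketch: pin an already-running encode to the second
# socket's logical CPUs so another app keeps the first socket.
# The PID and core ranges below are hypothetical placeholders for
# a 2x 8C/16T system - verify your topology with lscpu/numactl.
import os

HANDBRAKE_PID = 12345                                   # hypothetical PID
SOCKET2_CPUS = set(range(8, 16)) | set(range(24, 32))   # assumed layout

os.sched_setaffinity(HANDBRAKE_PID, SOCKET2_CPUS)       # apply the mask
print(os.sched_getaffinity(HANDBRAKE_PID))              # verify it took
```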

--------

Thanks! I didn't even realize you could air-cool a 3647 socket. :)

Air cooling is the standard for 3647 chips.
Be very careful if you decide to watercool a 3647 rig. The CPU mounting mechanism is flaky at best.
In the process of watercooling mine, I lost 2 channels of RAM on CPU2... this is a common problem from what I've seen.
 


Yes, I read about that the other day. Something about the lack of a traditional retention backplate or something. I didn't quite get it, though. I mean, wouldn't the same issue exist with an air cooler?!

Also, I was actually hoping to overclock a little. Like, at least all cores at 4 GHz (something anal on my end: not wanting any of the 28 cores to run slower than my current i7-6700K -- which I eventually did give a very modest overclock to 4.5 GHz, btw). So, it seems both air and water are a bit problematic with these CPUs. High time Intel finally started baking something smaller than 14nm. :)

EDIT: I can find surprisingly little about the cooling performance of that Noctua NH-U14S DX-3647. The only thing I could find was something like 26°C at stock. That's not bad per se, but nothing about full load -- let alone overclocking.
 
S3647 doesn't have a traditional ILM (independent loading mechanism), but rather relies on the heatsink screws to keep the lands on the CPU in contact with the pins in the socket. My guess is that the stock Supermicro coolers are carefully designed to apply the correct amount of force to the package. Waterblocks are probably less meticulously made, and huge workstation coolers apply all kinds of bad forces to the socket if the board is vertical.
 
Nope, the W-3175X is a 1P-only part - Intel really likes to price-segregate based on socket count.
 
S3647 doesn't have a traditional ILM (independent loading mechanism), but rather relies on the heatsink screws to keep the lands on the CPU in contact with the pins in the socket. My guess is that the stock Supermicro coolers are carefully designed to apply the correct amount of force to the package. Waterblocks are probably less meticulously made, and huge workstation coolers apply all kinds of bad forces to the socket if the board is vertical.

Absolutely!
The standard Dynatron and Supermicro HSFs are actually milled components. They are very precise, have captive fasteners, etc.
The waterblocks have a single punched piece attached to the block to attach to the CPU mount... not rigid enough is my guess.
After reading/watching what Der8auer went through to try and get his 2 channels back, I decided to just call it a day... yeah, I lost 2 mem channels (of 6) on 1 CPU, but the difference is negligible.

-----

The ASUS C621E SAGE was my second pick. Badass mobo with fewer 'server' options, but more overclockability (as limited as Xeon OC is).

If you are seriously considering a W-3175X processor, I recommend the 2-CPU equivalent... dual Xeon Gold 10-core chips (Xeon Gold 6130?).
While this is normally the opposite of common advice, I say this because of PCIe lanes.

(y)
 
Thx for the feedback, guys. The lack of a traditional ILM does concern me a bit, though. Especially after watching Der8auer's delidding vid and hearing him talk about losing 2 channels. At first glance, I thought it was just a matter of too large a heatsink, but apparently the loss wasn't permanent, and he was talking about, what, not all CPU pins making proper contact or something?! And now you guys are reporting the same thing.
 
P.S. Perhaps I should clarify a bit. I have some physical disabilities (to my hands, primarily) that make me shy away from overly complicated and/or extreme-precision work. Hence, I can't do custom loops, or (perhaps?) sockets that require too much precision to mount a cooler on. So, yeah, if Der8auer has trouble with these Xeon sockets, then I definitely will too. I can always find someone to assemble it for me, of course, but I have yet to find a company here that offers BOTH a CPU-mounting service AND sells the high-end Xeons.
 
https://www.supermicro.com/products/nfo/superworkstation.cfm
Badass workstations with whatever you want, prebuilt.
----------
The last comment I would leave is this:

I have wanted a dual-CPU system since Katmai, but it's a serious decision.
Expect to pay $15-20k at minimum.
Don't buy cheap or non-conforming gear for anything. QVL for every step.

The reward?
Commercial-grade gear that smokes almost anything, with cool things most 'retail' customers never see.

The caveat?
When you go asking questions, very few people have any idea.

:ROFLMAO:
 
P.S. Perhaps I should clarify a bit. I have some physical disabilities (to my hands, primarily) that make me shy away from overly complicated and/or extreme-precision work. Hence, I can't do custom loops, or (perhaps?) sockets that require too much precision to mount a cooler on. So, yeah, if Der8auer has trouble with these Xeon sockets, then I definitely will too. I can always find someone to assemble it for me, of course, but I have yet to find a company here that offers BOTH a CPU-mounting service AND sells the high-end Xeons.

At that point, you might as well buy a pre-built workstation from Dell or HP. Unlike their desktops, Dell/HP high-end workstations have a lot of engineering and design behind them. The downside is they're very customized and don't take standard parts.

We have Precision 7920s and they are very, very neat, IMO. Although if you want something to show off, perhaps a Digital Storm workstation is better.
 

I may well just go with a pre-made HP system, yes. I just hate the idea of ruining a $10,000 CPU. :) I actually worked for HP for a while (in their server department, no less), so I know what they have to offer. But yeah, hardware tends to be 'proprietary' for systems like that.
 

Looks like the Xeon-SP workstation from HP is the Z8. I'm not very familiar with that firsthand, but I was always impressed with their predecessor Z8xx series.

Upside of proprietary is that it's very tightly integrated. Downside is that if a part breaks, you're going to pay a premium (which is why, I suppose, most people get 5 years of support).
 

The HP Z8 G4 workstation may be the thing for me. :) And it seems they're selling it with dual Platinum Xeons too. I may postpone getting a new car by a year, LOL, but it's premade, hassle-free, and with a 5-year maintenance contract, little could actually go wrong.
 