Feast your eyes upon a 4P-8160 and what it does to Cinebench R15.

... confused ...

Of course it can play Crysis. Not at the moment, obviously; but slap in a video card (or three) and it'll be just fine.


It's a joke from about 10 years ago, when Crysis was released and brought every PC to its knees.

Then it started: every new CPU, video card, PC, etc. that got released

was asked, "Can it play Crysis?"

The reply was usually:

"Nothing can play Crysis."

And seeing that cracking setup you have,

I still say: nothing can play Crysis.

:)
 
Converting it into its role is going to take a little longer than expected; we had a little trouble with the enjoyable hash rate... I'm just talking about fractions of a penny...
 
Grimlaking, I fully agree with your well justified rant about the cost of CALs and how Microsoft has made it far too painful. This won't be for SQL/DB use; but we are developing a system that uses MS-SQL and the costs get upsetting very quickly. We used to do per-socket licenses, then we did four core licenses as it worked out neatly for our needs on an older 2P platform. But now? Fuck me gently with a chainsaw.

EVIL-SCOTSMAN, oh, I know the old joke. I was just being too literal and figured people genuinely thought it couldn't game. Of course, to be fair, server platforms were never very game-friendly until somewhat recently. Oh, and yes, it really does look a little bit like a CPU die, doesn't it? :)
 
fastgeek, have you done performance testing to see whether fewer cores at higher clocks would help you more? In our case we needed the threads for synchronous jobs, to keep them from sitting in a CPU wait state.
 
That is some hardware. Wow. Interesting board design, too. Is it like 2x2 CPU modules stacked up in there?
 
...shit that MS does...
If your operation is demanding and moving too quickly to consider developing an alternative, then money is not really a concern, and that's entirely how "the Microsoft cut" is justified.

It kills me, but it's true.
 
Just for some shits and giggles, any estimated mining performance for the more popular cryptos?

I would imagine power and cooling, let alone sheer cost, would absolutely make it a non-starter; but it's just like a crazy fast car or jet that few people could ever hope to own and can only "dream about it".

Was just curious :)

On another note, the speed comparison for that test would very much be like using an F-16 to race a shopping cart down a drag strip. Likely not at all fuel friendly, but DAMN THAT SPEED lol.
 
I did like your post because I read it wrongly; but now that I have seen the mistake, I have to deduct some points.

You get -560 points for saying "it can't play Crysis".

Whether it "can't play Crysis" can only be determined by asking:

"Can it play Crysis?"

That Crucial Question was neither asked nor confirmed.

WTF has happened to this forum that allows "Crysis playing determination questions" to NOT be asked, but to be answered???

I am as confused as everyone else is.

Of course I was messing with you. There is no gaming card in the server pic, hence the joke. Sheesh, take a chill pill lol
 
Just for some shits and giggles, any estimated mining performance for the more popular cryptos?

I would imagine power and cooling, let alone sheer cost, would absolutely make it a non-starter; but it's just like a crazy fast car or jet that few people could ever hope to own and can only "dream about it".

Was just curious :)

On another note, the speed comparison for that test would very much be like using an F-16 to race a shopping cart down a drag strip. Likely not at all fuel friendly, but DAMN THAT SPEED lol.

High-end enterprise gear is normally pretty dang good on performance/watt. I know my Phis outdo every CPU and GPU out there in efficiency.
 
Of course I was messing with you. There is no gaming card in the server pic, hence the joke. Sheesh, take a chill pill lol

Dude.

I was also messin

;)

I take Crysis-playing abilities seriously.

One day I will find a rig that can play it, and when that day comes, that rig will forever be called "farcry rulez, fuck crysis".
 
... confused ...

Of course it can play Crysis. Not at the moment, obviously; but slap in a video card (or three) and it'll be just fine.



Guess you should've beat me to sharing the goods then, eh? :p
Yeah. I guess I should have. I am to the point these days where I am over the hardware aspect of it all for the most part. That said, my favorite train set is made up of over 880,000 physical hosts. I have no idea how many cores that is, but let's just say it does hyperscale machine learning and everything in between.
 
I've gotta ask because apparently I missed it -- what does this rig pull from the wall in wattage?

I don't have any way of properly measuring it at the wall; but per the iLO power meter I had it pulling 1,050W (or 3,583 BTU/hr) at 200V when running three 48-thread instances of Prime95.
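For anyone who wants to sanity-check that reading, the math works out, assuming iLO is reporting BTU per hour (a quick illustrative sketch, nothing more):

Code:
# Rough sanity check of the iLO reading above (assumption: iLO reports BTU/hr).
watts = 1050
btu_per_hr = watts * 3.412      # 1 W is roughly 3.412 BTU/hr
amps_at_200v = watts / 200      # current draw on the 200 V feed
print(f"~{btu_per_hr:.0f} BTU/hr, ~{amps_at_200v:.2f} A at 200 V")
# -> ~3583 BTU/hr, ~5.25 A at 200 V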
 
trudude, this is the first 4P Skylake I've had in my possession (should've received one months ago) so it's new and shiny to me. 4P itself is old hat now. If I could only convince someone to trial the latest SuperDome w/ the SGI interconnects. Now that would be something. :)

NixZiZ, yes, this is the same way Dell does their R820, R830 and (presumably) their forthcoming R840.
 
Kind of curious: how does this differ from Epyc for your needs, anyway? I'm sure they both have strong and weak points. I imagine you had to weigh that decision? I envy the amount of glued-together bandwidth Epyc provides with octa-channel. Also, doesn't NVDIMM fit into DDR4 slots? Quad-channel memory paired together with quad-channel NVDIMM seems like it would be gnarly. Get Kraken, release your inner geekdom, nerd!
 
Kind of curious: how does this differ from Epyc for your needs, anyway? I'm sure they both have strong and weak points. I imagine you had to weigh that decision? I envy the amount of glued-together bandwidth Epyc provides with octa-channel. Also, doesn't NVDIMM fit into DDR4 slots? Quad-channel memory paired together with quad-channel NVDIMM seems like it would be gnarly. Get Kraken, release your inner geekdom, nerd!

Well, obviously he didn't want to get fired, so...
 
We have some 4TB hosts with 4 CPUs in them. It's like $30k to $40k in CPUs, maybe more, and $192k for the RAM, at retail prices. Just googled the shit; no idea what the company pays for them. It's probably less than those totals, but still nuts.
 
Kind of curious: how does this differ from Epyc for your needs, anyway? I'm sure they both have strong and weak points. I imagine you had to weigh that decision? I envy the amount of glued-together bandwidth Epyc provides with octa-channel. Also, doesn't NVDIMM fit into DDR4 slots? Quad-channel memory paired together with quad-channel NVDIMM seems like it would be gnarly. Get Kraken, release your inner geekdom, nerd!

I do not think any of the big vendors are using Epyc just yet. It would probably require more engineering than it is worth, except that all the Spectre/Meltdown shit might spur some customers to demand it. But look at the price difference: $40k for 4 top-end Intel CPUs vs. what, $3k each, or $12k, for top-end AMD CPUs. If my guesstimate is right, that's only a $28k potential difference. For a machine in the several-hundred-k price range, it turns out to be just a small piece of the pie, especially after factoring in all your software license costs. It's not enough to worry about. Spectre/Meltdown is patched, and it hasn't been an issue in our environment. Hosts are memory starved before they are ever CPU starved, so even with any potential performance hits, it doesn't matter. And Spectre/Meltdown is probably one of the few potential drivers of a shift in CPU architectures a company might want in their datacenter, but even that hasn't been a big issue.
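As a quick gut check on the "small piece of the pie" point, here's the same guesstimate worked out (my rough numbers from above, not quotes):

Code:
# Guesstimates only: ~4 x $10k Intel vs ~4 x $3k AMD, per the figures above.
intel_cpus = 4 * 10_000          # ~$40k
amd_cpus = 4 * 3_000             # ~$12k
delta = intel_cpus - amd_cpus    # ~$28k potential savings
for server_price in (300_000, 500_000):
    print(f"${delta:,} is about {delta / server_price:.0%} of a ${server_price:,} machine")
# -> roughly 9% of a $300k box, 6% of a $500k box; small next to software licensing.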

Someone probably knows if any of the big boys are going to make any AMD-powered stuff...
 
We have some 4TB hosts with 4 CPUs in them. It's like $30k to $40k in CPUs, maybe more, and $192k for the RAM, at retail prices. Just googled the shit; no idea what the company pays for them. It's probably less than those totals, but still nuts.

Don't be surprised if they actually pay more than market value. Some companies couldn't care less about how much they spend as long as it's justified. The key is to get into the consulting field for those companies :p
 
Watching that thing destroy Cinebench R15 is rather satisfying.
 
We have some 4TB hosts with 4 CPUs in them. It's like $30k to $40k in CPUs, maybe more, and $192k for the RAM, at retail prices. Just googled the shit; no idea what the company pays for them. It's probably less than those totals, but still nuts.

Are they custom, bought through a partner program with the originating server vendor, or bought through a VAR?

But $200k total doesn't seem far off from what you'd pay for that.
 
trudude, this is the first 4P Skylake I've had in my possession (should've received one months ago) so it's new and shiny to me. 4P itself is old hat now. If I could only convince someone to trial the latest SuperDome w/ the SGI interconnects. Now that would be something. :)

NixZiZ, yes, this is the same way Dell does their R820, R830 and (presumably) their forthcoming R840.
My boss specializes in domes. Maybe I can get him to take some pics. ;)
 
I do contract work for the US Patent Office and the Dept of Energy. They have lots of top-end domes and Power-series systems. I usually only work on the Power-series stuff and z mainframes.
 
I do not think any of the big vendors are using Epyc just yet. It would probably require more engineering than it is worth, except that all the Spectre/Meltdown shit might spur some customers to demand it. But look at the price difference: $40k for 4 top-end Intel CPUs vs. what, $3k each, or $12k, for top-end AMD CPUs. If my guesstimate is right, that's only a $28k potential difference. For a machine in the several-hundred-k price range, it turns out to be just a small piece of the pie, especially after factoring in all your software license costs. It's not enough to worry about. Spectre/Meltdown is patched, and it hasn't been an issue in our environment. Hosts are memory starved before they are ever CPU starved, so even with any potential performance hits, it doesn't matter. And Spectre/Meltdown is probably one of the few potential drivers of a shift in CPU architectures a company might want in their datacenter, but even that hasn't been a big issue.

Someone probably knows if any of the big boys are going to make any AMD-powered stuff...

Dell makes Epyc-powered PowerEdge servers. Depending on the workload, I would imagine certain software would be harder to optimize for (2S is effectively 8 NUMA nodes). One thing Epyc has a considerable advantage in is the 1S market and storage. Intel does not have a 1S solution with enough PCIe lanes for NVMe storage.

That said, I'm finding it interesting that the OP is using 4S servers. From my viewpoint, I thought 4S servers were a dying breed. Basically, cloud solutions (e.g. Azure/OpenStack or clustering) made the RAS of 4S less useful. E.g. if your server was dying, you would take it offline and all the services would move to another node. Or, worst-case scenario, you could unplug a server and all the services *should* still migrate to the next node (since it's all network-storage based anyway).

I suppose it's reflected in our strategy, as most of the servers here are 2x Xeon Platinum 8176's. Curious what your general strategy is.
 
Dell makes Epyc-powered PowerEdge servers. Depending on the workload, I would imagine certain software would be harder to optimize for (2S is effectively 8 NUMA nodes). One thing Epyc has a considerable advantage in is the 1S market and storage. Intel does not have a 1S solution with enough PCIe lanes for NVMe storage.

That said, I'm finding it interesting that the OP is using 4S servers. From my viewpoint, I thought 4S servers were a dying breed. Basically, cloud solutions (e.g. Azure/OpenStack or clustering) made the RAS of 4S less useful. E.g. if your server was dying, you would take it offline and all the services would move to another node. Or, worst-case scenario, you could unplug a server and all the services *should* still migrate to the next node (since it's all network-storage based anyway).

I suppose it's reflected in our strategy, as most of the servers here are 2x Xeon Platinum 8176's. Curious what your general strategy is.

I hate to be speaking for the OP here, but my suspicion is either a very dense VM host group, or some sort of distributed computing need that requires a LOT of CPU horsepower and memory utilization. (I believe he hinted these are VM hosts.)

The need for the high core count and very high memory amount leads me down that path, or toward some killer database hosts.

On your point about 4S servers being a dying breed: there are tasks where you want to keep the processing in house for security and/or very tight latency control. Whenever you trust your systems to the cloud, you have the issue of the cloud vendor shuffling your processing to nodes that have greater latency. For my company's traffic, introducing another 10 ms of latency that we can't plan and account for is a problem. Our VM hosts are 72-thread, 512GB 2U systems (2-socket as well), but I could see CPU density needing to be higher and maxing out as a test bed for ROI.

I do see your point, though, in regards to risk. Even if these are VM hosts, you are putting a lot of eggs in one basket. OR you need some CRAZY large VMs that I would rather see on physical anyway.
 
I hate to be speaking for the OP here, but my suspicion is either a very dense VM host group, or some sort of distributed computing need that requires a LOT of CPU horsepower and memory utilization. (I believe he hinted these are VM hosts.)

The need for the high core count and very high memory amount leads me down that path, or toward some killer database hosts.

On your point about 4S servers being a dying breed: there are tasks where you want to keep the processing in house for security and/or very tight latency control. Whenever you trust your systems to the cloud, you have the issue of the cloud vendor shuffling your processing to nodes that have greater latency. For my company's traffic, introducing another 10 ms of latency that we can't plan and account for is a problem. Our VM hosts are 72-thread, 512GB 2U systems (2-socket as well), but I could see CPU density needing to be higher and maxing out as a test bed for ROI.

I do see your point, though, in regards to risk. Even if these are VM hosts, you are putting a lot of eggs in one basket. OR you need some CRAZY large VMs that I would rather see on physical anyway.

The cost of 4S servers has come down greatly since the previous generation, since the CPUs are the same now. Previously you had the Xeon E7 series, which fetched a price premium. So the price-to-processing-power ratio has come down in 4S's favor, but it is nevertheless still a premium to go the 4S route.

When I speak of cloud computing, I also mean on-premises cloud offerings or hybrid cloud models. Windows Server 2016 Datacenter with HA clustering is basically a 'cloud' stack (and has been since Server 2012, IMO). You also have OpenStack deployments that are very easy to do. So let's assume you have a choice between 5x 4S/3TB or 10x 2S/1.5TB as the compute node. With Xeon Platinum 8176's, you're at 56C/112T for 2S and 112C/224T for 4S. Unless you specifically need huge amounts of memory per node (and that would have to be > 1.5TB), is there an advantage to going the 4S route? The 2S solution can run 50 VMs at almost 32GB average memory each. Any node failure would be much easier to distribute with the increased server count.

Am I missing something?
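Just to put my own comparison into numbers (a minimal back-of-envelope sketch, assuming a Platinum 8176 is 28C/56T per socket; the node counts and memory sizes are the illustrative figures above, not a real sizing exercise):

Code:
# cores, threads, GB of memory for a node with the given socket count
def node(sockets, mem_tb):
    return sockets * 28, sockets * 56, mem_tb * 1024

layouts = {"5x 4S/3TB": (5, *node(4, 3.0)),
           "10x 2S/1.5TB": (10, *node(2, 1.5))}

for name, (count, cores, threads, mem_gb) in layouts.items():
    print(f"{name}: {cores}C/{threads}T and {mem_gb:.0f} GB per node; "
          f"cluster total {count * cores}C / {count * mem_gb / 1024:.0f} TB; "
          f"one node failure = {100 / count:.0f}% of capacity")

# Both layouts land at 560 cores / 15 TB total, and ~30 GB per VM covers the
# 50-VMs-per-2S-node case, but the 2S pool loses only 10% of capacity per
# failed node instead of 20%.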
 
(GIF: O1mNd5U.gif)
 
Sorry for slacking off. Wanted to post some sort of reply before I spaced yet again.

"Cloud" is a dirty word in my book. Not because it's useless, it's not, rather because so many people seem to think it's the be-all-end-all solution to everything. Not saying anyone here is of such an opinion... but it's something I deal with a lot. For example, if the use case completely monopolizes a 4P box like this, then the point of doing a "cloud" is pretty pointless. Now if the new Super Domes work the way I think they do, meaning the ability to make an assload of systems legitimately and transparently seem like 'one big ass box'... then that would be a whole different scenario. BTW, trudude, I'm all ears for real world information on the new domes. People here are pretty reluctant to use them. Not so much from a cost point; but from the worries that HPE might not be around for the long haul.

Really cannot comment on what this server is used for; can say it's not a VM or DB server, nor is it any sort of DC setup. Oh, and they'd take more sockets in a heartbeat. ;) Will say the software license costs would make most of you choke.

AltTabbins... I was watching that GIF thinking "WTF is this all about?"... then I saw sweet pop up and laughed my ass off. :D
 
"Cloud" is a dirty word in my book. Not because it's useless, it's not, rather because so many people seem to think it's the be-all-end-all solution to everything. Not saying anyone here is of such an opinion... but it's something I deal with a lot.

For example, if the use case completely monopolizes a 4P box like this, then doing a "cloud" is pretty pointless.

Cloud is used a lot because it's the current marketing jargon. The concept of modern cloud computing has existed since at least the early 2000s.

Disagree with the latter assessment. You can run bare-metal failover services provided by Server 2016 or OpenStack; it doesn't have to be containerized or virtualized. If you just run it standard bare-metal as a single node (without a controller) and the system fails, you'd have to do the failover manually. At least with a controller, the failover is automatic. IIRC, the new rage is containers (which have access to all physical resources, bar networking) or bare-metal in the cloud.

Really cannot comment on what this server is used for; can say it's not a VM or DB server, nor is it any sort of DC setup. Oh, and they'd take more sockets in a heartbeat. ;) Will say the software license costs would make most of you choke.

I like the 'guess the workload' game. I had doubts it was a VM/HPC because 4S for VM/HPC is unusual these days.

The task is likely extremely parallel, probably not I/O constrained (I don't see much in the way of networking, and the storage isn't using OCuLink), and likely requires large amounts of memory due to dataset size. I would guess financial or some kind of science sim (e.g. oil/gas/weather), but they seem to be moving those workloads to GPGPU.
 
Guess I should post my 2S Xeon Platinum 8176 benches for reference. Not sure how to get single-core performance. OS is Windows Server 2016 RS1; guess Cinebench isn't updated for a 2-year-old OS.

Trying to get Xeon 8180s (much higher base and turbo clocks), but there seem to be shortages in quantity.

(screenshot: h7wiUtX.png)
 
dexvx, I will be honest and say that I'm far from an expert on virtualization. That said, those who are express doubts about moving the workloads of this particular product to such an environment. Maybe it'll happen once we really start researching it. For now, though, it's a uni-tasker. (There are other factors, too, but I'm not getting into those.) BTW, no gas/oil/etc. :p Scientific might be fair.

Oh, click on File > Advanced Benchmark to enable single-core. :) We're restricted to the embedded roadmap, else we'd likely be using more robust processors.
 
You have no power against my SQL queries. LOL

That's one hell of a setup!
 
fastgeek, I like your boxes. I'm betting on something doing oil-location work and such, but it isn't really important. You just have users out there who need a TON of threads and a SHITTON of memory. That makes me think you're talking very large image files and data files manipulated on the fly, so I would point to something taking very high-res images and rendering them, allowing the "enhance" features you see on TV and such. But that's just a guess. Pure scientific work, I would think, would be better handled by some high-end GPUs designed for that sort of work (though their floating-point efficiency can be found lacking).

Let's not go down that rabbit hole and guess correctly. Impressive system. That's a lot of density for a single-tasked system. But if you need accurate floating point... that makes more sense.
 
Really cannot comment on what this server is used for; can say it's not a VM or DB server, nor is it any sort of DC setup.

Boss needed a new workstation because Outlook took 15 seconds to open, eh? Well, Minesweeper will run like a champ on that.
 
It's amazing what they can cram into a box these days. I can only wonder if there is a Skunk Works of the PC industry.
 
It's amazing what they can cram into a box these days. I can only wonder if there is a Skunk Works of the PC industry.

I have no documentation to back up what I am about to say here. I have never worked in the intelligence industry. I have no clearance.

Yes... yes, there is, and it is in direct partnership with the three-letter agencies of the US. They develop specific hardware, mostly for encryption and decryption.

The above is based on pure supposition and what I know from 20+ years in the IT industry.
 
Some thoughts here. That is a very nice machine. 192 threads is impressive, and seeing it hit 2.5GHz under load was nice as well. I have some 64-thread systems that do what I need and some 72-thread VM hosts. All good for the task.

I did want to note that someone mentioned the cost of the hardware. In planning a system out, that hardware cost IS important. BUT you would be FUGGING STAGGERED at the cost of OS and SQL licensing.

As an example, the company I work for has a contract with MS. We do SQL Enterprise licenses at 4 cores for $25k. (4 CORES!) So let's take that 192-thread machine. That's 4x 24-core CPUs, or 96 physical cores... so if you wanted that to be one big SQL host (something that COULD use that amount of CPU and RAM... and oh my god, the staggering performance; please tell me it has its own dedicated AFA supporting it!), you would take that $25k licensing cost and MULTIPLY it by 24. $600,000. NOT INCLUDING THE LICENSING COST OF THE OS.

Some other STUPID shit that MS does in licensing: let's say you wanted 3 of those systems as VM hosts (because you can run an F-ton of stuff on them). If memory serves, it's say $7k a socket, so that price isn't bad. BUT WAIT, THAT'S NOT ALL. If you have VMs that are running MS OSes, they want to charge you (in our case) $220 per core...

Not bad, right? Because the VM is only going to be 4 cores... so it's cheap... OH WAIT... NOT SO FAST, says MS. That's xxx per core that the VM COULD run on. WHAT? OK, it isn't THAT insane since it's an x-core VM: you only multiply the assigned cores by the number of hosts (at whatever that negotiated rate is) to get your licensing cost. (Affinity to specific hosts helps reduce the cost here.)

The other option is a Datacenter license, where you pre-license all cores across all hosts at a... discounted rate that is still going to be "WHAT?!" high.
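To put rough numbers on that (illustrative only: the $25k-per-4-core SQL figure and the $220-per-core guest figure are the negotiated examples above, not list pricing):

Code:
sockets, cores_per_socket = 4, 24
total_cores = sockets * cores_per_socket        # 96 physical cores in the 4S box

sql_packs = total_cores // 4                    # SQL Enterprise sold in 4-core packs
sql_cost = sql_packs * 25_000                   # 24 packs -> $600,000

# A 4-core Windows guest licensed against every host it COULD land on (3 hosts here):
vm_cost = 4 * 220 * 3                           # -> $2,640 for that one VM

print(f"SQL Enterprise on the 4S host: {sql_packs} packs = ${sql_cost:,}")
print(f"One 4-core guest across 3 hosts: ${vm_cost:,}")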

Of course, Linux variants have their own money grab in this licensing scheme as well... I just hope it's cheaper than MS's. ;)

Of course, all that being said... my point is: in most enterprise deployments the big cost isn't the metal. It's the intellectual property/licensing.

And we typical "gaming" end-users complain about DLC, season passes, annual subscriptions, and microtransactions... The costs for corporate customers mentioned above require me to wash my ass and change my shorts. Excuse me...
 