"New" server build and specs!

sinisterDei

Hiya, just figured I would share my "new" server PC with everyone. New is in quotes there, because some of the hardware is pretty ancient.

I call it a "server" but really it just runs Windows 10. It does some server-ish things: it runs VMware (Workstation edition) and hosts some VMs for me, acts as my media repository and backup storage, runs PLEX, provides FTP access to my house, and handles some other ancillary duties.

This upgrade was prompted by a need for the box to do a better job at virtualization, as well as ever-expanding storage capacity needs.

Firstly, the "old" hardware, some of which is actually newer than the "new" hardware.

Intel i7 4770
ASRock Z97 Extreme4
32 GB Crucial DDR3
Samsung SM951 128GB M.2 SSD (boot drive)
3ware 9690SA-8i RAID controller
8x 3TB HDDs (varying brands) in RAID 6

And now, the new hardware:
2x Intel Xeon X5680 CPUs (6-core, 3.33 GHz)
Supermicro X8DTL-if
96 GB Samsung DDR3
Samsung 850 EVO 250 GB SSD (boot drive)
IBM M5015 RAID controller
HP SAS expander
8x 8TB HDDs (Seagate Archive 8TB)

Common hardware that didn't change:
Fractal Design R4 Black Pearl case
Cooler Master 650W PSU
250 GB Crucial SATA SSD

So, the CPUs are three years older, but they're 6-core and there are two of them. The memory is a bit slower because it's registered ECC, but there's three times as much of it. This system will be *far* more capable for virtualization and server duties.

I was originally going with an ASUS Z8NA-D6C, but the one I bought was DOA. That turned out to be a good thing though, because I'm not sure it would support the X5680 CPUs anyways, and I like Supermicro better.

The X8DTL-if motherboard is pretty nifty, considering the amount of crap they were able to smash into a standard ATX form factor. It *does* suffer from a cooling problem on its chipset - it gets too hot, and when it does, every fan in the system goes to 100% to try and cool it down. I had to rig up an 80 mm case fan pointed in the general direction of the chipset heatsink, and that keeps it cool enough to keep the airplane-volume fan speeds from kicking in.

I lost the SM951 PCIe SSD in favor of the 850 EVO because the X8DTL-if is far too old to have an M.2 slot. Heck, the damn thing doesn't even have USB 3.0, which kind of sucks.

On the other hand, it does have a BMC (IPMI, remote KVM) and since the system is headless and installed in a closet, that's pretty handy.

I got the CPUs, memory, RAID controller, and hard drives for free, which was also a driving factor. I just had to buy the motherboard and cooling fans, which collectively added up to about $160. I already owned the SAS expander. I also had to buy a little $15 adapter that converts a 6-pin PCI-E connector into an additional 8-pin EPS connector, since this particular motherboard requires two EPS12V connectors to work.

Now then, notes on the system. Firstly, the Seagate Archive drives are *horrible* to use in any kind of write-heavy environment or behind a RAID controller. They are drive-managed SMR drives with a very limited PMR write cache. Basically, these drives are meant for very, very read-heavy workloads, and Seagate officially recommends *not* using them in a RAID array. In my case, I chose to use them anyway, for a few reasons:

1. They were free
2. My usage scenario is mostly reads - the media archive uses most of the space
3. The primary case against them in a RAID array is that their extremely limited write speed makes any rebuild extremely slow (measured in days or weeks rather than hours). I chose to ignore this because I'm running RAID 6, so even if another drive dies during a rebuild the array won't fail, and because I don't care about the performance hit during a rebuild - my performance requirements amount to streaming some movies and TV shows.

That said, using these drives did pose an interesting scenario during the initial data copy to the array, which was around 15 TB of data. All told, it took around 8 *days* to copy that 15 TB. Part of that was because the RAID array was still initializing in the background at the time, and the RAID controller didn't have a good BBU attached, so it was operating in write-through mode the whole time.
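To put some rough numbers on that (a back-of-the-envelope sketch only, using decimal TB the way drive vendors count them):

```python
# Rough numbers for the array and the initial copy
drives = 8
drive_tb = 8                              # Seagate Archive 8 TB
usable_tb = (drives - 2) * drive_tb       # RAID 6 gives up two drives' worth to parity
print(f"Usable capacity: {usable_tb} TB (~{usable_tb * 1e12 / 2**40:.1f} TiB)")

copied_tb = 15
days = 8
avg_mb_s = copied_tb * 1e6 / (days * 24 * 3600)   # TB -> MB, divided by seconds in 8 days
print(f"Average copy speed: ~{avg_mb_s:.0f} MB/s over the whole transfer")
```

That works out to roughly 22 MB/s sustained, which seems about right for cache-exhausted DM-SMR drives sitting behind a controller stuck in write-through mode.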

My VMs live primarily on the SSDs, which are not in a RAID. The contents of the VMs that I care about back up to the RAID6 array daily, so losing one or both of the SSDs would represent an inconvenience, not a true loss.

Additionally, I don't back up most of the actual media on the RAID 6 array - I don't have another 8x 8TB array sitting around anywhere to send a copy to, and I don't have the internet bandwidth to send the shit offsite. I *do* back up one particular folder on the RAID 6 array, though, via Backblaze. That folder contains the important stuff (user profiles, documents, desktop, pictures, etc.) from every computer in my house.

You'll notice my RAID card supports 8 drives and I only have 8 drives (the SSDs are connected to the motherboard), yet I have a SAS expander in the mix. Welp, that's for future-proofing, and because I already owned the expander. If, in the future, I decide to replace the 8TB drives with 16TB drives or something, the expander has enough ports that I can connect all 16 drives at once, allowing for an easy copy. That's a luxury I've never had before - during past upgrades, like going from 2TB drives to 3TB drives, I had to borrow a NAS box, copy all my data over to it, swap the 3TB drives in for the 2TB drives on the RAID card, then copy it all back.

The extra memory and CPU cores are a godsend. My primary work PC is a VM hosted on this system (I RDP into it; working this way keeps all the "work shit" I'm required to have, like their remote management software, VPN, and antivirus, "contained" rather than on my personal PC), and I was able to expand it from 2 cores and 8 GB RAM to 4/16, and my overall performance on that VM has gone way up. Additionally, I can now keep all of my testing environments online at the same time, rather than having to pick and choose which VMs to boot simultaneously. I've got a small domain controller/Exchange environment, a few Linux boxes, and three OS X clients, and previously I could only boot one or two of them at a time without running into memory limits.
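For anyone curious what that resize looks like under the hood: in Workstation it just comes down to two keys in the VM's .vmx file, numvcpus and memsize (in MB). A rough sketch of bumping them programmatically is below - the VM path is made up, the VM has to be powered off, and normally I'd just change it in the Workstation GUI anyway.

```python
from pathlib import Path

# Hypothetical path to the work VM's config; only edit this while the VM is powered off.
vmx_path = Path(r"D:\VMs\work-pc\work-pc.vmx")

# Desired settings: 4 vCPUs and 16 GB of RAM (memsize is specified in MB).
new_settings = {"numvcpus": "4", "memsize": "16384"}

# Drop any existing lines for those keys, then append the new values.
lines = vmx_path.read_text().splitlines()
kept = [l for l in lines if l.split("=")[0].strip() not in new_settings]
kept += [f'{key} = "{value}"' for key, value in new_settings.items()]
vmx_path.write_text("\n".join(kept) + "\n")
```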

Thanks to having both systems booted up simultaneously, I was able to move over my Kodi MySQL database, as well as my PLEX installation, without much interruption. This was my first time moving a PLEX install, but they had a guide for it so that was easy. Moving the MySQL install was super easy as well, but I've done that about four times now so it was old hat.
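For reference, the MySQL move is basically just a dump-and-restore while both boxes are up; something like the sketch below, where the hostnames and credentials are placeholders, not my actual setup. Afterwards each Kodi client's advancedsettings.xml still has to be pointed at the new host.

```python
import subprocess

# Placeholder hosts and credentials - substitute your own.
OLD_HOST, NEW_HOST = "old-server", "new-server"
USER, PASSWORD = "kodi", "secret"

# Dump every database off the old box and pipe the stream straight into the new one.
dump = subprocess.Popen(
    ["mysqldump", "-h", OLD_HOST, "-u", USER, f"-p{PASSWORD}", "--all-databases"],
    stdout=subprocess.PIPE,
)
restore = subprocess.run(
    ["mysql", "-h", NEW_HOST, "-u", USER, f"-p{PASSWORD}"],
    stdin=dump.stdout,
)
dump.stdout.close()
dump.wait()
```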

Everything's all done now, and I'm pretty happy with it. Its older, slower CPU cores aren't noticeably slower in practice, and what clock speed they give up is more than offset by their quantity. I've got 28 TB of free space, which is a nice sight to see. This is definitely a project I would recommend to anyone who gets a pair of CPUs, a RAID card, and a bunch of memory and hard drives for free!
 
Have you tried Hyper-V?

It's been years since I used VMware Workstation and things could have changed, but the last time I used it, it was not well suited to server tasks. In terms of remote management, it could serve the VM's screen over VNC, and that was it. There was no remote management console for starting or stopping machines, changing config, etc., and the virtualization ran inside an application window - no way to run it as a service.

Windows 10 Pro includes Hyper-V, and it's really cool stuff. VMs run as a service, and of course you have a management window to control them when you need to do maintenance. The management console can connect over the network, so you can do VM maintenance from your workstation (of course, I realize that with your current setup you can RDP into the Win 10 server to do VMware maintenance). Hyper-V is also an actual hypervisor, designed from the start to be a server solution, so you gain robustness there. It works with Windows guests of course, and Linux is even supported as a first-class citizen (Ubuntu, Red Hat, and CentOS run without needing a VM tools installation).
 
On my previous build, I did actually test out Hyper-V. I was specifically interested in RemoteFX and the better RDP performance it was supposed to bring to my work VM, but I was disappointed by the results. We run VMware at work, so it makes sense for me to run the same thing in my home environment, which is where I do a bunch of testing. Plus, since it's for work, I get the software for free.

VMware Workstation is still pretty workstation-y. The screen integration has gotten much better if you want it, but it's still not usable as a "service". That's not much of an issue for me, because I make active use of the console session on the "server" PC as well, and I just RDP into it directly and start the VMs whenever the system reboots.

Which isn't often, because I'm in charge of its update schedule. One way you can still control Windows 10 updates is by enrolling the machine in a WSUS server, which is a big commitment: even just supporting Windows 10, WSUS uses some 150 GB+ of disk space, and you've gotta have a WSUS server running somewhere, slurping up memory and CPU. On the other hand, you break the cycle of Microsoft controlling your update schedule, so that's nice.
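On a domain you'd normally point clients at WSUS via Group Policy, but on a standalone Windows 10 box the same thing can be done with the Windows Update policy registry values. A rough sketch (the server URL is a placeholder, and it has to run elevated):

```python
import winreg

# Placeholder WSUS URL - substitute your own server (8530 is the default WSUS HTTP port).
WSUS_URL = "http://wsus.example.local:8530"

wu_path = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

# Tell Windows Update where the WSUS server lives...
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, wu_path) as key:
    winreg.SetValueEx(key, "WUServer", 0, winreg.REG_SZ, WSUS_URL)
    winreg.SetValueEx(key, "WUStatusServer", 0, winreg.REG_SZ, WSUS_URL)

# ...and tell the Automatic Updates client to actually use it.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, wu_path + r"\AU") as key:
    winreg.SetValueEx(key, "UseWUServer", 0, winreg.REG_DWORD, 1)
```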
 
You were right not to go with the ASUS board for the X5680s. They aren't supported due to the power requirements; the Z8NA only supports 95 W CPUs, so the X5675 is as fast as you can go. Having said that, the ones I have owned have been pretty good, and it's only the lack of USB 3 and SATA 6G that holds them back - both of which can be solved with add-in cards.
 
Yeah. USB 3 is the only bit that bothers me. The M5015 RAID card supports 6 Gb/s SAS/SATA, but the HP SAS expander limits that to 3 Gb/s when running SATA drives, which is what all of mine are. But that's still way faster than my actual performance requirements, so I use it anyway. I *do* occasionally load a bunch of stuff onto a USB drive though, so I'll probably add a USB 3.0/3.1 add-on card at some point.
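For perspective, some rough math on why 3 Gb/s links are still plenty for streaming duty (a sketch only; the 40 Mb/s figure is just an assumption for a high-bitrate remux):

```python
# SATA at 3 Gb/s uses 8b/10b encoding: 10 bits on the wire per data byte.
link_gbps = 3
link_mb_s = link_gbps * 1e9 / 10 / 1e6    # ~300 MB/s of usable bandwidth per link
print(f"Per-link ceiling: ~{link_mb_s:.0f} MB/s")

# A high-bitrate Blu-ray-quality stream is on the order of 40 Mb/s.
stream_mbit = 40
streams = link_mb_s * 8 / stream_mbit
print(f"Roughly {streams:.0f} simultaneous 40 Mb/s streams fit in one link")
```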
 