sinisterDei
[H]ard|Gawd · Joined: Dec 1, 2004 · Messages: 1,591
Hiya, just figured I would share my "new" server PC with everyone. New is in quotes there, because some of the hardware is pretty ancient.
I call it a "server" but really it just runs Windows 10. It does some server-ish things though: it runs VMware (Workstation edition) and hosts some VMs for me, acts as my media repository and backup storage, runs PLEX, provides FTP access to my house, and handles some other ancillary duties.
This upgrade was prompted by a need for the box to do a better job at virtualization, as well as ever-expanding storage capacity needs.
Firstly, the "old" hardware, some of which is actually newer than the "new" hardware.
Intel i7 4770
ASRock Z97 Extreme4
32 GB Crucial DDR3
Samsung SM951 128GB M.2 SSD (boot drive)
3ware 9690SA-8i RAID controller
8x 3TB HDDs (varying brands) in RAID 6
And now, the new hardware:
2x Intel Xeon X5680 CPUs (6-core, 3.33 GHz)
Supermicro X8DTL-if
96 GB Samsung DDR3
Samsung 850 EVO 250 GB SSD (boot drive)
IBM M5015 RAID controller
HP SAS expander
8x 8TB HDDs (Seagate Archive 8TB)
Common hardware that didn't change:
Fractal Design R4 Black Pearl case
Cooler Master 650W PSU
250 GB Crucial SATA SSD
So, the CPUs are 3 years older, but they're 6-core and there are two of them. The memory is a bit slower because it's registered ECC, but there's three times as much of it. This system will be *far* more capable for virtualization and server duties.
I was originally going with an ASUS Z8NA-D6C, but the one I bought was DOA. That turned out to be a good thing, because I'm not sure it would have supported the X5680 CPUs anyway, and I like Supermicro better.
The X8DTL-if motherboard is pretty nifty, considering the amount of crap they were able to smash into a standard ATX form factor. It *does* suffer from a cooling problem on its chipset - it gets too hot, and when it does, every fan in the system goes to 100% to try to cool it down. I had to rig up an 80 mm case fan pointed in the general direction of the chipset heatsink, and that keeps it cool enough to prevent the airplane-volume fan speeds from kicking in.
I lost the SM951 PCIe SSD in favor of the 850 EVO SSD because the X8DTL-if is far too old to have an M.2 slot on it. Heck, the damn thing doesn't even have USB 3.0, which kind of sucks.
On the other hand, it does have a BMC (IPMI, remote KVM) and since the system is headless and installed in a closet, that's pretty handy.
I got the CPUs, memory, RAID controller, and hard drives for free, which was also a driving factor. I just had to buy the motherboard and cooling fans, which collectively added up to about $160. I already owned the SAS expander. I also had to buy a little $15 adapter that converts a 6-pin PCI-E connector into an additional 8-pin EPS connector, since this particular motherboard requires two EPS12V connectors to work.
Now then, notes on the system. Firstly, the Seagate Archive drives are *horrible* to use in any kind of write-heavy environment or behind a RAID controller. They are drive-managed SMR drives with a very limited PMR write cache. Basically, these drives are meant for very, very read-heavy workloads, and Seagate officially recommends *not* using them in a RAID array. In my case, I chose to use them for a few reasons.
1. They were free
2. My usage scenario is mostly reads - the media archive uses most of the space
3. The primary case against using them in a RAID array is that their extremely limited sustained write speed makes array rebuilds extremely slow (think days or weeks rather than hours - see the rough estimate below). I chose to ignore this because I'm running RAID 6, so even if another drive dies during a rebuild the array won't fail, and because I don't care about the performance hit during a rebuild - my performance requirements are extremely low, just streaming some movies and TV shows.
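To put "days or weeks" into perspective, here's a back-of-the-envelope rebuild estimate. The ~20 MB/s figure is my own worst-case assumption for a drive-managed SMR drive with its PMR cache saturated, not a measured number:

```python
# Back-of-the-envelope rebuild time for one 8 TB drive-managed SMR drive.
# The ~20 MB/s sustained write rate is an assumed worst case (PMR cache
# exhausted), not a measured figure.
drive_bytes     = 8 * 10**12       # the whole replacement drive gets rewritten
sustained_write = 20 * 10**6       # assumed bytes/sec once the drive is saturated

days = drive_bytes / sustained_write / 86400
print(f"~{days:.1f} days")         # ~4.6 days for a single rebuild, best case
```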
That said, using these drives did pose an interesting scenario during the initial data copy to the array, which was around 15 TB of data. All told, it took around 8 *days* to copy that 15 TB. The copy was partially slowed because the RAID array itself was still initializing in the background, and the RAID controller didn't have a good BBU attached, so it was operating in write-through mode the whole time.
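For the curious, that works out to a pretty dismal average rate:

```python
# Effective throughput of the initial copy: ~15 TB moved in ~8 days.
copied_bytes = 15 * 10**12
elapsed_sec  = 8 * 24 * 3600

print(f"~{copied_bytes / elapsed_sec / 10**6:.0f} MB/s average")
# ~22 MB/s - about what you'd expect from saturated SMR drives while the
# array initializes in the background in write-through mode
```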
My VMs live primarily on the SSDs, which are not in a RAID. The contents of the VMs that I care about are backed up to the RAID 6 array daily, so losing one or both of the SSDs would be an inconvenience, not a true loss.
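I won't claim this is exactly how my backup job runs, but as a minimal sketch of the idea, a nightly mirror to the array is just a scheduled robocopy - the paths here are hypothetical:

```python
# Sketch of a nightly mirror job to the array (hypothetical paths; this isn't
# a description of the actual backup mechanism). Schedule it via Task Scheduler.
import subprocess

SOURCE = r"D:\VM-Backups"        # where the guests drop their nightly exports (assumed)
DEST   = r"R:\Backups\VMs"       # a folder on the RAID 6 array (assumed drive letter)

result = subprocess.run(
    ["robocopy", SOURCE, DEST, "/MIR", "/R:2", "/W:5",
     "/LOG+:C:\\Logs\\vm-backup.log"],
)
# robocopy exit codes 0-7 mean success; 8 and above indicate failures
if result.returncode >= 8:
    raise SystemExit(f"backup failed, robocopy returned {result.returncode}")
```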
Additionally, I don't back up most of the actual media on the RAID 6 array - I don't have another 8x 8TB array sitting around anywhere to send a copy to, and I don't have the internet bandwidth to send the shit offsite. I *do* back up one particular folder on the RAID 6 array via Backblaze, though. That folder contains the important stuff (user profiles, documents, desktop, pictures, etc.) from every computer in my house.
You'll notice my RAID card supports 8 drives, and I only have 8 drives (the SSDs are connected to the motherboard), and yet I have a SAS expander in the mix. Welp, this is for future-proofing, and because I already owned the expander. If, in the future, I decide to replace the 8TB drives with 16TB drives or something, I've got enough ports on the expander to connect all 16 drives at once, allowing for easy copying. This is a luxury I've never had before - when I upgraded from 2TB drives to 3TB drives, I had to borrow a NAS box, copy all my data over to it, install the 3TB drives on the RAID card in place of the 2TB drives, then copy it all back.
The extra memory and CPU cores are a godsend. My primary work PC is a VM hosted on this system that I RDP into; I operate this way to keep all the "work shit" I have to run - their remote management software, VPN, and antivirus - contained, rather than on my personal PC. I was able to expand that VM from 2 cores and 8 GB of RAM to 4 cores and 16 GB, and its overall performance has gone way up. Additionally, I can now keep all of my testing environments online at the same time, rather than having to pick and choose which VMs to boot simultaneously. I've got a small domain controller/Exchange environment, a few Linux boxes, and three OS X clients, and previously I could only boot one or two of them at a time without running into memory limits.
Thanks to having both systems booted up simultaneously, I was able to move over my Kodi MySQL database, as well as my PLEX installation, without much interruption. This was my first time moving a PLEX install, but they had a guide for it so that was easy. Moving the MySQL install was super easy as well, but I've done that about four times now so it was old hat.
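The database move itself is just a standard dump-and-restore; here's a minimal sketch of what that looks like with the stock MySQL command-line tools. The hostnames, the "kodi" user, and the exact schema name are hypothetical - Kodi numbers its video schema by version, so check yours first:

```python
# Minimal dump-and-restore sketch for moving the Kodi library database.
# Hostnames, the "kodi" user, and the schema name are hypothetical.
import subprocess

OLD_HOST, NEW_HOST = "192.168.1.10", "192.168.1.20"   # old and new server IPs (assumed)

# Dump the schema (with CREATE DATABASE included) from the old host.
with open("kodi-videos.sql", "w") as dump:
    subprocess.run(
        ["mysqldump", "-h", OLD_HOST, "-u", "kodi", "-p", "--databases", "MyVideos116"],
        stdout=dump, check=True,   # -p prompts for the password interactively
    )

# Replay the dump on the new host.
with open("kodi-videos.sql") as dump:
    subprocess.run(
        ["mysql", "-h", NEW_HOST, "-u", "kodi", "-p"],
        stdin=dump, check=True,
    )
```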
Everything's all done now, and I'm pretty happy with it. Its older, slower CPU cores aren't noticeably slower in practice, and what speed they give up is more than offset by their quantity. I've got 28 TB of free space, which is a nice sight to see. This is definitely a project I would recommend to anyone who gets a pair of CPUs, a RAID card, and a bunch of memory and hard drives for free!
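As a postscript for the curious, that 28 TB figure roughly checks out against the raw math, assuming Windows is reporting TiB as "TB" like it usually does:

```python
# Quick sanity check on the free-space figure (assuming Windows reports TiB as "TB").
usable_tb  = (8 - 2) * 8                    # RAID 6 on 8x 8 TB leaves 6 drives usable: 48 TB
usable_tib = usable_tb * 10**12 / 2**40     # ~43.7 TiB as Windows would show it
print(f"{usable_tib:.1f} TiB usable")       # minus the ~15 TB copied over, that leaves
                                            # roughly the 28 TB of free space quoted above
```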