[H]ome SAN Build

int0x80
n00b · Joined Jul 5, 2006 · Messages: 26
Looking for feedback on this build before pulling the trigger. I'm currently running a decommissioned HP workstation as my home server with ESXi 6. It has six WD Red Pro 6TB drives in RAID6 behind an Areca ARC-1880i RAID controller. The system has had a few random shutdowns recently, but no hardware failures show up when digging through the logs. I was sitting near the server during one of the recent shutdowns and it felt quite hot.

With overheating as my prime suspect, I've decided to move the storage out to a SAN while keeping the workstation on ESXi. This is a completely new undertaking for me, so mistakes are expected. The drives and RAID card will move over to the SAN, so I only need a barebones system.

CPU: AMD FX-6300 $97.50
RAM: Crucial Ballistix 16GB $84.99
MoBo: ASRock 970 Extreme3 $44.99
PSU: Rosewill Photon 750 $74.99
Case: Rosewill 4U $104.99
NIC: HP MELLANOX (2) $34.60
Cable: 10G Twinax 30AWG $9.50
SSD: PNY SSD7CS1131-120-RB $39.99

Total: $491.55

Intent is to expose the RAID6 array to ESXi on the workstation as an iSCSI target. I haven't done that before either, but it seems pretty straightforward both to set up the target and to consume it as a datastore in ESXi. The SAN will run Ubuntu Server 16.04 along with RapidDisk to use spare RAM for disk caching.
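For my own notes, here's roughly what I'm picturing for the target side. This is just a sketch using the rtslib-fb Python bindings (the library behind targetcli), run as root; the device path, portal IP, and ESXi initiator IQN below are placeholders rather than my real values:

```python
# Sketch: export the RAID6 block device as an iSCSI LUN with rtslib-fb.
# /dev/sdb, 192.168.10.5, and the initiator IQN are placeholders.
from rtslib_fb import (RTSRoot, BlockStorageObject, FabricModule, Target,
                       TPG, LUN, NetworkPortal, NodeACL, MappedLUN)

# Back the LUN with the whole array as Linux sees it.
so = BlockStorageObject("raid6_array", dev="/dev/sdb")

# Create an iSCSI target (IQN auto-generated) with one target portal group.
target = Target(FabricModule("iscsi"))
tpg = TPG(target, 1)
tpg.enable = True

# Listen on the SAN box's 10G interface and export the device as LUN 0.
NetworkPortal(tpg, "192.168.10.5", 3260)
lun = LUN(tpg, 0, so)

# Allow the ESXi host's software iSCSI initiator and map the LUN to it.
acl = NodeACL(tpg, "iqn.1998-01.com.vmware:esxi-host")
MappedLUN(acl, 0, lun)

# Persist the config and print the IQN to add on the ESXi side.
RTSRoot().save_to_file()
print(target.wwn)
```

On the ESXi side the plan is just to add the portal IP as a dynamic discovery target on the software iSCSI adapter, rescan, and format the LUN as a VMFS datastore.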

I've based the build on this article from 2013, https://thehomeserverblog.com/esxi-storage/building-a-homemade-san-on-the-cheap-32tb-for-1500/, which is why some of the components are older. Adding a few hundred dollars to the build for more reliable components would be acceptable.
 
The only small change I would make is ECC RAM, as long as the mobo supports it; the CPU's IMC does.

Now, if we're talking hypothetically, or if it were my setup:
I'd change the mobo to something with better VRM heatsinks and more power phases that also explicitly lists ECC support.
Use ECC RAM. Search for PC3-14900E; it's what I run in my server. Example: http://www.ebay.com/itm/Genuine-App...507904?hash=item3f6ea8fbc0:g:gs8AAOSwMgdXycSZ
PSU: I go with Seasonic- or Super Flower-based units. That Rosewill PSU's OEM is Sirfa, formerly Sirtec, according to jonnyGURU.
 

Thank you for the response. I had no idea about the PSU, and I had assumed the RAM was ECC because of the multiple mentions in the blog post -- turns out it wasn't, as you pointed out. The board reportedly does support ECC, though. Those kits look unbuffered/unregistered. I also found this HP (712288-581) 8GB PC3-14900E DDR3 SK hynix HMT41GU7AFR8C-RD listing, which is 8GB on a single stick, so I'm considering buying two sticks to get to 16GB while still leaving two slots free.

I've updated the PSU to the Seasonic Prime 650W for $159.99 after reading reviews on jonnyGURU. Thanks for pointing me to that site as well.

What are your thoughts on the RAM and PSU changes?
 

Looks good to me, as long as the RAM is 8GB per stick. Some listings will show a single stick as 8GB when it's actually only 4GB because it comes from a 2-DIMM kit.

That PSU is a beast: Titanium rated, a HardOCP Gold award, and a 10-year warranty. You could save $30 by going with the SS-750KM3, which is a really good unit, but for the price difference compared to what you're getting I'd stick with the Prime series.

I would just switch to the 750W version from Newegg for the same price, though: http://www.newegg.com/Product/Produ...9&cm_re=seasonic_prime-_-17-151-159-_-Product
 
I suggest you sell the hardware RAID card, as hardware RAID is becoming obsolete. It does not protect your data against silent corruption, and it locks you in to Areca. Sell the card and use an open-source software RAID solution, ZFS, instead. ZFS does protect your data against corruption and is potentially much faster, and you are not locked in to a specific OS: you can switch between Linux, Mac OS X, FreeBSD, Solaris, and OpenSolaris. Save the money and do something nice with it instead.
https://en.wikipedia.org/wiki/ZFS#Data_integrity
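To give a sense of what that looks like in practice, the ZFS analogue of your six-drive RAID6 set is a single raidz2 vdev. A rough sketch of creating it (device names below are placeholders; /dev/disk/by-id paths are safer for the real drives):

```python
# Rough sketch: build a raidz2 pool (two-drive parity, the ZFS analogue of RAID6)
# from the six 6TB Reds by shelling out to the ZFS tools.
# The /dev/sdX letters are placeholders; use /dev/disk/by-id paths in practice.
import subprocess

disks = ["/dev/sd%s" % letter for letter in "bcdefg"]

# ashift=12 aligns the pool to 4K sectors; lz4 compression costs almost nothing.
subprocess.run(["zpool", "create", "-o", "ashift=12", "tank", "raidz2"] + disks,
               check=True)
subprocess.run(["zfs", "set", "compression=lz4", "tank"], check=True)
subprocess.run(["zpool", "status", "tank"], check=True)
```

From there a zvol or dataset on the pool can be exported over iSCSI or NFS just like the hardware array, and a regular scrub verifies every block against its checksum.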
 
Yeah what brutalizer said. I went with FreeNAS, ZFS, SSDs (mirrored then striped) and 10GbE (Intel x520-DA2) for my home SAN.
 
I did something similar but went with 10Gig-E over RJ45 because it works up to and beyond 50 ft, the cabling is cheap, and I can keep my storage in another room away from my PC.
 

I went back and forth for quite some time but landed on SFP+ due to its upgrade path. I figured if I ever want to go farther than 7 meters (I think that's the max for passive direct-attach cables), I could move to optics and fiber. More expensive than RJ45, sure, but SFP+ uses less power and offered more flexibility (for me).
 
Those are ConnectX-2 cards below, so rather slow, and the HP badge will complicate firmware upgrades. I'd go for ConnectX-3 (Pro); you can find used/refurb cards on eBay for cheap.
--
NIC: HP MELLANOX (2) $34.60
 

I would go with a case that has hot swappable bays.
 