36TB File server build log

roach9

This is for streaming media throughout my network. Primarily 1080p movies, 720p television, and FLAC music.
I am building this server with archiving in mind. I do not have a backup for this machine. A backup would be too expensive, and quite frankly, while 30TB of media seems like a lot, media is very replaceable. I am selling my 20TB file server in the coming days, but keeping the media on it. I do not get attached to my media, so a backup is not essential.

I am absolutely open to criticism and input. When it comes to OSes, I know there won't be a consensus, so I'd appreciate it if discussion revolved around the hardware and not the software... of course, if you are adamant that my OS choice is flawed... let me know.

-------------------------------------------------------------------------

Modus operandi:
NAS4Free.
This is a supported spinoff of FreeNAS 7. The guys who built the nightly builds and kept support going for FreeNAS 7 were asked to stop releasing under the FreeNAS name once FreeNAS took off with version 8. The result is NAS4Free. While the community is small, I am choosing this operating system for a few reasons:
1. Stability. I've been running the latest FreeNAS 7 nightly build for about 300 days now without any errors or downtime.
2. Updated ZFS. I love ZFS to begin with, and NAS4Free offers the most recent ZFS versions.
3. Continued support.
4. Built on FreeBSD. I know that the newer versions of FreeBSD have great driver support for all of my components.

Server case:
Fractal Design Define XL.

I have experience with this case. Love the space. Love the acoustic dampening material. The build quality of these is great. Not much to say about this choice; it makes the most sense for my build. I don't have experience with a server rack, nor do I have the space for one at the moment. Perhaps in the future I can transplant into one, but for now, I'd like to stick to the ubiquitous computer case.

Motherboard:
Supermicro X9SCA-F ATX

Supermicro is renowned for making high-quality server boards. I went with this one because it has a number of features I'd like to take advantage of (at a budget price!). Some features I consider a premium for what I am building are ECC support and dual LAN (I'd probably only use one, realistically). Onboard video and LAN are really nice features for me, as my current file server had to be outfitted with a low-profile GPU and an Intel LAN card... while easy installs, I consider them a bit of a waste of space and energy.

Central Processing Unit:
Intel Core i3 2100 Dual Core

I know what you're thinking: you buy a Supermicro board with ECC... but your CPU doesn't even support ECC! But it does. I've read reports of users who have this processor and are running with ECC RAM. I think this will be suitable for a file server. It has low energy consumption and supports ECC. While a Xeon would be nice, I consider it a luxury in this case; the i3 can do everything the Xeon can do, just a little slower. But speed is not a major concern with this build.

RAM:
Kingston ECC 8GB RAM

ZFS loves RAM, so perhaps 16GB would be nice here... but I think 8GB of ECC is enough for my needs. The performance difference between 8GB and 16GB of RAM is not something I've looked into closely, but I cannot imagine it being worth an additional $70.

SAS/SATA Controller:
IBM ServeRAID M1015

I have already loaded up the IT firmware. These controllers have received a lot of praise from those who've used them. I've already purchased this piece, so this is a component I am sticking with. One of the main reasons I went with this over an LSI 1068 controller is that I aim to use 3TB+ hard drives. As far as I know, this controller and the motherboard will support 3TB+ drives... please correct me if I'm wrong here!

Hard drives:
Hitachi 3TB Deskstar 5400RPM

This was the toughest decision of any component. It came down to 2 things:
1. The $/GB ratio is the cheapest of any hard drive in Canada at the moment, at a price of ~$160.
2. 3-year warranty.
The warranty is a huge factor for me and eliminates any WD/Seagate drive right off the bat. The only other option is the Samsung 2TB F4s. I have 10 of them in my other server and they've served me well without any problems for a year and a half. The fact that they are now assembled by Seagate kind of turns me off to them... perhaps. Hitachi seems to have turned their HD division around... they used to be known as the DeathStars, but that tag seems to have faded from their product. I plan on buying 12 of these suckers!

Power supply:
Antec EarthWatts 500W

I already own this. It came with my Antec Sonata III and is only 2 years old. I feel that it is trustworthy; despite looking generic and coming bundled with a case, it is a really well-built PSU. 500W should be more than enough for this build, I believe.

----------------------------------------------------------------------------

Bits and pieces...

I plan on running this off of an internal 8GB USB stick. Nothing special... everything is loaded into RAM post-boot anyway. I almost consider a RAM-loaded OS safer than an SSD or HDD.

No point talking about the cables...

What do you guys think? What can be improved upon without drastically upping the price?
Does everything look compatible (software with hardware)?

Thanks guys... sorry about the length of this. I guess whoever reads this is interested in these sorts of things anyway.
 
I have a very similar setup as my primary workstation (see my sig). The XL has trays for 10 drives. I will suggest a Lian-Li 6-in-3 bracket (http://lian-li.com/v2/en/product/product06.php?pr_index=559&cl_index=2&sc_index=5&ss_index=12), which will allow you to mount 4 additional 3.5" HDDs and 2 2.5" SSDs or HDDs, as well as a DVDRW, in the 4 externally available 5.25" slots. Since you want 36TB available, if you do double parity of some kind (strongly suggested) you will have 14 drive slots available. Also plan to pick up an extra front-mounted fan for the lower drives. I would suggest going for the 16B update; for the $$$ it is worth it.

As for the Hitachi 3TB 5400s, I fully recommend them. Unfortunately, they are becoming almost impossible to find new (especially in quantity), and now that the WDC sale is complete I do not think you will see any more 5K3000s. I am not a particular fan of that power supply; I recommend Seasonic.

As for serving media throughout your house, if your clients are capable of directly playing all the formats, the i3 will be fine. If you decide to go with lower-powered clients (AppleTV, Roku, etc.) I would suggest an i5, which will better handle transcoding and remuxing video, especially if you have multiple clients streaming concurrently.

Not that it will make a difference in throughput since you are using HDDs, but the M1015 is an x8 card, and your board choice has 1 x16 slot and 2 x4 slots (x4 electrically, x8 physically).

Finally, instead of getting an 8GB stick and running this off USB: 30GB SSDs are cheap enough that you can get them for $35-$45 now, so you might as well get one of them.
 
Will the OS then run off the SSD, as opposed to the RAM? Thanks a ton for your help!

What is the 16B update you speak of?
 
The hardware overall looks like a reasonable setup. Changes I personally would make:

1. I'd try to locate a used e3-12xx Xeon in the $140-180ish range, instead of the i3-2100. Though if you're only using it as a file server, without encryption and/or compression, dual vs quad probably won't matter.

2. Look at the X9SCM-F and X9SCL+-F. The first gives you 2 x8 PCI-e slots and 2 x4 PCI-e slots, which gives you more slots and bandwidth for host controllers. The second has 2 NICs which are supported out of the box by ESXi; on the other models only one NIC is supported, without some manual configuration fun.

3. ZFS loves memory, but for a streaming media server 8GB will probably be fine. I did have some issues with long writes causing reads to have dropouts. As I recall, decreasing vfs.zfs.vdev.max_pending in /boot/loader.conf solved them (see the sketch after this list), though that decreases the performance of many parallel small writes. That was also back in FreeBSD 8, so it may have been fixed in 9.

4. Hotswap trays/bays sure are convenient; once you get used to them, it's hard to go back to a plain old computer case.
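Regarding point 3, a minimal sketch of that tunable in /boot/loader.conf (the value 4 is purely illustrative; the default was around 10 on FreeBSD 8/9, and the right number depends on your disks and workload):

# /boot/loader.conf
# Lower the per-vdev I/O queue depth so long sequential writes
# are less likely to starve concurrent reads.
vfs.zfs.vdev.max_pending="4"

Loader tunables take effect at boot; on builds where it is exposed as a writable sysctl, you can also experiment at runtime with sysctl vfs.zfs.vdev.max_pending=4.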


I ran an Ubuntu Linux Server file server off of a USB stick for a year or 2 with no issues. So when I first tried BSD, I also installed it to a USB stick. It worked, but updates installed very slowly, since they are generally compiled from source, which involves a lot of small random reading and writing. So I moved my FreeBSD install to a HDD, which sped things up a lot. It might not be an issue if NAS4Free only does binary updates. It's also possible you have one of the few USB sticks that is good at random I/O.

The SuperMicro boards with the -F suffix have IPMI. Plug in the IPMI ethernet adapter and you can do everything remotely, even mount and install operating systems from ISO images on the remote system. Unless you have network configuration issues with the IPMI, you don't ever need to plug in a keyboard, mouse, and monitor, or even use install media.


12 disks is a slightly odd size for ZFS; optimal vdev sizes (a power-of-two number of data disks, plus parity) are:
raidz1: 3, 5, or 9 disks
raidz2: 4, 6, or 10 disks
raidz3: 5, 7, or 11 disks

raidz1 is generally not recommended on, say, .5TB or larger disks, since resilver times are so long when replacing a failed disk.

Since you have no interest in backups, raidz3 would be a good idea; the optimal vdev size would be 11 disks, leaving 24TB of storage. You should snapshot often, so that you can recover from accidental or malicious deletions from remote systems.
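Snapshots are cheap and easy to script. A minimal sketch, assuming a pool named tank with a media dataset (names are illustrative):

zfs snapshot tank/media@2012-06-10   # point-in-time snapshot, nearly free to create
zfs list -t snapshot                 # see which snapshots exist
zfs rollback tank/media@2012-06-10   # undo deletions since the snapshot

A cron job (or the web GUI's snapshot scheduler, if NAS4Free exposes one) can automate this.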

If for some reason you feel you have to use 12 disks, the optimal configuration would probably be a pool with 2x 6 disk raidz2 vdevs, which would also result in 24TB of storage. This would give you an "easy" upgrade path to add 12TB more by adding another 6x3TB raidz2 vdev at a later date.
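A minimal sketch of that 12-disk layout, assuming the drives show up as da0 through da11 and the pool is named tank (device and pool names are illustrative):

zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 \
  raidz2 da6 da7 da8 da9 da10 da11

The later upgrade would then just be another six-disk vdev appended to the same pool:

zpool add tank raidz2 da12 da13 da14 da15 da16 da17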

Another option would be to start with a 10 disk raidz2 vdev, with plans to later add another 10 disk raidz2 vdev, when HDDs get cheaper again.

Since you won't have a backup, you should definitely put some thought into how you want to set up your vdevs, both to start and in terms of future storage upgrades.
 
I don't plan on upgrading, to be honest. The 30TB or so should be enough for me.

What is with the suggested disk numbers? I was going to go with raidz2 across my 12 disks. Will this be a problem, or will I only see slightly lower performance?

I am not particularly interested in the remote installation as I will be on the network all the time... no need to remotely do anything really!

One big question:

What is the difference between the motherboard I have in the OP and this one:
X9SCL-O (http://ncix.com/products/?sku=60601)

I think for my needs, I might only need the mATX motherboard...
 
If you don't have a backup and want to be safe, you can go for raidz3. So you have 11 disks in raidz3. Do that twice and you have 22 disks in raidz3. How many disk slots do you have in total?

If you have 24 disks in total, that leaves two empty disk slots. You can use one disk as a hot spare, and another as a ZIL/L2ARC device. An 11-disk raidz3 gives 8 disks of storage, which means you have 16 disks of storage in total. The rest of the disks are for increasing redundancy/safety.
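For what it's worth, each of those roles is a one-liner to add later. A sketch, assuming a pool named tank, a spare at da22, and an SSD at da23 split into two partitions (all names hypothetical):

zpool add tank spare da22      # hot spare, pulled in automatically on failure
zpool add tank log da23p1      # ZIL (separate intent log) for sync writes
zpool add tank cache da23p2    # L2ARC read cache

That said, for mostly sequential media streaming, a separate log and cache device won't buy much.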
 
Guys... I really don't have $2500 extra to drop on more HDs.

12 disks is my max. Thanks for the info on # of disks vs. raidZ type. I'll definitely be reading more into it.
 
Also, what do you guys think of running this off a 40GB Intel SSD, as opposed to off a USB stick (essentially, the RAM)?
 
Roach - I suggested that up in post #2. In any case, RAM and an SSD are two completely different things. RAM is a volatile area where programs and data are loaded for immediate use; the SSD is where your OS, apps, and data reside when the power is off, and where things live that don't need to be in memory at that time (it can also be used for virtual memory if you run out of RAM).
 
One possibility: two 6-disk raidz2 vdevs. So you would have 8 disks' worth of usable data (2/3 of the total). Having two vdevs gives you better performance too.
 
I really only want to devote 2 of the 12 disks to redundancy. I'm still reading about possible issues with a one-vdev, 12-disk raidz2, but I cannot find any serious issues with it.

The disks are 512-byte-sector disks as well, which seems to ease the issue of a 12-disk raidz2... Hmmm
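If you want to verify what the drives actually report, FreeBSD's diskinfo will tell you. A sketch, assuming the first drive shows up as da0:

diskinfo -v /dev/da0 | grep -E 'sectorsize|stripesize'

A native 512-sector drive reports sectorsize 512; a 4K advanced-format drive emulating 512 typically shows sectorsize 512 with stripesize 4096 (the stripesize field appears on newer FreeBSD releases).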

Also, I'm aware of the difference between RAM and SSD.
I'm wondering: if I run the OS off the SSD, it won't load into RAM, correct? That would save RAM for throughput, as opposed to running the OS... a marginal performance gain, but a performance gain nonetheless?
 
No, the OS will always run in RAM. If you're sticking 16GB (or even 8GB) of RAM in this box, you will do fine.
 
Ok.

With that said, I think I'll go with my original plan and boot the OS off a USB stick.
Thanks mwroobel!
 
No new pools should be built without making sure that ZFS creates the pool with ashift=12, or you *will* get burned by a replacement disk in the future when it cannot be added to the pool. The only option then is to rebuild the pool. 512-sector disks will be harder to find in the future unless you buy enterprise drives.
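On FreeBSD at the time of writing, the usual way to force ashift=12 is the gnop(8) trick: put a fake 4K-sector provider in front of one disk per vdev, create the pool through those, then re-import without them. A minimal sketch, assuming disks da0-da11 and a pool named tank (names are illustrative):

gnop create -S 4096 da0          # fake 4K sectors on one member of each vdev
gnop create -S 4096 da6
zpool create tank \
  raidz2 da0.nop da1 da2 da3 da4 da5 \
  raidz2 da6.nop da7 da8 da9 da10 da11
zpool export tank
gnop destroy da0.nop da6.nop     # the .nop devices were only needed at creation
zpool import tank
zdb | grep ashift                # should report ashift: 12 for each vdev

ashift is fixed per vdev at creation time, which is exactly why it cannot be corrected after the fact.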
 
Sorry... I'm not sure what that lingo means.

I'm assuming you're saying... don't get the 5K3000 disks, because in the future it will be hard to replace one since newer HD models use 4K sectors... ?
 
While ECC memory is preferred, the i3 CPU won't support it. You either have to move up to a Xeon, or over to AMD, to get ECC support. Intel removed ECC support from everything but the Xeon a long time ago.
 
We've had this conversation here before... the i3 does indeed support ECC memory! I'll pull up the confirmations from people with this setup in a few minutes...
 
While ECC memory is preferred, the i3 CPU won't support it. You either have to move up to a Xeon, or over to AMD, to get ECC support. Intel removed ECC support from everything but the Xeon a long time ago.

Although this is commonly said, and even some of the Intel documentation states it, there are numerous reports of ECC in fact working on consumer-level i3 processors, and maybe some Pentiums.
 
This build is going to be awesome! I'm certainly looking forward to seeing it in action. Please take pictures as you go!
 
A new board was just announced: a workstation-class board from Gigabyte (http://semiaccurate.com/2012/06/03/gigabyte-makes-an-enthusiast-workstation-board/). ATX, 8 DIMMs, 14 drive ports (8 SAS, 6 SATA), USB3, lots-o-slots. C606 LGA2011-based (new chipset). Even overclocking friendly. Not a "Server" board like the SM, but a little more gaming friendly. Again, you need to decide what you really want/need the most, and this may be something you would consider.
 
A new board was just announced: a workstation-class board from Gigabyte (http://semiaccurate.com/2012/06/03/gigabyte-makes-an-enthusiast-workstation-board/). ATX, 8 DIMMs, 14 drive ports (8 SAS, 6 SATA), USB3, lots-o-slots. C606 LGA2011-based (new chipset). Even overclocking friendly. Not a "Server" board like the SM, but a little more gaming friendly. Again, you need to decide what you really want/need the most, and this may be something you would consider.
It seems to have ECC? How can that be?

I have heard that even though a mobo advertises ECC, it can happen that ECC is not actually functioning. Pure marketing.
 
If it's a C606 board it should support ECC with any CPU that supports ECC. Nominally for socket 2011 that's only Xeons.

But the article is completely wrong about single socket LGA2011 Xeon pricing, they are nowhere near 3x the cost.

pricing from Newegg, unless noted:

non-Xeons:
i7-3820 4c, 3.6GHz: $300 ($230 from Microcenter)
i7-3930k 6c, 3.2GHz: $590 ($500 from Microcenter)
i7-3960x 6c, 3.3GHz: $1030

vs Xeons:
e5-1620 4c, 3.6GHz: $294 (Intel recommended price, $320 from BLT)
e5-1650 6c, 3.2GHz: $583 (Intel recommended price)
e5-1660 6c, 3.3GHz: $1070

For a file server that board is overkill. One exception would be if you wanted to put 64 gigs on it for ZFS dedupe.
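On the dedupe point: it is enabled per dataset, and you can estimate the dedup table size and likely ratio before committing to it. A sketch, assuming a pool named tank (illustrative):

zdb -S tank            # simulate dedup and print a DDT histogram, without enabling anything
zfs set dedup=on tank  # turn it on for real; the table must fit in RAM (rule of thumb: several GB per TB)

For a pool of mostly unique video files the ratio will sit near 1.00x, so it is rarely worth the memory on a media server.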
 
If it's a C606 board it should support ECC with any CPU that supports ECC. Nominally for socket 2011 that's only Xeons.

But the article is completely wrong about single socket LGA2011 Xeon pricing, they are nowhere near 3x the cost.

pricing from Newegg, unless noted:

non-Xeons:
i7-3820 4c, 3.6GHz: $300 ($230 from Microcenter)
i7-3930k 6c, 3.2GHz: $590 ($500 from Microcenter)
i7-3960x 6c, 3.3GHz: $1030

vs Xeons:
e5-1620 4c, 3.6GHz: $294 (Intel recommended price, $320 from BLT)
e5-1650 6c, 3.2GHz: $583 (Intel recommended price)
e5-1660 6c, 3.3GHz: $1070

For a file server that board is overkill. One exception being if you wanted to put 64 gigs on it for zfs dedupe.

Agreed for some processors, but there are some high-end versions that are much more expensive, especially when you get to DP- and MP-capable processors (e.g. the 6-core i7-3930k 3.2GHz is about $589 vs. the 8-core Xeon e5-2690 2.9GHz, which is over 2 grand). Actually, this board isn't all that bad, as it has 8 LSI SAS ports as well as 6 SATA ports. It saves you a slot for the SAS board you don't need to buy and install. Also, with 8GB DIMMs coming down in price, you can put 64GB on the board on the cheap (compared to what the 4GB/8GB price spread was just 6-9 months ago).
 