build new multipurpose file server

alphabanks
I would like to consolidate two file servers into one system. Right now I have one system with a PERC 6/i in a RAID 10 configuration that I use for all of my VMware ESXi lab stuff. My second system also has a PERC 6/i, running a RAID 5 with seven 2TB WD Reds. I want to put this all together into one overall system. Maybe I will look at ZFS, but I'm not sure yet. The system will need to serve SMB, NFS and iSCSI. I mainly need help picking out hardware. I know if I go ZFS I will need about 24GB of RAM and at least two SSD drives. So what overall hardware would you guys select for this build?
 
Hold up. ZFS doesn't "need" 24GB RAM - it needs a few GB (2GB-4GB is enough) and will use any other RAM it has available for caching (ARC) over time. Unless you are running dedup, that is. Btw - don't use dedup. :p
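If you do end up on ZFS on Linux and want to keep the ARC from eating all your RAM, capping it is just a module parameter. Rough sketch below - the 8GiB figure is only an example value, not a recommendation:

    # /etc/modprobe.d/zfs.conf - cap the ARC at 8GiB (value is in bytes)
    options zfs zfs_arc_max=8589934592
    # takes effect after reloading the zfs module or rebooting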

Are you planning on running ESXi still in an all-in-one setup? Which OS do you prefer (thinking ZFS, as there's now multiple options with ZFSoL)? If you're migrating your data across to ZFS, where are you going to put it while you destroy the RAID arrays you have and rebuild?

Also check whether the Percs play nice as HBAs. I have no experience with the Dell cards, though I know plenty of others have.

What's your budget? Are you able to re-use any of your existing hardware?
 
The storage server will be separate from my ESXi lab boxes. I also have another backup server: 9 terabytes in a RAID 0 passed through to a Server 2012 VM, which I use to back up my main file server nightly. I'm rather torn about ZFS; I might end up with a Linux box build. From my research, ZFS is not all that great if you are running RAID 10, which is what I use for my virtual machines; I just use the RAID 5 for other data. I really don't have anything to reuse, though of course I have all my hard disks, and I also have two IBM M1015 controllers that are just lying around. However, it seems that most ZFS users get really good speeds, and I assume that's because of using tons of RAM and SSDs for cache. I wonder if you can use SSDs as cache drives in Linux.
 

I run raid10 (two mirrored pair vdevs in one zpool) with zfs and have no performance issues - the read speeds in particular are fantastic. What issues are you referring to?

You can use SSDs as L2ARC caches with ZFS.
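For a concrete picture, a ZFS "RAID 10" pool plus an SSD cache device is only a couple of commands. The pool name and device paths below are just placeholders:

    # two mirrored pairs striped together - ZFS's equivalent of RAID 10
    zpool create tank \
        mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
        mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
    # add an SSD as an L2ARC read cache
    zpool add tank cache /dev/disk/by-id/ata-SSD1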
 

ZFS and RAID 10 is perfect and this is what I would use for ESXi datastores unless you go SSD only. RAID-Z1 (similar to RAID 5 but without the write hole problem) is not as fast regarding I/O performance.
Additional SSD read-cache drives can improve performance. If you can add more RAM, then prefer this over SSD cache disks, as RAM is much faster than an SSD. An additional SSD as a log device can be added if you need fast and secure sync writes.
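As a sketch (pool/dataset names and the device path are placeholders), adding a dedicated log device and forcing sync writes on the VM dataset looks roughly like this:

    # dedicated log device (SLOG) to speed up sync writes from NFS/iSCSI clients like ESXi
    zpool add tank log /dev/disk/by-id/ata-SSD2
    # make sure writes to the VM dataset are always committed synchronously
    zfs set sync=always tank/vmstore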

If your main usage is SMB, NFS and iSCSI, you should consider OmniOS with its fast kernel-based SMB server, NFS and Comstar iSCSI (all part of the base OS). It is hard to find another server OS that is as fast and as easy to configure, since everything is included; it is ZFS only.
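To give an idea of how little configuration that means on OmniOS (the dataset names below are just examples): SMB and NFS shares are plain ZFS properties, and iSCSI LUNs are zvols exported through COMSTAR:

    # enable the kernel SMB server once
    svcadm enable smb/server
    # share datasets over SMB and NFS
    zfs set sharesmb=on tank/media
    zfs set sharenfs=on tank/vmstore
    # carve out a zvol to export as an iSCSI LUN via COMSTAR (itadm/stmfadm)
    zfs create -V 500G tank/esxi-lun
    stmfadm create-lu /dev/zvol/rdsk/tank/esxi-lun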
 
Thanks for the info guys. From what I read, it looks like one of the biggest benefits of ZFS is addressing the RAID 5 write hole problem. When you go RAID 10 that problem does not exist, so it would seem the appeal goes down. I actually found some random articles that talked about this; I will see if I bookmarked them. OmniOS looks interesting; I've mainly looked at OpenIndiana. Out of curiosity, what speeds are you guys pushing across the network from your ZFS systems? I assume some of you are doing LACP?
 
I'm running an 8x4TB RAID-Z2 array in my storage server with all the drives connected to an 8-port HBA. I have no issues fully utilizing a gigabit Ethernet connection (at least on sequential reads).
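For reference, an 8-disk RAID-Z2 pool like that is created in one go (pool name and device paths are placeholders):

    # 8 disks with two drives' worth of parity - comparable in spirit to RAID 6
    zpool create bulk raidz2 \
        /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
        /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
        /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6 \
        /dev/disk/by-id/ata-DISK7 /dev/disk/by-id/ata-DISK8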

Like others have said, run a RAID10 setup with ZFS, throw in as much memory as you can, and you should be just fine, even running VMs. If you need, you could use an SSD as L2ARC, but you may not find it's necessary with enough RAM.
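If you're unsure whether an L2ARC SSD would even help, you can watch ARC hit rates first. On ZFS on Linux the bundled tools look something like this (tool names vary a bit between versions and platforms):

    # summary of ARC size and hit/miss ratios
    arc_summary
    # live view, refreshed every 5 seconds
    arcstat 5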
 