Somewhat of a n00b needs an opinion: using a RAID6 file server with Xen

vl1969

n00b
Joined
Aug 12, 2013
Messages
15
OK, here is the issue in a nutshell, and I do apologize for such a long post.

I am NOT a Linux guy (hence the n00b in the title),
but I am not that new to Linux either, as I read a lot and have been playing with different setups for about a month now, and I was even successful, to a degree, in setting up a working environment with Ubuntu Server and Xen, all with the help of people like you here who kindly post help on the forums and actually answer my questions when needed :)

An FYI: ESXi and XenServer are a no-go as they do not support PCI pass-through on this hardware; Xen 4.2 does, and that is confirmed.

My current file server is a bare-metal unRAID setup with 4TB total (3 x 2TB HDDs, one dedicated to parity).
The idea is to move away from unRAID (it is a good system and my server has been running steadily for the last 2 years, but I just want to entertain an alternative solution right now).

After some research and consideration (I have gone over most, if not all, of the currently available options like FreeNAS/NAS4Free/OMV/SnapRAID/sharedSpaces etc.),
my finding is this:
these are all good options, but they have the limitations that steered me to unRAID in the first place.

#1 - somewhat high HDD count requirements for reliability and data safety.
#2 - storage space expansion is not as simple as with unRAID, where I can expand one disk at a time with no issues.
#3 - not all of the solutions offer real-time data protection, and the ones that do... see #1 and #2.

After some searching I came upon a solution that offers most of the functionality I am looking for, except the ease of operation, and that is Linux software RAID6 with mdadm.
Based on my research this setup offers:
real-time reliability and multi-parity support
it seems to be stable, per most of the forums I have seen
it can be expanded one disk at a time (not easy, but doable)
it does not require disks of the same size/type to be used

The only downsides I see are that the initial setup requires 4 drives
and that management is CLI only, which I think I can live with.
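From what I have read, the basic workflow looks roughly like this (a minimal sketch only; /dev/md0 and the /dev/sdX device names are placeholders, not my actual disks):

```
# Create the initial 4-disk RAID6 array (device names are placeholders)
sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Later, grow the array one disk at a time
sudo mdadm --add /dev/md0 /dev/sdf
sudo mdadm --grow /dev/md0 --raid-devices=5

# Watch the reshape progress
cat /proc/mdstat
```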

My questions are:
1. What do you think of this setup for a file server? Is using native Linux software RAID6 possible with a Xen setup as described?

2. What is the best Linux distro for use with Xen 4.2 where I can have the Dom0 also manage my RAID6 file server needs? This would simplify the management of data storage somewhat (see the sketch after this list).

3. My hardware conceivably supports up to a 30-drive setup: 3 x 8-port SAS cards for 24 HDDs of data storage, plus 6 on-board SATA ports for 6 additional HDDs to use for LVM-based VM storage.
Of course it will be years until I can fill this with HDDs, but a good setup will provide the opportunity to do so :p :D
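For questions 2 and 3, what I am picturing on the VM storage side is roughly this (just a rough sketch; the device names and the vg_vms volume group are made up for illustration):

```
# Pool the on-board SATA drives into a volume group for VM storage
# (device and volume names below are only examples)
sudo pvcreate /dev/sdg /dev/sdh
sudo vgcreate vg_vms /dev/sdg /dev/sdh

# Carve out a logical volume per VM
sudo lvcreate -L 40G -n ubuntu-server vg_vms

# In the guest's Xen config the volume would then be referenced as, e.g.:
#   disk = [ 'phy:/dev/vg_vms/ubuntu-server,xvda,w' ]
```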


Here are my goals for this build. I want to run:
1. pfSense and/or Untangle super router/firewall to replace my cable-vision router.
2. A file server on RAID6 for storage (this is the replacement for my unRAID setup).
3. Ubuntu Server 12.04 LTS with SABnzbd + Sick Beard + CouchPotato + Transmission.
4. 1 or 2 Windows 7 or 8 VMs where I can set up software for transcoding movies into MKV (if I can find a way to do it in Linux as easily as I do it now in Windows, this might change; see the sketch after this list).
5. FOG / Clonezilla.
6. A PXE boot server.
7. An FTP server.
8. Whatever else I need/want for work or play.
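On the transcoding point in item 4, the closest Linux route I have come across so far is HandBrake's command-line tool (the file names and preset below are only examples, and I have not tested this myself yet):

```
# Transcode a source into MKV with HandBrakeCLI (paths/preset are examples)
HandBrakeCLI -i /path/to/source.iso -o /path/to/output.mkv --preset="High Profile"
```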

My hardware is:
Supermicro SC846 chassis -- 24 bays
Motherboard: H8DME-2 (BIOS version 3.5)
Procs: 2 x AMD Opteron 2431 hex-core @ 2.4GHz, for a total of 12 cores
RAM: 56GB ECC DDR2 PC2-5300 @ 667MHz, using 4 x 2GB + 12 x 4GB
IPMI card: Kira 100
3 x SAT2-MV8 PCI-X RAID cards
1 x Ablecom PWS-902-IR redundant PSU
Network: 2 on-board Gigabit ports
1 x Intel Pro dual-port PCI-e card, for a total of 4 ports
(the add-on card is to be used for pass-through to the pfSense or Untangle VM router/firewall)
 

RabidSmurf

n00b
Joined
Jun 30, 2009
Messages
56
I know nothing about Xen or why you could not do a PCI-E passthrough with your hardware using ESX, but with regards to the actual software RAID itself:

ZFS is straight up the way to go for a modern software RAID that rivals hardware controllers in performance and reliability. The only downside with it currently that I am aware of (and this is if you go with the free OpenIndiana distro over Solaris) is that an array cannot be expanded one drive at a time (they are reported to be working on this feature, though).

However, a ZFS pool can contain multiple arrays, so you could have a RAIDZ (RAID5) in the same pool as a RAIDZ2 (RAID6), or multiple RAIDZs, or whatever other crazy combination you can dream up, and ZFS will stripe them together as one giant volume.
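Roughly, the pool/vdev handling looks like this (the disk names are just Solaris-style examples, not from my actual box):

```
# Start the pool with one 6-disk RAIDZ2 vdev (disk names are examples)
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# Later, grow the pool by striping in another vdev; mixing redundancy levels
# works, but zpool warns about mismatched replication, so -f is needed
zpool add -f tank raidz c2t0d0 c2t1d0 c2t2d0

zpool status tank
```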

Here is my current home setup, which has been working flawlessly for me so far. Assuming Xen supports OpenIndiana and PCI-E passthrough works on your hardware, I see no reason why you couldn't do something similar.

1x ESXi 5.1 host (SuperMicro board, Xeon E3, and 32GB RAM in a Norco 20 hot-swap bay case)
3x IBM M1015 RAID controllers flashed to IT mode (no option ROM or RAID features enabled, just host bus adapter mode), passed through to the VM via PCI-E passthrough in ESX
1x OpenIndiana VM (free Solaris) with napp-it (web GUI)
6x WD Red 7200RPM 3TB drives configured in a RAIDZ2 (ZFS RAID6) in OpenIndiana
1x Windows Server VM with the ZFS array passed through via iSCSI and formatted with NTFS (don't judge me, I have reasons; you could use NFS instead of iSCSI, as in the sketch after this list)
1x pfSense VM with a dual-port GB NIC passed through via PCI-E passthrough in ESX, acting as home firewall/router
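And if you go the NFS route instead of iSCSI, sharing a ZFS filesystem out of OpenIndiana is basically one command (the dataset name here is just an example):

```
# Create a filesystem and share it over NFS (dataset name is an example)
zfs create tank/media
zfs set sharenfs=on tank/media
```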
 

/dev/null

[H]F Junkie
Joined
Mar 31, 2001
Messages
15,190
I do not think that multiple different RAID levels in a pool work on all ZFS implementations. I'm about 90% sure it doesn't work in FreeBSD.
 

RabidSmurf

n00b
Joined
Jun 30, 2009
Messages
56
I do not think that multiple different RAID levels in a pool work on all ZFS implementations. I'm about 90% sure it doesn't work in FreeBSD.

http://en.wikipedia.org/wiki/ZFS

I could be wrong, but the way this is worded implies otherwise:

"Likewise, a zpool consists of one or more vdevs. Each vdev can be viewed as a group of hard disks (or partitions, or files, etc.). Each vdev should have redundancy, because if a vdev is lost, then the whole zpool is lost. Thus, each vdev should be configured as RAID-Z1, RAID-Z2, mirror, etc. It is not possible to change the number of drives in an existing vdev (Block Pointer Rewrite will allow this, and also allow defragmentation), but it is always possible to increase storage capacity by adding a new vdev to a zpool."

Given that this is one of the big features everyone raves about, I would say it most likely works in Solaris and OI; I know very little about FreeBSD, though.
 