WHS + 5TB + ESXi

DlStreamnet (Limp Gawd), joined Mar 10, 2005, 359 messages
Hi all, please ignore my previous threads and posts. If you don't mind, I would like some advice on my home server install. It will be as follows:

Intel DQ67SW motherboard
16GB Kingston (non-ECC) RAM
5x 1TB drives (3x Samsung, 2x Seagate)
ESXi 5 (second SATA card and 2x Intel NICs)

I intend to run Windows Home Server 2011 because its backup and restore features work well (please do not suggest others, as I would really like to use this again :( - I appreciate this is a dick thing to say).

For my storage: WHS 2011 doesn't do Drive Extender (DE), which I would like to use, so I need another way to present the drives and pool them together.

The data I am storing is mostly videos (1.4TB of movies) and music (which will be modified/updated irregularly [maybe 5 or 6 saves a day?]).

Would it make sense to use something like Openfiler or OpenIndiana to present the storage? I most likely won't use this storage as a VM store as I have another controller/disks for that where throughput isn't a concern.

I am concerned with ZFS because:
1.) If something goes wrong with ZFS (e.g. I lose a drive), does all the storage disappear until I add a new drive?
2.) If I remove a 1TB drive and add a 2TB drive, does the storage space increase with no headache?
3.) If I lose a drive, is my data safe using raidz1? I can't afford raidz2, as that would take up a considerable amount of my storage. For example, if I lose a drive, will the resource usage on my other drives increase so much that it kills another drive?
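For scale on question 3, usable raidz capacity is roughly (drives - parity) * drive size, ignoring ZFS metadata overhead. A quick back-of-the-envelope check for 5x 1TB drives:

```shell
# raidz1 keeps 1 drive's worth of parity, raidz2 keeps 2:
echo "raidz1: $(( (5 - 1) * 1 )) TB usable"
echo "raidz2: $(( (5 - 2) * 1 )) TB usable"
```

So the step from raidz1 to raidz2 here costs 1TB of the 5TB raw, roughly a quarter of the raidz1 usable space.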

Mdadm/Windows Server
I could alternatively use these solutions, as I know that if something drastic happens, I can whack the drives in another PC and restore the files. With ZFS, am I stuck?

Any help is appreciated.
 
1) If you have raidz (1, 2, or 3), your storage continues to work even before a replacement. However, you might notice a speed reduction, and you are less protected against further failures.

2) Assume a 5x 1TB raidz and you replace one drive with a 2TB: no new capacity will be detected. Replace all 5 drives, one at a time, and you double your capacity.

3) Losing a single drive is OK. If certain errors occur during the outage, ZFS will report the corruption (if it happens), but it will not be able to heal that corruption.
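To make the failure/replacement flow above concrete, here is a rough sketch of the commands involved; the pool name (tank) and the device names are invented for illustration:

```shell
# A raidz pool with a dead disk shows up as DEGRADED, not destroyed;
# data stays accessible with reduced redundancy:
zpool status -x tank

# Swap in the new disk (old device, then its replacement):
zpool replace tank c0t3d0 c0t5d0

# Capacity grows only after every drive in the vdev has been replaced
# with a larger one; autoexpand lets the pool pick up the new size:
zpool set autoexpand=on tank
```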
 
Honestly, the all-in-one configuration works really well for this. I use WHS 2011 as a VM for PC backup as well as the limited web server configuration to provide a message board for my gaming group.

I used OpenIndiana + napp-it as a VM for storage. I shared 8x 2TB drives (RAIDZ2) via SMB (for workstation access) as well as NFS (for ESXi datastore access). This datastore is only really used for ISO storage for ESXi and a vdisk (about 300GB) for WHS 2011 to use for backups. All my movies/music/TV/etc. are stored and shared via SMB instead of using WHS 2011's file share capabilities. Not to mention, I thought that with WHS 2011 they made it so that you can't just pull a drive and load it in another system to get the data off anymore. Honestly, the changes to WHS make me feel like it is pretty much worthless except as a cheap backup system for clients ;)

All my other vdisks are stored on a striped-mirror array of 4x 750GB drives shared via NFS from the OI VM.
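For reference, the SMB/NFS sharing described above is typically done with ZFS share properties on the OI side; the dataset names here (tank/media, tank/vmstore) are made up:

```shell
# Share the media dataset over SMB for workstations:
zfs set sharesmb=on tank/media

# Share the VM datastore over NFS for the ESXi hosts:
zfs set sharenfs=on tank/vmstore
```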

One caveat for the all-in-one is that you have to reserve the memory for the storage VM. In my case I had 10GB RAM assigned to it, but I'm pretty sure that was overkill.

I will say that I recently made the decision to separate out my storage to a stand-alone box, but that is mainly because I have two hosts and was experiencing some issues with my configuration (dropped network connectivity via SMB, high vCPU usage, sporadic transfer rates). Others have it working in a stable environment, so I'm pretty sure it was something with my setup. The new box has an X3430 and 8GB RAM and is humming along nicely :)

I am concerned with ZFS because:
1.) If something goes wrong with ZFS (e.g. I lose a drive), does all the storage disappear until I add a new drive?
2.) If I remove a 1TB drive and add a 2TB drive, does the storage space increase with no headache?
3.) If I lose a drive, is my data safe using raidz1? I can't afford raidz2, as that would take up a considerable amount of my storage. For example, if I lose a drive, will the resource usage on my other drives increase so much that it kills another drive?

I'm sure others can provide more detailed info as I'm still learning ZFS but...
1. Depending on the array, the data is still available and accessible if you lose 1 drive (mirror or raidz) or 2 drives (raidz2).
2. I don't think the storage space will increase until you replace all the drives in the array with larger drives. I'm not sure whether you have to run a command to see the additional space at that point or not.
3. I don't think there will be significantly more stress on the drives until you replace the drive. At that point ZFS will resilver the data across the drives, which will be more stressful, and there is always the possibility another drive will fail. Just like hardware and software RAID isn't a replacement for backup, you should have a backup of your data if you care about not losing it.
 
Awesome answers guys I really appreciate that.

One more quick question, however: if I DO have a drive fail, will I get very early warning? WHS uses S.M.A.R.T. data and it is very up to date.
 
SMART values sometimes help to discover disk problems prior to a failure, but mostly you get no SMART warning when a disk dies.

On ZFS, most warnings are checksum warnings or too-many-errors warnings caused by disk problems, silent data errors, cabling, or controller problems - errors that WHS cannot detect due to its missing data checksum feature. A SMART check with smartmontools can add only a small level of extra security.

What you need more is redundancy (you must allow one or more disks to fail). You should have a hot-spare disk to reduce the time to restore redundancy, and you should get alert emails when a failure happens.
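As a sketch of the alert-email idea, a cron job can simply poll the pool state: `zpool status -x` prints "all pools are healthy" when nothing is wrong. The mail address and the use of a `mail` command are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical periodic health check: mail the full status output
# whenever any pool is not healthy.
status="$(zpool status -x)"
if [ "$status" != "all pools are healthy" ]; then
    echo "$status" | mail -s "ZFS alert on $(hostname)" admin@example.com
fi
```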
 
I have a question along these lines... let's say a WHS server has 5x 1TB storage drives in it, and each shared folder (Pics, Movies, Docs, Recorded TV & Music) was moved to one of the 1TB drives. If the Movies drive fails, you would replace that drive and restore the data from your backup.

But how would that same scenario work if you had a drive pooling add-in installed, like StableBit or Drive Bender? How would you know what data was on the drive that failed?
 
I have a question along these lines... let's say a WHS server has 5x 1TB storage drives in it, and each shared folder (Pics, Movies, Docs, Recorded TV & Music) was moved to one of the 1TB drives. If the Movies drive fails, you would replace that drive and restore the data from your backup.

But how would that same scenario work if you had a drive pooling add-in installed, like StableBit or Drive Bender? How would you know what data was on the drive that failed?

Unlike striped RAID solutions, where all data is striped over all disks with redundancy and the cumulative performance of all data disks, and where any one, two, or three disks can fail without data loss, these pooling solutions work by copying the content of a folder to another disk, or by creating parity information on one additional disk on demand, to rebuild the data of one failed disk.

Main advantage: you keep your regular and independent NTFS disks. Only one disk is used during reads, so all other disks can sleep (very energy efficient, so well suited for media servers without valuable data).

But these solutions are miles behind the performance and data security of a professional RAID-6 solution, which in turn is far behind the newest filesystem developments (especially ZFS; parts of its features are found in WinReFS and BTRFS) when performance and data security are the major concern.
 
3. I don't think there will be significantly more stress on the drives until you replace the drive. At that point ZFS will resilver the data across the drives, which will be more stressful, and there is always the possibility another drive will fail. Just like hardware and software RAID isn't a replacement for backup, you should have a backup of your data if you care about not losing it.
I read somewhere here that there was a discussion about whether it is better to migrate the data off the zpool if a disk dies, or to repair the zpool with a new disk. The answer was something like this: if you migrate the data off the zpool, ZFS needs to read all disks and recreate the missing data, which is the same work as repairing the zpool. Thus migrating all the data and repairing the raid are equally resource intensive. Can someone confirm?
 
Maybe I'm looking for more operating system pooling features than a revolutionary file server.

I am happy with what Windows Home Server (2003, v1) offers me, in the sense that it pools my disks together and lets me specify redundancy for particular files. I don't really need a filesystem to manage this, is what I'm thinking. I am nervous of a drive failure causing complete data loss due to the impact on the remainder of the pool as it resilvers or rebuilds the data. With WHSv1, I get a warning that my disks are failing according to S.M.A.R.T., and I can simply right-click and remove the disk, then re-add it.

I think I am going to stick WHSv1 on and manage my shared files with it using Drive Extender (which seems ultra reliable), and also whack WHSv2 on as a VM to manage computer backups. This also separates the controllers used.
 
SMART is extremely unreliable. If disk failure concerns you and you put so much weight on SMART, you will be disappointed one day.
 
Unlike striped RAID solutions, where all data is striped over all disks with redundancy and the cumulative performance of all data disks, and where any one, two, or three disks can fail without data loss, these pooling solutions work by copying the content of a folder to another disk, or by creating parity information on one additional disk on demand, to rebuild the data of one failed disk.

Main advantage: you keep your regular and independent NTFS disks. Only one disk is used during reads, so all other disks can sleep (very energy efficient, so well suited for media servers without valuable data).

But these solutions are miles behind the performance and data security of a professional RAID-6 solution, which in turn is far behind the newest filesystem developments (especially ZFS; parts of its features are found in WinReFS and BTRFS) when performance and data security are the major concern.

So if you have a 5TB storage pool (5x 1TB storage drives and the StableBit add-in installed) and one drive fails, after replacing the drive, what would the process be to restore the data that had been on the drive that failed?

Is it as simple as just replacing the drive?
 
Maybe I'm looking for more operating system pooling features than a revolutionary file server.

I am happy with what Windows Home Server (2003, v1) offers me, in the sense that it pools my disks together and lets me specify redundancy for particular files. I don't really need a filesystem to manage this, is what I'm thinking. I am nervous of a drive failure causing complete data loss due to the impact on the remainder of the pool as it resilvers or rebuilds the data. With WHSv1, I get a warning that my disks are failing according to S.M.A.R.T., and I can simply right-click and remove the disk, then re-add it.

I think I am going to stick WHSv1 on and manage my shared files with it using Drive Extender (which seems ultra reliable), and also whack WHSv2 on as a VM to manage computer backups. This also separates the controllers used.

If WHSv1's drive pooling is your gold standard for reliability, then you must have VERY low standards. Much as I liked it in theory, it was nothing but hassle in practice. In my experience it's very brittle, and when the pool got corrupted or a drive started to go, it inevitably took hours of hand-holding to manually re-merge the pool contents. There's a good reason why Microsoft opted to dump it.

In my estimation, none of the WHS pooling solutions have any kind of reliability track record; they're little more than band-aids holding your precious filesystem together.
 