Question before dropping a load of cash...

movax

Hey all,

Finally looking (now that I have some scrilla in my bank account) to set up a nice RAID5 array to consolidate all my data onto (movies, music, etc.), rather than my current desktop with a bunch of drives added in (3 640s, 1 750 and 1 1000: lots of space, 0 redundancy).

Was looking at the Areca ARC-1220 coupled with 8 WD 1TB blacks (WD1001FALS), but upon perusing this subforum, I've noted a lot of folks utilizing non-RAID controller cards (hence cheaper!) plus drives, in conjunction with WHS.

What's the rationale behind that? Software RAID vs. hardware RAID offered by a controller? I'd like to not drop $420 on a controller if I don't have to. My primary concern would be reliability, followed by performance (I'm running a GigE network, and don't foresee too many concurrent accesses hitting this machine).

My initial plan was to drop Server 2k8 on the box (TBD CPU/mobo/PSU... shooting for low power consumption, plus hopefully a mobo that has been reported to play nice with the ARC-1220), set up the RAID (so I'd end up with whatever 7x1TB formatted is... around 6.4TB, I think), and make a few shares on it: music, video, etc.

Would like some input from the insane storage guru'd people *stares at Ockie*
 

Those people running WHS either have no on-server fault tolerance at all, or 1:1 duplication of specific folders they deem important.

Now, as for software RAID: my next build will probably be a Solaris/OpenSolaris-based ZFS software RAID-Z2 (think RAID-6). A RAID-Z implementation has the advantage of a variable stripe width (depending on the data being written), and the array is aware of the checksum of each individual block. This is because with ZFS, the filesystem and the block device are considered one. It checksums data as it's written and read; if a read returns a non-matching checksum, it attempts a repair and marks the bad sector on the offending hard drive. So, as far as RAID implementations go, RAID-Z is the pinnacle.
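
To make that concrete, here's a toy sketch of the checksum-on-read, repair-on-mismatch idea (Python, purely illustrative, not actual ZFS code; the store dict and the redundant_copy argument are stand-ins for what parity/mirrors provide):

Code:
import hashlib

# Toy sketch of self-healing: every block is stored with its checksum;
# a mismatch on read triggers a repair from redundancy.
def write_block(store, key, data):
    store[key] = (data, hashlib.sha256(data).hexdigest())

def read_block(store, key, redundant_copy):
    data, checksum = store[key]
    if hashlib.sha256(data).hexdigest() != checksum:
        # Corruption detected: rewrite the block from the good copy
        # (real RAID-Z reconstructs it from parity instead).
        write_block(store, key, redundant_copy)
        return redundant_copy
    return data

store = {}
write_block(store, "blk0", b"movie data")
store["blk0"] = (b"bit rot!", store["blk0"][1])  # simulate silent corruption
print(read_block(store, "blk0", b"movie data"))  # b'movie data' -- repaired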

By the way, if you are looking at 8 drives (8x1TB R5 = 6.36TB), consider spending the extra cash and going with 1.5TB or 2TB drives. Once you factor in cost per port, the larger drives begin to make a lot of sense.
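
Back-of-envelope on the cost-per-port point (the card and drive prices below are illustrative guesses, not quotes):

Code:
# Cost per usable TB for an 8-port RAID5 box; prices are rough
# illustrative figures only.
controller = 420.0  # ARC-1220-class card
for size_tb, price in [(1.0, 90.0), (1.5, 130.0), (2.0, 200.0)]:
    usable_tb = size_tb * (8 - 1)          # RAID5 loses one disk to parity
    total = controller + 8 * price
    print(f"{size_tb:.1f}TB drives: {usable_tb:4.1f}TB usable, "
          f"${total / usable_tb:6.2f} per usable TB")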
 

I see. I'm not too familiar with Solaris/family myself, so I think I'll be shying away from that. If anything though, I like the independence a hardware controller will have from any software RAID issues (e.g. just move the controller + drives to a different machine if needed).

I do see the point about getting bigger drives though... seeing as I'm already paying nearly $55/port, I may as well shoot for the most efficient utilization of each.
 
I have used WHS, software RAID5 (the Windows XP Pro hack) and hardware RAID5 (HighPoint RocketRAID).

From my experience I like hardware RAID5 the best. OCE and ORLM both provide what WHS gives, and any nice hardware card will have online expansion.


I love the add-ins for WHS and the remote access; nothing compares to its simplicity and usability.

What I hated was the 1:1 redundancy: a waste of space and $$$. My HighPoint 2220 I got for $100; it is much cheaper for me to use RAID5 for storage rather than double my drive count.

Drives are $70 each now (were more prior): 750GB drives.
Card was $100 used on the BST.

I have 7 750GB drives in RAID5 with 1 hot spare (so 6 active), which gives me 3.7TB of storage, and I have an 8th 750GB as a cold spare.

That cost me around $700 ($600 for drives and $100 for the card).

To get the same storage with WHS I would need eight 1TB drives @ $90 each = $720, without a hot or cold spare, and that would max out my SATA ports.
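
Sanity-checking that math (same figures as above):

Code:
# RAID5 with a hot spare: parity costs one drive, the spare another.
drives, size_gb = 7, 750
r5_usable_tb = (drives - 1 - 1) * size_gb / 1000   # 6 active, 5 hold data
print(r5_usable_tb, "TB usable")                   # 3.75 (the "3.7TB" above)

# WHS 1:1 duplication needs double the raw space for the same usable amount:
whs_raw_tb = 2 * r5_usable_tb                      # 7.5TB raw
print(whs_raw_tb, "TB raw -> eight 1TB drives @ $90 =", 8 * 90, "dollars")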


I love WHS, and have run and will again run a WHS, but it will never be my storage box.

I am looking at running a storage server that is iSCSI and mounting it to a WHS head node...

It is above my knowledge level, but it would give me the RAID5 I want for data storage and the ease of use of WHS...

I don't know, I could just run WHS with my hardware card as well...


There are a lot of possibilities, and only you will be able to create what you need and want... check out the free trial of WHS and see how you like it.
 
Well, I'd say software RAID is 'safer' than any HW RAID controller, at least with a Linux md array or ZFS on BSD or whatever... probably even Windows software RAID.

Going pure software is the cheapest but requires the most work. If you want to save every dime, though, it's the best.

WHS and HW RAID are more expensive but 'easier'. At 8 drives, HW RAID will give you more space but cost more. As you add more drives, WHS becomes a better choice than HW RAID, IMO.

E.g., an eventual plan of WHS + a Norco case is a real nice upgrade path. This is assuming you plan to need more space. If you'll likely stick with those 8 drives for a long, long time, HW RAID is not a bad choice.
 
Thanks for your posts gentlemen, but I'm still having some trouble wrapping my head around the "point" of WHS, I guess.

I understand that picking up the Supermicro (or other) non-RAID SATA controller card is quite cheap - I also understand that I can then apply any of a myriad of software RAID solutions to the attached drives. Is WHS a software RAID setup then, or is it, as Dew stated, simple file serving with certain drives mirrored?

Sorry for the noob questions!
 
I guess upon more research, I think I've "figured" out WHS...please tell me if I'm right:

With WHS, I can add a conglomerate of disks to a "storage pool", and then create shares which automatically span these disks (or, up to the size of a single disk). I can then also assign drives to back up particular shares. So if I have a storage pool of, say, 3 640GB drives, and a 720GB folder called 'pics', I can have that share in the storage pool, and then have it mirrored to a 1TB drive as backup, and pray that I do not suffer a multiple-drive failure that nukes the pool and the backup?
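
Putting my mental model into rough numbers (the drive mix and share sizes below are made-up examples, purely illustrative):

Code:
# Toy capacity model of my understanding: a duplicated share consumes
# twice its size somewhere in the pool. (Illustrative only.)
pool_gb = [640, 640, 640, 1000]        # three 640GB drives + one 1TB
shares = {"pics": (720, True),         # (size in GB, duplicated?)
          "music": (400, False)}

raw = sum(pool_gb)
used = sum(gb * (2 if dup else 1) for gb, dup in shares.values())
print(f"pool: {raw}GB raw, {used}GB consumed with duplication, {raw - used}GB free")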
 

Is that 1TB drive an external drive or a drive inside the system?
 

I would be keeping it internal, to benefit from case-fan cooling, SATA vs. USB connectivity, etc. In my ideal configuration, I would probably have all identical drives (starting w/ 1TB WD Blacks since they seem to be very reasonably price/perf matched... and toss in 1.5/2TB drives within the next year or so).
 

Well then, yes you can do that particular example with the 720GB folder and separate 1TB drive as backup.
 

Very cool. Sounds like coupling WHS with a decent horsepower CPU and a mobo with plenty of SATA ports (or w/ that ever so popular Supermicro controller) will provide a more flexible solution than my initial plan (not to mention cheaper!).

How often does WHS perform backups/synchronization/reconciliation of the drives? Should I expect lots of late-night disk thrashing as it crunches data, or is it not a terribly big deal with adequate airflow? (Still deciding on a case, but basically going to place the drives in the midst of a large, multi-120mm-fan wind tunnel.)
 

IIRC, you can set the backup times and frequency to whatever you want. As for drive balancing, not too sure about that TBH. I usually don't notice when my WHS setup does its drive balancing. But it isn't a big deal as long as you have decent airflow.
 

Mmkay. So the only thing left to "worry" about really is the chance of a pool drive and its duplication-target drive failing at the same time, which I think I'm OK with - assuming WHS is smart enough to pick up SMART statuses and stop whatever it's doing when shit hits the fan.
 

Well WHS will warn you if it detects a drive problem via the Connector software you have to install in order to access the WHS.
 

Ahkay. Well, I've done more research, and after discovering how cheap PERC5s are, I'm pretty torn between doing RAID5 and WHS. WHS is more expandable; the PERC5 gives me HW RAID5... sigh, decisions, decisions...
 

Well there is a 120 day trial available for WHS. So you could try out WHS first, see if it does or doesn't meet your needs, and then pick up the Perc 5 if WHS isn't for you.
 
I guess upon more research, I think I've "figured" out WHS...please tell me if I'm right:

With WHS, I can add a conglomerate of disks to a "storage pool", and then create shares which automatically span these disks (or, up to the size of a single disk). I can then also assign drives to back up particular shares. So if I have a storage pool of, say, 3 640GB drives, and a 720GB folder called 'pics', I can have that share in the storage pool, and then have it mirrored to a 1TB drive as backup, and pray that I do not suffer a multiple-drive failure that nukes the pool and the backup?

Not quite. You don't get to determine where your data is stored in the pool. From your perspective, you have a single massive block device. WHS will determine which drive gets what data. You really don't have any control over where it goes.

Now, back to what I was saying about ZFS. As long as you have a system with a decent amount of RAM (4+GB on x64), you would be perfectly fine with a FreeBSD-based ZFS implementation (I would wait for 7.2 and upgrade to 8.0 as soon as it's out). Managing a RAID-Z is trivial compared to any other RAID implementation, even on Windows. The more research I do on ZFS, the more I like it. At this point, I am certain that I will be running FreeBSD 8.0 with a RAID-Z2 config around this time next year. In fact, if I weren't under the time constraints I am right now, I would start the ZFS migration now (referring to the 200+ hours of data transfer involved, not the difficulty of starting an array from scratch).
 
Like I said, the only thing I don't like about WHS is the 1:1 redundancy... it's a waste of space... and yes, I want everything backed up.

I don't want to use the WHS to back up other PCs; I want it to hold primary data that is not on other machines...
 
My suggestion is to use software RAID via Server 2008. Software RAID5 performance is very good on Server 2008; it will beat any other software RAID solution, is comparable to HW RAID for speed, and beats it for reliability. (Server 2003 had terrible software RAID5, btw, so don't judge by that; Server 2008 changes it with excellent SW RAID.)

I say it beats the reliability of hardware RAID because if your $400 RAID controller has any issues, you're usually screwed. With SW RAID you can just take the drives and pop them into any Server 2008 or Vista Ultimate system and the drive set is good to go; it's not controller-, motherboard- or system-specific, so if you upgrade your server, you just move your drives. Also, you can use 5 motherboard SATA ports and add cheap 2- or 4-port controller cards for $20-30 to add more, and that works fine with Server 2008 RAID5.
 
There's no getting around the 1:1 duplication. The design goal of WHS was to keep things simple: if things go bad, you can simply take the disks out and access your files; there is no dependency on a RAID card, or any sort of software, to interpret the data. Another goal was that you get to choose what you want to duplicate.

With RAID you don't really get protection, you get availability. Personally, I think disk space is cheap and only going to get cheaper; I like simplicity and the added features of WHS that make it more than just a NAS.
 

I don't quite understand the space aspect of this. If you ran WHS vs. RAID5 and one drive fails:
- WHS pools data, but you don't know where. If a hard drive fails and you didn't set 1:1 duplication, you've got missing files. So, 3x 1TB hard drives become 1.5TB of space, pre-format.
- RAID5 is 3:2, so if a hard drive fails, the data can still be rebuilt from the other two. So, 3x 1TB hard drives become 2TB of space, pre-format.
 
Actually, R5 capacity = size of smallest disk x (number of disks - 1), so 2 of 3, 3 of 4, 4 of 5, etc.
Not being able to control which disks get what is what bothers me. I could do 2 1.5TB drives, put one of my big folders on each, and mirror the rest of my files. But WHS wouldn't let me do that; it'd mix the two folders, AFAIK.
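
The formula above as a quick calculator (a trivial sketch):

Code:
# RAID5 usable capacity = smallest disk x (number of disks - 1)
def raid5_usable_tb(disks_tb):
    return min(disks_tb) * (len(disks_tb) - 1)

print(raid5_usable_tb([1.0, 1.0, 1.0]))  # 2.0TB from 3x1TB
print(raid5_usable_tb([1.5] * 8))        # 10.5TB from 8x1.5TB, pre-format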
 

Yeah, I'm on the fence on this. I just built my wife's HTPC, and I want to build a NAS for all the shows and movies she records. I definitely looked into Windows Server 2008 for the SW RAID, which should work on my 6-year-old P4 2.4, but the software is so expensive ($300-400?) compared to WHS at $100.

Never mind about the price, I was looking at the wrong thing. Bleh!
 
I discovered that Dell PERC5s, basically identical to the ARC-1220, are ~$110 on eBay, so I just went ahead and ordered 8 1.5TB Seagates from a member here on [H], and will try 'em out in WHS/software RAID first. If I don't like it, then I'll just add them to the PERC5.

It's an interesting choice, that's for sure... WHS expandability/simplicity versus having a single ~9TB "block" device visible from a RAID controller.
 
As soon as you pass 4 or 5 1.5TB drives in RAID5, you should use RAID6 instead.

You should know that it takes forever to rebuild, and while rebuilding, there is a good chance another one of those identical drives you got will break, and then you're fucked.

This problem is magnified as disk size increases.

Just one (or was that two?) of the many reasons RAID is so not foolproof, it isn't even expert-proof.
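
To put rough odds on it, assuming the 1-in-10^14-bits unrecoverable-read-error (URE) spec most consumer drives quote (an assumption; check your drive's datasheet):

Code:
# Odds of reading every surviving bit without a URE during a RAID5
# rebuild; the 1e14-bit URE spec is an assumed figure.
def p_clean_rebuild(surviving_disks, disk_tb, ure_per_bits=1e14):
    bits_read = surviving_disks * disk_tb * 1e12 * 8
    return (1.0 - 1.0 / ure_per_bits) ** bits_read

for survivors, tb in [(4, 0.75), (7, 1.0), (7, 1.5)]:
    print(f"{survivors + 1} x {tb}TB RAID5: "
          f"{p_clean_rebuild(survivors, tb):.0%} chance of a clean rebuild")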
 
While WHS is built on Server 2003, it doesn't support GPT without hacking.

So beware the 2TB volume limit if you go the WHS route.
 
I imagine most people who use WHS, such as myself, don't use a bunch of matched drives. We have drives of all different sizes and types that we've had for years. WHS lets us use them all without losing space. PC backups are nice and have saved my ass twice. I also store all my data on the server and access it via all my machines.

Sure, mirroring your data is not the most efficient use of space, but it's more reliable than some other options.

RAID5 has its own drawbacks: bit rot and the write-hole error. Also, WHS does not support RAID; many do use RAID with WHS anyway, but that should be kept in mind.
 
While WHS is built on Server 2003, it doesn't support GPT without hacking.

So beware the 2TB volume limit if you go the WHS route.


Yes, this was quite annoying on my old fileserver (8x1TB R5) that I gave to my parents. If I ever lose the system drive, I will probably dump WHS, as I'm not sure I would be able to add the array to the storage pool on a new install without formatting it.

I still believe you should at least consider ZFS. Even if you don't end up going with it, you are in a prime position to at least try it and see if you like it. At worst you lose about 2 hours of your time installing and setting up FreeBSD (setting up the RAID-Z takes all of 30 seconds; there is no array building like on a RAID controller). That assumes you have an x64 machine with 2GB RAM at a MINIMUM.


For the OP's needs (NAS), his best option right now is FreeBSD. Not that R5 is a bad option, but with ZFS now out there, it blows R5/R6 away.
Some RAID-Z pros/cons versus R5:
-PRO: File integrity.
RZ checksums data on every read and write; it detects any errors, knows where they came from, and fixes them on the fly.

-PRO: Rebuild time (see the sketch after this list).
On an R5 array, a rebuild must re-write the entire array. Even if you had 100MB of data on a 10TB array, you still have to generate and write parity calculations over 9.999TB of garbage. This whole time, if you have a secondary drive failure, you may be looking at total data loss and have to start from scratch.
On an RZ array, the rebuild touches the files on the volume, not the blank space.

-PRO: Build time.
On an R5 array, the controller needs to write to the entire array to build the initial parity data.
On an RZ array, checksums are built as files are written.

-CON: Expansion.
If you need OCE/ORLM so you can add 1 HDD at a time and expand your array, RAID-Z is not for you (the feature is not present).
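
A rough feel for the rebuild-time gap (the array/used sizes and the 80MB/s sustained rate are assumed figures):

Code:
# R5 rebuilds the whole array; RAID-Z resilvers only allocated data.
array_tb, used_tb, rate_mb_s = 10.0, 0.1, 80.0

def rebuild_hours(tb, mb_s):
    return tb * 1e6 / mb_s / 3600

print(f"R5 rebuild of a {array_tb}TB array:   ~{rebuild_hours(array_tb, rate_mb_s):.0f} hours")
print(f"RZ resilver of {used_tb}TB used space: ~{rebuild_hours(used_tb, rate_mb_s):.1f} hours")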
 

You've almost got me convinced... the way I see the options:

Hardware RAID5
- Performance (questionable, I suppose)
- Hardware: controller + drives can travel from system to system (?)
- "Simpler": controller makes the drives appear as a single array to whatever OS it wants
- The PERC5 is awesome, and perhaps I'd consider it if it could do RAID6, but that's just solving the problem the wrong way - what if 2 more drives die during a rebuild? Still fucked.
- Cheaply available battery backup for the controller + a UPS covers my ass on write-hole problems

Software RAID5/Z
- No expensive HW controller needed
- Limited by GigE bandwidth anyways
- With a modern CPU (I picked a $60 AMD Athlon 64 X2 7750 Kuma 2.7GHz), 4GB RAM, etc., on a dedicated server, plenty of spare CPU
- ? How resilient is this to OS failure? Say I have the OS on a RAID1 array, and then an R5/RZ of my other drives; how recoverable is my data if the OS decides to go poof?

Windows Home Server
- Flexible as all hell
- Since I want everything backed up, I drop to total capacity / 2 usable space :(
- Expansion is stupidly easy

Question on the array sizes - do client operating systems care if it's offered as an SMB share? E.g., will XP32 happily handle an 8TB network drive?

I did pull the trigger on a PERC5, but I can always re-sell it or find another use for it. I did go ahead and order the drives though :) I chose this mobo because it has 2 x16 slots available, for multiple RAID/non-RAID controllers. Chose this CPU, and this PSU.

I guess this is sort of turning into a build thread now - what case should I be looking at? Probably not rack-mounting, but it seems like it's cheaper to fill a case with 5.25" bay modules for drives than to find a case with like 12 internal bays.
 
I have no idea about the recoverability of a SW R5 on Windows.

With RZ/Linux SW R5 this is a non-issue; a new OS install can pick up where the old one left off (just import the array, done).

As far as redundancy goes: RZ = R5, RZ2 = R6.
Oh, and the write hole is non-existent with RZ.

As for XP, here is an old screenshot demonstrating how it copes with large volumes over SMB: http://pages.woods.us/no-such-thing-as-too-much.gif

Look at it this way: you can decide you completely hate ZFS and FreeBSD, and all you lost is time. No $400 server license, no $1200 R6 card.
 

Yes... the more I read, the more it seems that R5 would be a better option with smaller drives (like if I did it with 640s or something). The thought of waving bye to 9TB of data does not sit well with me.

Seems like RAID-Z2 is the sane option... up to two drives can die (but does this not severely affect performance, like R6?), and rebuilds are much, much faster? Time to break out those old x64 BSD ISOs (I'm an experienced *nix user, so not worried about the learning curve).
 

It doesn't have anywhere near the performance impact of a RAID controller losing a drive, no. Rebuilds are faster because you only rebuild data that is on the array. So if you only have 2TB used, you only rebuild 2TB of data.

Just so you know, this is what 8x1.5TB drives in R5 look like: http://pages.woods.us/8xWD15EADS.jpg

Z2 would be 8.2TB. Make sure you have the latest release of FreeBSD; may as well start off with the most recent ZFS implementation.
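
If the capacity numbers look odd, it's mostly decimal drive terabytes vs. the binary TiB the OS reports, plus parity. A quick check:

Code:
# Usable space: (disks - parity) x decimal TB, reported in binary TiB.
def usable_tib(n_disks, disk_tb, parity):
    return (n_disks - parity) * disk_tb * 1e12 / 2**40

print(f"8x1.5TB R5: {usable_tib(8, 1.5, 1):.2f} TiB")  # ~9.55
print(f"8x1.5TB Z2: {usable_tib(8, 1.5, 2):.2f} TiB")  # ~8.19, the '8.2TB'
print(f"8x1.0TB R5: {usable_tib(8, 1.0, 1):.2f} TiB")  # ~6.37, the '6.36TB'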
 
I don't understand using WHS as your sole storage. Like others have mentioned, you either have JBOD or RAID1. To me, WHS is useful as a backup.
Something like what I have:
Machine 1: JBOD or RAID5
Machine 2: WHS that makes nightly backups of Machine 1

That is true redundancy.
 

I think I've been sold on RAID-Z now; from a point-by-point view, it doesn't suffer from the RAID5 write-hole error, and rebuilds happen quickly enough that the chances of a 2nd drive dying and hosing everything are far, far lower as well. Seems like it is easier to expand as well...

EnderW, I like the integration that WHS has with Windows clients for backup purposes; perhaps I'll set up an old box with a large drive or two to act as a backup machine for my parents.

Now to get a good non-RAID SATA II add-in card, perhaps that Marvell-based one I've been hearing so much about... (and sell that PERC5 now; that's what I get for jumping the gun)
 
That is true redundancy.

Yes. I have 90% of my data on a separate fileserver 1,100 miles away. RAID is not a backup. RAID gives you fault tolerance: it survives mechanical failure of a limited number of drives at once. It does not protect against user error, viruses, or catastrophic failure.

A true backup is not at the same location as the original.
 

Yes - I'm looking for mechanical security first and foremost right now. And just to double-check: RAID-Z, much like most other RAID levels, requires drives of the same capacity? Any other suggestions in addition to the AOC-SASLP-MV8 card?
 

Yes, treat RZ just like you would R5 when it comes to picking HDDs; the pool will be limited by the size of the smallest disk.

That said, you can do a (RISKY) process to upgrade the array: replace and rebuild one disk at a time, each new disk of larger capacity. Once you have all, say, 2TB disks and have rebuilt, ZFS will automatically claim the additional space. Let's face it though: at that point, you should be looking at buying all the new disks at once, creating a new vdev, and transferring the data.
 

Makes sense. Also, I forgot to ask about that R5 screenshot you linked to: are you planning on moving that to RZ or something else at some point, and hoping it doesn't suffer a drive failure/necessitate a rebuild in the meantime?
 

I've already had to rebuild twice, and it scares the shit out of me every time. Drive 8 seems to be a dud, so a replacement is on the way.

I should be deploying soon, so changing over to RZ is not an option at this point. I'll go RZ2 when I get back this time next year and do a new build (think a Norco 4020 with as many 2TB HDDs as it can hold).
 