Storage Spaces Win8

yodamalk

Set up parity with six 3TB drives (2.7TB) and ended up with it looking like this... I thought Windows Storage Spaces took only one disk for parity?

http://imgur.com/9xDTQXf

Also tried multiple times, same result.
 
From my understanding, the system will automatically configure the pool (when using parity mode) to the best fit of (2^n)+1 drives. In your case the best fit would be (2^2)+1 = 5 drives. If you were to remove one drive from the pool, or manually designate that drive as a hot spare, the remaining 5 drives should net you the same capacity result. Try it out and let us know.
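If you want to sanity-check what the wizard actually built, the Storage cmdlets (available on Win8 / Server 2012) will show the layout directly. This is just a generic check, not tied to your pool's name:

Code:
# Show how each virtual disk was laid out: NumberOfColumns is how many
# drives every stripe spans, PhysicalDiskRedundancy how many can fail.
Get-VirtualDisk | Format-Table FriendlyName, ResiliencySettingName, NumberOfColumns, PhysicalDiskRedundancy, Size -AutoSize

# List the physical disks in each non-primordial pool and how they are used
Get-StoragePool -IsPrimordial $false | Get-PhysicalDisk | Format-Table FriendlyName, Size, Usage -AutoSize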
 
Very strange, I stand corrected.

You may want to try manually configuring the array from PowerShell; it will give you a bit more control (rough sketch below).
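A minimal sketch of what that could look like. The pool and space names here are only examples, and you'd still initialize and format the resulting disk afterwards as usual:

Code:
# Gather every disk that is eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool from those disks ("MediaPool" is just an example name).
# The subsystem name can vary by Windows version; Get-StorageSubSystem shows yours.
New-StoragePool -FriendlyName "MediaPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Create a fixed parity space and force it to span all 6 disks,
# instead of letting the wizard pick the column count for you.
New-VirtualDisk -StoragePoolFriendlyName "MediaPool" `
    -FriendlyName "ParitySpace" `
    -ResiliencySettingName Parity `
    -NumberOfColumns 6 `
    -ProvisioningType Fixed `
    -UseMaximumSize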
 
Not a bad idea, though.

I'm not great with PowerShell commands... I saw a couple of tutorials online about it, but I might try out Windows Server 2012 to see if that helps any (added benefit of ReFS also).

Unless anyone else knows why this is happening?
 
Yeah, it's happening because Storage Spaces is garbage. For all intents and purposes they added a Metro-themed wizard to dynamic disks and called it a day.

There are much better options for JBOD pooling in Windows: StableBit DrivePool, DriveBender, FlexRAID. If you're going to be storing media, there's absolutely no reason to stripe your disks. If you insist on using Storage Spaces, then good luck, and you'd better be religious about backing up.
 
I tried ZFS on Linux, but whenever I would restart, 4 out of the 6 disks would report as bad and the pool would fail, so I figured I didn't want that happening when I had valuable info on there.
 
If you are getting errors like that, then I would investigate that possibility first. I've set up many ZoL installations, and it's pretty straightforward.

That being said, you don't have to go there. If I were into Windows, I would probably go straight to FlexRAID and bypass Storage Spaces altogether. It's been said many times that Storage Spaces doesn't really hold up to what else is available, especially if you are trying to enable parity. All bets are off then, because performance will be worse than anything else out there.
 
I can't help you with your issue, but I can give you my strongest recommendation to move away from Storage Spaces if you don't have data on the drives yet.

It doesn't have a single feature that works better than most of the other solutions out there, and it's horribly slow.
 
Welp, it seems like a lot of you feel Storage Spaces will not hold up, so I've been looking into FlexRAID...

I seem to remember reading about another option out there too... can't remember the name.

FlexRAID seems simple enough to implement... Does it make it easy to rebuild drives, etc.?
 
AFAIK, it does.

Other options are:
Drive Bender, StableBit Drive Pool, SnapRAID

Adding to this, FlexRAID / SnapRAID / disParity are all very easy to set up and restore from. Essentially, once a drive is lost, you point the respective software to your new drive and rebuild the missing data. You only lose your largest drive to store your parity data. All of these systems are snapshot based; perfect for large, non-changing data like media files.

Out of these three, FlexRAID is the only one that can also pool data, but it's a separate paid feature and I've heard not-so-great things about it. I run SnapRAID now and it works perfectly for me.

Drive Bender / StableBit DrivePool are designed more to pool your data together. Redundancy is achieved like WHS v1 Drive Extender, which is a pseudo-RAID-1 system: your files are duplicated so that there are at least 2 copies on 2 separate disks. Out of these two, DrivePool is much, much simpler to set up and use, and generates fewer "associated files" than Drive Bender.

Hope this helps!
 
FlexRAID storage pool is excellent IMO.

It also has a feature that I deem necessary when dealing with snapshot RAID.

When a file is deleted from the FlexRAID pool, it is moved into the proprietary FlexRAID recycle bin. This is necessary because if a file on, say, Disk1 were actually deleted, and then Disk2 were to fail before a parity update, you would suffer data corruption. When using the FlexRAID pool, though, any deleted file is moved to the FlexRAID recycle bin and is not finally deleted until after a successful parity update. This would prevent data corruption in my example.

The only way you can suffer data corruption in FlexRAID is to modify files and then have a disk fail before a parity update. Adding and deleting files won't affect successful recovery even if you lose a disk before a parity update, except for any files added to the failed disk itself before that update.
 