Storage Spaces R2 Question

Franko

I looked into Storage Spaces R1 when it was released and decided to stay away. I am now playing around with FreeNAS and NAS4Free for a new ZFS-based NAS build for my home, and I was wondering if I should be looking at WSE 2012 R2 with Storage Spaces as well.

If you work with Storage Spaces R2 I would appreciate your thoughts on it:

1) Is it now reliable enough to be used with my precious data?
2) Have the write speeds increased? (I remember R1 write speeds were terrible when parity and mirroring were used)
3) I would appreciate hearing your general thoughts on it. I intend to use this for general storage and as a media server; I will also use it as storage for a small virtualization box I just built.
4) How much better is ZFS at preserving data integrity? (I am worried about bitrot)

My household is all Windows machines and my technical skill level is decent for a hobbyist (I don't work with computers for a living).

Thank you for your attention and feedback.
 
Storage pooling and Storage Spaces are storage management, not storage security.
When it comes to data integrity, you must look at:

- Realtime checksums on metadata and data (data is checked for corruption on every read)
- Copy-on-write filesystems (always-consistent filesystems, no offline chkdsk needed)

Currently there are three filesystems (besides special solutions like NetApp) that offer both:
- btrfs
- Windows ReFS (a possible and more secure successor to the quite old NTFS)
- ZFS

Among these ZFS is the current champion.
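
To see what that first point buys you: on a plain NTFS volume, the closest you can get is a hand-rolled checksum manifest that you rebuild and compare periodically. A minimal sketch (assuming PowerShell 4 or later for Get-FileHash; the paths are made up):

Code:
# Build a SHA256 manifest of a share, then re-hash later and diff to spot silent corruption
Get-ChildItem D:\share -Recurse -File |
    Get-FileHash -Algorithm SHA256 |
    Export-Csv D:\manifest.csv -NoTypeInformation

# ...months later: re-hash and compare against the manifest
$old = Import-Csv D:\manifest.csv
$new = Get-ChildItem D:\share -Recurse -File | Get-FileHash -Algorithm SHA256
Compare-Object $old $new -Property Path, Hash   # differences = changed or rotted files

A checksumming filesystem does this on every single read, and with redundancy it can repair the bad copy on the spot; the manual manifest cannot even tell rot from a legitimate edit.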
 
Currently there are three filesystems (besides special solutions like NetApp) that offer both:
- btrfs
- Windows ReFS (a possible and more secure successor to the quite old NTFS)
- ZFS

Among these ZFS is the current champion.


I was excited about ReFS before Windows 8/2012 officially came out, but it appears to be a pretty big disappointment in comparison. It's very young, too young some say, and its only valuable use is in a virtualized environment.
 
If you work with Storage Spaces R2 I would appreciate your thoughts on it:

1) Is it now reliable enough to be used with my precious data?
2) Have the write speeds increased? (I remember R1 write speeds were terrible when parity and mirroring were used)
3) I would appreciate hearing your general thoughts on it. I intend to use this for general storage and as a media server; I will also use it as storage for a small virtualization box I just built.
4) How much better is ZFS at preserving data integrity? (I am worried about bitrot)

1) No
2) No
3) It's garbage; only use it if you hate your data
4) ZFS is good, but there's a myth that seems to get perpetuated by its fans that it's the only solution capable of detecting and fixing bitrot, and the only solution that can really maintain file-level data integrity. That's false.

Because you're on Windows, I highly recommend looking into the combination of SnapRAID for parity protection and StableBit DrivePool for pooling. That combo running on top of Windows is the perfect solution for home media storage.
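
If you want to see how light it is: SnapRAID is just a small config plus a couple of scheduled commands. A rough sketch (the drive letters, paths, and exclude rule are made up for illustration):

Code:
# snapraid.conf -- parity on a dedicated drive, content lists kept in a couple of places
parity E:\snapraid.parity
content C:\snapraid\snapraid.content
content D:\snapraid.content
data d1 D:\
data d2 F:\

# then schedule:
#   snapraid sync    (nightly: update parity to match the data drives)
#   snapraid scrub   (weekly: re-read the array and verify it against stored checksums)

Scrub is the piece that matters for bitrot: it re-reads the data and checks every block against the hashes recorded at sync time.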
 
I built a 2012 R2 server that uses Storage Spaces to hold backups. One group has nine 4TB drives with dual parity, ReFS format. Only downside is that it's slower than I think it should be (it's far faster than needed, I just think it should be faster given the number of spindles). On the same server I have two more Storage Spaces mirrored drive sets, also using 4TB drives, NTFS format - those drives are much faster.

System has been in production since around mid November. Currently stores a total of ~20TB. Data is transferred from 30 client sites every night using Rsync over SSH (Cygwin). I test restore random files regularly and have had no data integrity issues whatsoever. Did a major restore for a client consisting of ~800GB / 1.2 million files recently and there were no issues.
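
(For anyone curious, the per-site transfer in a setup like this is nothing exotic; a generic rsync-over-SSH pull looks roughly like the line below, with the hostname and paths invented:)

Code:
# nightly pull from one client site; -a preserves attributes, -z compresses in transit
rsync -az -e ssh backup@site01.example.com:/data/ /cygdrive/d/backups/site01/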
 
I have a small low-power i3 with 2x 3TB Toshibas in RAID 1 (ReFS) on 2012 R2. It reaches 1 Gbit, which is all I need for now. I will probably add an SSD for cache and 3 more disks in a future update. The Storage Spaces version of RAID 10 uses 5 drives so that you can lose any 2 drives.
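
(That five-drive layout is the three-way mirror. For reference, a sketch of creating one in PowerShell, with the pool name made up:)

Code:
# Three-way mirror: needs at least 5 physical disks in the pool, survives any 2 disk failures
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Mirror3" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 3 -UseMaximumSize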
 
4) ZFS is good, but there's a myth that seems to get perpetuated by its fans that it's the only solution capable of detecting and fixing bitrot, and the only solution that can really maintain file-level data integrity. That's false.

Because you're on Windows, I highly recommend looking into the combination of SnapRAID for parity protection and StableBit DrivePool for pooling. That combo running on top of Windows is the perfect solution for home media storage.
How is ZFS's data protection capability a myth? Researchers have examined several filesystems and concluded that they all failed to protect against bit rot; even worse, they could not even _detect_ bit rot. If you cannot detect something, how can you repair it?

CERN did a study on very expensive enterprise storage solutions and concluded the same: they do not protect against bit rot. That is the reason CERN has switched to ZFS for long-term storage of their petabytes of LHC collider data.

Thus, there are several researchers, all of them independently saying that no storage solution suffices, except ZFS. Hard disks have checksums all over the place, and still you get bit rot on disks; CERN explains that "adding checksums is not enough to detect bit rot". NetApp and all the other enterprise vendors are spending huge amounts on research and development to combat data corruption. Amazon writes that they get data corruption everywhere, all the time.

I can post lots of research papers, articles, and PhD theses on this if you want. One PhD thesis shows how broken NTFS is with respect to data corruption.


On the other hand, there is a MYTH that SnapRAID, FlexRAID, etc. have good data protection. Why is it a myth? Because there is no research, no serious study, nothing that proves they have good data protection. They all rely on NTFS, which is demonstrably, notoriously bad at protecting against data corruption. The only things saying that SnapRAID, FlexRAID, unRAID, etc. have good data protection are some random guys on some forums. And if you ask for research papers and studies, they cannot show any, and STILL they claim snap/un/flex/etc. are safe. Now, is that a myth or not?

The claim that ZFS is one of the only(?) safe solutions is backed up by professors and researchers in computer science. The claim that FlexRAID is safe is backed up by... nothing, only some rumours from random guys here and there. You decide what is a myth and what is based on science and research.
 
Thanks to all for the information. At this point I have decided to go with NAS4Free and ZFS for my NAS. I will experiment with Storage Spaces and SnapRAID to back up my ZFS box.
 
Are you going to back up your ZFS box with SnapRAID? There is no research showing that SnapRAID is safe, and even the storage solutions that vendors are trying hard to make safe are not (the CERN studies show that even very large, expensive storage solutions are not safe, so why would home-brewed SnapRAID, created by one or two(?) developers, be safe?). So to me it sounds less than optimal to back up your provenly safe ZFS storage with SnapRAID, which has not been researched. The backup should be safe, and SnapRAID is not.

Here are a lot of links on data corruption. Read them before you make a choice:
http://en.wikipedia.org/wiki/ZFS#Data_integrity
 
My plan is as follows:

Workstations, laptops, etc...

will back up to NAS4Free with ZFS,

which will back up to a Windows 8.1 box running Storage Spaces (ReFS with either parity or a two-way mirror).


The Storage Spaces box is there just in case lightning strikes the ZFS box (it will be located offsite). I am going to spend some time testing to see if a Windows 8.1 box with Storage Spaces will work as a backup of the backup; I do not want to pony up $1,100 or so for the hard drives I would need for a second NAS4Free box unless Storage Spaces turns out to be a bust.

Thanks for the feedback as I really appreciate it.

P.S. It seems that StableBit might be working on getting their drive-pooling software working with ReFS, and if so I would give that a try as well.
 
I have 4 new WD 4TB SAS drives in an MD1000 connected to a Dell R220 through a SAS 6/E HBA. I can see all of the drives and format them within Disk Management, or use Storage Spaces. I want to set up a ReFS storage space using parity, but whenever I select parity, the file system reverts back to NTFS. If I set mirror, it will let me use ReFS. How can I get it to let me select parity with ReFS?
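
In case the GUI is just being restrictive, this is the PowerShell I plan to try next (cmdlets are from the in-box Storage module; the pool and disk names are made up, and this assumes a single storage subsystem):

Code:
# Pool the four SAS disks, create a parity space, and format it ReFS directly
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ParityVD" `
    -ResiliencySettingName Parity -UseMaximumSize

Get-VirtualDisk -FriendlyName "ParityVD" | Get-Disk | Initialize-Disk -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS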
 
Storage Spaces is a logical volume manager; it doesn't do any data integrity checks (officially).

Look at ReFS, as it does what you want (ZFS is a bad example, as it's an LVM and a filesystem glued together).

You're confusing copy-on-write, which does in-place updates (so it's both dangerous and slow), with redirect-on-write, which is both safe (no data is overwritten) and fast (a single write instead of read-write-write).

NetApp has WAFL, Nimble has CASL, and StarWind has LSFS.

All of them do hash summing and redirect-on-write-style log-structuring.

All of them can support any commodity file system on top (on WAFL, iSCSI and FC LUNs are just files on WAFL partitions; CASL and LSFS manage them directly as block devices).

ZFS loses to WAFL in terms of writes, snapshot efficiency, and deduplication implementation,

so "champion" is "not always".

Storage pooling and Storage Spaces are storage management, not storage security.
When it comes to data integrity, you must look at:

- Realtime checksums on metadata and data (data is checked for corruption on every read)
- Copy-on-write filesystems (always-consistent filesystems, no offline chkdsk needed)

Currently there are three filesystems (besides special solutions like NetApp) that offer both:
- btrfs
- Windows ReFS (a possible and more secure successor to the quite old NTFS)
- ZFS

Among these ZFS is the current champion.
 
It cannot be used in virtualized environments, because ReFS does not support data integrity streams with running VMs.

+ no dedupe

ReFS = bummer

I was excited about ReFS before Windows 8/2012 officially came out, but it appears to be a pretty big disappointment in comparison. It's very young, too young some say, and its only valuable use is in a virtualized environment.
 
Storage Spaces + ReFS/NTFS is still ages behind ZFS (except maybe dedupe, which kind of sucks on ZFS).

Parity Storage Spaces write performance still sucks: Windows uses fixed-size stripes while ZFS uses variable-size stripes, resulting in a huge penalty for random writes with Storage Spaces + ReFS/NTFS.
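
To make the penalty concrete with assumed numbers: on a parity space with, say, a 256 KB fixed interleave across three data columns plus parity, a small random write that touches only part of a stripe forces a read-modify-write cycle (read old data, read old parity, write new data, write new parity), roughly four disk I/Os for one logical write. RAID-Z's variable stripe width turns every write into a full-stripe write, so that cycle never happens.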

I looked into Storage Spaces R1 when it was released and decided to stay away. I am now playing around with FreeNAS and NAS4Free for a new ZFS-based NAS build for my home, and I was wondering if I should be looking at WSE 2012 R2 with Storage Spaces as well.

If you work with Storage Spaces R2 I would appreciate your thoughts on it:

1) Is it now reliable enough to be used with my precious data?
2) Have the write speeds increased? (I remember R1 write speeds were terrible when parity and mirroring were used)
3) I would appreciate hearing your general thoughts on it. I intend to use this for general storage and as a media server; I will also use it as storage for a small virtualization box I just built.
4) How much better is ZFS at preserving data integrity? (I am worried about bitrot)

My household is all Windows machines and my technical skill level is decent for a hobbyist (I don't work with computers for a living).

Thank you for your attention and feedback.
 