Storage Spaces and ZFS

Hi There,

I've been reading a lot over the last few days about ZFS and its strengths, but there's not much out there comparing it to Storage Spaces. Yes, I understand ZFS is better than NTFS, but there's something about my setup that it seems not many people have talked about.

So the rig I built about 2 months ago was specifically intended for FreeNAS. I can't remember all the specs, but roughly:

Silverstone DS380 Case
C2550D4I Motherboard
16GB RAM (non-ECC)
6 Seagate 3TB Drives

I had the drives configured in RAIDZ2 and I'm backing up the data on my PC to it daily, about 8TB in total, consisting of various files from a few KB to 50GB.

Over a gigabit link, with ZFS I saw speeds peak at around 90-100 MB/s and slowly drop to 40 MB/s, which I found to be really slow, but that was during large file copies, which I don't do that often.

The problem I have is that EVERYONE says you will get terrible write speeds with Storage Spaces. I configured the same drives as a Parity Storage Space with a 100GB cache (that had to be done via PowerShell) and I also flipped a "switch" which makes Storage Spaces think it is battery backed, which removes a few limits Windows imposes. The resulting speeds were pretty damn good: a consistent 100-130 MB/s that never dipped below 90 MB/s. I haven't seen anyone talk about this yet and I'm pulling my hair out trying to decide between the two.

Overall I will probably go with FreeNAS, as I think I've now fixed the initial issue I had with timestamps not being copied over correctly, and mainly because I spent the extra money to make sure the hardware was FreeNAS compatible, but I wanted your opinions as I'm still deciding between the two.

Thanks

Kishan
 
ZFS is pooling + a new-generation filesystem.
Storage Spaces is pooling only; you can add ReFS, which brings some ZFS features like copy-on-write and checksums.

As an alternative to BSD/FreeNAS you can try OmniOS/Solaris, whose kernel SMB server is often faster than Samba.
 
What switch did you flip, and where did you set your cache? On another SSD? On the same pool of disks hosting your data? Did these have to be done at implementation, or were you able to make the changes after the pool and virtual disks were stood up?
 
Hiya,

Thanks for the replies, I will look into Solaris/OmniOS, thanks for the recommendation.

@CombatChrisNC

Right, so the switch I mentioned makes the storage pool think it is UPS backed. The command is: Set-StoragePool -FriendlyName POOLNAME -IsPowerProtected $true
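
To check the setting before and after, a minimal sketch (POOLNAME is a placeholder for your pool's friendly name, as above):

Code:
# POOLNAME is a placeholder; IsPowerProtected is False by default
Get-StoragePool -FriendlyName POOLNAME | Select-Object FriendlyName, IsPowerProtected

# Tell Storage Spaces the pool is on protected power, then verify
Set-StoragePool -FriendlyName POOLNAME -IsPowerProtected $true
Get-StoragePool -FriendlyName POOLNAME | Select-Object FriendlyName, IsPowerProtected

The usual caveat applies: with this set, writes are acknowledged from cache, so a power cut without a UPS can lose whatever was in flight.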

BEFORE I ran the command, I would begin a large file copy to the pool and it would start at around 100-105 MB/s. You could see in Task Manager that RAM usage would slowly climb to a peak of 8GB (out of the 16GB available). Once it hit 8GB you would immediately see the transfer speed start to slow down as Windows flushed the cache it had put into RAM (the speed slowly drops to around 60 MB/s or less), and RAM usage would fall from 8GB back to the 1.9-2GB it normally sits at.

AFTER I ran the command I started the same file copy. It started at around 110-130 MB/s and Windows used up to around 6GB of RAM or so, and only about halfway through a 650GB file copy did it start to throttle the speed down a bit to around 90 MB/s, where it stayed.

Now for the cache: I find the Server Manager GUI extremely frustrating, clunky and slow. By default the Storage Spaces section doesn't let you adjust the cache size, and as long as your "Media Types" are identified correctly as either SSD or HDD (half the time they're not and you have to set them via a PowerShell command), it will set a maximum cache size of 1GB on the pool itself. Using PowerShell to create the virtual disk you can set the cache size to 100GB; it wouldn't allow anything above that. This cache lives on the pool itself; from what I could see it takes the 100GB across all the drives.
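
Roughly the kind of PowerShell involved, as a sketch (the pool and disk names here are placeholders rather than my exact commands):

Code:
# 'PhysicalDisk3' is a placeholder; use this if a disk's MediaType is detected wrong
Set-PhysicalDisk -FriendlyName 'PhysicalDisk3' -MediaType HDD

# POOLNAME/DATA are placeholders; create the parity virtual disk with a 100GB write-back cache
New-VirtualDisk -StoragePoolFriendlyName POOLNAME -FriendlyName DATA `
    -ResiliencySettingName Parity -UseMaximumSize -WriteCacheSize 100GB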

After that last adjustment it never dipped below 100 MB/s over a 700GB file copy of various file sizes and types.

I initially had a 128GB SSD in the pool, but this restricts the pool size, which isn't made clear. I copied 650GB of data to it and all of a sudden the computer I was copying from said the destination was out of space. I checked on the server and NOWHERE did it say the pool was out of space; everything said it was OK and showed space available. Only after I checked each individual drive in the virtual disk settings did it show that there was 0 left on the SSD.

Lastly, I had to create the virtual disk with the 100GB cache from scratch (at implementation). I think you can adjust the cache size after VD creation but it's a bit difficult. I also did random "pull the plug" tests on the machine to see how it would react during a file copy; powered back on, the VD is still healthy.
 
With non-ECC memory I would suggest staying away from ZFS. There have been articles published on how ZFS scrubs with memory errors can write bad data over the entire pool. ZFS expects to be run on error-free memory.
 
With non-ECC memory I would suggest staying away from ZFS. There have been articles published on how ZFS scrubs with memory errors can write bad data over the entire pool. ZFS expects to be run on error-free memory.

Have you ever looked at your mcelog (which, if enabled, logs every single ECC correction) on servers with ECC RAM? My expectation is that if my RAM is not faulty and I am not overclocking, I will not experience a single ECC correction. I have many servers at work that have never had a single ECC correction in years of power-on time. With that said, at work I will continue to use ECC; at home, whatever is available. In both cases I have backups.
 
This has helped me. I'm not going to rebuild the whole thing for the improved cache, but this is nice.


-----------------------------------------------------------------------
CrystalDiskMark 4.0.3 x64 (C) 2007-2015 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 86.061 MB/s
Sequential Write (Q= 32,T= 1) : 1.048 MB/s
Random Read 4KiB (Q= 32,T= 1) : 0.792 MB/s [ 193.4 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 0.032 MB/s [ 7.8 IOPS]
Sequential Read (T= 1) : 156.861 MB/s
Sequential Write (T= 1) : 9.227 MB/s
Random Read 4KiB (Q= 1,T= 1) : 0.836 MB/s [ 204.1 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 0.055 MB/s [ 13.4 IOPS]

Test : 500 MiB [S: 11.5% (234.9/2049.8 GiB)] (x2)
Date : 2015/07/08 12:39:14
OS : Windows Server 2012 R2 Server Standard (full installation) [6.3 Build 9600] (x64)
Pre-Power Change - 500mb on 2TB pool



-----------------------------------------------------------------------
CrystalDiskMark 4.0.3 x64 (C) 2007-2015 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 91.756 MB/s
Sequential Write (Q= 32,T= 1) : 2.857 MB/s
Random Read 4KiB (Q= 32,T= 1) : 0.620 MB/s [ 151.4 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 0.103 MB/s [ 25.1 IOPS]
Sequential Read (T= 1) : 142.595 MB/s
Sequential Write (T= 1) : 18.664 MB/s
Random Read 4KiB (Q= 1,T= 1) : 0.677 MB/s [ 165.3 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 0.118 MB/s [ 28.8 IOPS]

Test : 500 MiB [S: 11.5% (234.9/2049.8 GiB)] (x2)
Date : 2015/07/08 12:49:59
OS : Windows Server 2012 R2 Server Standard (full installation) [6.3 Build 9600] (x64)
Post-Power Change - 500mb on 2TB pool
 
Glad to see it helped. I'm currently setting up FreeNAS again to see how it performs. There's a slight chance I might stick with Storage Spaces, but we'll see.

Storage Spaces was quite annoying because it couldn't see all the drives. It seems like some kind of glitch: drives connected to the same controller sometimes report the same unique ID, so Storage Spaces only sees one of the two drives. A reboot will sometimes fix it, although even when it sees the drive it sometimes won't add it to the pool, complaining about an unknown or incorrect parameter. Quite flaky, but eventually I got them all added, and while it was working the speeds were brilliant. We'll see how ZFS does.
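
If anyone else hits the same thing, a quick sketch for spotting the clashing IDs:

Code:
# Duplicate UniqueId values will show up next to each other
Get-PhysicalDisk | Sort-Object UniqueId |
    Format-Table FriendlyName, SerialNumber, UniqueId, CanPool -AutoSize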
 
I prefer and use Storage Spaces because the data is easily recovered and I like the features of ReFS. To recover the data, should your boot drive die (which happened to me), all you have to do is plug the drives into a compatible Windows PC that supports Storage Spaces. In my case I put a new drive in, reinstalled Windows and that was it; I had regained access to my data.
 
I prefer and use Storage Spaces because the data is easily recovered and I like the features of ReFS. To recover the data, should your boot drive die (which happened to me), all you have to do is plug the drives into a compatible Windows PC that supports Storage Spaces. In my case I put a new drive in, reinstalled Windows and that was it; I had regained access to my data.
Whereas booting a bsd/linux w/zol live disk is oh so hard. Both are easily recovered in that situation.
 
OK, so I tested FreeNAS again. It peaked at around 90-95 MB/s and dropped to around 60 MB/s; that was on a 6-disk RAIDZ2 with a 128GB SSD L2ARC. I've decided I'll give Windows a try, as it seems not much testing has been done with it, and although I know Microsoft are shit when it comes to stuff like this, I still have at least a little bit of faith in their intentions.

I need some help though: everything I've read so far says ReFS is not compatible with Parity, but I just did the following:

1.) Created a 6-disk storage pool
2.) Created a Parity virtual disk of 20TB
3.) Created a ReFS volume, assigned letter D (rough PowerShell sketch below)
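
Roughly what that looks like in PowerShell, as a sketch (pool and disk names are placeholders, not necessarily what I ran; 20TB on six 3TB drives implies a thin-provisioned disk):

Code:
# POOLNAME/DATA are placeholders
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName POOLNAME -StorageSubSystemFriendlyName 'Storage Spaces*' -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName POOLNAME -FriendlyName DATA `
    -ResiliencySettingName Parity -Size 20TB -ProvisioningType Thin
Get-VirtualDisk -FriendlyName DATA | Get-Disk | Initialize-Disk -PassThru |
    New-Partition -DriveLetter D -UseMaximumSize | Format-Volume -FileSystem ReFS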

Everything I've read so far says ReFS doesn't work with Parity, so what gives?

EDIT: Started a 450GB file copy to it a while ago, various file sizes and types, and it hasn't dipped below 100 MB/s yet.
 
I think you have a network-related problem. For your setup and use there is no need for an L2ARC.

Either your network card is not well supported or has a poor driver, or your share is not configured correctly, or FreeNAS is just slow (I have no experience with it, only OpenIndiana).
 
I have had similar experiences to kaplankishan with FreeNAS lately. I'm just not impressed with its performance. It seems relatively stable (as long as you don't update it), but I get better performance from an old Xeon box with 4 mechanical JBOD disks of varying sizes than I do from my FreeNAS box with four 4TB mechanical drives.
 
On my main setup a short while ago I wasn't using an L2ARC and I still had the same performance. I thought I'd add one anyway this time, but it didn't make a difference. I'm absolutely sure that my network is fine.

Just ran into a damn silly issue which I wasn't expecting: ReFS still has the same 256-character limit that NTFS does. Unbelievable.
 
Whereas booting a bsd/linux w/zol live disk is oh so hard. Both are easily recovered in that situation.

I have no reason to doubt you, but I also have no experience trying to recover my data on a zfs partition in a situation like that.
 
Write to the folks at bsdnow.tv; I bet they can get you squared away. They know ZFS really well and do consistent gigabit speeds with it over NFS. Also, I didn't look up your motherboard, but if it has a Realtek NIC or a cheap Broadcom one, BSD doesn't like them. Windows puts a lot of the work on the CPU instead of having the NIC do it all, so it isn't really affected.
 
OK, so I tested FreeNAS again. It peaked at around 90-95 MB/s and dropped to around 60 MB/s; that was on a 6-disk RAIDZ2 with a 128GB SSD L2ARC. I've decided I'll give Windows a try, as it seems not much testing has been done with it, and although I know Microsoft are shit when it comes to stuff like this, I still have at least a little bit of faith in their intentions.

Different implementations of ZFS can behave differently. OpenIndiana and ZFSonLinux are my preferences. Here is my bonnie++ output from a RAIDZ2 pool of WD Greens. I have no L2ARC.

Code:
                    ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
MADFILESRV  16000M     63  99 295319  83 181426  72   145  99 596663  88  243.4

As you can see, I get 596 MB/s on reads and 295 MB/s on writes. I will say this: ZFS loves large pools, and in particular striping vdevs within the pool will give you the type of sustained performance shown here. Instead of one vdev in RAIDZ2, try 2 vdevs (3 disks each) in RAIDZ1, not RAIDZ2, since you only have 6 disks.

So what you do is create the pool like this:
Code:
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 raidz c0t3d0 c0t4d0 c0t5d0

Try that and report back.
 
kac77,

Yeah, I was looking into that yesterday but it looked a bit confusing. I think I've wrapped my head around it now and am going to try it out; getting the software now. I might try installing napp-it on a headless OpenIndiana to give me a GUI.

A question though: you said to try 2 vdevs in RAIDZ, so I can create 2 vdevs of 3 disks each, but do they each need to be RAIDZ? Then do I create a new vdev which combines both of them in stripe mode? Apologies, bit confused.

EDIT: OK, I think I figured it out. I tried this in a VM of FreeNAS and it seems to halve the available space; is this correct? If so I definitely can't do that.
 
A zpool consists of at least one vdev. The command kac77 posted creates one zpool with 2 vdevs, each consisting of 3 disks in a RAIDZ1 configuration. The usable space resulting from that should be the same as 6 disks in RAIDZ2 (net capacity of 4 disks, i.e. two thirds; with six 3TB drives that's roughly 12TB usable either way).
Double the vdevs means double the IOPS, but RAIDZ2 offers much better survivability. Sequential transfer speeds should be comparable.
 
A question though: you said to try 2 vdevs in RAIDZ, so I can create 2 vdevs of 3 disks each, but do they each need to be RAIDZ? Then do I create a new vdev which combines both of them in stripe mode? Apologies, bit confused.

EDIT: OK, I think I figured it out. I tried this in a VM of FreeNAS and it seems to halve the available space; is this correct? If so I definitely can't do that.
No. Try making a RAIDZ vdev with 3 drives, then extending the pool with a second one.
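
In zpool terms, something like this (reusing kac77's example device names; purely illustrative):

Code:
# device names are placeholders from kac77's example
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0
zpool add tank raidz c0t3d0 c0t4d0 c0t5d0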
 
kac77,

Yeah, I was looking into that yesterday but it looked a bit confusing. I think I've wrapped my head around it now and am going to try it out; getting the software now. I might try installing napp-it on a headless OpenIndiana to give me a GUI.

A question though: you said to try 2 vdevs in RAIDZ, so I can create 2 vdevs of 3 disks each, but do they each need to be RAIDZ? Then do I create a new vdev which combines both of them in stripe mode? Apologies, bit confused.

EDIT: OK, I think I figured it out. I tried this in a VM of FreeNAS and it seems to halve the available space; is this correct? If so I definitely can't do that.

You need to add the second vdev. You should have the same space, or close to it, as you had before. When you run a zpool status it should look like this:

Code:
        TEST_POOL                  ONLINE       0     0     0
          raidz-0                  ONLINE       0     0     0
            c0t5000C500345D2343    ONLINE       0     0     0
            c0t5000C500345E343     ONLINE       0     0     0
            c0t5000C500345E43434   ONLINE       0     0     0

          raidz-1                  ONLINE       0     0     0
            c0t5000C500345E3243    ONLINE       0     0     0
            c0t5000C500345E6fdg4   ONLINE       0     0     0
            c0t5000C500345fgdgfd   ONLINE       0     0     0

Now the downside to this is that your fault tolerance only stays the same as before if the stars align: two failed disks in the same vdev will take down the whole pool. Really this is more of a test than anything, though. I run two 6-disk vdevs in RAIDZ2 at home.

Generally speaking, single-vdev performance is OK, definitely better than most. However, I find that performance doesn't give that wow factor until you start adding multiple vdevs to a pool.
 
Have you ever looked at your mcelog (which, if enabled, logs every single ECC correction) on servers with ECC RAM? My expectation is that if my RAM is not faulty and I am not overclocking, I will not experience a single ECC correction. I have many servers at work that have never had a single ECC correction in years of power-on time. With that said, at work I will continue to use ECC; at home, whatever is available. In both cases I have backups.

Eh, while that might work for you, it's not worth it for me. Anything mission-critical to me (a ZFS filestore that I'm depending on to store movies, music, etc.) gets ECC and a server CPU. It's just something I don't think about.

Memory errors happen more than you might think.
 
Eh, while that might work for you, it's not worth it for me. Anything mission-critical to me (a ZFS filestore that I'm depending on to store movies, music, etc.) gets ECC and a server CPU. It's just something I don't think about.

Memory errors happen more than you might think.

This
 
Right, so currently I've left everything on the Server 2012 R2 + Storage Spaces setup until I can get the RAIDZ pool with 2 vdevs figured out.

I'm running a VM of FreeNAS with 1 x 20GB HD for the OS and 6 x 20GB HDs for config testing. I tried kac77's method, which creates a single pool with 2 vdevs but seems to halve the available space, and I also tried creating 2 separate RAIDZ pools, but that just created 2 separate RAIDZ pools. Is there something I'm missing?
 
A ZFS pool built from 6 disks in two raidz1 vdevs is similar to a conventional RAID 50 setup, as it stripes data over two raidz1 vdevs. I would not suggest such a config, not in production nor at home.

Your ZFS options with 6 disks are mainly:
A pool with one raid-z1 vdev (5 data disks + 1 parity disk: two failed disks = pool lost)
A pool with one raid-z2 vdev (4 data disks + 2 parity disks: three failed disks = pool lost)

I would go with the Z2 option, which offers the sequential performance of 4 disks and the IOPS of one disk.


If this is a production system, e.g. an ESXi datastore, the common config would be
a pool with 3 mirror vdevs. This gives similar sequential performance but 3x the IOPS of a single disk.
Your capacity is 3 disks, with the other 3 disks used for redundancy. Up to three disks may fail, but only one per vdev.
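
In zpool terms (reusing the device names from kac77's example; purely illustrative):

Code:
# device names are placeholders
# one raid-z2 vdev
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
# or three mirror vdevs
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0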
 
A ZFS pool built from 6 disks in two raidz1 vdevs is similar to a conventional RAID 50 setup, as it stripes data over two raidz1 vdevs. I would not suggest such a config, not in production nor at home.

I'm definitely not suggesting he run essentially RAID 50 on those disks in production.
Really this is more of a test than anything, though. I run two 6-disk vdevs in RAIDZ2 at home.

However, he's having some performance issues with 1 vdev in RAIDZ2, so we need to figure out whether performance increases or not. If it doesn't, then the problem lies somewhere else. If it does, then at least he knows it's possible to achieve better performance.
 
don't use file copy as a benchmark for anything

here's why

http://blogs.technet.com/b/josebda/...good-idea-and-what-you-should-do-instead.aspx

intel i/o meter or new microsoft diskspd are your best friends

http://blogs.technet.com/b/josebda/...for-both-local-disks-and-smb-file-shares.aspx
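
for example, something along these lines (just a typical invocation, not gospel; tweak block size, threads and file size to taste):

Code:
# D:\testfile.dat is a placeholder target on the storage space volume
# 60-second test: 64K blocks, 100% writes, 4 threads, queue depth 8, 10GB test file, caching disabled
.\diskspd.exe -c10G -d60 -t4 -o8 -b64K -w100 -Sh D:\testfile.dat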

we also use oracle vdbench as it's very easy to simulate proper load for systems doing spoofing and deduplication

http://www.oracle.com/technetwork/server-storage/vdbench-downloads-1901681.html

in any case storage spaces suck badly

mostly because unlike zfs they are not aware of write sizes, so instead of using erasure coding or parity or whatever you have to use blatant replication (mirroring) for performance reasons

refs has checksums but not for running virtual machines, and it kind of sucks

refs can heal itself but the lack of dedupe takes away all the fun

freebsd + zfs = you're good :)

 
Wish I could do more than bookmark this thread, didn't know about diskspd, thanks!
 
There have been articles published on how ZFS scrubs with memory errors can write bad data over the entire pool.
I have been following ZFS very closely, even before it was released. I've read most of the literature, articles and blogs (by the ZFS architects) and I have never seen any such article. I have read a blog post on a FreeBSD(?) forum where someone speculated that ZFS might write bad data over the entire pool when non-ECC RAM goes faulty, but I have never seen it reported in real life. Never ever. I have been in this discussion before and dismissed it as pure speculation; they say something like:

"If you have faulty RAM then ZFS might write bad data" - but guess what, if you have faulty RAM then NTFS/ext4 or any other filesystem might write bad data all over the disk too, but those people never thought of that. Instead they conclude that ZFS is a major no-no with non-ECC, even though that problem applies to all other filesystems as well. It was a really stupid discussion, so I left it.

Can you link to any articles where faulty non-ECC RAM actually trashed the disks? I've never seen any.
 
I have been following ZFS very closely, even before it was released. I've read most of the literature, articles and blogs (by the ZFS architects) and I have never seen any such article. I have read a blog post on a FreeBSD(?) forum where someone speculated that ZFS might write bad data over the entire pool when non-ECC RAM goes faulty, but I have never seen it reported in real life. Never ever. I have been in this discussion before and dismissed it as pure speculation; they say something like:

"If you have faulty RAM then ZFS might write bad data" - but guess what, if you have faulty RAM then NTFS/ext4 or any other filesystem might write bad data all over the disk too, but those people never thought of that. Instead they conclude that ZFS is a major no-no with non-ECC, even though that problem applies to all other filesystems as well. It was a really stupid discussion, so I left it.

Can you link to any articles where faulty non-ECC RAM actually trashed the disks? I've never seen any.

So, I haven't done any research on this for a while, but it looks like there are quite a few counterpoints arguing that ZFS is fine with non-ECC memory, or at least as safe as any other filesystem.
http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/


The original point I made was probably based on this or other similar forum posts:
https://forums.freenas.org/index.php?threads/ecc-vs-non-ecc-ram-and-zfs.15449/
 
why all this hating on ECC?

if (isAServer()){
    cout << "USE ECC" << endl;
}

:)
 