Storage Spaces Windows 2016

Freak1

Hi.

I currently have a Storage Space with 24x 3TB HGST drives in dual parity.

It's working fine, but it's really slow.

I built a new server with room for three M.2 NVMe drives and eight SATA drives. I want to get 8x 12TB HGST/Seagate drives and two 250GB Samsung 960 EVO M.2 drives for the SSD cache, with the last M.2 slot for the OS.

What I can't seem to figure out is whether I need three SSD cache drives to run the pool as dual parity, or whether that requirement doesn't apply to the cache.
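For reference, here's roughly what I'm planning to run, as a sketch (pool and disk names are placeholders, and the three-SSD question is exactly the part I'm unsure about):

# Pool every eligible disk (placeholder names throughout)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Dedicate the SSDs to the journal (the write-back cache)
Get-PhysicalDisk | Where-Object MediaType -eq SSD | Set-PhysicalDisk -Usage Journal

# Dual parity = -PhysicalDiskRedundancy 2, with a write-back cache on the journal SSDs
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Data" -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -WriteCacheSize 10GB -ProvisioningType Fixed -UseMaximumSize

From what I've read (unverified), the write-back cache on a dual-parity space is itself mirrored three ways, which would mean three journal SSDs rather than two, but I'd love confirmation.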
 
How slow are we talking? Just curious; also, any experiences you want to share? I was debating with myself between Storage Spaces in Server 2016 with ReFS and StableBit DrivePool, and in the end went with StableBit DrivePool. On my pool of 3x 3TB WD Red (non-RAID, off the SATA controller) in an HP MicroServer, I get constant write speeds of 49-55 MB/s and slightly higher reads within the subnet (FTP/SFTP transfers as the test method).

(In the end I went with DrivePool for convenience/flexibility; that was the deciding factor, since with Storage Spaces you're kind of stuck with it and it's not very easy to change the config.)
 
100-300 MB/s read and 20 MB/s write.

Will DrivePool give me the same functionality? And good speed?
 
If you insist on sticking with Windows as the underlying OS, then get yourself a current Areca or LSI RAID card and do RAID 6. Storage Spaces with parity is painfully slow, and dual parity even more so. You will still be able to lose two drives and stay up, and you will get great speeds.
 


This is something I've been itching to see pulled off in the real world.

EDIT: I mean, obviously you won't be doing this in a cluster based on your post, but the setup for the single server still seems relevant.
 
I ran a test with three SSDs and three 3TB HGST disks I had lying around. I got something like 300 MB/s read and write with parity.
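If you want numbers that are more repeatable than file copies, Microsoft's DiskSpd is handy for this kind of test. A sketch (flags from memory, so double-check them against the DiskSpd docs):

# 60s run after a 5s warm-up: 64K blocks, 4 threads, 8 outstanding I/Os each,
# 50% writes, with latency stats, against a 10GB test file on the parity volume
.\diskspd.exe -c10G -d60 -W5 -b64K -t4 -o8 -w50 -L E:\testfile.dat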
 
Storage Spaces seemed slower to me as well, even compared to fake RAID.

I don't have anything against software-defined storage; I just don't know why the performance was so bad.
 
Just about any software RAID is going to be on the slow side without a RAM or SSD cache when you are working with parity.

Storage Spaces with parity was really meant to be used as a backup target, with some SSD cache to speed up writes. Application data should be on mirrors.
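For example, you can carve both out of the same pool; a quick sketch with made-up names and sizes:

# Parity space as a backup target, two-way mirror for application data
New-Volume -StoragePoolFriendlyName "Pool1" -FriendlyName "Backup" -FileSystem ReFS -ResiliencySettingName Parity -Size 20TB
New-Volume -StoragePoolFriendlyName "Pool1" -FriendlyName "AppData" -FileSystem ReFS -ResiliencySettingName Mirror -Size 2TB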
 
I'm going to try it out. Can anyone tell me whether using only two NVMe drives as the cache for dual parity will work?
 
I have not had good results with regard to write speeds when using Storage Spaces with parity either. Granted, my setup consisted of four 8TB drives in a Mediasonic USB 3.0 enclosure. I don't know why your 24-drive array would be quite so slow.

I look forward to seeing benchmarks from your new build!
 

You will absolutely LOVE Stablebit DrivePool & Stablebit Scanner.

If you use a couple of SSDs as cache drives, your speed limit is pretty much the limit of your network connection.

I think you will enjoy the control that file-level duplication provides, especially in such a large installation. I've got ~48 drives in two separate pools and it's been absolutely wonderful for years.

Give it a shot, it's inexpensive and there is a free trial.

~RF
 
So Storage Spaces gives me 800 MB/s read but only 20-40 MB/s write. That is with 6x 12TB and 2x SSD, in dual parity.


I also tried StableBit DrivePool, though I'm not sure how its redundancy works. I get 200-300 MB/s write without the SSDs, so it's much better. But as I understand it, matching dual parity means keeping each file in triplicate (the original plus two copies), so I only get 1/3 of the space? With Storage Spaces I will lose 2 disks to parity; with DrivePool, 5.3 disks out of my 8.
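Spelling out my math, with 8x 12TB = 96TB raw:
- Storage Spaces dual parity: usable = (8 - 2) x 12TB = 72TB, so 2 disks' worth lost.
- DrivePool with 3x duplication (original plus two copies): usable = 96TB / 3 = 32TB, so 64TB = 5.3 disks' worth lost.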
 

Personally, I can also really vouch for StableBit DrivePool; it's such a hassle-free solution. I've only been using it for the trial period so far, but I'm definitely going to purchase it and keep using it. I previously reported SSH transfer (write) speeds, but I later tested with network shares, and that way I can copy at pretty much the normal WD Red HDD transfer speed, jumping between 90-130 MB/s depending mostly on the files. This is without any SSD cache disk, obviously.

I also installed the Ordered File Placement balancer plugin so it fills up one HDD at a time, which I consider a pro for a few reasons: less power use, since the other drives can sleep (and therefore less noise and less total wear), and one disk takes most of the writes before the pool fills up completely, meaning the most-written HDD is likely to give in first instead of the whole batch starting to go bad around the same time. That makes it easier to manage in the long run.

EDIT: Make sure to enable the network I/O boost performance setting; it does help a little.

Backup-wise, I prefer using SyncBack from 2BrightSparks rather than mirroring inside DrivePool, since I'd rather do a manual offline backup to USB HDDs every once in a while. (I'm at a relatively low capacity of 9TB at the moment, and for me it's not super critical if my backup is a month or so old. As long as my capacity needs don't climb over 16TB and I can stick to 2x 8TB USB drives for backup, I consider it reasonable and manageable from a cost and convenience perspective; three or more USB drives would be a bit of a PITA, though.)
 
If you have a UPS (which I suggest you should, with that many HDDs and that much data to store), then try running this in PowerShell: Set-StoragePool -FriendlyName _POOL_NAME_HERE_ -IsPowerProtected $true

...although I don't know if that command is still valid in 2016 or 10.

I also had that slow write speed with parity, and adding an SSD cache didn't help much at all. My gigabit network became the bottleneck between computer and server.

I'm doing this on my mini backup server with four 4TB HDDs, running a small UPS that is capable of shutting the server down properly.
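If it does run, you can check whether the flag stuck with (same placeholder pool name):

Get-StoragePool -FriendlyName _POOL_NAME_HERE_ | Select-Object FriendlyName, IsPowerProtected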
 