Storage Spaces 3x10 TB + 120 GB SSD Cache, ReFS

Rudde93
Hello!

So I wanted to test out Storage Spaces in Windows Server 2016 for my new NAS, since I want to be able to expand storage dynamically (even with irregular drive sizes) while keeping parity.

I expected this to be very simple: do it all in the GUI and be done. That was not the case.

I don't have much Windows Server 2016 experience, and I really have no idea how to properly configure this.

I would like to know whether to use a 512 or 4096 byte sector size for this kind of setup, and how to configure it all with parity while keeping it possible to expand in the future. I did a GUI configuration with the three 10 TB drives and made a parity virtual disk, where I got horrific write speeds of under 40 MB/s. I was told the only way to combat this in Storage Spaces is with an SSD write-back cache, but I couldn't figure out how to add the SSD to my pool and use it for the virtual disk, or as cache for any virtual disk for that matter.

Is there anyone with experience setting up Storage Spaces in PowerShell who would share their knowledge about how to do a best-practices setup? :S
 
You should be able to do it via the GUI or via PowerShell. About the only way to get decent speed out of Storage Spaces is to use tiering. For that you need at least two matching SSDs; the I/O then happens on the SSDs and the data is moved to the big drives in the background.
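One gotcha before you build tiers: Storage Spaces will only put a drive into an SSD tier if it actually reports its MediaType as SSD. A quick check looks roughly like this (the friendly name below is just an example, adjust to whatever Get-PhysicalDisk shows on your box):

# See what the pool candidates report
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, CanPool, Size

# If an SSD shows up as Unspecified once it's in the pool, you can tag it manually
Set-PhysicalDisk -FriendlyName "ExampleSSD01" -MediaType SSD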
 
I have a 5x4TB storage space in a Windows 10 VM, formatted ReFS with 2-way mirroring, that I've pushed to about 400MB/sec write just fine...

I fried one of my 10Gig-e cards last week, so I can't reproduce it right now, but I will get some numbers when my replacement card arrives.
 
Alright, new 10gig card is in... I was definitely wrong about the 400MB/sec, that must have been when I was playing with it on SSDs. With my 5 disks and 2-way replication, I can read/write to it around 160MB/sec.

[benchmark screenshot: OEBhgbq.png]


It definitely doesn't like writes larger than 1MB, so I cut it off after it completed the 2MB write test.

Now that all of this is up and running again, I'll see if I can pass my ioDrives back through and do the test on two of those...
 
I wasn't able to turn on ReFS with 3 drives in a parity setup in Windows 10 Pro. Write speeds were slow as hell, about 25MB/s.
 
I get decent enough speeds with a simple RAID1. Any time you do a parity-based config you'll get crap write speeds but good read speeds. It's just the nature of it.

The odd thing was that until 4MB reads it didn't seem to read from both drives, but after that you can clearly see it starts pulling reads from both drives. Either way, for Plex and data storage it's fine for me.

[benchmark screenshot: Capture.JPG]
 
We just purchased a DataON (SOFS) solution: 4 x 70-drive enclosures, 10% SSD and the rest 8TB SED drives (encrypted with BitLocker), connected to 3 servers with two 12Gb SAS connections to each enclosure and 3 x 10Gb Ethernet each.

The intended purpose was Hyper-V (dev/test) and archive data, but the performance has been better than our VNX2 arrays (8Gb FC) and nearly on par with a VMAX 10K (built for capacity, not performance, with 3 tiers).

You can only enable and set the size of the SSD write-back cache via PowerShell:

https://redmondmag.com/articles/2013/10/28/ssd-write-back-cache.aspx?m=1

It is recommended you use mirroring, tiering, and SSD cache for best performance and reliability.
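For the OP's case, enabling the cache boils down to something roughly like this in PowerShell (pool name, disk name, and sizes are placeholders; note the cache inherits the space's resiliency, so a parity space may want more than one fast device behind it):

# Add the SSD to the existing pool so it can back the write cache
$ssd = Get-PhysicalDisk -FriendlyName "ExampleSSD"
Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks $ssd

# Create the parity space with an explicit write-back cache size
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "ParitySpace" -ResiliencySettingName Parity -ProvisioningType Thin -Size 18TB -WriteCacheSize 50GB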
 

That's a hell of a deployment for a sandbox and archive! That puts a lot of people's productivity SAN to shame in terms of speed and space.
 
If the solution works well and is stable/reliable, the plan is to move to RDMA interfaces (InfiniBand) with hyperconverged (Storage Spaces Direct) for high-IOPS requirements and SOFS for the rest.

But to the poster's point, let me know if SSD cache helps with parity performance. We avoided it due to the reported poor performance. We are doing three-copy mirroring, so we lose quite a bit of space; it would be nice to use parity to regain some capacity. These solutions are still 1/4 the price of a commercial EMC VNX/Unity SAN for the same usable capacity.
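For anyone comparing the two on the PowerShell side, the knob is -PhysicalDiskRedundancy; a rough sketch (pool name and sizes are made up):

# Three-way mirror: best write performance, roughly 3x raw capacity per usable TB
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Mirror3Way" -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2 -Size 10TB

# Dual parity: far better capacity efficiency, but slow writes without cache or tiers
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "DualParity" -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -Size 10TB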
 
Just put together a storage space with 3 of my ioDrives using ReFS.

Two-way mirror.
[benchmark screenshot: 3w5pdBX.png]


Parity:
[benchmark screenshot: 5MWBnYa.png]


No resiliency (clearly it doesn't stripe):
[benchmark screenshot: PvCS8qo.png]


For comparison here's the same 3 ioDrives in a normal striped dynamic disk volume on the same host (no storage spaces):
[benchmark screenshot: 30jmjyg.png]



Specs on these drives are 1.5GB/sec large-block read each, 1.3GB/sec write.

I might not actually be able to hit peak write with this tool; I may need a lot more threads. I usually use the fio benchmark so I can tune to the system and find the sweet spot, but since we're using ATTO here, that's what I used.

Also, this is all running in a Windows 10 VM with the drives passed through. I don't have the BIOS on the box set for max perf, so there could be some CPU throttling affecting results.

-- Dave
 
Transferring my data off my disk-based storage space so that I can mess with it tomorrow...

Getting consistent read speeds off of it, but my ioDrives are getting warm =)

[benchmark screenshot: QnaY2pk.png]


It's okay though, the ioDrives are good for 100C, so I'm not really worried.

-- Dave
 
So I moved my storage over to the ioDrives, now I'm going to test speeds of the disk array with 4 and 5 disks in the storage space... I have this strange feeling that 2-way mirroring will be faster on 4 disks vs 5, but I want to confirm.
 
Alright, did a bunch of tests. Not gonna do an SSD cache; going to do a full-on SSD tier via PowerShell instead.

Confirmed a few things.

1) LSI RAID card in JBOD mode is not the same as SATA passthrough "IT" mode in terms of large block performance. I cross-flashed to IT mode and now I have consistent results.
[benchmark screenshots: PI8vcLD.png, rHbcePL.png]


2) No resiliency mode does not stripe by default:
[benchmark screenshot: DeFvvoz.png]


3) 4-disk and 5-disk 2-way mirrors have near-identical perf until you get to larger reads:
[benchmark screenshots: 81BmfOS.png, IkDG5MH.png]


BUT, with 4 disks you can set up a 2-column stripe and double your speeds over a 5-disk mirror.
[benchmark screenshot: a0ghLEU.png]
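If anyone wants to reproduce that, it's just the column count on the virtual disk; something like this (pool and space names are from my setup, yours will differ):

# 2 columns x 2 data copies = all 4 disks active on every read/write
New-VirtualDisk -StoragePoolFriendlyName "StoragePool" -FriendlyName "Mirror2Col" -ResiliencySettingName Mirror -NumberOfColumns 2 -NumberOfDataCopies 2 -UseMaximumSize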


I'm now setting up two of my ioDrives as a mirrored SSD tier, with 4 drives beneath it.

-- Dave
 
No way. Built an SSD tier and even had write caching enabled on the SSDs, and the perf was worse than a 4-disk mirror with striping...

[benchmark screenshot: Qg6KzTw.png]


This is how I built it... can you see anything wrong in there?

# Pool every available disk that reports a usable media type
$pd = (Get-PhysicalDisk -CanPool $True | Where-Object MediaType -NE Unspecified)
New-StoragePool -PhysicalDisks $pd -StorageSubSystemFriendlyName "Windows Storage*" -FriendlyName "StoragePool"

# Define the SSD and HDD tiers
$ssdTier = New-StorageTier -StoragePoolFriendlyName "StoragePool" -FriendlyName SSDTier -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "StoragePool" -FriendlyName HDDTier -MediaType HDD

# Create the tiered, mirrored virtual disk with a 1 GB write-back cache
New-VirtualDisk -StoragePoolFriendlyName "StoragePool" -FriendlyName TieredSpace -StorageTiers $ssdTier, $hddTier -StorageTierSizes 1090GB, 7260GB -ResiliencySettingName Mirror -WriteCacheSize 1GB

# Initialize, partition, and format as ReFS
Get-VirtualDisk TieredSpace | Get-Disk | Initialize-Disk -PartitionStyle GPT
Get-VirtualDisk TieredSpace | Get-Disk | New-Partition -DriveLetter E -UseMaximumSize
Format-Volume -DriveLetter E -FileSystem ReFS -Confirm:$false
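Before I rebuild it, a couple of read-only checks that might show whether those hard-coded tier sizes are even sane (assuming the tier names from the script above):

# List the tier templates and their media types
Get-StorageTier | Select-Object FriendlyName, MediaType

# Maximum tier sizes the pool can actually back for a mirrored space
Get-StorageTierSupportedSize -FriendlyName SSDTier -ResiliencySettingName Mirror
Get-StorageTierSupportedSize -FriendlyName HDDTier -ResiliencySettingName Mirror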


Thanks,

-- Dave
 
Run diskspd with 4KB 100% read, 100% write, and mixed read/write and you'll be amazed.

tl;dr throughput isn't Storage Spaces' weakest point ;)
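Something along these lines, if you have diskspd.exe handy (the drive letter, test-file path and size are placeholders):

# 60-second 4K random read test against a 10 GB test file, caches disabled, latency stats on
.\diskspd.exe -c10G -b4K -d60 -t8 -o32 -r -w0 -Sh -L E:\spacetest.dat

# Same thing as 100% write, then as a 70/30 read/write mix
.\diskspd.exe -c10G -b4K -d60 -t8 -o32 -r -w100 -Sh -L E:\spacetest.dat
.\diskspd.exe -c10G -b4K -d60 -t8 -o32 -r -w30 -Sh -L E:\spacetest.dat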
 
Since the latest updates, I'm not sure what weak points Storage Spaces has now...

I put 4 of my ioDrives in now with a 2-column stripe and it hauls ass...

[benchmark screenshot: 70bqqoE.png]
 
Hello! Thanks for all the great responses, I set up my system before I got any replies here.

The system is now (unfortunately) populated with 15.2 TB of data.

Mistakes I have made: I used fixed instead of thin provisioning. Huge mistake; it's now impossible for me to move data over from a virtual disk with an undesirable config to a new config.
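For reference, this is roughly what I should have checked first, and what I'd use for the next virtual disk (pool and disk names are just examples):

# See how existing virtual disks were provisioned and what they actually consume
Get-VirtualDisk | Select-Object FriendlyName, ProvisioningType, Size, FootprintOnPool

# Next time: thin provisioning, so a replacement space can be created in the same pool before the old one is emptied
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Data" -ResiliencySettingName Parity -ProvisioningType Thin -Size 20TB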

You can use a partition of your system disk in your storage space, so I used the 120 GB SSD plus 50 GB of my system-drive SSD for cache, and gave my virtual disk 50 GB of write-back cache (which did solve my write-speed issues). However, everything locks up when that cache is full: no I/O, everything on the system freezes until it can write again. I can't read, I can't do anything that has anything remotely to do with that device.

I also figured out that I can't change the write-back cache size after it's set!

I needed two SSD devices of at least 50 GB each to give the disk a 50 GB cache, because the cache follows the same redundancy pattern as the virtual disk. So on a single-parity virtual disk I need one-disk fault tolerance even at the cache level. (I do not want this, the data isn't THAT important; is there any way to override this and just accept that the cache doesn't have the tolerance?) I imagine this would be horrible with a dual-parity virtual disk.
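The only things I've found so far are verifying what the cache ended up as, and the blunt option of skipping the write-back cache entirely at creation time (which of course brings the slow parity writes back). A sketch, with placeholder names:

# WriteCacheSize is visible here, but as far as I can tell it's read-only after creation
Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, WriteCacheSize

# Creating a parity space with no write-back cache at all
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "NoCacheParity" -ResiliencySettingName Parity -ProvisioningType Thin -Size 20TB -WriteCacheSize 0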

I do not know what to do anymore. The system is expensive and now kind of useless, since it freezes all the time because the cache is full...


Also wondering if it is possible to open the Windows 10 Storage Spaces GUI on Windows Server 2016? It seems way nicer, and the GUI in Windows Server 2016 seems very limited.
 

See, in my dev environment I'm running three VNX 5400s with 88 600GB 10K physical drives, 36 200GB SSDs, and 11 flash drives (for FAST Cache). We've set up a couple of pools with FAST Cache and the SSDs weighted heavier in the pool to serve our connected SQL boxes. Those are 32-core with 384 GB of RAM running dual dedicated 8Gb fiber connections. (We're more concerned with high-speed transaction reads and writes than with big-file throughput.)

I need to throw a good Iometer run at a lab box and see what kind of speed/performance we are getting, like the results above. I know we get insane IOPS right now.
 