
Storage Spaces 3x10 TB + 120 GB SSD Cache, ReFS

Discussion in 'SSDs & Data Storage' started by Rudde93, Jan 11, 2017.

  1. Rudde93

    Rudde93 Limp Gawd

    Messages:
    136
    Joined:
    Nov 19, 2010
    Hello!

    So I wanted to test out Storage Spaces in Windows Server 2016 for my new NAS, as I wanted to expand storage dynamically (with mismatched drive sizes?) while keeping parity.

    I expected this to be very simple, all done in the GUI, but that was not the case.

    I don't have much Windows Server 2016 experience, and I really have no idea how to configure this properly.

    I would like to know whether to use a 512 or 4096 byte sector size for this kind of setup, and how to configure it all with parity while keeping it possible to expand in the future. I did a GUI configuration with the three 10 TB drives and made a parity virtual disk, where I got horrific write speeds of under 40 MB/s. I was told the only way to combat this in Storage Spaces is with an SSD write-back cache, but I couldn't figure out how to add the SSD to my pool and use it as cache for that virtual disk, or as cache for any virtual disk for that matter.

    Is there anyone with experience in PowerShell and Storage Spaces who would share their knowledge about how to do a best-practices setup? :S
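    From what I've pieced together so far, I think the rough shape of it in PowerShell is something like the sketch below, but I haven't gotten it to work, and the pool/disk names and sizes here are just placeholders, so please correct me:

    # Rough sketch only -- pool/disk names and sizes are made up
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "Pool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks -LogicalSectorSizeDefault 4096
    # Mark the SSD (it has to be a pool member) as journal so it gets used for write-back cache
    Set-PhysicalDisk -FriendlyName "CacheSSD" -Usage Journal
    # Thin-provisioned parity space with an explicit write-back cache size
    New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "ParitySpace" -ResiliencySettingName Parity -ProvisioningType Thin -Size 18TB -WriteCacheSize 8GB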
     
  2. bigdogchris

    bigdogchris Wii was a Novelty

    Messages:
    16,565
    Joined:
    Feb 19, 2008
  3. DeChache

    DeChache [H]ardness Supreme

    Messages:
    6,281
    Joined:
    Oct 30, 2005
    You should be able to do it via the GUI or via PowerShell. About the only way to get decent speed out of Storage Spaces is to use tiering. For that you need at least two matching SSDs; that basically makes the I/O happen on the SSDs, with the data moved to the big drives in the background.
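    If I remember right, that background move is handled by the "Storage Tiers Optimization" scheduled task, and you can kick it off by hand on a tiered volume (drive letter here is just an example):

    # Run tier optimization manually on a tiered volume
    Optimize-Volume -DriveLetter E -TierOptimize
    # or the older equivalent
    defrag.exe E: /G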
     
  4. acquacow

    acquacow Limp Gawd

    Messages:
    273
    Joined:
    Mar 7, 2016
    I have a 5x4TB storage space in a Windows 10 VM, formatted ReFS with 2-way mirroring, that I've pushed to about 400MB/sec write just fine...

    I fried one of my 10Gig-e cards last week, so I can't reproduce it right now, but I will get some numbers when my replacement card arrives.
     
  5. acquacow

    acquacow Limp Gawd

    Messages:
    273
    Joined:
    Mar 7, 2016
    Alright, new 10gig card is in... I was definitely wrong about the 400MB/sec; that must have been when I was playing with it on SSDs. With my 5 disks and 2-way replication, I can read/write to it at around 160MB/sec.

    [IMG]

    It definitely doesn't like writes larger than 1MB, so I cut it off after it completed the 2MB write test.

    Now that all of this is up and running again, I'll see if I can pass my ioDrives back through and do the test on two of those...
     
  6. opfreak

    opfreak Limp Gawd

    Messages:
    424
    Joined:
    Nov 17, 2005
    I wasn't able to turn on ReFS with 3 drives in a parity setup in Windows 10 Pro. Write speeds were slow as hell, 25 MB/s.
     
  7. Budwise

    Budwise [H]ard|Gawd

    Messages:
    1,738
    Joined:
    Dec 7, 2004
    I get decent enough speeds with a simple RAID 1. Any time you do a parity-based config you'll get crap write speeds but good read speeds. It's just the nature of it.

    The odd thing was that it didn't seem to read from both drives until the 4MB reads, but after that you can clearly see it starts pulling reads from both drives. Either way, for Plex and data storage it's fine for me.

    Capture.JPG
     
  8. dobieg2002

    dobieg2002 Limp Gawd

    Messages:
    220
    Joined:
    May 7, 2007
    We just purchased a DataON (SOFS) solution: 4 x 70-drive enclosures, 10% SSD and the rest 8 TB SED drives (encrypted with BitLocker), connected to 3 servers with 2 x 12Gb SAS connections to each enclosure and 3 x 10Gb Ethernet each.

    The intended purpose was Hyper-V (dev/test) and archive data, but the performance has been better than our VNX2 arrays (8Gb FC) and nearly on par with a VMAX 10K (built for capacity, not performance, with 3 tiers).

    You can only enable and set the size of the SSD cache via PowerShell:

    https://redmondmag.com/articles/2013/10/28/ssd-write-back-cache.aspx?m=1

    It is recommended you use mirroring, tiering, and SSD cache for best performance and reliability.
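    From memory it can only be set when the virtual disk is created, and you can confirm it afterwards; something along these lines (pool/space names are just examples):

    # WriteCacheSize can only be specified when the virtual disk is created
    New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Space" -ResiliencySettingName Mirror -UseMaximumSize -WriteCacheSize 10GB
    # Check what a virtual disk ended up with
    Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, WriteCacheSize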
     
  9. CombatChrisNC

    CombatChrisNC Gawd

    Messages:
    633
    Joined:
    Apr 3, 2013
    That's a hell of a deployment for a sandbox and archive! That puts a lot of people's production SANs to shame in terms of speed and space.
     
  10. dobieg2002

    dobieg2002 Limp Gawd

    Messages:
    220
    Joined:
    May 7, 2007
    If the solution works well and is stable/reliable, the plan is to move to RDMA interfaces (InfiniBand) with hyperconverged Storage Spaces Direct for high-IOPS requirements and SOFS for the rest.

    But to the poster's point, let me know if the SSD cache helps with parity performance. We avoided parity due to the reported poor performance. We are doing 3-copy mirroring, so we lose quite a bit of space; it would be nice to use parity to regain some capacity. These solutions are still 1/4 the price of a commercial EMC VNX/Unity SAN for the same usable capacity.
     
  11. acquacow

    acquacow Limp Gawd

    Messages:
    273
    Joined:
    Mar 7, 2016
    Just put together a storage space with 3 of my ioDrives using ReFS.

    Two-way mirror:
    [IMG]

    Parity:
    [IMG]

    No resiliency (clearly it doesn't stripe):
    [IMG]

    For comparison, here's the same 3 ioDrives in a normal striped dynamic disk volume on the same host (no Storage Spaces):
    [IMG]


    Specs on these drives are 1.5 GB/sec each for large-block reads and 1.3 GB/sec for writes.

    I might not actually be able to hit peak write with this tool; I may need a lot more threads. I usually use the fio benchmark so I can tune to the system and find the sweet spot, but since the thread is using ATTO, that's what I used.

    Also, this is all running in a Windows 10 VM with the drives passed through. I don't have the BIOS on the box set for max perf, so there could be some CPU throttling affecting results.

    -- Dave
     
    Last edited: Feb 3, 2017
  12. acquacow

    acquacow Limp Gawd

    Messages:
    273
    Joined:
    Mar 7, 2016
    Transferring my data off my disk-based storage space so that I can mess with it tomorrow...

    Getting consistent read speeds off of it, but my ioDrives are getting warm =)

    [IMG]

    It's okay though, the ioDrives are good for 100°C, so I'm not really worried.

    -- Dave
     
  13. acquacow

    acquacow Limp Gawd

    Messages:
    273
    Joined:
    Mar 7, 2016
    So I moved my storage over to the ioDrives; now I'm going to test speeds of the disk array with 4 and 5 disks in the storage space... I have this strange feeling that 2-way mirroring will be faster on 4 disks vs 5, but I want to confirm.
     
  14. acquacow

    acquacow Limp Gawd

    Messages:
    273
    Joined:
    Mar 7, 2016
    Alright, did a bunch of tests. Not gonna do an SSD cache, going to do a full-on SSD tier via PowerShell instead.

    Confirmed a few things.

    1) An LSI RAID card in JBOD mode is not the same as SATA passthrough "IT" mode in terms of large-block performance. I cross-flashed to IT mode and now I have consistent results.
    [IMG][IMG]

    2) No resiliency mode does not stripe by default:
    [IMG]

    3) 4-disk and 5-disk 2-way mirrors have near-identical perf until you get to larger reads:
    [IMG] [IMG]

    BUT, with 4 disks, you can set up a 2-column stripe and double your speeds over a 5-disk mirror (command for that below).
    [IMG]
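    If anyone wants to reproduce that, the column count is just a parameter on the virtual disk (pool/space names here are only examples):

    # 2-way mirror striped across 2 columns (needs 4 disks)
    New-VirtualDisk -StoragePoolFriendlyName "StoragePool" -FriendlyName "Mirror2Col" -ResiliencySettingName Mirror -NumberOfColumns 2 -UseMaximumSize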

    I'm now setting up two of my ioDrives as a mirrored SSD tier, with 4 drives beneath it.

    -- Dave
     
  15. acquacow

    acquacow Limp Gawd

    Messages:
    273
    Joined:
    Mar 7, 2016
    No way. Built an SSD tier and even had write caching enabled with the SSDs, and the perf was worse than a 4-disk mirror with striping...

    [IMG]

    This is how I built it... can you see anything wrong in there?

    # Pool everything that can be pooled and has a known media type
    $pd = (Get-PhysicalDisk -CanPool $True | Where MediaType -NE Unspecified)
    New-StoragePool -PhysicalDisks $pd -StorageSubSystemFriendlyName "Windows Storage*" -FriendlyName "StoragePool"

    # One tier per media type
    $ssdTier = New-StorageTier -StoragePoolFriendlyName "StoragePool" -FriendlyName SSDTier -MediaType SSD
    $hddTier = New-StorageTier -StoragePoolFriendlyName "StoragePool" -FriendlyName HDDTier -MediaType HDD

    # Mirrored, tiered virtual disk with a 1 GB write-back cache
    New-VirtualDisk -StoragePoolFriendlyName "StoragePool" -FriendlyName TieredSpace -StorageTiers $ssdTier, $hddTier -StorageTierSizes 1090GB, 7260GB -ResiliencySettingName Mirror -WriteCacheSize 1GB

    # Initialize, partition, and format as ReFS
    Get-VirtualDisk TieredSpace | Get-Disk | Initialize-Disk -PartitionStyle GPT
    Get-VirtualDisk TieredSpace | Get-Disk | New-Partition -DriveLetter "E" -UseMaximumSize
    Format-Volume -DriveLetter "E" -FileSystem REFS -Confirm:$false
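    In case it helps spot the problem, these reporting cmdlets should show whether the tiers actually landed on the right media and got the sizes requested (nothing here is specific to my pool except the TieredSpace name):

    Get-StorageTier | Select-Object FriendlyName, MediaType, Size
    Get-VirtualDisk TieredSpace | Select-Object FriendlyName, ResiliencySettingName, WriteCacheSize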


    Thanks,

    -- Dave
     
  16. Olga-SAN

    Olga-SAN Limp Gawd

    Messages:
    301
    Joined:
    Mar 14, 2011
    Run diskspd with 4 KB 100% read, 100% write, and mixed r/w and you'll be amazed.

    tl;dr throughput isn't Storage Spaces' weakest point ;)
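    Something like this, for example (file path, size, and duration are just placeholders; -w0 is 100% read, -w100 is 100% write, something like -w30 for mixed):

    # 4K random, 4 threads, 32 outstanding I/Os, caching off, latency stats, 10 GB test file
    .\diskspd.exe -b4K -d60 -t4 -o32 -r -w0 -Sh -L -c10G E:\testfile.dat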
     
  17. acquacow

    acquacow Limp Gawd

    Messages:
    273
    Joined:
    Mar 7, 2016
    Since the latest updates, I'm not sure what weak points Storage Spaces has now...

    I put 4 of my ioDrives in now with a 2-column stripe and it hauls ass...

    [IMG]
     
    PliotronX likes this.
  18. Rudde93

    Rudde93 Limp Gawd

    Messages:
    136
    Joined:
    Nov 19, 2010
    Hello! Thanks for all the great responses; I set up my system before I got any replies here.

    The system is now (unfortunately) populated with 15.2 TB of data.

    Mistakes I have made: I went with fixed instead of thin provisioning, a huge mistake, which makes it impossible for me to move the data from a virtual disk with an undesirable config over to a new one.

    You can use a partition of your system disk in your storage space, so I used the 120 GB SSD plus 50 GB of my system drive SSD for cache and gave my virtual disk 50 GB of write-back cache (which did solve my write speed issues). However, everything locks up when that cache is full: no I/O, everything on the system freezes until it can write again; I can't read, can't do anything that has anything remotely to do with that device.

    I also figured out I can't change the write-back cache after it's set!

    I needed two SSD devices of at least 50 GB to give the disk a 50 GB cache, because the cache follows the same redundancy pattern as the virtual disk, so on a single-parity virtual disk I need 1-disk fault tolerance even at the cache level. (I do not want this, the data isn't THAT important; is there any way to override this and just say it's okay for the cache not to have that tolerance?) I imagine this would be horrible on a dual-parity virtual disk.

    I do not know what to do anymore; the system was expensive and is now kind of useless, since it freezes all the time because the cache is full...
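    In case it helps anyone diagnose this, this is roughly what I run to see how the pool and virtual disk ended up configured (just the reporting cmdlets, nothing specific to my setup):

    Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, ProvisioningType, NumberOfColumns, WriteCacheSize
    Get-StoragePool -IsPrimordial $false | Get-PhysicalDisk | Select-Object FriendlyName, MediaType, Usage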


    Also, I'm wondering if it is possible to open the Windows 10 Storage Spaces GUI on Windows Server 2016? It seems way nicer, and the GUI in Windows Server 2016 seems very limited.
     
  19. Grimlaking

    Grimlaking [H]ard|Gawd

    Messages:
    1,604
    Joined:
    May 9, 2006
    See, in my dev environment I'm running 3 VNX 5400s with 88 physical 600 GB 10K drives, 36 200 GB SSDs, and 11 flash drives (for FAST Cache). We've set up a couple of pools with FAST Cache and a heavier SSD weighting to serve our connected SQL boxes. Those are 32-core boxes with 384 GB of RAM running dual dedicated 8Gb fiber connects. (We're more concerned with high-speed transaction reads and writes than with big-file throughput.)

    I need to throw a good Iometer run at a lab box and see what kind of speed/performance we're getting, like the tests above. I know we get insane IOPS right now.
     
  20. CombatChrisNC

    CombatChrisNC Gawd

    Messages:
    633
    Joined:
    Apr 3, 2013
    Are those bare metal SQL boxes? How many of them do you operate?