Steam library on NAS?

Discussion in 'SSDs & Data Storage' started by /dev/null, Oct 18, 2018.

  1. /dev/null

    /dev/null [H]ardForum Junkie

    Messages:
    13,843
    Joined:
    Mar 31, 2001
    Hey guys,

    I keep running out of disk space on my local Win10 machine for my Steam library. I've got a recycled HP DL380 G6 with 12 3TB SAS drives, a 400GB SLC cache, and a 10Gbit adapter, running FreeNAS. I've traditionally only used this for VM backups, but I've been thinking of carving out a 1-4TB slice and serving up an SMB share for my Win10 box.

    Rather than run locally on SSDs, do you think there will be any noticeable difference between running it off a network drive over 10Gbit fiber vs a local SSD? I have an older 1TB NVMe SSD (a PM951, I think) that seems to max out at sub-1GB/s reads.
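
    If it helps, here's the kind of rough sequential-read timer I'm planning to run against both the local NVMe and the SMB share once it's up. The paths are placeholders, and the test file needs to be bigger than RAM (or read cold), otherwise the OS cache hands back memory speeds:

    ```python
    # Rough sequential-read benchmark: stream a big file, report MiB/s.
    # Paths are placeholders; point them at a large file on each device.
    # Use a file bigger than RAM (or read it cold) so the OS file cache
    # doesn't inflate the numbers.
    import time

    def seq_read_mibps(path, block_size=1024 * 1024):
        total = 0
        start = time.perf_counter()
        with open(path, "rb") as f:
            while True:
                chunk = f.read(block_size)
                if not chunk:
                    break
                total += len(chunk)
        elapsed = time.perf_counter() - start
        return total / (1024 * 1024) / elapsed

    for label, path in [("local NVMe", r"C:\bench\bigfile.bin"),
                        ("SMB share", r"Z:\bench\bigfile.bin")]:
        print(f"{label}: {seq_read_mibps(path):.0f} MiB/s")
    ```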
     
  2. Mr.OppressoLiber

    Mr.OppressoLiber [H]ard|Gawd

    Messages:
    1,115
    Joined:
    Aug 8, 2004
    I recently added another NVMe drive for the Steam library due to space, but I'm curious whether a NAS solution would be beneficial in the long run. I'll keep an eye on your thread to see how it works out if you try it.
     
  3. Aireoth

    Aireoth 2[H]4U

    Messages:
    2,324
    Joined:
    Oct 12, 2005
    Not an expert, but in my experience in enterprise environments, a NAS is always slower than a local HDD, let alone an SSD.
     
  4. H2R2P2

    H2R2P2 Limp Gawd

    Messages:
    403
    Joined:
    Jun 18, 2006
    There is only one sure way to find out......... :)

    On paper, you shouldn't see much of a difference as long as the following are true (there's a quick network sanity check after the list):

    1) The drives in the SAN/NAS are able to match the speed of your local storage option
    2) You limit the work the SAN/NAS is doing for other systems during game sessions
    3) Your 10Gbit switch actually has the capacity to "switch" at full line rate
    4) Your 10Gbit network adapters are good ones that can also keep up at full 10Gbit line rate
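
    For 3) and 4), a bare TCP push with no disks involved is a decent sanity check. Something like this sketch, where the port and transfer size are arbitrary picks, and a single Python stream may not quite hit full line rate, but it will tell you if you're way off:

    ```python
    # Minimal raw-TCP throughput test to sanity-check switch/NIC line rate,
    # with no disks involved. Start the receiver on one box:
    #   python tcptest.py server
    # then push from the other:
    #   python tcptest.py <server_ip>
    # The port and the 10 GiB transfer size are arbitrary choices.
    import socket
    import sys
    import time

    PORT = 5201
    CHUNK = b"\x00" * (1024 * 1024)      # 1 MiB payload per send
    TOTAL = 10 * 1024 * 1024 * 1024      # push 10 GiB total

    def server():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        received = 0
        start = time.perf_counter()
        while True:
            data = conn.recv(1024 * 1024)
            if not data:                 # sender closed the connection
                break
            received += len(data)
        elapsed = time.perf_counter() - start
        print(f"{received * 8 / elapsed / 1e9:.2f} Gbit/s from {addr[0]}")

    def client(host):
        with socket.create_connection((host, PORT)) as conn:
            sent = 0
            while sent < TOTAL:
                conn.sendall(CHUNK)
                sent += len(CHUNK)

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[1])
    ```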

    I really like the idea!
     
    IdiotInCharge likes this.
  5. jad0083

    jad0083 Limp Gawd

    Messages:
    134
    Joined:
    Apr 30, 2006
    Try to carve out an iSCSI volume and serve it as a block store directly. It should have less latency than SMB, since it skips the extra protocol overhead and has less abstraction. A separate isolated VLAN for the iSCSI traffic should help as well.
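
    A crude way to see the latency difference in practice is to time small random reads against the same file locally and over the share; game loading is often many small reads where per-request latency dominates, not bandwidth. The paths below are placeholders:

    ```python
    # Crude small-random-read latency probe. Game loads tend to be many
    # small reads, where per-request protocol latency matters more than
    # raw bandwidth. Paths are placeholders; copy the same large file to
    # each location first (bigger than RAM to keep cache effects down).
    import os
    import random
    import time

    def avg_read_latency_us(path, reads=2000, read_size=4096):
        size = os.path.getsize(path)
        offsets = [random.randrange(0, size - read_size) for _ in range(reads)]
        with open(path, "rb") as f:
            start = time.perf_counter()
            for off in offsets:
                f.seek(off)
                f.read(read_size)
            elapsed = time.perf_counter() - start
        return elapsed / reads * 1e6  # average microseconds per read

    for label, path in [("local SSD", r"C:\bench\bigfile.bin"),
                        ("SMB share", r"Z:\bench\bigfile.bin")]:
        print(f"{label}: {avg_read_latency_us(path):.0f} us/read")
    ```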
     
    IdiotInCharge likes this.
  6. /dev/null

    /dev/null [H]ardForum Junkie

    Messages:
    13,843
    Joined:
    Mar 31, 2001
    I think that is too complex for what I want to do. I don't really want to add iSCSI onto the NAS.

    Any benchmarks you guys want me to run?

    So I copied some files there initially, and I was seeing (on the large files at least) around ~870MB/s transfers (megabytes, not megabits), so that is a bit less than 7Gbit/s.
     
  7. /dev/null

    /dev/null [H]ardForum Junkie

    Messages:
    13,843
    Joined:
    Mar 31, 2001
    1) I can get ZFS scrubs at > 1GB/s, so I think I should be OK here.
    2) Backups only run while I'm sleeping, so the NAS should be idle when I'm playing.
    3) I have a CRS317, which is ASIC-based and switches at line rate.
    4) I have Mellanox ConnectX-2s...so not the best, but I could probably spend $30 to pick up a ConnectX-3 if this becomes an issue.

    The only big weakness I see is that I'm pretty much out of PCIe lanes on my board with 2 x 1070s. The "x16" slot is PCIe 2.0 with just 4 lanes for the 10Gbit card, though by the quick math below that should still clear 10Gbit.
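
    For the record, the quick math: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, so a x4 slot comes out around 2 GB/s against the ~1.25 GB/s a single 10Gbit port can deliver.

    ```python
    # Back-of-envelope: can a PCIe 2.0 x4 slot feed a 10GbE NIC?
    lanes = 4
    gt_per_lane = 5.0      # PCIe 2.0 signals at 5 GT/s per lane
    encoding = 8 / 10      # 8b/10b encoding: 20% of raw bits are overhead
    slot_GBps = lanes * gt_per_lane * encoding / 8   # ~2.0 GB/s
    nic_GBps = 10 / 8                                # 10 Gbit/s = 1.25 GB/s
    print(f"slot ~{slot_GBps:.2f} GB/s vs NIC needs {nic_GBps:.2f} GB/s")
    # Even with protocol overhead on top, x4 should clear one 10Gbit port.
    ```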
     
  8. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    8,731
    Joined:
    Jun 13, 2003
    The main issue is that you'd be adding TCP/IP latency on top of OS/filesystem latency on top of drive latency, versus a native SSD option. iSCSI would limit some of that, and since I already have a 10Gbps setup with a 4x 6TB 7200RPM mirror, I may look into doing this myself.

    Not sure how complicated iSCSI is to share from FreeNAS 11.2 back to Windows 10 Pro, though.
     
  9. /dev/null

    /dev/null [H]ardForum Junkie

    Messages:
    13,843
    Joined:
    Mar 31, 2001
    I understand that & it makes sense...however, even if it's higher latency, is it going to be noticeable? If it's 3% slower, I don't care. If everything takes 50% longer to load, I'm going back to SSD.
     
  10. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    8,731
    Joined:
    Jun 13, 2003
    That's going to be game-dependent. I load random stuff to a spinner, AAA stuff to an SSD, and really don't worry about load times. At worst, be specific about what you put where. One example of terrible spinner performance is the Battlefield series.
     
  11. Eickst

    Eickst [H]ard|Gawd

    Messages:
    1,802
    Joined:
    Aug 24, 2005
    You're going from a local SSD to a network-based file share: network latency, SMB overhead on top of it, and spinning disks underneath.

    Yeah, you'll probably notice a difference, and I doubt it's only 3%. Not in sequential read benchmarks, but in real-life settings you are going to feel it.

    If you can do an iSCSI mount to the box over 10Gb, I'd look into that. Much less overhead.
     
  12. /dev/null

    /dev/null [H]ardForum Junkie

    Messages:
    13,843
    Joined:
    Mar 31, 2001
    Well, I have moved Metro 2033 Redux over...I don't notice any speed difference yet. So far so good :)
     
  13. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    8,731
    Joined:
    Jun 13, 2003
    So I've been running a test: a pair of 500GB SATA SSDs across the 10Gbase-T link (Aquantia NICs and an HP switch), shared using iSCSI and mounted on the workstation. I can't tell the difference in terms of load times.