SSD transfer speeds for development

Discussion in 'SSDs & Data Storage' started by elgerm, May 2, 2014.

  1. elgerm

    elgerm n00b

    Messages:
    5
    Joined:
    Jul 21, 2013
    I'm thinking of building a new PC, and I want to put four Samsung EVO SSDs in it in RAID 5. I do a lot of development work, and compiling reads and writes lots of small files, which is a real bottleneck.

    But now I see Z97 motherboards are coming out. Would it be worth waiting for those to be available? There's very little on the internet about SSDs and random read/write performance. What good are those high sequential speeds if the bottleneck is almost always random read/write performance, which never goes higher than, say, 100 MB/s? Or is M.2 the way to go?
     
  2. ochadd

    ochadd Gawd

    Messages:
    894
    Joined:
    May 9, 2008
    If you are coming from a standard hard drive you're going to be floored by the improvement in random performance, read or write. For RAID 5 I'd stick with something like the Seagate 600, Corsair Neutron, or Intel 730: drives that were tweaked for the consumer market but have good enterprise roots, where they almost always live in RAID arrays.

    It can be argued that sequential transfer speed is an SSD's weakest point. Random performance is where SSDs are orders of magnitude faster than hard drives.

    http://www.anandtech.com/bench/product/182?vs=964
     
  3. cyclone3d

    cyclone3d [H]ardForum Junkie

    Messages:
    13,067
    Joined:
    Aug 16, 2004
    I am all for SSDs, but from what people say, RAID on SSDs tends to introduce latency.

    What about doing a RAM cache or RAM drive when compiling? That would probably be quite a bit faster than SSDs for small files.

    Of course, the EVO's bundled software (Samsung's RAPID RAM cache) already does this, but I'm not sure if it would work in a RAID setup.

    Even better would be a nice RAID card with a RAM cache built in.
     
  4. ochadd

    ochadd Gawd

    Messages:
    894
    Joined:
    May 9, 2008

    Once an SSD has had all of its flash cells written to, performance will stop dropping and level out. A single standalone drive would normally receive the TRIM command from the OS, and cells no longer actively storing data would be freed up to be written to again; performance would return to nearly-new levels. If the TRIM command isn't received (and in RAID 5 it won't be), a flash cell first has to be erased before it can be rewritten, and that only happens when new data is about to be written. This is where the added latency of a RAID array comes from.

    However, another factor working in your favor is background garbage collection. This is baked into the drive's firmware and is independent of anything outside the drive. As long as the RAID array gets an occasional idle break, garbage collection can run and clean things up without TRIM. How aggressive, frequent, and thorough garbage collection is depends on the manufacturer, drive, and firmware.

    RAM drives and caching are awesome, but they can run out of steam very quickly, and their usefulness is situational.
     
  5. Liger88

    Liger88 2[H]4U

    Messages:
    2,657
    Joined:
    Feb 14, 2012

    Because it's easier to market sequential read/write performance than 4K queue-depth and random read/write numbers. It's the same reason the IOPS figures marketed on consumer drives are a joke: incredibly inaccurate and misleading.

    The question is: have you ever owned or used an SSD in your environment? Putting four of them in RAID 5 seems like overkill, and like cyclone3d I've always heard mixed reports beyond the huge boost in sequential read/writes.

    The random read/write advantage of an SSD over an HDD starts at about 20-30x, and that's comparing the cheapest, least powerful SSD you can find against the highest-end hard drive you can find.
     
  6. MrGuvernment

    MrGuvernment [H]ard as it Gets

    Messages:
    19,169
    Joined:
    Aug 3, 2004
    DO NOT USE RAID 5, my god, man!

    RAID 5 was declared dead back in 2009! Anyone who recommends RAID 5 in this day and age should be shot on the spot.

    Use RAID 10 if you want RAID with performance and redundancy, or just run a RAID 0 working array and save your work elsewhere.
     
  7. elgerm

    elgerm n00b

    Messages:
    5
    Joined:
    Jul 21, 2013
    Maybe I should elaborate a bit more: I was going to do RAID 5 (or RAID 0 or 10, whatever) with MS Storage Spaces. My boot drive will be a single SSD, and via Storage Spaces I'll create an array from the other SSDs. That way the TRIM commands are still being sent, and you can just plug the drives into the SATA ports on the motherboard.

    Let me rephrase the question:
    Does M.2, or drives connected over PCIe for higher bandwidth, do anything for random read/write performance? Bandwidth isn't the issue there.
     
  8. spazoid

    spazoid Limp Gawd

    Messages:
    289
    Joined:
    Jun 16, 2008
    This is a ridiculous comment.
    Especially with SSDs, RAID 5 can make a lot of sense.

    Most of the reasons the industry moved away from RAID 5 "in 2009" (why exactly that year? To sound like you have some incredible insight? As if RAID 5 stopped being used from one day to the next...) are mitigated by the smaller drive sizes and higher speeds of SSDs.
     
  9. spazoid

    spazoid Limp Gawd

    Messages:
    289
    Joined:
    Jun 16, 2008
    No. Once they get rid of AHCI we will probably see lower latencies, though.
     
  10. NetJunkie

    NetJunkie [H]ardForum Junkie

    Messages:
    9,682
    Joined:
    Mar 16, 2001
    Uhm. How much storage work do you do?
     
  11. Romale23

    Romale23 Gawd

    Messages:
    866
    Joined:
    Dec 12, 2006
    I'm no expert, but I do know that every system my work currently uses is 100% RAID 5, servicing 68k users.
     
  12. Starrbuck

    Starrbuck 2[H]4U

    Messages:
    2,478
    Joined:
    Jun 12, 2005
  13. jwcalla

    jwcalla 2[H]4U

    Messages:
    3,629
    Joined:
    Jan 19, 2011
    In my experience with a rather ordinary SSD, the bottleneck in compiling is always the CPU.
     
  14. cbf123

    cbf123 n00b

    Messages:
    50
    Joined:
    Dec 12, 2013
    Do you know for sure that this is a bottleneck? Have you looked at iowait times (or whatever the equivalent is for your OS)?

    The reason I ask is that I'm a professional software developer, and even with spinning-rust drives I can max out all the CPU cores on the system long before I hit the limit on disk I/O. Source files are generally small, so reading doesn't take much time; the CPU work is the large part; and the results get cached anyway, so write times don't really matter.

    The one place where disk I/O matters is building a disk image, which involves a lot of copying from one place to another. For that we do hit the limits of the disk. But for actual compiling...never.
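
    A quick way to check this on Linux is to watch the iowait counter in /proc/stat while a build is running. A minimal sketch (field layout per the proc(5) man page; a consistently low percentage means the disk is not your bottleneck):

```python
import time

def cpu_iowait_fraction(interval=1.0):
    """Sample /proc/stat twice and return the fraction of CPU time
    spent waiting on I/O over the interval (Linux only)."""
    def snapshot():
        with open("/proc/stat") as f:
            # First line: "cpu  user nice system idle iowait irq softirq ..."
            values = [int(v) for v in f.readline().split()[1:]]
        return values[4], sum(values)  # iowait is the 5th counter
    io1, total1 = snapshot()
    time.sleep(interval)
    io2, total2 = snapshot()
    return (io2 - io1) / max(total2 - total1, 1)

if __name__ == "__main__":
    # Run this while the compile is going.
    print(f"iowait: {cpu_iowait_fraction(5.0):.1%}")
```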
     
  15. TeeJayHoward

    TeeJayHoward Limpness Supreme

    Messages:
    9,631
    Joined:
    Feb 8, 2005
    Also note that a new interface is coming out. If you're concerned about future drives, it might be worth holding off.
     
  16. TeeJayHoward

    TeeJayHoward Limpness Supreme

    Messages:
    9,631
    Joined:
    Feb 8, 2005
    This link alone is a very good example of why you shouldn't get all your information from a single source. 5 drive minimum for RAID6? What?

    RAID5 is fine for uptime for the vast majority of people. If you lose two disks out of the array, oh no... It's time to restore from backup. Even with 6x6TB drives and a measly 15MB/s rebuild speed, you're looking at over 1200 years mean time to data loss. For 4x1TB SSDs with the same worst-case scenario, you're at over 20,000 years.
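
    For the curious, numbers in that ballpark fall out of the classic single-parity MTTDL estimate, MTBF^2 / (n * (n-1) * MTTR). This is only a sketch: the 200,000-hour MTBF is my own assumption, and real rebuilds aren't this simple:

```python
HOURS_PER_YEAR = 24 * 365

def raid5_mttdl_years(n_drives, drive_bytes, rebuild_mb_s, mtbf_hours=200_000):
    """Classic single-parity estimate: data loss requires a second drive
    failure during the rebuild window (MTTR) of the first."""
    rebuild_hours = drive_bytes / (rebuild_mb_s * 1e6) / 3600
    mttdl_hours = mtbf_hours ** 2 / (n_drives * (n_drives - 1) * rebuild_hours)
    return mttdl_hours / HOURS_PER_YEAR

print(round(raid5_mttdl_years(6, 6e12, 15)))  # 6x6TB HDDs at 15 MB/s -> 1370
print(round(raid5_mttdl_years(4, 1e12, 15)))  # 4x1TB SSDs, same speed -> 20548
```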

    Personally, I'll trust the experts. NetApp. EMC. Hitachi. If they're okay with using RAID5 for their customers, I think it's safe to say it'll do for a home user.
     
  17. brutalizer

    brutalizer [H]ard|Gawd

    Messages:
    1,593
    Joined:
    Oct 23, 2010
    RAID 5 is not recommended for arrays of large disks, because the rebuild will take forever, and during that time another disk might fail. Use RAID 6 instead. But this only applies when rebuilds take forever (several days, or even weeks(?), with future 8-10TB disks).

    If you have an SSD RAID, rebuilds will be very fast, so that is no longer an issue. In that case RAID 5 might be OK, because the rebuild window is small.
     
  18. MrGuvernment

    MrGuvernment [H]ard as it Gets

    Messages:
    19,169
    Joined:
    Aug 3, 2004
    Yes, confirmed. I apologize for my "assumptions".

     
  19. rsq

    rsq Limp Gawd

    Messages:
    246
    Joined:
    Jan 11, 2010
    I am a software developer and I use a single 512GB Samsung 840 Pro.

    Performance is stunning. You do not need more SSD.

    I do have a trick for you if you want to accelerate from ridiculous speed to ludicrous speed (who remembers that movie?):
    Stuff a lot of RAM in the machine. Set up ZFS with a record size of 4K or 8K and set sync=disabled. Writes then go to RAM and are flushed to the SSD later. All files also end up in the ARC and are basically served from RAM. My whole code base comes from and goes to RAM, with ZFS as the background syncing service. Since the machine is a laptop, sudden power cuts are not an issue (it has an internal UPS in the form of the battery), so sync=disabled is not a big deal. This is the fastest I could make my development environment. (Snapshotting my code base is also cool.)
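
    For anyone wanting to try this, the properties involved look roughly like the following (a sketch: the pool/dataset names are made up, and remember sync=disabled means recent async writes can be lost on a crash):

```shell
# Hypothetical pool/dataset names; adjust to your own layout.
zfs create -o recordsize=8K tank/code   # small records suit many small files
zfs set sync=disabled tank/code         # ack writes from RAM, flush to SSD later
zfs set atime=off tank/code             # skip access-time writes on every read
zfs snapshot tank/code@clean            # cheap point-in-time copy of the code base
```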
     
  20. levak

    levak Limp Gawd

    Messages:
    386
    Joined:
    Mar 27, 2011
    Then again, EMC doesn't use consumer-grade SSDs...

    Matej