Using cheap SSDs in RAID in 2019?

ajm83

I'm looking for some advice on a cheap solution for fast(ish) storage for VMs.

I was thinking of using 2 x 1TB Samsung 860 EVOs in an mdadm RAID 1 setup on Linux.
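
Roughly what I had in mind, assuming the two SSDs show up as sda/sdb and the mirror is built on a partition from each (device names are just placeholders):

# create a RAID 1 mirror across one partition from each SSD
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# persist the array definition so it assembles on boot (path may be /etc/mdadm.conf on some distros)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf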

Since I will be sharing this storage out as iSCSI for ESXi (i.e. it will be one big chunk of block storage, not individual files visible to Linux), I guess TRIM will not work, so is it still best practice to over-provision by 10% or so?
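
For the sharing-out part I was assuming something like the kernel LIO target managed with targetcli; a rough sketch of what I mean (the IQNs and names here are made up):

# back the LUN with the raw md device (no filesystem on the Linux side)
targetcli /backstores/block create name=vmstore dev=/dev/md0
# create the iSCSI target and map the LUN into its portal group
targetcli /iscsi create iqn.2019-01.lan.example:vmstore
targetcli /iscsi/iqn.2019-01.lan.example:vmstore/tpg1/luns create /backstores/block/vmstore
# allow the ESXi host's initiator IQN to connect
targetcli /iscsi/iqn.2019-01.lan.example:vmstore/tpg1/acls create iqn.1998-01.com.vmware:esxi01
# save the config so it survives a reboot
targetcli saveconfig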

Open to other suggestions on how to achieve similar results if there are better ways that won't cost the earth!

Thanks
 
860 EVOs are a good choice. I'd probably suggest more like 20%, although modern controllers/drives are much better at garbage collection and simply having that much space free will help.
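
Simplest way to leave that space free is to only partition around 80% of each drive and build the mirror on those partitions; something along these lines (device names are examples, and ideally the drives are blank or secure-erased first):

# GPT label, then a single partition covering ~80% of the drive;
# the remaining ~20% stays unallocated as extra spare area for the drive's garbage collection
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary 1MiB 80%
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 1MiB 80%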
 
You'll be fine with RAID 1 on those drives, but I wouldn't RAID 0 SSDs. Theoretical performance is better, but in real terms some tasks end up slower than on a single drive. I tried RAID 0 with a pair of Samsung 840 Pro 256GB drives and found they performed better as single drives for both reads and writes in real-world usage.
 
How many PCIe lanes have you got? Fast 1TB NVMe drives are $100 now. Even a single drive is going to slap a couple of SATA drives around.

If your board can bifurcate, there are some really interesting and affordable options for building an array of drives with software-defined storage (since HW RAID is rightfully on its way out).
 
Thanks for the replies.

It's an old motherboard I'll be using, so I'm not sure whether it will be any good for NVMe or not. It's a Gigabyte GA-Z77X-D3H. Here's the spec: https://www.gigabyte.com/uk/Motherboard/GA-Z77X-D3H-rev-10#sp
Booting from the NVMe drive wouldn't be needed; it's just for sharing out as iSCSI.

The problem I'm trying to overcome is poor IOPS on an existing ZFS RAID z10 setup we have. It's great on sequential loads but like a bloody snail as soon as different VMs start hitting the disk.
The worst thing I've seen was a power cut where the UPS batteries ran out and everything shut down. With the ARC and L2ARC cold, machines were taking hours to boot up (not helped by some of them then applying their pending Windows updates, of course :rolleyes:).
 