pure-SSD VM storage

The time has come to build a full-SSD storage box.

I have the following components in mind:
- Supermicro 216BE16-R920LPB
- 16 x Intel S3500 480GB SSD
- LSI 9211-8i or LSI 9207-8i (besides PCIe v2 vs v3 and the newer chip on the 9207, any difference?)

The box would run FreeBSD+ZFS (or Nexenta) with a raidz2 pool on top of these disks, exported via NFS to host VMs (rough sketch below).
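For reference, a minimal sketch of that layout on FreeBSD; the pool/dataset names and da0..da15 device names are placeholders, not from the actual build:

```sh
# Hypothetical device and pool names; adjust to your system.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
                         da8 da9 da10 da11 da12 da13 da14 da15
zfs create tank/vms
# On FreeBSD, setting sharenfs exports the dataset via mountd/nfsd.
zfs set sharenfs=on tank/vms
```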

ideas? suggestions?
 
I've never done this type of setup before, but my recommendations are as follows.

- use mirrors instead of raidz2 (better IOPS performance; see the sketch after this list)
- get a case without an expander and attach each drive directly to an HBA. That way you're really taking advantage of the SSDs instead of bottlenecking on a single SAS2 connection, and you avoid potential issues with running SATA drives behind the expander.
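To make the mirror layout concrete, a sketch assuming FreeBSD-style da0..da15 device names (placeholders):

```sh
# 8 two-way mirror vdevs; ZFS stripes across vdevs, so random IOPS
# scale with vdev count instead of being held back by parity writes.
zpool create tank \
  mirror da0 da1   mirror da2 da3   mirror da4 da5   mirror da6 da7 \
  mirror da8 da9   mirror da10 da11 mirror da12 da13 mirror da14 da15
```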
 
It's still 24 Gbps, since it's a mini-SAS connection from the backplane to the card, right? (which is 4x SAS lanes, each full-duplex 6 Gbit, so 4x2x6)?
 
Yes, that's right. I guess depending on how you're connecting your hosts to the storage, it may not matter too much.
 
Get the newer-chip LSI card - you'll want it to handle as many IOPS with as little latency as possible.

What capacity do you require for the VMs? That is actually more relevant to the mirror vs raidz2 decision. You are also going to be using devices that have 100-1000x the IOPS of traditional drives, so you don't really need to worry about conserving IOPS as much as ensuring you have adequate space.
 
SATA is not full duplex, so it's limited to 4x 6 Gbit.

If they were SAS drives, you would get full-duplex speeds.
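To spell out the arithmetic (a back-of-the-envelope sketch, line rates before 8b/10b encoding overhead):

```sh
# One SFF-8087 (mini-SAS) cable carries 4 SAS2 lanes at 6 Gbit/s each.
echo $((4 * 6))       # 24 Gbit/s per direction
# SAS drives are full duplex: up to 24 Gbit/s each way simultaneously.
# SATA drives are half duplex: ~24 Gbit/s total is the ceiling.
```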
 
With mirrors, I'd get ~3.8TB, which is barely enough. Of course, if it grows a lot more, we could buy more...
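The capacity math for 16x 480GB drives, before metadata and slop space (a rough sketch, decimal GB):

```sh
# 8 two-way mirrors: half the raw capacity is usable.
echo $((8 * 480))     # 3840 GB, ~3.84 TB
# One 16-disk raidz2: two drives' worth of parity.
echo $((14 * 480))    # 6720 GB, ~6.72 TB
```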
 
If you get the dual-expander version, it would be a waste of money.

The SATA drives are not dual-ported, so the second expander would never be used.
 
I have 4x 1TB SSDs in RAID10 running OmniOS for my lab; it's not much faster than hybrid.

Better to get SAS disks with an SSD cache: you get similar performance, higher capacity, and much lower cost.
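For comparison, a sketch of what such a hybrid pool might look like (all device names hypothetical):

```sh
# Spinning SAS disks in raidz2, SSDs as L2ARC read cache, and a
# mirrored SLOG to absorb the synchronous writes NFS generates.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
zpool add tank cache da6 da7
zpool add tank log mirror da8 da9
```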
 
Why wouldn't it be faster?

I'm gonna drive this over dual 10Gb links.
 
It's fast inside the storage, but when you go through all the layers to the VM, it doesn't feel much faster than a hybrid configuration.

I use 4Gb FC with MPIO. Sequential IO is fast, but random IO isn't much better than hybrid. I don't feel general-purpose OSes are optimized for pure SSD yet.
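One way to measure that gap from a client is fio; the mount path here is a placeholder and the sizes/depths are just reasonable starting points:

```sh
# Random 4k reads against a file on the NFS-mounted datastore.
fio --name=randread --filename=/mnt/vmstore/test --size=4g \
    --rw=randread --bs=4k --iodepth=32 --ioengine=posixaio \
    --runtime=60 --time_based
# Sequential reads on the same file, for comparison.
fio --name=seqread --filename=/mnt/vmstore/test --size=4g \
    --rw=read --bs=1m --iodepth=4 --ioengine=posixaio \
    --runtime=60 --time_based
```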
 