Hello,
We are currently testing ZFS on Linux as a storage platform for our VPS nodes, but we don't seem to be getting the performance figures we expected. Can you please suggest what we should be tweaking to reach higher IOPS?
The hardware is Supermicro servers, each with a MegaRAID 2108 chipset on a daughter card. We tested three servers: pure SSD with 4 x 480GB Chronos drives; 4 x 600GB SAS 10k drives with a 480GB SSD cache; and 4 x 1TB SAS 7.2k drives with a 480GB SSD cache.
We set the onboard RAID controller to essentially JBOD (a single-drive RAID 0 per drive, with the controller cache turned off). We got the best performance using software RAID-Z2 with LZ4 compression. Here are the results we saw:
Server                              | RAID    | Filesystem               | Read Speed | Write Speed | Read IOPS | Write IOPS
4 x 480GB Chronos SSD (pure SSD)    | Soft Z2 | ZFS without compression  | 4.1 GB/s   | 778 MB/s    | 23025     | 7664
4 x 480GB Chronos SSD (pure SSD)    | Soft Z2 | ZFS with lz4 compression | 4.6 GB/s   | 1.8 GB/s    | 47189     | 15715
4 x 600GB SAS 10k + 480GB SSD cache | Soft Z2 | ZFS without compression  | 4.0 Gb/s   | 486 Mb/s    | 10234     | 3413
4 x 600GB SAS 10k + 480GB SSD cache | Soft Z2 | ZFS with lz4 compression | 4.8 Gb/s   | 2.2 Gb/s    | 51056     | 17077
4 x 1TB SAS 7.2k + 480GB SSD cache  | Soft Z2 | ZFS without compression  | 4.1 Gb/s   | 1.4 Gb/s    | 53486     | 17840
4 x 1TB SAS 7.2k + 480GB SSD cache  | Soft Z2 | ZFS with lz4 compression | 4.4 Gb/s   | 1.7 Gb/s    | 37803     | 12594
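For reference, the pool layout described above was created roughly along these lines (a sketch only; the pool name and device names are placeholders, not the exact ones we used):

```shell
# Sketch of the RAID-Z2 pool described above.
# "tank" and /dev/sd[b-f] are placeholder names, not from our setup.
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Enable LZ4 compression (the configuration that performed best for us)
zfs set compression=lz4 tank

# On the spinning-disk servers, the 480GB SSD was added as a read cache (L2ARC)
zpool add tank cache /dev/sdf
```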
There doesn't seem to be a big difference between the pure-SSD setup and the others, even before the SSD cache is added to the spinning-disk setups. Is there something we are missing, or something we should be looking into? We were expecting IOPS to be a lot higher than these results.
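In case it matters for interpreting the numbers, a representative way to measure 4k random IOPS on the pool would be something like the following fio invocation (illustrative only; these are not the exact parameters we benchmarked with, and the test directory is a placeholder):

```shell
# Hypothetical 4k random-read IOPS test; /tank/test is a placeholder path.
# randrepeat=0 and buffer_compress_percentage avoid feeding fio's default
# highly-compressible data to an lz4-compressed dataset, which can
# otherwise inflate apparent throughput and IOPS.
fio --name=randread --directory=/tank/test \
    --rw=randread --bs=4k --size=4G \
    --numjobs=4 --iodepth=32 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting \
    --randrepeat=0 --buffer_compress_percentage=0
```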
Thank you for your help!