Entry Level SSDs for Enterprise?

mda

2[H]4U
Joined
Mar 23, 2011
Messages
2,207
Hey all,

Just need some info on Enterprise SSDs.

While reviews, benchmarks, and anecdotal info are widely available on consumer SSDs all the way up to the 960 Pro, I don't really see much info on enterprise SSDs.

I'm looking to move an 8-drive (300GB each) 15K RPM Seagate RAID 10 (1.2TB usable) on an IBM M1115 to an SSD configuration.

We have about a 400GB Database on it. Some questions:

1. Should I even be considering SSDs at all?
2. Googling turns up old info from 2011-2014 saying SSDs for DBs are not such a good idea. I haven't seen much newer info though.
3. What are decent 'entry level' SSDs I should be considering? My main concern is reliability, not outright speed.

Thanks!
 
I've gotta admit, since I'm shopping for a homelab, I only really pay attention to used enterprise SSDs that can be found for reasonable prices on eBay.

That said I hear good things about Toshiba, Micron, and obviously Intel.

Current entry-level Intel is the S3510, but go to the manufacturers' sites (for example, https://www.micron.com/products/solid-state-storage).
You can find reviews for the more common models: Micron's M510DC, etc.
 
Solid rec's!
 
Thanks for the answer guys.

One thing I'd also like to know -- are the entry-level ones more reliable than the higher-end/more expensive drives?

I'm guessing that a 4-drive SSD RAID 10 will outperform my 8-drive 15K RPM RAID 10, especially for random read workloads... but it will be a moot point if these things suddenly die on me.
 


How are you connecting to that high speed RAID10 wonder project?
 
Not sure what you mean by this question...

But to elaborate, I plan for these to be somewhat of a drop-in replacement. The server will be on a LAN and will be on an M1115 or similar RAID card.

Also, as a supplementary question -- how long are server grade hard drives supposed to last before I need to replace them as a preventive measure?
 
Last edited:
Well what I mean is you can have all the speed on tap you like but if everyone is accessing it through say a single gigabit connection...
 
For enterprise usage, even the good old Intel 320 is like 10 times faster than the Samsung 960 Pro.
For SQL Server, what matters is sync write performance -- write 8kB and flush it, write another 8kB and flush it, and so on -- so the write cache on the SSD doesn't really matter much.
Read performance rarely matters either, as most of it will be cached by your system memory. But even here enterprise SSDs are more reliable and have lower latency, though that is much harder to benchmark. Still, I think these numbers show how poorly a "top" consumer SSD actually performs.
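The write-8kB-then-flush pattern described above can be sketched as a tiny microbenchmark in the spirit of pg_test_fsync (a sketch only; file path and duration are arbitrary choices, and real tools also compare fsync vs fdatasync vs O_SYNC):

```python
import os
import tempfile
import time

def bench_sync_writes(path, block_size=8192, seconds=1):
    """Write an 8kB block, fsync, repeat; report ops/sec.

    This is the access pattern a database log writer generates: each
    write must reach stable storage before the next one is issued, so
    the drive's volatile write cache cannot hide the latency.
    """
    block = b"\0" * block_size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        ops = 0
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            os.lseek(fd, 0, os.SEEK_SET)
            os.write(fd, block)
            os.fsync(fd)  # force the block to stable storage
            ops += 1
        return ops / seconds
    finally:
        os.close(fd)
        os.remove(path)

if __name__ == "__main__":
    path = os.path.join(tempfile.gettempdir(), "fsync_bench.dat")
    print(f"{bench_sync_writes(path):.1f} ops/sec")
```

A drive with power-loss-protected cache can acknowledge each fsync almost immediately, which is why the enterprise drives below post thousands of ops/sec while a cacheless consumer drive drops to hundreds.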

Here are some numbers I have from pg_test_fsync on Linux:

Intel 320
Code:
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                      7536.101 ops/sec     133 usecs/op
        fdatasync                          7689.040 ops/sec     130 usecs/op
        fsync                              6983.023 ops/sec     143 usecs/op
        fsync_writethrough                            n/a
        open_sync                          7053.565 ops/sec     142 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                      3829.058 ops/sec     261 usecs/op
        fdatasync                          5226.290 ops/sec     191 usecs/op
        fsync                              4606.950 ops/sec     217 usecs/op
        fsync_writethrough                            n/a
        open_sync                          3532.281 ops/sec     283 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB in different write
open_sync sizes.)
         1 * 16kB open_sync write          4602.549 ops/sec     217 usecs/op
         2 *  8kB open_sync writes         3542.094 ops/sec     282 usecs/op
         4 *  4kB open_sync writes         2146.523 ops/sec     466 usecs/op
         8 *  2kB open_sync writes         1165.538 ops/sec     858 usecs/op
        16 *  1kB open_sync writes          603.864 ops/sec    1656 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written on a different
descriptor.)
        write, fsync, close                6721.514 ops/sec     149 usecs/op
        write, close, fsync                6663.931 ops/sec     150 usecs/op
                                                                                                                                                           
Non-sync'ed 8kB writes:                                                                                                                                   
        write                            621169.579 ops/sec       2 usecs/op

Micron M500DC
Code:
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                     10927.203 ops/sec      92 usecs/op
        fdatasync                         10677.827 ops/sec      94 usecs/op
        fsync                             10111.280 ops/sec      99 usecs/op
        fsync_writethrough                            n/a
        open_sync                         10354.950 ops/sec      97 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                      5494.180 ops/sec     182 usecs/op
        fdatasync                          7756.420 ops/sec     129 usecs/op
        fsync                              7499.449 ops/sec     133 usecs/op
        fsync_writethrough                            n/a
        open_sync                          5200.128 ops/sec     192 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB in different write
open_sync sizes.)
         1 * 16kB open_sync write          7742.726 ops/sec     129 usecs/op
         2 *  8kB open_sync writes         5198.035 ops/sec     192 usecs/op
         4 *  4kB open_sync writes         3081.131 ops/sec     325 usecs/op
         8 *  2kB open_sync writes          923.314 ops/sec    1083 usecs/op
        16 *  1kB open_sync writes          458.814 ops/sec    2180 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written on a different
descriptor.)
        write, fsync, close                9979.190 ops/sec     100 usecs/op
        write, close, fsync                9978.986 ops/sec     100 usecs/op

Non-sync'ed 8kB writes:
        write                            637686.835 ops/sec       2 usecs/op

Intel S3710
Code:
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                      6526.556 ops/sec     153 usecs/op
        fdatasync                          6423.496 ops/sec     156 usecs/op
        fsync                              6349.335 ops/sec     157 usecs/op
        fsync_writethrough                            n/a
        open_sync                          6388.924 ops/sec     157 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                      3256.142 ops/sec     307 usecs/op
        fdatasync                          3540.772 ops/sec     282 usecs/op
        fsync                              3463.122 ops/sec     289 usecs/op
        fsync_writethrough                            n/a
        open_sync                          3157.743 ops/sec     317 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB in different write
open_sync sizes.)
         1 * 16kB open_sync write          3474.978 ops/sec     288 usecs/op
         2 *  8kB open_sync writes         3161.509 ops/sec     316 usecs/op
         4 *  4kB open_sync writes         2522.673 ops/sec     396 usecs/op
         8 *  2kB open_sync writes          116.569 ops/sec    8579 usecs/op
        16 *  1kB open_sync writes           55.359 ops/sec   18064 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written on a different
descriptor.)
        write, fsync, close                6288.686 ops/sec     159 usecs/op
        write, close, fsync                6267.458 ops/sec     160 usecs/op
                                                                                                                                                           
Non-sync'ed 8kB writes:                                                                                                                                   
        write                            630383.644 ops/sec       2 usecs/op

OCZ Trion 100
Code:
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                       995.930 ops/sec    1004 usecs/op
        fdatasync                          1055.991 ops/sec     947 usecs/op
        fsync                               664.282 ops/sec    1505 usecs/op
        fsync_writethrough                            n/a
        open_sync                           650.991 ops/sec    1536 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                       531.542 ops/sec    1881 usecs/op
        fdatasync                          1016.294 ops/sec     984 usecs/op
        fsync                               763.904 ops/sec    1309 usecs/op
        fsync_writethrough                            n/a
        open_sync                           330.426 ops/sec    3026 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB in different write
open_sync sizes.)
         1 * 16kB open_sync write           756.973 ops/sec    1321 usecs/op
         2 *  8kB open_sync writes          331.186 ops/sec    3019 usecs/op
         4 *  4kB open_sync writes          203.435 ops/sec    4916 usecs/op
         8 *  2kB open_sync writes           78.496 ops/sec   12740 usecs/op
        16 *  1kB open_sync writes           47.396 ops/sec   21099 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written on a different
descriptor.)
        write, fsync, close                 660.695 ops/sec    1514 usecs/op
        write, close, fsync                 795.775 ops/sec    1257 usecs/op

Non-sync'ed 8kB writes:
        write                            431654.682 ops/sec       2 usecs/op
 

It is all on a single gigabit connection. Right now we don't have a lot of users on this and I don't think we come close to saturating a gigabit link. I will need to check this sooner or later though. Thanks for reminding me!


This is quite a big jump! I'll have to check around to see which SSDs are actually available in this part of the world...
 
Just as an additional question if I am to migrate...

Can I assume that for Oracle DB usage purposes, a 4-disk SSD RAID 10 will be faster than an 8-disk 15K Cheetah RAID 10?

Or should I just cheap out with a 2-disk SSD RAID 1?
 
Last edited:
Go with the cheap option first and see if it fits the bill.

If not then just get the extra two drives for the RAID10.

Maybe also get a dual/quad Intel NIC for some load balancing if multiple users are hitting it.
 
Alright. Am currently looking around for quotations on enterprise SSDs at the moment.

Our server is an IBM X3500. It has 4 LAN ports but 3 are unused at the moment... I suppose I can look into link aggregation or something.

Thanks!
 
You will probably need to look into 10G LAN. Intel X520-DA2 or -DA1 cards can be had for under $100 (Natex), and switches for under $300.

SSDs to look for: Intel S3500/3510/3700/3710. Enterprise level with power-loss protection. I bought four 400GB S3700s for $125 each; in RAID 10 they max out my 10G LAN on sequential reads and writes.
 


This guy gets it. You want fast? This is fast.

My only question is: what method are you using to set up this RAID 10 array? Built-in hardware RAID? ZFS HBA? Hardware RAID AIC?
 
ZFS. The RAID 10 array is for my clustered VMs over iSCSI, hence the need for 10G. Bulk storage is on a RAIDZ2 array using six 4TB 2.5" Seagate drives with a 200GB S3700 as cache. That array of slow drives can sustain ~350MB/s steady writes and easily maxes out a 1Gb line.
 
I run RAID 0 on Intel S3700s; they are so reliable that there is no need for mirroring. I do still have backups though!
 
I can't afford to have a single drive failure take down my entire cluster, and they are plenty fast in Raid10 for my needs.
 
I'm an enterprise architect for a VAR in the midwest region, though we do business everywhere. We're not doing Billions a year like CDW, but we get around 120 Million in revenue.

When you say X3500, do you mean the old Machine Type 7977? Or do you have an M2/M3/M4? The M1115 is compatible with M4 and M5 variants per Lenovo documentation, so I'm guessing that's what you have. Couple of things to think about:

- Endurance is the most important aspect of the SSD. Since SSDs have a limited number of write cycles, you need to make sure your SSD can handle your daily change rate (how much data is actually written daily). As you've probably seen, this is measured in Drive Writes Per Day (DWPD, aka endurance). Total endurance is calculated as Capacity * DWPD * 365 * 5 (over a 5-year warranty). So a single 400GB SSD with 3x endurance can do 400*3*365*5 = 2.19 petabytes of writes before it clocks out. Typical consumer and enterprise value drives can only do 0.3 DWPD, so you need to make sure you buy the correct endurance for your use case.

- Forget about throughput. Once you have the data on the disks, throughput generally won't matter for a database your size. You're not going to saturate a 6Gb SAS/SATA backplane -- you're network-limited anyway at 1Gb.

- Access Time is what drives performance. All SSDs pretty much have very similar access times.

- Don't tear your hair out trying to find which drives have the best throughput/IOPS/access times. Anything you put in there will be a lot better than what you have and will make you happy. Focus on Endurance and Price.

- If you want your configuration to be supported by Lenovo, you need to purchase components that are certified by Lenovo. All of your certified / supported options are here. https://lenovopress.com/tips0856-serveraid-m1115 . This doesn't mean drives not on this list won't work - they probably will. However, if you ever have an issue and need to call Lenovo for support, you need to stay inside the fence or they'll disavow you.

- I'd recommend no longer buying IBM/Lenovo servers. They're not innovating in the marketplace, they're the most expensive server vendor out there, and their sales force is pretty much clueless. The pony I'd recommend you ride is Dell: they have the best support in the industry (it's based in the US), their prices are the best, and they have a massive selection of solid state disks at excellent prices. I can usually sell SSDs (1x-3x endurance) for close to what customers pay for 10K RPM drives, and for less than 15K RPM drives.
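The endurance arithmetic above is worth keeping handy as a one-liner (a sketch of the same Capacity * DWPD * 365 * 5 formula; the function name is mine):

```python
def endurance_tb_written(capacity_gb, dwpd, warranty_years=5):
    """Total terabytes writable over the warranty period.

    capacity_gb * dwpd gives GB written per day; multiply by 365 days
    and the warranty length, then divide by 1000 to express it in TB.
    """
    return capacity_gb * dwpd * 365 * warranty_years / 1000

# The 400GB drive at 3 DWPD from the example above:
print(endurance_tb_written(400, 3))  # → 2190.0 TB, i.e. ~2.19 PB
```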
 
Last edited:

Spot on! The machine is an IBM-branded M4. It's already out of warranty though, which is also why I'm looking at upgrade options. Lenovo won't sell me any upgrade parts for it and would rather sell me a new server.

Will consider Dell for our future servers. They traditionally don't have good presence in my country though, which has typically been an HP and IBM dominated scene. Will check as to who carries those.

Thanks for the info on SSDs in general. Not sure how I can compute the endurance I'd need though. Is there any way to check writes made on a RAID card? Also, any cheaper drives you'd recommend to start?
 
That's incredibly strange that Lenovo won't sell you upgrade parts, especially since the x3500 M4 is fairly new. Don't you have any IBM/Lenovo VARs nearby? What country are you located in?

- You can renew the maintenance on that box. It's going to be supported by Lenovo for at least 3-4 more years easy if that's of interest

- One way to look at daily change rate can be to look at your backups. If you do daily incremental backups, how big are the incrementals? Or if you do weekly backups, just take a look at the difference in size and see if you can detect a pattern from day to day or week to week.

- Most DB providers should have a command or tool you can run to monitor read/write statistics on the DB

With a DB of only 400GB, you could pretty much re-write the entire DB 2-3x daily with 1 DWPD SSDs. It sounds like you work for a fairly small business and don't really have a whole lot of change. The drives below are compatible with your server and RAID card and are rated at 2.4 DWPD. They should be more than sufficient for what you're looking to do.
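As a rough sanity check on that claim, the required DWPD can be estimated from daily writes, drive capacity, and array layout (a sketch under my own assumptions: RAID 10 mirroring doubles the writes, striping spreads them across all drives, and flash-level write amplification is ignored):

```python
def required_dwpd(daily_writes_gb, drive_capacity_gb, n_drives, raid10=True):
    """Estimate the DWPD rating a workload demands of each drive.

    In RAID 10 every logical write is mirrored (x2) but also striped
    across the whole array, so per-drive daily writes are
    daily_writes * 2 / n_drives.
    """
    per_drive = daily_writes_gb * (2 if raid10 else 1) / n_drives
    return per_drive / drive_capacity_gb

# Rewriting a 400GB DB twice daily on 4x 800GB drives in RAID 10:
print(required_dwpd(800, 800, 4))  # → 0.5, well under a 2.4 DWPD rating
```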


00AJ365 A571 480GB SATA 2.5" MLC HS Enterprise Value SSD
00AJ370 A572 800GB SATA 2.5" MLC HS Enterprise Value SSD
IOPS read*: 63,000
IOPS write*: 35,000
Sequential read rate†: 425 MBps
Sequential write rate†: 375 MBps
Read latency: 0.5 ms
Write latency: 1.5 ms

I think I'd recommend 3-5 of the 00AJ370 drives. Just one of them will blow away your current drives in performance, so really we're just looking at capacity. So 3 of them in RAID 5, or maybe 4 in RAID 10. Either way, that would give you approx 1.45TB of usable capacity, and if you need more space, just add another drive (or two) to the array. The M1115 can only do RAID 5 with the feature key below:


ServeRAID M1100 Series Zero Cache/RAID 5 Upgrade 81Y4542 A1X1


Also, the M1115 doesn't have a cache on it. If you want a RAID card with cache that will help recover from power failures, get the below two features:


81Y4481 A347 ServeRAID M5110 SAS/SATA Controller
81Y4559 A1WY ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade

And a port-channel with at least 2 links if your switches can handle it is a good idea for redundancy.
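For reference, a port-channel on the Linux side might look like the following iproute2 sketch (interface names and the address are illustrative assumptions; the switch ports must be configured as a matching LACP port-channel):

```shell
# Create an 802.3ad (LACP) bond and enslave two NIC ports to it.
ip link add bond0 type bond mode 802.3ad

# Slaves must be down before they can be enslaved.
ip link set eno1 down
ip link set eno1 master bond0
ip link set eno2 down
ip link set eno2 master bond0

# Bring the bond up and give it the server's address.
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

Note that a single TCP flow still rides one physical link; aggregation helps when multiple users hit the server concurrently.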
 

Either Lenovo won't sell to me, or the resellers claim Lenovo won't sell to me in order to sell me a newer machine. Yes, we're a fairly small company in the SME range. One x3500 M4 is all I have to work with, apart from older out-of-commission towers.

All I get is "Out of Warranty" and nothing we can do about it -- I can't even get a 2nd CPU for this machine.

Thanks for these. Will look into going with the RAID 10 option. I suppose a RAID 5/6 rebuild will be nowhere near as punishing for an SSD-based system as it is for an HDD-based one.

To play devil's advocate and cost issues aside -- is there any reason I should consider staying on these hard disks instead of upgrading?

Also, would there be any way to rebuild an 8 drive RAID 10 into a 4 drive RAID 10 without migrating the whole array? Was looking to do a drop in replacement, pull a disk and have it rebuild...
 
It's really all going to come down to cost. If the Lenovo SSD option is really expensive, then buying new might make sense. Various vendors have varying levels of presence in the world, so I'm not sure how good Dell is where you are. I can tell you that on the attached T430 configuration I just ran for you, the typical customer pricing would be around $9,000. I just configured it as a guess and to give you some ammo.

The 3500M5 customer buy price would be around $11,500


They're functionally equivalent configs (for the most part).

Edit:

I'm not aware of a way to do what you want other than a backup/restore onto the new array. If you had a 4-drive RAID 5, it might be possible, but it sounds like all of your disk slots are full and you need all 4 SSDs in there to create your RAID set.

Might make sense to start running some free virtualization like free ESXi if you aren't already.
 

Attachments

  • Test+T430_1.pdf (60.2 KB)
  • 3500M5 Test.pdf (92.5 KB)
Sorry, forgot to add that -- I'm in the Philippines, SEA. Thanks for this.

I got quoted an x3500 M5 with the same CPU, 8GB of RAM, eight 300GB 15K SAS drives, and an M1200 -- it came out at about $5,600 including tax. It didn't come with an itemized breakdown so I can't do an apples-to-apples comparison on the parts, but this will help me a lot.

BTW -- Thanks so much! You really didn't need to go through the trouble!

Edit: Looking at either 8x 480GB 3520s in RAID 10 or 4x 800GB 3520s in RAID 10.

The 800GBs seem to be the better choice but migration doesn't seem to be easy.
 
Last edited:
The 3520s are a good drive: 1 DWPD at a good price point. If you're buying a new server, I don't see how migration is going to be any different with 4 vs 8 drives? You're still doing a full backup/restore of data.
 
I'm looking to upgrade the server for now... Don't think I'll get the justification to purchase a new server at this point.

Looking at a new/second CPU, RAM and SSDs...
 