15K SAS vs SSDs; Enterprise SSD Pricing

mda

Hello all,

Looking to upgrade our server from 8x 300GB Seagate 15K SAS drives in RAID 10 to something a lot faster.

Based on a quick check, the old server is a Sandy Bridge-E Xeon with Seagate Savvio 15K.3 drives.

I was looking at having 4 960GB Micron 5100 PROs in RAID 10. I know this will be faster, but how much faster?

Main use for this is our Oracle-based OLTP database, 350-400GB in size.

Related to the above question, I'm currently speccing out a new server to replace our old one. Regarding enterprise SSD pricing, I have an HP dealer charging me about $2,300 (including tax) per SSD:
HPE 960GB SATA 6G Mixed Use SFF (2.5in). I'm quite sure this is on the high side, but would like confirmation. The actual server being quoted is cheap without the drives, though.

Any input would be a lot of help!

Thanks!
 
You're going to want to go with SSDs if they're monetarily feasible. 15k drives are just slightly faster versions of 7,200rpm and 10k drives, delivering 190-220 IOPS, whereas even old and slower SSDs sustain tens of thousands of IOPS. For databases this means huge gains. On reliability, in my experience with servers using 15k drives, replacing them is routine; they're just about at the physical limit of spinning rust and generate more heat than even 10k drives. I always prefer 10k for this reason, since 15k is not much faster. Not to mention 10k drives can use higher areal density, which sometimes makes them faster in sequential throughput.
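For a rough sense of where that 190-220 IOPS figure comes from, here's a back-of-the-envelope Python sketch. The seek time and SSD latency below are assumed ballpark values, not specs for any particular model:

```python
# Back-of-the-envelope IOPS estimate for a 15,000 rpm drive vs an SSD.
# The seek and latency values are assumed ballpark figures, not specs
# for any particular model.

avg_seek_ms = 3.4                                  # assumed average seek, 2.5" 15k SAS drive
rotational_latency_ms = 0.5 * 60_000 / 15_000      # half a revolution at 15,000 rpm = 2 ms

service_time_ms = avg_seek_ms + rotational_latency_ms
hdd_iops = 1000 / service_time_ms                  # one random I/O per seek + rotation

ssd_read_latency_ms = 0.1                          # assumed ~100 us per random read
ssd_iops_qd1 = 1000 / ssd_read_latency_ms          # with a single outstanding I/O

print(f"15k HDD: ~{hdd_iops:.0f} random IOPS per drive")
print(f"SSD:     ~{ssd_iops_qd1:.0f} random IOPS at queue depth 1, far more with parallelism")
```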
 
Hello mda! Seagate here. We just wanted to throw this your way. Nytro SAS 12Gb/s and Nytro SATA 6Gb/s. Take a look and see if this helps you out.

Will send you a PM in a bit. Not in the USA, though. But thanks for the info! Haven't seen these before.

You're going to want to go with SSDs if they're monetarily feasible. 15k drives are just slightly faster versions of 7,200rpm and 10k drives, delivering 190-220 IOPS, whereas even old and slower SSDs sustain tens of thousands of IOPS. For databases this means huge gains. On reliability, in my experience with servers using 15k drives, replacing them is routine; they're just about at the physical limit of spinning rust and generate more heat than even 10k drives. I always prefer 10k for this reason, since 15k is not much faster. Not to mention 10k drives can use higher areal density, which sometimes makes them faster in sequential throughput.

Yes, I think we can push for these. I'm just unsure of how much an SSD should really cost, or whether I'm getting ripped off by the channel retailers. Any idea on approximate pricing for these? I have my doubts that an entry-level (by HP standards) 960GB HPE drive rated for 1 DWPD should cost me $2,200...
 
The Nytro SSDs look really awesome.

Another thing, though: enterprise SSDs are so reliable that mirroring them is, in my opinion, just a waste of performance, money, and time (it adds complexity). ServeTheHome has statistics on Intel S3700 drives where almost none ever failed. In fact, it's a lot more likely that you'll have to replace all the system fans before you have to replace an SSD.

Instead, you should spend the time on good backup and restore procedures.
 
Will send you a PM in a bit. Not in the USA, though. But thanks for the info! Haven't seen these before.



Yes, I think we can push for these. I'm just unsure of how much an SSD should really cost, or whether I'm getting ripped off by the channel retailers. Any idea on approximate pricing for these? I have my doubts that an entry-level (by HP standards) 960GB HPE drive rated for 1 DWPD should cost me $2,200...

It sounds about right for true SAS SSDs, and mind you, SAS SSDs are even faster than SATA SSDs, but there's a humongous price difference there, as you can see from the quote. For example, about a year and a half ago we ordered R630s with just 200GB SAS SSDs, and each of those was $1k. I get yelled at for talking about consumer-grade SSDs in servers, but everywhere we've used them in RAID, be it 10 or 5, to perk up old servers, they have operated beautifully. As for finding something in between consumer-grade and SAS drives, I haven't looked into it. Keep in mind also that SAS drives deliver more consistent performance at larger workloads and queue depths; the loads I'm dealing with aren't that big, so for your workload it may be best to err on the side of SAS.

Get NVMe SSDs; they're cheaper and faster than SAS.
For hot-swap capability, you'd need U.2 drives or adapters and special controllers. It would probably end up costing about the same as SAS, but it would indeed be even faster.
 
The Nytro SSDs look really awesome.

Another thing, though: enterprise SSDs are so reliable that mirroring them is, in my opinion, just a waste of performance, money, and time (it adds complexity). ServeTheHome has statistics on Intel S3700 drives where almost none ever failed. In fact, it's a lot more likely that you'll have to replace all the system fans before you have to replace an SSD.

Instead, you should spend the time on good backup and restore procedures.
This doesn't exactly work for maintaining uptime. We can't have the server offline, particularly during business hours, while we order a replacement drive, wait for it to arrive, and run the restore procedure. It's a very slim risk with SSDs, but because there is any risk at all, you must prepare like it's a "when", not an "if", scenario. A high-impact, low-probability event deserves the same preparation as a low-impact, high-probability one in this situation.
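To put that "when, not if" point into numbers, here's a purely illustrative Python sketch; every figure in it (failure probability, downtime cost, service life) is made up for the sake of the argument, not real data:

```python
# Illustrative only: the failure probability, downtime cost, and service
# life below are made-up figures to show the reasoning, not real data.

annual_failure_prob = 0.02     # assumed per-drive chance of failing in a given year
drives = 4                     # size of the proposed array
years = 7                      # planned service life of the server
downtime_cost = 50_000         # assumed cost of a day down restoring from backup

# Probability that at least one drive fails at some point over the server's life.
p_at_least_one = 1 - (1 - annual_failure_prob) ** (drives * years)
expected_loss = p_at_least_one * downtime_cost

print(f"P(at least one drive failure over {years} years): {p_at_least_one:.0%}")
print(f"Expected loss with no redundancy: ${expected_loss:,.0f}")
# Even a small yearly probability adds up over the machine's life, which is
# why a failure gets treated as "when", not "if".
```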
 
It sounds about right for true SAS SSDs, and mind you, SAS SSDs are even faster than SATA SSDs, but there's a humongous price difference there, as you can see from the quote. For example, about a year and a half ago we ordered R630s with just 200GB SAS SSDs, and each of those was $1k. I get yelled at for talking about consumer-grade SSDs in servers, but everywhere we've used them in RAID, be it 10 or 5, to perk up old servers, they have operated beautifully. As for finding something in between consumer-grade and SAS drives, I haven't looked into it. Keep in mind also that SAS drives deliver more consistent performance at larger workloads and queue depths; the loads I'm dealing with aren't that big, so for your workload it may be best to err on the side of SAS.

Thanks. But these are SATA SSDs, though! For comparison's sake, I checked the Lenovo online configurator and specced out an ST550 tower: the price for a Micron 5100 PRO 960GB was in the $1,000-1,200 range. Not sure what's so good about these HPE drives that they need to cost almost double. That, and there seems to be no background info on who actually makes them for HPE.
 
This doesn't exactly work for maintaining uptime. We can't have the server offline, particularly during business hours, while we order a replacement drive, wait for it to arrive, and run the restore procedure. It's a very slim risk with SSDs, but because there is any risk at all, you must prepare like it's a "when", not an "if", scenario. A high-impact, low-probability event deserves the same preparation as a low-impact, high-probability one in this situation.

There are thousands of other reasons that machine will go offline before the SSD fails. If uptime is important, you need some kind of system-level redundancy. Mirroring SSDs is still a poor solution.
 
Thanks. But these are SATA SSDs, though! For comparison's sake, I checked the Lenovo online configurator and specced out an ST550 tower: the price for a Micron 5100 PRO 960GB was in the $1,000-1,200 range. Not sure what's so good about these HPE drives that they need to cost almost double. That, and there seems to be no background info on who actually makes them for HPE.
Oh, well then yeah, that's too much. "Enterprise" SATA SSDs are not in the same league as SAS SSDs. Check out the HGST SS200: around $1,100 for each 800GB drive. Micron is great too (even if it is SATA), so go with what suits your situation!

There are thousands of other reasons that machine will go offline before the SSD fails. If uptime is important, you need some kind of system-level redundancy. Mirroring SSDs is still a poor solution.
Again, the scenario where a single drive fails is still a reality, and you can't have the business down while securing a replacement and restoring data from the previous evening (the morning's work gone). The whole point of redundancy is keeping things running, and using single drives is foolish in this case. SSDs can and do fail; in fact, one of the aforementioned Toshiba SAS SSDs in our R630s did fail within a year, but because it was in an array and under warranty, it was not a showstopper.
 
Unfortunately I'll need the warranty so that's probably out of the question.

Got a quote on a Lenovo rack server for about half of what the other vendor was offering... Same CPU and SSDs, and the Lenovo had double the RAM AND a hot-spare SSD. Either HP or the vendor is making a killing off these servers :|
 
Now throw that Lenovo quote back to HP and see if you can start a pissing contest for lowest price.
 
The funny thing is that I've dealt with the HP vendor before for other products... The Lenovo offer arrived two days after I sent an email inquiry to their office.
 
Mirroring SSDs is still a poor solution.

Horrible advice.
I manage a ton of servers with “Enterprise” SSDs. They still fail. I’ve seen controllers just give up the ghost *poof* and the drive disappears.
Doesn’t happen often, but just like spinning discs, the controllers can fail.
Don’t throw rules of RAID out the window because of this guy ^^^
I don’t build in single points of failure in my critical systems - most people don’t.
 
New semi related question...

I'm picking between a Lenovo 530 RAID card and a 930.

Obviously, one has cache and the other one doesn't.

How much performance difference for database work will I get from moving from the cheaper one to the more expensive one?

I'll be running 4 960GB drives in RAID10.
 
New semi related question...

I'm picking between a Lenovo 530 RAID card and a 930.

Obviously, one has cache and the other one doesn't.

How much performance difference for database work will I get from moving from the cheaper one to the more expensive one?

I'll be running 4 960GB drives in RAID10.

For most business/enterprise solutions, I always recommend going with a cached controller, especially for something as critical as an Oracle database... unless the server is set up to auto-archive constantly, or multiple times throughout each day, to a separate medium. I get that literally double the price may be hard to swallow, but what is the overall business cost to rebuild and recover in the event of a RAID failure? Likely much more than the price difference between these two controllers.

Have you looked at cached SAS controllers from Adaptec, LSI, Areca, HPE, or SuperMicro?
 
For most business/enterprise solutions, I always recommend going with a cached controller, especially for something as critical as an Oracle database... unless the server is set up to auto-archive constantly, or multiple times throughout each day, to a separate medium. I get that literally double the price may be hard to swallow, but what is the overall business cost to rebuild and recover in the event of a RAID failure? Likely much more than the price difference between these two controllers.

Have you looked at cached SAS controllers from Adaptec, LSI, Areca, HPE, or SuperMicro?
For mechanical drives I agree, but for SSDs, controller cache can actually reduce performance. PERC (LSI-based) controllers feature Cut-Through IO (CTIO), which is enabled when both read and write cache are disabled; this bypasses the RAID driver stack and is preferable for SSDs. What I look for is the ability to create custom-size virtual disks, which tends to come with cached controllers, and then I enable the cache only in certain circumstances. These more expensive controllers, at least in the case of PERCs, usually feature deeper queue depths as well, which is beneficial for SSDs. I would always recommend going for the most capable controller the budget allows, and perhaps scaling back on drive spend.
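As a rough illustration of why deeper queue depths matter for SSDs, here's a small Python sketch based on Little's law (sustained IOPS is roughly outstanding I/Os divided by average latency); the latency figure is an assumed ballpark, not a measured number:

```python
# Little's law sketch: sustained IOPS ~= outstanding I/Os / average latency.
# The latency is an assumed ballpark figure; real drives flatten out once
# their internal parallelism and the controller's own limits are reached.

avg_latency_s = 0.0001   # assumed 100 us per random read

for queue_depth in (1, 8, 32, 256):
    iops = queue_depth / avg_latency_s
    print(f"QD{queue_depth:>3}: up to ~{iops:,.0f} IOPS (before drive/controller limits)")
```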
 
How about a hybrid: two 15k SAS drives mirrored for the OS boot volume, then fill the rest of the server with SSDs and RAID 5 them. OLTP usually means a highly random-IO database (I've personally managed 65TB OLTP DBs). I know the RAID 5 part won't be popular with some folks, but RAID 1 is in some instances a waste of space with SSDs, and SSDs in RAID 5 run pretty well. The only way to figure out if the performance will be right is to run IOMeter against a mirrored set and then against a RAID 5 set.
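IOMeter on the actual hardware is the real answer, but as a first-pass estimate, here's a small Python sketch of the classic RAID write-penalty arithmetic for 4x 960GB SSDs in RAID 10 vs RAID 5. The per-drive IOPS figure is assumed, so only the relative difference is meaningful:

```python
# First-pass comparison of 4x 960 GB SSDs in RAID 10 vs RAID 5 using the
# classic write-penalty rule of thumb. The per-drive IOPS number is an
# assumed figure; only the relative difference matters here.

drive_count = 4
drive_gb = 960
drive_write_iops = 30_000   # assumed steady-state random write IOPS per SSD

def summarize(level, usable_gb, write_penalty):
    # write_penalty = back-end I/Os generated per host write
    # (RAID 10: 2 writes; RAID 5: 2 reads + 2 writes).
    write_iops = drive_count * drive_write_iops / write_penalty
    print(f"{level}: {usable_gb} GB usable, ~{write_iops:,.0f} random write IOPS")

summarize("RAID 10", drive_gb * drive_count // 2, write_penalty=2)
summarize("RAID 5 ", drive_gb * (drive_count - 1), write_penalty=4)
```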
 
For mechanical drives I agree, but for SSDs, controller cache can actually reduce performance. PERC (LSI-based) controllers feature Cut-Through IO (CTIO), which is enabled when both read and write cache are disabled; this bypasses the RAID driver stack and is preferable for SSDs. What I look for is the ability to create custom-size virtual disks, which tends to come with cached controllers, and then I enable the cache only in certain circumstances. These more expensive controllers, at least in the case of PERCs, usually feature deeper queue depths as well, which is beneficial for SSDs. I would always recommend going for the most capable controller the budget allows, and perhaps scaling back on drive spend.

I agree with getting the most controller the money allows... I will still recommend a cached controller, because repurposing happens and spindle drives may wind up on it in the future. It also helps resale value if it ever gets replaced with a controller featuring more ports.


What are your thoughts on the LSI Nytro controllers?
- the claim is that Nytro cache modules add a ton of performance, even with an SSD array
 
The HP drives are hugely expensive because of the HP markup: extra profit for putting their own custom "validated" firmware on them.
If you have to buy brand new, then you're stuck, but great deals can be found on used ones.

If you don't have to use HP drives (normally you don't), then you can get the same models for way cheaper and they'll behave about the same (possibly with some warnings about a non-"official" drive in your management software).

If your controller is a 6Gb/s SAS controller, you won't see much difference in performance between the high-end SATA drives (such as the Intel S3710) and the higher-end SAS ones.
If you have a 12Gb/s SAS controller, then the latest and greatest high-speed 12Gb/s SAS drives will give you nearly double the throughput.
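For the throughput claim, a quick sketch of the per-lane math: both 6Gb/s and 12Gb/s SAS links use 8b/10b encoding, so usable bandwidth is roughly the line rate times 0.8.

```python
# Per-lane sequential throughput limits for 6 Gb/s vs 12 Gb/s links.
# Both use 8b/10b encoding, so usable bandwidth is ~80% of the line rate;
# protocol overhead shaves a bit more off in practice.

for name, line_rate_gbps in (("SATA/SAS 6 Gb/s", 6), ("SAS 12 Gb/s", 12)):
    usable_mb_per_s = line_rate_gbps * 1e9 * 0.8 / 8 / 1e6   # bits -> bytes
    print(f"{name}: ~{usable_mb_per_s:.0f} MB/s per lane")
```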

Make sure you have good high end enterprise class SSDs, the cheap desktop / consumer grade ones will cause you no end of grief.

While SSDs very rarely fail, they do fail sometimes; usually it's a problem with the controller circuit, and then that's it.

The NVMe SSDs can be way faster (if your slot supports it), but any redundancy will need to be done via software.

For the ideal setup, use three different SSDs: one each for data, indexes, and transaction logs. You could put the transaction logs on an NVMe PCIe drive, as you may not need those to be redundant.

The speed improvement in a highly used database is amazing when jumping to high end SSDs.
 
Thanks. I plan to buy everything, including the RAID card and the SSDs, from the server vendor, regardless of which vendor we end up choosing. HP / Dell / Lenovo are still on the table, although it seems most of the Lenovo resellers are giving me prices 20-40% cheaper than similarly specced Dells or HPs.

Likely going with Micron 5100 PROs or Intel S4600 drives, depending on the price difference between them. If the price difference isn't too big, I'm also leaning toward a cached controller, if only for the lower chance of data loss...

Not planning such a big array (our DB is just ~400GB after about 8 years), and we're targeting this machine to keep us afloat for another 5-8 years. Given that, I'm probably sticking with RAID 10, since RAID 5 would offer less redundancy and we don't really need the space, assuming I go for 4x 960GB drives.

Thanks again for the inputs!
 
Considered getting 6-8 480-512GB SSDs and going with nested RAID 15 (if you have the rack space)?

Not sure if the controller would natively support it...
 
Considered getting 6-8 480-512GB SSDs and going with nested RAID 15 (if you have the rack space)?

Not sure if the controller would natively support it...

Unfortunately not natively supported.

According to the RAID controller's spec sheet, it will only do 0/1/10/5/50/6/60.

Better for me too. It took the vendors about a week or two to get back to me with a revised quote when all I asked to change was the SSDs... I don't know why it should take more than 2 days to give me a revised / itemized figure for a different type of SSD.

Make sure you have good high end enterprise class SSDs, the cheap desktop / consumer grade ones will cause you no end of grief.

On this topic but for a different use/machine: I was planning to replace some WD Black 2TB drives with Samsung 860 Evo 1TB drives in a clone PC "server" running MySQL (<100GB database), set up with CentOS and Linux mdadm RAID. I suppose there shouldn't be much of a problem with this?
 
Unfortunately not natively supported.

According to the RAID controller's spec sheet, it will only do 0/1/10/5/50/6/60.

Better for me too. It took the vendors about a week or two to get back to me with a revised quote when all I asked to change was the SSDs... I don't know why it should take more than 2 days to give me a revised / itemized figure for a different type of SSD.



On this topic but for a different use/machine: I was planning to replace some WD Black 2TB drives with Samsung 860 Evo 1TB drives in a clone PC "server" running MySQL (<100GB database), set up with CentOS and Linux mdadm RAID. I suppose there shouldn't be much of a problem with this?


Ah, ok. RAID 15 would have to be attained with dual channels/controllers, with the mirroring done at the software level, then... not worth it for a business-critical database, IMO.

An Evo SSD in your clone PC should be just fine. Also consider the new Crucial MX500 (stout TBW endurance and a 5-year warranty for quite a bit less than the Samsung).
 