Quick Noobish question: Enterprise VS Consumer SSDs

Morphes

Supreme [H]ardness
Joined
Jul 16, 2001
Messages
4,336
Looking for a new SSD on newegg and just trying to figure out what the difference is between enterprise SSDs and consumer grade. Is it just a longer warranty, or are they actually faster, or what?

Thanks for the help. :)
 
Enterprise drives are not generally faster but have a longer lifetime / increased write endurance.
 
Most enterprise SSDs use SLC, which is better than MLC: faster, longer lifespan, more reliable, more expensive.

If you want the best there is and you have a LOT of money, go enterprise grade.
 
Aside from the longer write endurance, most enterprise SSDs have capacitors that can flush whatever's in the DRAM cache to the NAND in the event of a sudden loss of power.
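That endurance difference is easy to put numbers on. A back-of-envelope sketch (all ratings here are made up for illustration, not from any specific drive's datasheet):

```python
# Rough SSD lifetime estimate from a rated write endurance (TBW).
# All figures below are hypothetical examples, not real datasheets.

def lifetime_years(tbw, daily_writes_gb):
    """Years until the rated terabytes-written (TBW) is exhausted."""
    return (tbw * 1024) / (daily_writes_gb * 365)

consumer_tbw = 150      # e.g. a typical consumer-class rating, in TB written
enterprise_tbw = 7300   # e.g. a high-endurance enterprise rating
daily_writes = 500      # GB written per day by a heavy server workload

print(f"consumer:   {lifetime_years(consumer_tbw, daily_writes):.1f} years")
print(f"enterprise: {lifetime_years(enterprise_tbw, daily_writes):.1f} years")
```

Under a heavy write load the consumer drive's rating runs out in under a year, while the enterprise rating lasts decades; that gap is most of what you pay for.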
 
Most enterprise SSDs use SLC, which is better than MLC: faster, longer lifespan, more reliable, more expensive.

If you want the best there is and you have a LOT of money, go enterprise grade.

Aren't a lot of enterprise drives using eMLC currently and moving away from using SLC?
 
The best reason to go to an enterprise grade SSD is if you want SAS.
 
Enterprise is about reliability, not performance. What would you prefer if you had a multi-million dollar business running on your servers: fast but unreliable servers, or slower but rock-solid servers? Reliability costs a lot, more than performance does.

For instance, IBM mainframes are very reliable. Some of them perform each calculation in parallel, and if one CPU calculates wrong, that CPU is taken offline. Such tailor-made solutions cost a lot.

At the same time, IBM mainframes are very slow CPU-wise. Any decent high-end x86 CPU is more than twice as fast as the fastest mainframe CPU, even though the mainframe CPU runs at 5.26 GHz and has 200 MB of L3 cache. But mainframes are very reliable, and that is the reason they cost tens of millions of USD, and that is the reason they handle billions of USD worth of banking transactions.

The same goes for RISC Unix such as IBM POWER, Oracle/Sun SPARC, and HP Itanium: they are very reliable and not as buggy as x86. They have a lot of functionality to increase reliability, which is called RAS (Reliability, Availability, Serviceability).

Thus, enterprise SSDs are reliable and can run 24/7, whereas a cheap one might be faster, but if you try to run it for extended periods it will just break. And it might have bugs.
 
At the same time, IBM mainframes are very slow CPU-wise. Any decent high-end x86 CPU is more than twice as fast as the fastest mainframe CPU, even though the mainframe CPU runs at 5.26 GHz and has 200 MB of L3 cache. But mainframes are very reliable, and that is the reason they cost tens of millions of USD, and that is the reason they handle billions of USD worth of banking transactions.
Comparing the processing power of each CPU in a vector processing array to the processing power of a CPU in a traditional scalar computer is silly (really I think you know better than that). The mainframe is built to scale efficiently for many CPUs, while the stand-alone computer is built to run a single CPU as efficiently as possible.

Mainframes don't cost a lot of money because they are reliable. They are reliable because they cost a lot of money and no reasonable person would want that money to be wasted on something that doesn't work.

Furthermore, let's be honest - most of this has to do with sunk costs and the risks involved in rewriting old software. If new mainframes couldn't emulate old mainframes, would there even be a sustainable mainframe market?

Thus, enterprise SSDs are reliable and can run 24/7, whereas a cheap one might be faster
Oh, nonsense... real "enterprise" RAM/NAND arrays are much faster than any single consumer SSD. When Kingston puts the "enterprise" label on one of their SATA SSDs and jacks up the price, they're just taking advantage of the human tendency to be risk-averse where risks are small and reckless where risks are large.
 
Comparing the processing power of each CPU in a vector processing array to the processing power of a CPU in a traditional scalar computer is silly (really I think you know better than that). The mainframe is built to scale efficiently for many CPUs, while the stand-alone computer is built to run a single CPU as efficiently as possible.

You and brutalizer are talking about two different kinds of machines. You are talking about HPC supercomputers not mainframes. The mainframes brutalizer is talking about (ones running banking systems) are not massive numbers of vector processors. Parallel vector calculations are great for running weather simulations and for modeling nuclear explosions but they are not very useful for running banking code.

Brutalizer is right with regard to reliability being the main feature. Practically every part of these machines can fail and be replaced without taking the machine down. That doesn't come by accident. It's a key consideration in the design of the system.
 
You and brutalizer are talking about two different kinds of machines. You are talking about HPC supercomputers not mainframes. The mainframes brutalizer is talking about (ones running banking systems) are not massive numbers of vector processors. Parallel vector calculations are great for running weather simulations and for modeling nuclear explosions but they are not very useful for running banking code.
Ok, but those mainframes are running code that was originally written for other mainframes that (at the time) had to perform massively parallel I/O (back when transaction processing was considered something exotic). If new mainframes could not emulate old mainframes, how many of these would get sold?

Brutalizer is right with regard to reliability being the main feature. Practically every part of these machines can fail and be replaced without taking the machine down.
Are you suggesting that it's impossible (or too hard) to write client/server software for x86 that performs redundant calculations and doesn't go down every time a machine needs to be replaced?
 
Today's mainframe customers are well aware of client/server architectures. It's not like they were transported here in a time machine from 1970 and don't know anything else. The same organizations that use mainframes also use client/server, SaaS, and cloud architectures.

There's a place for mainframes in 2012, and it's not solely about running a legacy codebase. It is extremely difficult to scale realtime transaction processing across large numbers of small machines; mainframes do that job well. They're also used to run VMs for server consolidation: you might be able to run 10-20 VMs on a big x86 box, but you can run hundreds on a single mainframe. In both of these roles, organizations can be confident that their mainframe will run 24/7/365 from the day it's installed until the day it's decommissioned. And no, x86 virtualization (ESX) does not provide even remotely comparable reliability and availability, not even in the same ballpark.
 
Today's mainframe customers are well aware of client/server architectures. It's not like they were transported here in a time machine from 1970 and don't know anything else. The same organizations that use mainframes also use client/server, SaaS, and cloud architectures.
OK...then why is the primary market 3rd world banks that want to acquire/be acquired by 1st world banks?

And no, x86 virtualization (ESX) does not provide even remotely comparable reliability and availability, not even in the same ballpark.
Right... then why has flash trading forced every stock exchange to abandon the mainframe for x86 client/server and FPGAs? I have a feeling that "reliability and availability" translates to something along the lines of "middle management is so dysfunctional that Finance and Billing can't agree on how to share a limited resource, and top management is too busy siphoning off as many of the company's assets as possible to make a decision, so can we have a mainframe please?"
 
Comparing the processing power of each CPU in a vector processing array to the processing power of a CPU in a traditional scalar computer is silly (really I think you know better than that). The mainframe is built to scale efficiently for many CPUs, while the stand-alone computer is built to run a single CPU as efficiently as possible.
Mainframe CPUs are different from vector CPUs. Vector CPUs are built for numerical calculations, i.e. HPC supercomputer work. Mainframe CPUs are not tailored to that and perform very badly at numerical calculations; any high-end x86 is many times faster than the fastest mainframe CPU at such work. A mainframe is not an HPC server; it is more of an SMP server, and SMP servers are very different from HPC servers. Typically, an HPC server is a cluster: a bunch of PCs on a fast network that can scale to thousands of nodes. For instance, Google uses such a network of around 900,000 servers to serve searches.

An SMP server is different. It is typically one single big fat server with as many as 32 CPUs, or even up to 64 CPUs; the biggest SMP server on the market today has 64 CPUs. They can handle workloads that HPC clusters cannot. IBM offers a 32-CPU Unix server called the P795 with POWER7 CPUs, HP offers a 32-CPU Itanium server called Superdome, and Oracle offers a 64-CPU SPARC server called the M9000. IBM's largest mainframe, the z196, has 24 CPUs; the z10 mainframe had 64 CPUs and gave 28,000 MIPS.

SMP servers can weigh a ton and cost several million dollars. For instance, the old IBM P595 used for the old TPC-C world record had a list price of $35 million.

HPC servers are many small PCs, each costing very little, put together on a network. All Top500 supercomputers are HPC: big clusters, typically running Linux, with thousands of nodes sporting millions of cores.

There are no big Linux SMP servers with 32 CPUs. The biggest Linux SMP server has 8 sockets, I think; for instance, HP sells an 8-socket x86 server that Linux can run on. I don't know of any 16-CPU x86 server; x86 scales up to 8 sockets today. If you want to go beyond 8 CPUs, you need to go to SPARC/POWER/Itanium or mainframes, and then you use Unix OSes or mainframe z/OS, not Linux. Linux is not built to go beyond 8 CPUs and has problems scaling past that on an SMP server. Of course, on an HPC cluster, Linux scales very well, up to thousands of CPUs.

For instance, here is one person who ported Linux to mainframes. He compared Linux on x86 and on mainframes and concluded that one mainframe MIPS equals about 4 MHz of x86 (mainframes measure performance in MIPS). The biggest and newest z196 mainframe, sporting 24 of those CPUs, gives 52,000 MIPS in total. How many MHz does that correspond to? You do the numbers. Don't forget that one high-end x86 with 10 cores, each running at 2 GHz, gives 10 x 2 GHz = 20,000 MHz.
http://www.mail-archive.com/[email protected]/msg18587.html

Software emulation of an IBM mainframe, using the open source TurboHercules emulator, gives 3,200 MIPS on an x86 server. Don't forget that emulation is 5-10x slower than running native code; thus the x86 server should give 5-10x higher performance if the code were ported instead of emulated.
http://en.wikipedia.org/wiki/TurboHercules#Performance
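Taking the rule of thumb above (1 mainframe MIPS ≈ 4 MHz of x86) at face value, the arithmetic works out as follows; all figures come from the posts above and should be treated as ballpark estimates, not benchmarks:

```python
# Ballpark comparison using the thread's own figures; the 4 MHz/MIPS
# conversion is a rough rule of thumb, not a measured benchmark.

MHZ_PER_MIPS = 4

z196_total_mips = 52_000                          # full z196, 24 CPUs
z196_mhz_equiv = z196_total_mips * MHZ_PER_MIPS   # 208,000 "x86 MHz" in total
per_cpu_mhz_equiv = z196_mhz_equiv / 24           # ~8,667 MHz per mainframe CPU

x86_chip_mhz = 10 * 2_000                         # one 10-core 2 GHz x86 chip

# TurboHercules emulation reaches ~3,200 MIPS on x86; if emulation is
# taken as 5-10x slower than native code, a ported workload might manage:
native_mips_estimate = (3_200 * 5, 3_200 * 10)    # 16,000-32,000 MIPS

print(z196_mhz_equiv, round(per_cpu_mhz_equiv), x86_chip_mhz, native_mips_estimate)
```

So by this crude conversion, a single 10-core x86 chip (20,000 MHz aggregate) lands well above one mainframe CPU (~8,700 MHz-equivalent), while the whole 24-CPU machine still totals more than any one chip.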

Mainframes don't cost a lot of money because they are reliable. They are reliable because they cost a lot of money
I don't agree with this. There are cases where a lot of money was poured into a project and it failed: it cost a lot, but that does not mean it was reliable. Just pouring in money does not produce reliability; maybe the objective was something other than RAS, such as performance. x86 has very high performance but is not reliable and is quite buggy. A system needs to be designed for RAS to achieve RAS. Money alone will not help.

Furthermore, let's be honest - most of this has to do with sunk costs and the risks involved in rewriting old software. If new mainframes couldn't emulate old mainframes, would there even be a sustainable mainframe market?
Yes, this is very true. There is a lot of vendor lock-in. Once you go mainframe, you cannot exit easily. Mainframes are extremely expensive, and the only way to recoup that investment is to go all-in on mainframes; thus you are stuck with them. No new companies bet on mainframes today. Only old, locked-in companies that already have mainframes still use them. If a company has no mainframes, it will not start using them.

Oh, nonsense... real "enterprise" RAM/NAND arrays are much faster than any single consumer SSD. When Kingston puts the "enterprise" label on one of their SATA SSDs and jacks up the price, they're just taking advantage of the human tendency to be risk-averse where risks are small and reckless where risks are large.
Well, enterprise gear is more tested and more reliable. That extensive testing, and being built for RAS, costs a lot of money.

Ok, but those mainframes are running code that was originally written for other mainframes that (at the time) had to perform massively parallel I/O (back when transaction processing was considered something exotic). If new mainframes could not emulate old mainframes, how many of these would get sold?
Not a single one. Today's servers can easily replace mainframes. The only reason there are still a lot of mainframes on the market today is vendor lock-in: old shops continue to buy, new shops never go the mainframe route.

Are you suggesting that it's impossible (or too hard) to write client/server software for x86 that performs redundant calculations and doesn't go down every time a machine needs to be replaced?
Some workloads are not easy to handle the way you describe on a cluster. Some workloads are better suited to mainframes, and some are better suited to HPC clusters.

OK...then why is the primary market 3rd world banks that want to acquire/be acquired by 1st world banks?
I did not get this. What are you referring to?

Right... then why has flash trading forced every stock exchange to abandon the mainframe for x86 client/server and FPGAs?
I don't know of any stock exchange using mainframes; their performance is too bad. x86 has low latency; mainframes do not.

All modern, fast exchanges use x86 today, running Linux (and Solaris) in a cluster. For instance, the NASDAQ stock exchange uses x86 servers running Linux, and the stock system is developed in Java. They have the lowest latency, around 100 microseconds, and throughput of up to a million orders/second. Thus, garbage collection in Java is not a problem: you can reach world-class performance if you know how to cope with it. No need for C/C++; Java is well suited to server backend work.
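For a sense of what those two figures imply together, Little's law (L = λW) relates throughput and latency to the average number of orders in flight. A quick sketch using the numbers quoted above:

```python
# Little's law sketch: average number of orders in flight at once,
# given the latency/throughput figures quoted for NASDAQ-class
# exchanges (~100 microseconds latency, ~1 million orders/second).

throughput = 1_000_000   # orders per second (lambda)
latency = 100e-6         # seconds per order, end to end (W)

in_flight = throughput * latency   # Little's law: L = lambda * W, ~100
print(in_flight)
```

Only around a hundred orders are in flight at any instant, which is why a carefully tuned Java service (small working set, GC pauses kept off the critical path) can hit those numbers.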
 