SSD vs HDD Lifespans?

Cerulean ([H]F Junkie, joined Jul 27, 2006, 9,476 messages)
SOB, Firefox didn't retain my post after I hit Submit and got a 'page not found' error from losing connectivity.

SSDs have a limited number of writes, whereas HDDs do not.
SSDs have an unlimited number of reads, as do HDDs.

Assuming an HDD would not fail from bumps, "stuffing up," or being the unlucky drive in a batch that fails sooner than the others, but only from eventual mechanical/motor failure, would SSDs stand a chance?

If an SSD failed, you could buy another one, simply clone the old SSD, and continue with daily life as normal. If an HDD failed, you would have to send it to a data recovery specialist to play surgeon (this is not inexpensive).

According to http://hardforum.com/showpost.php?p=1035294711&postcount=2, an Intel SSD could write 100 GB per day for 5 years before expecting write failure. I calculated that an HDD could write almost 5 TB per day at 60 MB/s, but how much the lifespan of an HDD would be affected by doing this 24/7 I don't know. If an SSD wrote 100 GB per day, that would be an average write speed of 1.19 MB/s.
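
Quick sanity check on those numbers (back-of-the-envelope only, assuming 1 GB = 1024 MB):

```
SECONDS_PER_DAY = 24 * 60 * 60            # 86,400 seconds

# HDD writing flat out at 60 MB/s, 24/7:
hdd_mb_per_day = 60 * SECONDS_PER_DAY     # 5,184,000 MB
print(hdd_mb_per_day / 1024 / 1024)       # ~4.94 TB per day

# SSD writing 100 GB per day, averaged over the whole day:
ssd_avg_mb_per_s = 100 * 1024 / SECONDS_PER_DAY
print(round(ssd_avg_mb_per_s, 2))         # ~1.19 MB/s
```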

It would be interesting to set up several different SSDs and HDDs (different brands and capacities) and have them all write at 60 MB/s endlessly (or whatever the highest average speed HDDs can attain) until they died, and see how long they lasted. Hmm.

Discuss. (Please do not stereotype without hard evidence.)
 
That's a useless test to do, because real-life data storage patterns are not like what you describe.
 
It's still possible to completely kill an SSD in less than a week by using the worst-case write pattern. The lifespan of an SSD depends on the type of flash used (SLC/MLC), feature size (90 nm, 65 nm...), and usage pattern. Maximum write cycles differ wildly, from 100k for 90 or 65 nm SLC flash to 3k for 32 nm MLC flash. Writing many small blocks of data to an SSD is worse than writing large blocks.

It's really hard to quantify the lifespan of an SSD except within a really specific usage scenario. HDDs, on the other hand, have large datasets of lifespan numbers. For desktop use they're virtually unlimited; I'm pretty sure I'm not the only one who still has 20 GB HDDs lying around and still uses them for a quick server system or so.
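
As a very rough illustration of how those factors interact (the figures below are assumptions for the sake of example, not any drive's spec):

```
# Illustrative endurance math only; these are NOT datasheet values,
# and datasheet ratings are far more conservative than raw cell math.
capacity_gb  = 80       # drive capacity
p_e_cycles   = 3_000    # assumed per-cell write cycles for 32 nm MLC
write_amp    = 1.5      # write amplification; small random writes push this way up

total_host_writes_gb = capacity_gb * p_e_cycles / write_amp
print(total_host_writes_gb)                  # 160,000 GB of host writes

# At 100 GB/day that's roughly 4.4 years; a pathological small-block
# pattern with write amplification of 20+ eats it in weeks.
print(round(total_host_writes_gb / 100 / 365, 1))
```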
 

If you hammer a single cell with the combined write speed of the entire drive for a week, yeah the cell will fail hard.

HDDs suffer from all the pitfalls of a mechanical system. The bearings and motors have a finite life, as do the actuators. The expected life of an HDD is a few years, except in a few edge cases, like the 20 GB drive that seems to have risen from the grave as some undying shambling monster. I have a 30 GB Maxtor that refuses to die; it just limps along sounding like a coffee grinder as I slowly move data to and from it.

SSDs, assuming there aren't any long-term data stability issues with the flash cells on the newer flash chips, will keep data until the flash chips themselves start to crap out. According to a lot of specs I've read, that's something like 15 years between cell refreshes. And you can throw one down a flight of stairs and it really won't care.
 
Published Datasheet for X25-M on Intel website

warranty = 3 years

20 GB/day host writes for typical client workloads
estimated minimum useful life = 5 years = 5 x 365 x 20 GB = 36,500 GB

The estimate is 5 years, but the warranty is only 3 years. The actual physical GB written is higher; the estimate is lower due to many considerations, including computer hardware/software behavior (note the "client workloads").
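
Turning that around into a daily write budget (rough numbers only, based on the 36,500 GB estimate above):

```
rated_host_writes_gb = 5 * 365 * 20       # 36,500 GB from the estimate above

for years in (3, 5, 10):
    gb_per_day = rated_host_writes_gb / (years * 365)
    print(years, "years ->", round(gb_per_day, 1), "GB/day")
# 3 years -> 33.3 GB/day, 5 years -> 20.0 GB/day, 10 years -> 10.0 GB/day
```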

Edit: for example, DragonFly BSD claims that with tuning and a specific usage pattern based on their special setup, they can push the writes to well over 40 TB.
 
Heh, I killed my 40 GB Maxtor by overheating it. :( It was a stupid move I made by covering the computer with a blanket... under the bed. I killed it just last year; I'd had that drive since like the late 90s.
 
An HDD can last for 100 years, but it can also fail in 1 month. An SSD has a predictable lifetime, and the end of its lifetime is not coupled with data loss. Data loss and SSDs are a strange combination. However, I do think many OCZ SSDs are much less reliable than Intel SSDs. Even though Intel sells its SSDs very well all over the world, the reports of spontaneously failed drives are mostly about OCZ drives.

So what do you want? A drive where you know it's going to fail in ~17 years with your average usage pattern taken into account, OR a drive that can last 100 years but can also fail any minute? And when it fails, an HDD will lose all data, unlike an SSD.

Aside from reaching the maximum write iterations, an SSD can fail if there is a weakness in the firmware. Even Intel was affected by that, in some cases involving BIOS passwords and the specific Intel firmware that introduced TRIM support. Aside from that, you can of course have a dud or weak sample that had a defect from the factory while other samples do not.

Luckily, the system disk houses little personal data; only the data in My Documents et al. The rest is not personal and should not need to be backed up.
 
He likely means "failed" in the sense that the maximum write iterations have been reached and the drive will now switch to being read-only; thus it has failed because you can't use it as a normal drive anymore, but you have not lost any data, or at least shouldn't have.
 
HDDs will lose all data unlike SSDs.
They will not. This is impossible unless you take the platters out and supercharge them with electromagnetism. When you do a low-level format, you can still technically recover the data because that data is still there, just at a weaker magnetic level than before you "erased" it. When you buy a never-before-used HDD, chances are the platters have zero magnetism until you turn it on and begin writing stuff to it; it's in its most 'secure' state when it's fresh.

Hence "low-level" and "secure" formatting.
 
lol, lacking the $1k to $2k that a data recovery service will charge, you can say HDDs will eventually lose all data.
 
I wonder where people get this strange notion that SSDs will never suffer data loss, will never have data corruption from a firmware bug, a loose connection, stray radiation, or Murphy's Law, and will keep a read-only copy available once no more write cycles are left.

As far as I am aware, the capacity of an SSD will start shrinking the moment no more spare cells are available. Even with wear-leveling algorithms it's not as if all cells will run out of write cycles simultaneously. Ever considered how many write cycles it would cost to keep the wear between cells balanced? It'd be like flash suicide :)

At any rate, each write physically damages the flash cell, which makes it very unlikely that one will be able to read the correct data at the end of the write-cycle limit. How would the SSD controller know when a cell can still be read and at what point it will stop being readable? The 3k limit for 32 nm flash is just an average figure, not an absolute number. This seems to suggest that there's no such thing as a 'read-only' mode for SSDs, as I can see no conceivable way to implement it.
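
To illustrate the point (a toy sketch, not how any real controller works): the most a controller can do is count erases per block and retire blocks past some conservative threshold; it has no way to know exactly when a given cell stops returning good data.

```
# Toy model: the controller only knows an erase counter per block,
# not the actual remaining health of the cells inside it.
RATED_CYCLES = 3_000     # average rating, not a guarantee
RETIRE_AT    = 0.9       # retire blocks at 90% of the rating, just in case

erase_counts = {0: 2500, 1: 2750, 2: 2900, 3: 3100}

for block, erases in erase_counts.items():
    if erases >= RATED_CYCLES * RETIRE_AT:
        print("block", block, "retired at", erases, "erases (may already be marginal)")
    else:
        print("block", block, "still in service at", erases, "erases")
```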
 
Normal HDDs do not fail based on usage, so you don't have to worry about how much you use them. If you want to format one twice a day, go right ahead. With SSDs you need to optimize everything to avoid writes as much as possible, and if you RAID them, they'll probably all fail at once since they'll have had equal writes, which will produce downtime. For the price you pay for SSDs to get the same capacity as a normal HDD... they'd better last nearly forever! But they don't.

I've only had a couple of HDDs fail in my 10-ish years of experience. I'd rather have a random chance of failure than a guaranteed death time. At least with random failures, RAID is useful. I don't think RAID with SSDs would work, as they would more or less all fail at once. Though I wonder if some have some kind of failure alarm on them; that could be useful. It would start beeping when it nears end of life. The timer could base the beep on the usage trend, so it doesn't beep too early or too late.
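
Something like that alarm could be approximated by watching the write trend; all the numbers below are made up for illustration:

```
# Made-up numbers; the idea is just to extrapolate from the write trend.
rated_total_writes_gb  = 36_500    # endurance budget for the drive
written_so_far_gb      = 22_000    # host writes so far
recent_gb_per_day      = 25        # average over, say, the last 30 days

days_left = (rated_total_writes_gb - written_so_far_gb) / recent_gb_per_day

if days_left < 90:
    print("BEEP: expect to hit the write limit in about", int(days_left), "days")
else:
    print("OK: about", int(days_left), "days of writes left at the current rate")
```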

Either way, always have backups!
 
@Red_Squirrel

Good point. Which is why, in the current consumer SSD environment, you need to plan ahead. Example below:

RAID-1 mirror pair

Case A1
1. SSD-A Intel X25-E 64GB
2. SSD-B Intel X25-M G2 80GB

or Case A2

1. SSD-A Intel X25-M G2 80GB
2. SSD-B Intel X25-V 40GB


Notice the uneven pairings. The logic:

1. RAID-1 people are mostly after redundancy (uptime protection), not absolute speed.

2. For case A1, the X25-E will likely last much longer, so you know in advance which drive will likely reach write failure first. Pairing drives with different write durability reduces the likelihood of both hitting write failure in close vicinity (in RAID-1, writes technically go to both drives, so identically durable drives could fail fairly close in succession in a standard config).

3. For case A2, the X25-M and X25-V have similar durability expectations stated in the datasheet. So the trick here is an uneven pairing of capacity (case A1 can also do this; the effect is just more evident here). For example, configure case A2 as a 30 GB RAID-1: the X25-V has 10 GB unused to assist wear-leveling and the 80 GB X25-M has 50 GB unused to assist wear-leveling.

Finally, this is what I think, but it is kind of expensive to prove the theory. My unsubstantiated opinion; please double-check with Intel SSD engineers.
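
Rough illustration of the staggering effect; the endurance budgets below are my own illustrative guesses, not Intel datasheet values:

```
# Endurance budgets are illustrative assumptions only; check the real datasheets.
mirror = {
    "X25-E 64GB (SLC)":    2_000_000,   # assumed total host-write budget, GB
    "X25-M G2 80GB (MLC)":    36_500,
}

writes_per_day_gb = 20    # RAID-1 mirrors every write to both members

for drive, budget_gb in mirror.items():
    years = budget_gb / writes_per_day_gb / 365
    print(drive, "-> roughly", round(years, 1), "years to write exhaustion")
# The SLC member outlives the MLC one by a wide margin, so the two
# drives are very unlikely to wear out in close succession.
```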
 
This seems to suggest that there's no such thing as a 'read-only' mode for SSDs as I can see no conceivable way to implement it.

You could very easily have the drive auto-lock into read-only mode the second it runs out of reserve flash (the space set aside that isn't accessible for partitioning).

That would be the safest method for your data, and it would remove the possibility of a mysteriously shrinking drive (which NTFS wouldn't know how to handle anyway).
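
Roughly what I mean, as a simplified sketch of the idea (not any vendor's actual firmware):

```
class ToySSD:
    def __init__(self, spare_blocks):
        self.spare_blocks = spare_blocks
        self.read_only = False

    def retire_block(self):
        # A worn-out block gets swapped for a spare; once no spares are
        # left, the drive locks itself read-only instead of shrinking.
        if self.spare_blocks > 0:
            self.spare_blocks -= 1
        else:
            self.read_only = True

    def write(self, data):
        if self.read_only:
            raise IOError("read-only: reserve flash exhausted")
        # ... normal write path would go here ...

drive = ToySSD(spare_blocks=2)
for _ in range(3):
    drive.retire_block()
print(drive.read_only)    # True: the third retirement found no spare left
```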
 

The only issue I see with this is in, say, a VM environment. It would start a huge write and only get halfway before hitting the limit, and then the VM file is corrupt. So while most of the data may be accessible, you'd end up with at least one corrupt file.

Or does it somehow handle this better, like leaving some reserve write space and "force" failing when the remaining number gets super low?
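
Something like this is what I'm picturing (purely a speculative sketch):

```
# Speculative: stop accepting NEW writes while a small emergency reserve
# of spare blocks is still untouched, so in-flight writes can land intact.
EMERGENCY_RESERVE = 16    # made-up watermark

def accept_new_write(spare_blocks_left, write_in_flight):
    if spare_blocks_left > EMERGENCY_RESERVE:
        return True                 # business as usual
    return write_in_flight          # below the watermark: finish what's started, refuse the rest
```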
 