You know how they said SSDs would wear out

JHefile
Well, mine never did. I used it and abused it.

Look how many hours and power-on counts I have on my SSD, then tell me I can't use it because it's as tender as a baby deer.

Capture.JPG
 
Well, not as many reads and writes and a slightly lower power-on count, but nearly double the power-on hours. This Intel 320 Series SSD was my first SSD. It has since been relegated to scratch disk duty (Temp folder, Mozilla cache, swap file, etc.) but it still sees a lot of work.

Intel SSD.jpg
 
I should run a benchmark on my first-generation Patriot SSD. It has ZERO internal garbage collection and predates TRIM. It might get about 20 MB/s.

Never died, but slow as shit now.
 
This is from May of this year; these are my oldest SSD drives.
crucial-128-256-ssd-life-5-2016.jpg
 
This is from May of this year; these are my oldest SSD drives.


I wonder if it's because one drive is still at 100%, so the calculation can't project correctly. But how can a drive with twice the capacity have half the lifespan?

I can't see a modern 256GB drive only having a projected lifespan of 7 years (based on writes alone).
 
I wonder if it's because one drive is still at 100%, so the calculation can't project correctly. But how can a drive with twice the capacity have half the lifespan?

I can't see a modern 256GB drive only having a projected lifespan of 7 years (based on writes alone).
The life remaining on the 128GB is wrong; it's less than 7 years. The program sometimes misreads that for some reason.

crucial-128-256-ssd-life.jpg


The 256 had some of my games on it, which is why it's at 100%; there are hardly any writes to the drive, just the initial installs and the occasional updates.

The 128 is now in one of my laptops; I'll read it again and see what it looks like. The 256 was sold with my 3770K gaming machine this past summer, so that drive is gone.
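For what it's worth, the projection these tools show is usually just rated endurance divided by the observed write rate, which is why a drive still sitting at 100% health can't be extrapolated sensibly. A back-of-the-envelope sketch in Python (the 72 TBW endurance rating is a made-up example; check your drive's spec sheet):

```python
# Rough SSD lifespan projection from total host writes and power-on time.
# rated_tbw is an assumption -- look up the endurance rating for your model.

def projected_years(total_writes_tb: float, power_on_hours: float,
                    rated_tbw: float) -> float:
    """Years of life left, extrapolating the observed write rate."""
    tb_per_hour = total_writes_tb / power_on_hours
    remaining_tb = rated_tbw - total_writes_tb
    return remaining_tb / tb_per_hour / (24 * 365)

# Example: a drive with 20 TB written over 15,000 power-on hours,
# assuming a (hypothetical) 72 TBW endurance rating.
print(f"{projected_years(20, 15_000, 72):.1f} years")  # ~4.5 years
```

On that math, a bigger drive only projects longer if it has a proportionally higher endurance rating, so the odd "half the lifespan" reading is likely just the 100%-health drive breaking the extrapolation.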
 
Bah, come back when you have 100k hours on it.

For comparison, I just checked a WD Raptor in my PC ...

hdd.jpg
 
My M4 died on the latest firmware after about 3 years in.

It was sitting at 50-60% free space with very low writes all throughout its life too.
 
Bah, come back when you have 100k hours on it.

For comparison, I just checked a WD Raptor in my PC ...

Well, the topic is about SSDs, but since you brought up spinners: I have a 1TB WD with over 44k POH in my system right now, and several 500GB WD-REs well in excess of 50k POH that are now relegated to backup duty.

"My WD's older than your WD..." (sung to the Ken L Ration jingle)

My M4 died on the latest firmware after about 3 years in.

It was sitting at 50-60% free space with very low writes all throughout its life too.

My boot drive is a Samsung 840 Pro with about 18k hours on it. There has been a firmware update available for about two years now, but I refuse to apply it because of reports of firmware updates bricking the drives. I'm not having problems with it, so no update. I'm probably going to build a new system after the first of the year with a new M.2 boot drive, so maybe I'll update the Sammy then. If it bricks, oh well...
 
My boot drive is a Samsung 840 Pro with about 18k hours on it. There has been a firmware update available for about two years now, but I refuse to apply it because of reports of firmware updates bricking the drives. I'm not having problems with it, so no update. I'm probably going to build a new system after the first of the year with a new M.2 boot drive, so maybe I'll update the Sammy then. If it bricks, oh well...

Well, the last few firmware updates for the M4 were supposed to fix errors that would have eventually bricked the SSD, so... ;)

That said, I also have a 256GB 840 Pro that is getting lots of time, even more than the M4. It is on the latest firmware. I have yet to have a Samsung SSD die on me.
 
My Samsung 830 128GB is well past 12TB of writes and works just as well as when I bought it.
 
What has that got to do with wear though?

I use my SSD as a sanding block as well. Nah, I just meant that people said in the beginning that SSDs would wear out faster, and lots of people were worried about writing to blocks too many times. I'm just saying "use it like any other drive," which was a quote I heard many years ago.
 
I should run a benchmark on my first-generation Patriot SSD. It has ZERO internal garbage collection and predates TRIM. It might get about 20 MB/s.

Never died, but slow as shit now.

I have an old Patriot Pyro 120GB that I used for years. Then one day it just decided to start going really, really slow. I did a secure wipe and updated the firmware, but it didn't help at all.

The health on the drive is reported as good.

My guess is that something like the onboard RAM cache died.
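Side note for drives that do support TRIM (it wouldn't help a pre-TRIM Patriot, and a dead cache is a different failure entirely): on Linux you can check whether a drive advertises discard and kick off a manual trim pass. A minimal sketch using the standard util-linux tools; the device path and mount point are examples:

```python
# Check discard (TRIM) support and run a manual trim pass on Linux.
# Requires util-linux (lsblk, fstrim) and root; paths below are examples.
import subprocess

# Non-zero DISC-GRAN / DISC-MAX columns mean the device advertises discard.
subprocess.run(["lsblk", "--discard", "/dev/sda"], check=True)

# Trim all free space on a mounted filesystem
# (this is what the weekly fstrim.timer does on most distros).
subprocess.run(["fstrim", "--verbose", "/"], check=True)
```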
 
I use my SSD as a sanding block as well. Nah, I just meant that people said in the beginning that SSDs would wear out faster, and lots of people were worried about writing to blocks too many times. I'm just saying "use it like any other drive," which was a quote I heard many years ago.
Many early ones were not like this, though.
OCZ drives were notoriously bad; so bad, the company folded.
I used one that started reallocating sectors without much use.

Some models worked well from the start, and newer drives hold up well now.
Hindsight is great, but we were working with a relative unknown.
Caution was well advised for quite some time after launch.
The same should apply to any new tech, especially when it can result in direct loss of data.
 
BxVRlF2.png


Still my main system drive as of today (Intel SSD 320 40GB). A bit slow at writing (~40 MB/s), but it's no issue for my use, and it hasn't noticeably slowed down since day 1 (I ran a benchmark not a week ago; still a perfect score). I have plenty of HDDs with over 30k hours, but that's not an achievement :p

(I highlighted the size because it matters when it comes to SSD wear.)
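To put a rough number on why size matters: the same host writes get spread across however much NAND the drive has, so per-cell wear scales roughly inversely with capacity. A quick sketch; the write amplification factor of 2.0 is an assumed round number, and real values depend on workload and controller:

```python
# Per-cell wear for the same host writes on different drive capacities.
# waf (write amplification factor) is an assumed round number.

def avg_pe_cycles(host_writes_tb: float, capacity_gb: float,
                  waf: float = 2.0) -> float:
    """Average program/erase cycles consumed per NAND cell."""
    nand_writes_gb = host_writes_tb * waf * 1000  # amplified writes, in GB
    return nand_writes_gb / capacity_gb           # = full-drive overwrites

for cap in (40, 128, 256):
    print(f"{cap}GB: ~{avg_pe_cycles(10, cap):.0f} P/E cycles after 10 TB written")
# 40GB: ~500, 128GB: ~156, 256GB: ~78 -- the 320's small size is the catch.
```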
 
OK, I think I win the prize. This is in my primary laptop.

ssd.jpg
 
My boot drive is a Samsung 840 Pro with about 18k hours on it. There has been a firmware update available for about two years now, but I refuse to apply it because of reports of firmware updates bricking the drives.
Isn't that a problem with the normal 840, not the Pro? (Maybe I'm just confusing it with the slow-read problem...)
 
Isn't that a problem with the normal 840, not the Pro? (Maybe I'm just confusing it with the slow-read problem...)
Could be, IDK. Haven't looked it up in a long time. Don't like the Magician software anyway; it insists on changing my power setting every time I run it. Pisses me right off! Then there is the warning to back up your data. While it is a very good idea to back it up before updating the firmware, at the time I only had USB 2.0 external drives, and backing up ~100GB at that speed takes a LONG time, so I just said "F. it!"
 
I've worn out 3 of the 7 256GB Samsung 840 Pros I originally purchased. Here's one that went under this year at 1204 days (~29000 hours), about 50TB of writes:

s840-1.png s840p-2.png

The last drive I replaced was a little over 30000 hours with 66TB of lifetime writes.

I replace based on performance, not the SMART data. When the drives became 'glitchy' (e.g. often going to 100% disk usage and taking a noticeably long time to respond), that's when they got pulled. To their credit, Samsung replaced them under warranty.

I have a few more drives still running though, including one at ~31500 hours but 40TB of writes. More likely I'll be replacing them with larger-capacity drives before those wear down, and because of Samsung's solid customer service here I'll be buying their drives.
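If anyone wants to watch the same counters without waiting for drives to get glitchy, smartmontools exposes them. A minimal sketch; the attribute names are what Samsung SATA drives typically report (they vary by vendor), and the device path is an example:

```python
# Pull write totals and wear figures via smartctl (smartmontools, run as root).
# Attribute names vary by vendor; these are typical for Samsung SATA drives.
import json, subprocess

out = subprocess.run(["smartctl", "-A", "--json", "/dev/sda"],
                     capture_output=True, text=True, check=True)
attrs = {a["name"]: a for a in
         json.loads(out.stdout)["ata_smart_attributes"]["table"]}

lbas = attrs["Total_LBAs_Written"]["raw"]["value"]
print(f"Host writes: {lbas * 512 / 1e12:.1f} TB")           # 512-byte LBAs
print("Wear level (normalized):",
      attrs["Wear_Leveling_Count"]["value"])                # counts down from 100
print("Reallocated:", attrs["Reallocated_Sector_Ct"]["raw"]["value"])
```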
 
That's amazing. TechReport's sample went 600TB before remapping and 2.4PB total, and performance was consistent across all of that data. That's just 1 sample, of course, but it's really different from your 3.

Regardless, I'm impressed by RMA without SMART errors.
 
That's amazing. TechReport's sample went 600TB before remapping and 2.4PB total, and performance was consistent across all of that data. That's just 1 sample, of course, but it's really different from your 3.

Oh yeah, I do remember that report... why does it seem like nostalgia now? ;)

To define the use case, the 840 Pro drives are in a Windows Storage Pool supporting a vdisk with mirroring containing Hyper-V machines. For the almost 3 years I've had them, the drives were used in an SSD storage tier in a larger pool that included HDDs, which meant the SSDs were pretty much full at all times. And aside from standard VM IOPS, the SSDs were also subject to "hot" re-mapping back and forth from the HDD tier a couple times per day. As of the past couple of months, the drives are part of an SSD-only pool and are closer to 20% full.

I can also say that for the 4 original drives still running and the very first drive received from RMA (665 days of use), they are showing similar wear leveling statistics to the 3 RMA'd drives, in proportion to the amount of data written. My "least healthy" original 840 Pro still active, with 40TB of writes and ~1300 days of use, shows a normalized vendor-specific SMART value of 36 for the Wear Leveling Count (which corresponds to a raw data field value of 2331). This particular drive currently shows 105 reallocated blocks, but I'm not seeing that number increase steadily... though I expect it to start doing so when the drive falls to roughly 15% to 20% health. However, the other drives show NO reallocated blocks, which is what I would put emphasis on (instead of the calculated "health").

The drives were not acquired all at the same time; they were added over the course of months from different online vendors and added to the storage pool to expand SSD tier capacity.

I also had a Corsair Force 3 180GB SSD and an OCZ Agility 3 180GB running in a secondary server, also acting as an SSD tier in a similar storage pool hosting Hyper-V machines; basically as a mirror of each other (i.e. only SSDs in the pool). Those are showing over 4 years of power-on time (37000 hours), about 54TB of writes each, and indicate 97% of SSD life left based on their wear indicators. One drive shows 1 reallocated block, the other 0.

I hope this doesn't seem like a dig at Samsung's Pro line; I'm just stating what I've observed. Again, I would not hesitate to buy Samsung; I still believe that even under my use case of hosting multiple VMs with mirrored writes, the drives will easily survive their true usable life span (i.e. they'll generally be obsolete based on $/GB pricing long before they wear out). But generally speaking and in the theme of this thread, when adding/replacing I'd look at price/performance and otherwise probably won't care about NAND type.
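For reference on reading that Wear Leveling Count pair: on Samsung SATA drives the normalized value is usually taken as a rough percent-of-rated-life-remaining, and the raw value as the average P/E (erase) cycles the NAND has seen. Assuming a rated cycle count (the 3000 below is a typical MLC ballpark, not a confirmed 840 Pro spec), the two readings can be sanity-checked against each other:

```python
# Sanity-check the Wear_Leveling_Count readings quoted above.
# rated_cycles is an assumption (typical MLC ballpark, not a confirmed spec).
rated_cycles = 3000

raw_pe_cycles = 2331   # raw data field value from the post above
normalized = 36        # normalized SMART value from the post above

estimated_remaining = 100 * (1 - raw_pe_cycles / rated_cycles)
print(f"Life left implied by raw P/E count: {estimated_remaining:.0f}%")
# -> ~22%, the same ballpark as the normalized 36, which would suggest
#    the controller rates the NAND somewhat above 3000 cycles.
```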
 