840 & 840 EVO drop in speed

Funny situation here, which may or may not be attributable to the discussed bug.
The Supermicro X8DTL motherboard has an integrated LSI 1068E controller.
There is one additional LSI 2008 controller board (an IBM card flashed with IT firmware).
Both are passed through to a Solaris 11.x VM under ESXi 5.1u2 (x = 1 or 2, same thing).
Last but not least, a ZFS pool comprising two mirrored 250 GB Samsung EVOs.

Now:
a) when the two SSDs are on the 1068E controller (SATA 2), no problem,
b) when one of them is on the 1068E and the other on the 2008, equally no problem,
BUT:
c) when both of them are on the 2008 controller (SATA 3), after a few days one of them is dropped due to write(?) errors.

Tried a second IBM board, other cables, etc.; same thing, so we can rule that part out.
It may be one "lazy" EVO that cannot cope with the fast SATA 3 path while the other one can.
I'm staying with the dual-controller setup (b), which is stable and, on top of that, holds inherent redundancy against a single controller failure (a sequential read benchmark shows ~100 MB/s less, though).
 
No, that has nothing to do with it. The discussed bug produces no errors or invalid data. The SSD basically works as expected; it's just that reading back old data (weeks to months old) is significantly slower than reading new data.
 
Isn't this a result of wear leveling?

Some users report sequential read speeds of the 840 EVO dropping to 20 MB/s for very old data. It could be shown, for example, that the read speed of 10-month-old data is lower than that of 5-month-old data.
Even if this is the result of reorganization for wear leveling, a speed drop like this is either a bug, or the whole TLC implementation is defective by design and should never have made it to market.
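If anyone wants to spot-check that claim on their own drive, here's a minimal sketch of the idea, assuming Python 3 and using each file's modification time as a rough stand-in for data age. The real test tools read raw LBAs instead, and the OS file cache will inflate numbers for recently read files, so test cold data (e.g. after a reboot):

```python
#!/usr/bin/env python3
"""Rough spot-check of the 'old data reads slower' claim: read every
file under a directory and print its throughput next to its age.
Uses mtime as a proxy for data age; illustrative, not the thread's tool."""
import os
import sys
import time

CHUNK = 1024 * 1024  # 1 MiB reads, large enough to show sequential speed

def read_speed(path):
    """Read the whole file unbuffered, return (bytes, seconds)."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    return total, time.perf_counter() - start

def main(root):
    now = time.time()
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                size, secs = read_speed(path)
            except OSError:
                continue
            if size < 8 * CHUNK or secs <= 0:
                continue  # tiny files say nothing about sequential speed
            age_days = (now - os.path.getmtime(path)) / 86400
            print(f"{size / secs / 1e6:8.1f} MB/s  {age_days:6.0f} days  {path}")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else ".")
```

If the reports above are right, the oldest files should cluster at the low end of the MB/s column on an affected EVO.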
 
Some users report sequential read speeds of the 840 EVO dropping to 20 MB/s for very old data. It could be shown, for example, that the read speed of 10-month-old data is lower than that of 5-month-old data.
Even if this is the result of reorganization for wear leveling, a speed drop like this is either a bug, or the whole TLC implementation is defective by design and should never have made it to market.

Turned HD Tune loose on mine; I'm getting somewhere around 10 MB/s on older data if I'm lucky.

Right now the "fix" for me looks like it may involve yanking the drive from my Windows box, plugging it into the file server, and running a pair of dd sessions (one to read the drive, one to write it back).
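For reference, the same read-and-rewrite idea can be done per file instead of per device; here is a minimal sketch, assuming Python 3, not the actual dd invocation. Rewriting live data in place carries the same risks as DiskFresh, so back up first:

```python
#!/usr/bin/env python3
"""File-level stand-in for the dd read/rewrite pair (or DiskFresh):
read each file and write the same bytes back in place, which forces
the SSD to program the data into fresh NAND pages. A sketch only;
a crash mid-rewrite can corrupt the file being rewritten."""
import os
import sys

CHUNK = 1024 * 1024  # rewrite in 1 MiB pieces

def refresh_file(path):
    """Read each chunk and write it straight back at the same offset."""
    with open(path, "r+b") as f:
        pos = 0
        while True:
            f.seek(pos)
            data = f.read(CHUNK)
            if not data:
                break
            f.seek(pos)
            f.write(data)  # same bytes, but newly programmed flash
            pos += len(data)
        f.flush()
        os.fsync(f.fileno())  # make sure it actually reached the drive

def main(root):
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                refresh_file(path)
                print("refreshed", path)
            except OSError as exc:
                print("skipped  ", path, exc)

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: refresh.py <directory>")
    main(sys.argv[1])
```

This only touches file contents, so metadata and free space keep their old pages; the whole-device dd pass is more thorough.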
 
You could also just defrag...

ugly

[HD Tune benchmark screenshot]


I am only using 37 GB of 465 GB.
 
I read about this bug previously. I guess I should try a quick read test tonight. I don't really thrash my drive with much churn, so it's probably fine, I'm guessing ...
 
I read about this bug previously. I guess I should try a quick read test tonight. I don't really thrash my drive with much churn, so it's probably fine, I'm guessing ...

I didn't think mine was affected until I checked it. I just did a verify on CS:GO and Task Manager was reporting 25 MB/s read with 100% disk usage.
 
I read about this bug previously. I guess I should try a quick read test tonight. I don't really thrash my drive with much churn, so it's probably fine, I'm guessing ...

That does not seem to matter at all. It seems to be entirely based on the age of the data, not how much has been written.
 
I'm moving data off the drive now for unrelated reasons. 100-130 MB/s throughput across lots of files (that's reading and writing to another drive), all of them compressed NTFS at source and destination, so I'm not seeing the 12 MB/s thing so far. The data is relatively old, some of it over a year old (Jan '13), considerably older than the one month quoted as being a problem. Still, looking forward to the coming firmware upgrade.
 
I'm moving data off the drive now for unrelated reasons. 100-130 MB/s throughput across lots of files (that's reading and writing to another drive), all of them compressed NTFS at source and destination, so I'm not seeing the 12 MB/s thing so far. The data is relatively old, some of it over a year old (Jan '13), considerably older than the one month quoted as being a problem. Still, looking forward to the coming firmware upgrade.

Run the ssdreadspeedtest from this page.

http://www.overclock.net/t/1512915/...es-benchmarks-needed-to-confirm-affected-ssds
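For anyone who can't grab that tool, a crude equivalent is easy to sketch in Python 3: read one large, old file in fixed-size chunks and report per-chunk throughput, since it's the minimum that exposes the slow regions. The 4 MiB chunk size is an arbitrary choice, and a recently cached file will read artificially fast:

```python
#!/usr/bin/env python3
"""Crude stand-in for the linked read-speed test: read one large file
in fixed chunks and report min/avg/max chunk throughput. The slow
'old data' regions show up as a very low minimum. Not the real tool."""
import sys
import time

CHUNK = 4 * 1024 * 1024  # 4 MiB chunks, coarse enough to smooth timer jitter

def bench(path):
    speeds = []  # MB/s per full-sized chunk
    with open(path, "rb", buffering=0) as f:
        while True:
            start = time.perf_counter()
            data = f.read(CHUNK)
            if not data:
                break
            secs = time.perf_counter() - start
            if len(data) == CHUNK and secs > 0:
                speeds.append(len(data) / secs / 1e6)
    if speeds:
        print(f"min {min(speeds):.1f}  avg {sum(speeds) / len(speeds):.1f}  "
              f"max {max(speeds):.1f} MB/s over {len(speeds)} chunks")

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: readbench.py <large old file>")
    bench(sys.argv[1])
```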

Work PC:
3770, Q77, 8 GB, Kingston HyperX 120 GB

[read speed benchmark screenshot: work PC]


Gamer:
4770, Z87, 8 GB, 840 EVO 250 GB RAID 0

[read speed benchmark screenshot: gaming PC]
 
Unfortunately I already rearranged the PC around a new main 960 GB M500 SSD. The new data on the old 500 GB EVO benched at 549 MB/s with that read program, pretty slick.
 
Does this issue affect 840 Pros as well? I also have an 840 EVO; will all drives have this issue, or just a small percentage? Thanks
 
I ran DiskFresh (I know, bad for the drive). After running it I am seeing over 1000 MB/s in HD Tune, with it not dropping below 1000 MB/s, compared to above where I was dropping to 11 MB/s.
 
I expect it to drop back over time; however, there will probably be a firmware change before the sequential speed degrades that far.
 
All. You have one? Test it.

I just ordered one, a 500 GB for $189.99. Thought it was a good deal, since people went crazy about the refurbished Crucial M4 512 GB for $150. This is newer, faster, and brand new, not refurbished, so I thought I couldn't go wrong. Now ... not so sure.
 
After DiskFresh:

[HD Tune screenshot after DiskFresh]

I also had an extremely low minimum HD Tune transfer speed before running DiskFresh, and it's back to normal now. The first time I "noticed" my 120 GB EVO was getting slower (don't mock me) was when I ran the Win7 WEI test back in February and saw the SSD score going down from the original 7.9 to 6.5, despite the disk being just 5 months old. I reran the test a couple of times to no avail; the score still remained below 7, and I accepted it since "real life" as well as AS SSD benchmark performance hadn't degraded. DiskFresh got my curiosity up, so I reran the Win7 WEI test once more and got 7.6 now.
 
I had the same slowdown with an 840 Pro. This forum doesn't have attachments or I'd post the benchmarks. Write speed started out OK and then dropped down to slower than a 6-year-old laptop's mechanical disk.
 
I had the same slowdown with an 840 Pro. This forum doesn't have attachments or I'd post the benchmarks. Write speed started out OK and then dropped down to slower than a 6-year-old laptop's mechanical disk.

Supposedly this doesn't affect the Pro line, and I thought it was read speeds only? You might have another issue at hand.
 
Supposedly this doesn't affect the Pro line, and I thought it was read speeds only? You might have another issue at hand.

From what I've read it only affects TLC flash, and the Pro doesn't use TLC. You might have a different issue.
 
Why doesn't Kyle have a post on this?

It would be nice if Kyle could contact Samsung and get feedback on what the real issue is.

I personally have two 256 GB EVO SSDs, one for me and one for my wife.

I used the file found here:

http://www.overclock.net/t/1507897/...d-written-data-in-the-drive/720#post_22887168

Before I used DiskFresh I was averaging 80 MB/s, and after, I average 515 MB/s.

Personally, since I have yet to hear of any read errors, I do not believe there are actual degradation issues with the memory cell voltages, for example. My theory right now is that the firmware reads the age of a cell's data, and the older the data, the more robust the validation logic it executes, hence the ever-increasing performance hit.
 
Why doesn't Kyle have a post on this?

It would be nice if Kyle could contact Samsung and get feedback on what the real issue is.

Agreed. I'm having the same problem with my 250 GB and 750 GB, and I'd like to see something official about it.
 
Personally, since I have yet to hear of any read errors, I do not believe there are actual degradation issues with the memory cell voltages, for example. My theory right now is that the firmware reads the age of a cell's data, and the older the data, the more robust the validation logic it executes, hence the ever-increasing performance hit.

That doesn't sound feasible to me. I doubt they have "variable"-strength ECC. My wild-ass guess is that they build a map of blocks based on age, and the older blocks are getting re-checked multiple times accidentally, possibly due to falling into multiple age lists. That, or the drive is somehow building a looong linked list of blocks.
 
I just ordered one, a 500 GB for $189.99. Thought it was a good deal, since people went crazy about the refurbished Crucial M4 512 GB for $150. This is newer, faster, and brand new, not refurbished, so I thought I couldn't go wrong. Now ... not so sure.

You might wait as long as you can on the return to see if Samsung releases new firmware. Good price for sure.
 
The fix is being released on the 14th of October.

I always thought Windows was taking far too long to load compared to the old Crucial M4.
 
I noted this issue on my EVO drives, so I put them in a server that dumps/reads/moves/erases all the data on the disks daily, so it's not an issue... the Pro drives do not seem to have this problem.

Went back to my old reliable Intel 330s... they may not be the fastest, but they just keep on working reliably.
 
Wait, my 500 GB is a non-EVO 840, while the 1 TB is an EVO. Is the plain 840 affected?
 