Read Error Rate keeps going up on SSD...

shantd

I've only recently started to monitor my hard drives, and the results have been very revealing. My Corsair SSD is my most recent purchase; naturally, I put my OS on it. But HDD Regenerator is recommending I back up the drive because the read errors keep going up. Right now the total read error count is 9,071,068 (in bytes, I assume). Any time I refresh, that number goes up. In other words, as long as the drive is running, it's having read errors.

Compared to that, my Hitachi 2TB storage drive, which is constantly being written to and is 4 years old, shows 0 total read errors.

Is this normal? I'm wondering if it has something to do with AHCI. I never knew about AHCI, so I installed Windows without it... is it too late to switch it over? Thanks
 
This is not normal. Back up your data. You should be starting the RMA process with Corsair...
 
Do you have a "life" reading left on it? My OCZ Agility 3 60GB is sitting at ~84% remaining life, and if I recall it's ~2 years old now. For a while I had 10GB free on it, where now I'm sitting at ~22GB free. I also turned on disk compression to see if it helped or hurt performance, and I don't notice much difference either way; I'm wondering myself if something like that hurts its expected life. I don't see the read errors.

How old is your Corsair drive, and which model/capacity, how much life is left, etc.? It is a good idea to have it in AHCI mode; however, as long as TRIM is enabled, that shouldn't matter much beyond a performance hit in most cases.
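
If you want to double-check TRIM under Windows, the stock fsutil command reports it. Here's a rough Python sketch wrapping that call (the fsutil query itself is standard Windows; the wrapper is just for illustration, and DisableDeleteNotify = 0 means TRIM is on):

Code:
# check_trim.py: ask Windows whether TRIM is enabled (run from an admin prompt)
import subprocess

def trim_enabled() -> bool:
    # "fsutil behavior query DisableDeleteNotify" prints a line like
    # "DisableDeleteNotify = 0", where 0 means TRIM notifications are ON.
    out = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Newer Windows versions print separate NTFS/ReFS lines; check the first.
    first = next(line for line in out.splitlines() if "=" in line)
    return first.split("=")[1].strip().startswith("0")

if __name__ == "__main__":
    print("TRIM enabled" if trim_enabled() else "TRIM disabled")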
 
Are you sure this is not normal?

Different manufacturers use the RAW data column differently. What does Current/Worst/Threshold look like for Raw Read Error Rate?

My Seagate drives all look like this, for example (I have 6 of them):

Code:
ATTRIBUTE_NAME         FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
Raw_Read_Error_Rate     0x000f   118   099   006    Pre-fail  Always       -       199607344
Raw_Read_Error_Rate     0x000f   118   099   006    Pre-fail  Always       -       195990016
Raw_Read_Error_Rate     0x000f   117   099   006    Pre-fail  Always       -       137523456
Raw_Read_Error_Rate     0x000f   115   099   006    Pre-fail  Always       -       84187760
Raw_Read_Error_Rate     0x000f   114   099   006    Pre-fail  Always       -       81014856
Raw_Read_Error_Rate     0x000f   108   099   006    Pre-fail  Always       -       16694915

The current value goes both up and down, and never heads anywhere near the threshold.
 
Main benefits: Hard disk drive is an integral part of every computer. It stores all your information. One of the most prevalent defects of hard drives is bad sectors on the disk surface. Bad sectors are parts of the disk surface that contain unreadable, but frequently necessary, information. As a result of bad sectors you may have difficulty reading and copying data from your disk, your operating system may become unstable, and finally your computer may be unable to boot altogether. When a hard drive is damaged with bad sectors, the disk not only becomes unfit for use, but you also risk losing the information stored on it. HDD Regenerator can repair damaged hard disks without affecting or changing existing data. As a result, previously unreadable and inaccessible information is restored.

Read more: HDD Regenerator - Free download and software reviews - CNET Download.com http://download.cnet.com/HDD-Regenerator/3000-18512_4-75852528.html

That does not sound like something that should be used on an SSD. SSDs are a very different beast from a standard hard drive, and going by what the tool reads out, it is probably making the SSD write far more often than it should. You don't defrag an SSD; if you do, you will shorten its lifespan and could watch its life drain away. Use a defragger when you need to, use CCleaner when you need to. I know this sounds rude, but if you are not able to keep your system clean, then you probably should not be using high-end gear in the first place. Obviously you need to RTM.
 
The drive is a Corsair Force 3 SSD, and according to HD Tune the amount of life left is still sitting at 100. It's about 2 years old now.

Here's the HD Tune Pro health printout:

HD Tune Pro: Corsair Force 3 SSD Health

ID Current Worst Threshold Data Status
(01) Raw Read Error Rate 92 92 50 170512371 ok
(05) Reallocated Sector Count 100 100 3 0 ok
(09) Power On Hours Count 86 86 0 12701 ok
(0C) Power Cycle Count 100 100 0 528 ok
(AB) Program Fail Count 0 0 0 0 ok
(AC) Erase Fail Block Count 0 0 0 0 ok
(AE) Unexpected Power Loss Count 0 0 0 57 ok
(B1) Wear Range Delta 0 0 0 5 ok
(B5) Program Fail Count 0 0 0 0 ok
(B6) Erase Fail Count 0 0 0 0 ok
(BB) Reported Uncorrectable Errors 100 100 0 0 ok
(C2) Temperature 30 30 0 1966110 ok
(C3) Hardware ECC Recovered 120 120 0 170512371 ok
(C4) Reallocated Event Count 100 100 3 0 ok
(C9) Soft Read Error Rate 120 120 0 170512371 ok
(CC) Soft ECC Correction 120 120 0 170512371 ok
(E6) GMR Head Amplitude 100 100 0 100 ok
(E7) SSD Life Left 100 100 10 0 ok
(E9) Media Wearout Indicator 0 0 0 6156 ok
(EA) (unknown attribute) 0 0 0 5199 ok
(F1) LifeTime Writes from Host 0 0 0 5199 ok
(F2) LifeTime Reads from Host 0 0 0 8154 ok

Health Status : ok


As you can see, the read error number is now over 170,000,000, way up from the 9 million it showed when I started this thread. And yet HD Tune says "OK" all the way down the line. None of the scans I've run on it (chkdsk, SeaTools, HD Tune's error scan) has ever indicated a problem. If there are bad sectors, I can't seem to find them. Very aggravating, because I just had another drive die on me.
 
This is my rig:

1 x ASUS RAMPAGE FORMULA LGA 775 Intel X48 ATX Intel Motherboard
1 x Intel Core 2 Quad Q9450 Yorkfield 2.66GHz LGA 775
1 x ASUS 20X DVD±R DVD Burner with LightScribe Black SATA Model
1 x AMD HD6970 GPU
1 x Thermaltake W0116RU 750W Complies with ATX 12V 2.2 & EPS 12V Power Supply
2 x Seagate Barracuda 7200.11 ST3500320AS 500GB 7200 RPM 32MB Cache SATA 3.0Gb/
1 x G.SKILL 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 1066 (PC2 8500)


I had the 2 Seagate Barracudas in a RAID 0 stripe config until one of them started to show read errors. Then Windows indicated the drive was dying; I had just enough time to copy the data to another drive. I tried running HDD Regenerator on it, and the scan completed successfully (supposedly), but the drive was dead after that, so I'm not sure I'd want to try that again.
 
Read errors wouldn't really have anything to do with bad sectors.

It does look like it's dying though, because the current value for Raw Read Error Rate (92) is equal to the worst all-time value (92), and it's probably headed steadily downward toward the threshold of 50.

If it's only 2 years old, why don't you RMA it? It has a 3-year warranty. You probably need to wait until the value reaches 50, though. Then the drive will indicate it's not healthy anymore.
 
Well, the read error rate is at 94 now; it seems to fluctuate between 92 and the mid-90s depending on the workload at the time. But interestingly, as I was looking through some files, I stumbled onto some screenshots of HD Tune Pro tests that I ran in October 2012, very shortly after I got the drive, as well as another set from several months ago. Naturally I decided to compare the results to see if I could spot any sort of progression. The results don't make a bit of sense.

Here's the readout from 10/2012 for the exact same drive:


HD Tune Pro: Corsair Force 3 SSD Health

ID Current Worst Threshold Data Status
(01) Raw Read Error Rate 79 79 50 46161847 ok
(05) Reallocated Sector Count 100 100 3 0 ok
(09) Power On Hours Count 98 98 0 2318 ok
(0C) Power Cycle Count 100 100 0 129 ok
(AB) Program Fail Count 0 0 0 0 ok
(AC) Erase Fail Block Count 0 0 0 0 ok
(AE) Unexpected Power Loss Count 0 0 0 14 ok
(B1) Wear Range Delta 0 0 0 5 ok
(B5) Program Fail Count 0 0 0 0 ok
(B6) Erase Fail Count 0 0 0 0 ok
(BB) Reported Uncorrectable Errors 100 100 0 0 ok
(C2) Temperature 30 30 0 1966110 ok
(C3) Hardware ECC Recovered 120 120 0 46161847 ok
(C4) Reallocated Event Count 100 100 3 0 ok
(C9) Soft Read Error Rate 120 120 0 46161847 ok
(CC) Soft ECC Correction 120 120 0 46161847 ok
(E6) GMR Head Amplitude 100 100 0 100 ok
(E7) SSD Life Left 100 100 10 0 ok
(E9) Media Wearout Indicator 0 0 0 1516 ok
(EA) (unknown attribute) 0 0 0 2084 ok
(F1) LifeTime Writes from Host 0 0 0 2084 ok
(F2) LifeTime Reads from Host 0 0 0 1383 ok

Health Status : ok


...WTF?? How can the Raw Read Error Rate have been 79 on 10/6/12 when it's now 92, which is currently reported as the worst it's ever been? I don't understand. Frankly, I'm not sure how much stock we ought to put in these numbers. I'm wondering if a certain amount of read errors is normal? If these numbers are to be taken seriously, why does it indicate that SSD Life Left is still 100? Or the Media Wearout Indicator at a very normal 6156?

I'm wondering what programs everyone else is using to monitor their drives. Maybe if we were all using the same utility, we could compare numbers one-for-one. Seems to me that would be more meaningful than looking at numbers in a vacuum.
 
You cannot take a program designed for HDDs and apply the same assumptions to an SSD. How many times a "cell" has been written to versus how many times it has been read, for example, would look bad on a standard HDD, but an SSD shifts data around as needed to make sure the cells don't get worn out, and that could almost be counted as a read error rate. You have to think quite differently, because the numbers are not directly comparable. On an SSD it's very bad to have an extremely high level of writes versus reads; on an HDD it doesn't matter so much. An SSD moving data from one "sector" to another is exactly what you want it to do for wear leveling, whereas on a standard HDD that could be counted as a possible bad sector on the actual drive.

Also, as I pointed out, running these tests to tell the actual status of the drive is, depending on the test, not good for it, and neither is benchmarking. The overall "life" is what matters the most; a low life-left value means just that. A ton of writes to the cells wears them out, which does affect the available performance, and of course means they eventually wear out and you could end up with an unusable cell. Life matters more than anything else for an SSD. If it states life is OK, then generally speaking, as long as TRIM, garbage collection and such are functioning, the drive shouldn't just up and die.

46 million raw read errors, hmm. You damn well want an SSD to do a crap-ton of reading; that's not what hurts the drive. Seeing as it's solid state with no moving parts, you can read the drive more or less without limit. The writing is what takes the life down.
 
I would use HDD Guardian.
https://code.google.com/p/hddguardian/

It's a Windows GUI for smartmontools, which is the tool everyone on Linux/Unix uses around the world to monitor their disks.
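
If you'd rather see what these tools are doing under the hood, here's a rough Python sketch that shells out to smartctl and parses the attribute table (it assumes smartctl is installed and on your PATH; the /dev/sda device name is just an example and varies by OS):

Code:
# smart_attrs.py: dump SMART attributes by shelling out to smartctl
import subprocess

def read_attributes(device="/dev/sda"):
    """Return (id, name, value, worst, thresh, raw) tuples for one device."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    attrs = []
    for line in out.splitlines():
        parts = line.split()
        # Attribute rows start with a numeric ID, e.g.
        #   1 Raw_Read_Error_Rate 0x002f 100 100 050 Pre-fail Always - 0
        if parts and parts[0].isdigit():
            attrs.append((int(parts[0]), parts[1], int(parts[3]),
                          int(parts[4]), int(parts[5]), parts[-1]))
    return attrs

if __name__ == "__main__":
    for attr in read_attributes():
        print(attr)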

You are supposed to be able to trust the S.M.A.R.T. values; that's what they are there for.

They won't always predict a failure, but when they show bad things, they should be correct that things are not right.

And smartmontools IS meant to be used on an SSD. An SSD has its own specific SMART values that smartmontools simply reads. Any tool that simply reads SMART values is made to be used on any disk (HDD or SSD) that has SMART attributes within it.

http://en.wikipedia.org/wiki/S.M.A.R.T.

Read Error Rate:
(Vendor specific raw value.) Stores data related to the rate of hardware read errors that occurred when reading data from a disk surface. The raw value has different structure for different vendors and is often not meaningful as a decimal number.

As you can see, you are not supposed to trust the RAW value for that particular attribute. You just need to see what the current value is and whether it's getting close to or crossing the threshold.
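
In code terms, the only pass/fail test SMART itself defines is roughly this (a sketch; value and thresh are the normalized columns from the readouts above):

Code:
# A SMART attribute only "fails" when its normalized value drops to or
# below its threshold; the raw column never enters into it.
def attribute_failed(value: int, thresh: int) -> bool:
    # A threshold of 0 means the attribute is informational and never fails.
    return thresh > 0 and value <= thresh

# The Corsair's Raw Read Error Rate: value 92, threshold 50 -> healthy.
print(attribute_failed(92, 50))   # False
# It only trips if the value ever sinks to 50 or below.
print(attribute_failed(50, 50))   # True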

Though the worst value should never increase. I have never seen that happen on any disk personally.
 
I was just about to pull the SMART definition for read error rate: "disk surface". SSDs do not have a disk surface, so given the way they operate, most SMART tools probably either don't track relevant data or treat it as a bad condition.

i.e., moving a bit of data back and forth across sectors via TRIM and so forth can be construed as a bad data sector on a normal mechanical drive, whereas on an SSD this is expected and actually a good thing. If you are talking 3 years of use, you are more than likely talking tens of thousands of GB moved in this fashion; if that happened on a regular drive, it would be wicked bad.

Think about the context before you freak out, lol. SMART obviously is used on SSDs, but I am quite sure not a single attribute set was designed specifically by any maker for the SSD in question, so many of these values are probably quite out of whack.
 
Think about the context before you freak out, lol. SMART obviously is used on SSDs, but I am quite sure not a single attribute set was designed specifically by any maker for the SSD in question, so many of these values are probably quite out of whack.

False.

SMART values are read directly from the device. The actual SMART parameters read by any tool capable of reading SMART data are never "out of whack".

How some of those SMART parameters are interpreted certainly depends on the type of device. But I never look at the "interpretations" of any SMART software tool -- I just look at the SMART values themselves and make my own interpretation.
 
OK, so ignore the raw numbers and just worry about the current/worst. If that's true, then I should be OK. But I'd be curious how other people's numbers look for SSDs running Windows. I wonder if anyone has a raw number of zero...

That HDD Guardian looks nice; I'm gonna pick it up. Thanks for the tip.
 
What I am saying and what you are saying is basically the same thing. I am saying the numbers might be correct, but their context probably is not. A read error rate for physical media does not mean the same thing on an SSD that it does on an HDD: same wording, different concept. Can you argue that one?

You interpret the result in your own mind? Huh, funny, isn't that what I am saying, really? I did not state that the actual SMART status is wrong, but rather that if the makers did not bother retooling those attributes for different devices, the same rules do not apply in the same fashion. So wrong, no; different context, umm, yeah. A read error rate going up and down, odd isn't it? On a standard HDD I have NEVER seen that: the value goes down toward its threshold, it doesn't magically reverse.

So, is the concept of SMART wrong? No, it can't be. But how it figures out these numbers can be, because the same rules cannot apply to vastly different types of media; it in no way adds up.

To answer the question: I have never used SMART tools to check that. I only look at life remaining. Temperatures might be interesting, sure, but the rest of the numbers mean virtually nil when it comes to an SSD. The number of times a cell has been written to versus read is what matters most, bar none, and depending on the drive they calculate the life remaining differently, of course. Hey, sometimes things just go bad, and SMART is a potential indicator. That's it, that's all.
 
I did not state that the actual SMART status is wrong, but rather that if the makers did not bother retooling those attributes for different devices, the same rules do not apply in the same fashion

This is nonsense.

Any device that is capable of reporting SMART parameters has the capability to report RAW values as well as normalized values (current, worst, threshold). The way the normalized values are computed by the device is obviously set by the makers of the device, and it is meaningful for the device.
 
It seems to me a silly thing to read something that is not specific to the device in question. When did SMART come out, and what for? Hard disk drives, i.e., a disk surface that can hold data. They use it for SSDs here and there (S.M.A.R.T. is a recognizable name, like 80 Plus or Energy Star), though obviously the definitions in some cases do not, or should not, apply in the same way, because the makers can use it however they see fit. Damn, that's complicated.

"At present, S.M.A.R.T. is implemented individually by manufacturers, and while some aspects are standardized for compatibility, others are not."
"SMART" came to be understood (though without any formal specification) to refer to a variety of specific metrics and methods and to apply to protocols unrelated to ATA for communicating the same kinds of things."
"From a legal perspective, the term "S.M.A.R.T." refers only to a signaling method between internal disk drive electromechanical sensors and the host computer. Hence, a drive may be claimed by its manufacturers to implement S.M.A.R.T. even if it does not include, say, a temperature sensor, which the customer might reasonably expect to be present."

So basically, a "metric" such as temperature, which of course means either Fahrenheit or Celsius, i.e., temperature the way we understand it, is something each maker can define how they see fit (even when it makes no sense); they can decide whether they want to use it, they can define the upper and lower limits, etc.

Since it is not a standard specific to devices beyond hard drives unless explicitly worded as such, "Read Error Rate: stores data related to the rate of hardware read errors that occurred when reading data from a disk surface" amounts to nothing if you think about it; an SSD has neither a disk nor a surface. Maybe I should send an email to Corsair and ask them directly, because in my mind 46 million read errors is bloody huge; given its context, obviously the drive is going to explode.

171 0xAB SSD Program Fail Count (Kingston): Counts the number of flash program failures. This attribute returns the total number of flash program operation failures since the drive was deployed. This attribute is identical to attribute 181.
172 0xAC SSD Erase Fail Count (Kingston): Counts the number of flash erase failures. This attribute returns the total number of flash erase operation failures since the drive was deployed. This attribute is identical to attribute 182.

Now THAT does take an SSD into account, as it applies to the device specifically and directly, wouldn't you say? And I am pretty sure that if that said 46 million fails on anything... well, I'm sure the drive would never get there (^.^)
 
They use it for SSDs here and there (S.M.A.R.T. is a recognizable name, like 80 Plus or Energy Star), though obviously the definitions in some cases do not, or should not, apply in the same way, because the makers can use it however they see fit. Damn, that's complicated.

Complete nonsense.
 
Here are the SMART values for my 2 SSDs:

Samsung 840 Pro:

Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   099   099   000    Old_age   Always       -       863
 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       92
177 Wear_Leveling_Count     0x0013   099   099   000    Pre-fail  Always       -       15
179 Used_Rsvd_Blk_Cnt_Tot   0x0013   100   100   010    Pre-fail  Always       -       0
181 Program_Fail_Cnt_Total  0x0032   100   100   010    Old_age   Always       -       0
182 Erase_Fail_Count_Total  0x0032   100   100   010    Old_age   Always       -       0
183 Runtime_Bad_Block       0x0013   100   100   010    Pre-fail  Always       -       0
187 Uncorrectable_Error_Cnt 0x0032   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0032   070   059   000    Old_age   Always       -       30
195 ECC_Error_Rate          0x001a   200   200   000    Old_age   Always       -       0
199 CRC_Error_Count         0x003e   100   100   000    Old_age   Always       -       0
235 POR_Recovery_Count      0x0012   099   099   000    Old_age   Always       -       15
241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       3372070028

Crucial M4:
Code:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   050    Pre-fail  Always       -       0
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   100   100   001    Old_age   Always       -       7964
 12 Power_Cycle_Count       0x0032   100   100   001    Old_age   Always       -       1070
170 Unknown_Attribute       0x0033   100   100   010    Pre-fail  Always       -       0
171 Unknown_Attribute       0x0032   100   100   001    Old_age   Always       -       0
172 Unknown_Attribute       0x0032   100   100   001    Old_age   Always       -       0
173 Unknown_Attribute       0x0033   098   098   010    Pre-fail  Always       -       78
174 Unknown_Attribute       0x0032   100   100   001    Old_age   Always       -       134
181 Program_Fail_Cnt_Total  0x0022   100   100   001    Old_age   Always       -       8388121920665
183 Runtime_Bad_Block       0x0032   100   100   001    Old_age   Always       -       0
184 End-to-End_Error        0x0033   100   100   050    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   100   100   001    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   001    Old_age   Always       -       0
189 High_Fly_Writes         0x000e   100   100   001    Old_age   Always       -       112
194 Temperature_Celsius     0x0022   100   100   000    Old_age   Always       -       0
195 Hardware_ECC_Recovered  0x003a   100   100   001    Old_age   Always       -       0
196 Reallocated_Event_Count 0x0032   100   100   001    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   100   100   001    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   100   001    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   100   100   001    Old_age   Always       -       0
202 Data_Address_Mark_Errs  0x0018   098   098   001    Old_age   Offline      -       2
206 Flying_Height           0x000e   100   100   001    Old_age   Always       -       0
 
Thanks. What software did you use? How come the first drive doesn't show the read error rate?
 
When you see the extremely large numbers for the RAW_VALUE for Raw_Read_Error_Rate, it isn't doing what you think it is. It's not counting errors, it's counting the number of sectors that have been read. The number of sectors that have been read is compared to the number of errors, which is reported elsewhere, to calculate the normalized value.

SandForce SF-2000 SSDs calculate the value of Raw Read Error Rate with this equation:
10 × log10(BitsRead / (ReadErrors + 1))
BitsRead is the RAW_VALUE of Raw Read Error Rate, ReadErrors is (maybe, probably) the raw value of Reported Uncorrectable Errors, and the result of the equation is what goes into the normalized VALUE of Raw Read Error Rate.

http://www.users.on.net/~fzabkar/HDD/Seagate_SER_RRER_HEC.html
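
To put numbers to that equation, here's a quick Python sketch (the exact quantities the SandForce firmware feeds in aren't public, so don't expect it to reproduce your drive's 92 exactly; it just shows the shape of the calculation):

Code:
import math

def sandforce_rrer(bits_read: int, read_errors: int) -> float:
    # Normalized Raw Read Error Rate per the SF-2000 formula:
    #   10 * log10(BitsRead / (ReadErrors + 1))
    return 10 * math.log10(bits_read / (read_errors + 1))

# With zero errors the normalized value just grows with the amount read:
print(round(sandforce_rrer(170_512_371, 0), 1))         # 82.3
print(round(sandforce_rrer(170_512_371 * 1000, 0), 1))  # 112.3
# A burst of errors drags it down, which is why it can dip and recover:
print(round(sandforce_rrer(170_512_371, 500), 1))       # 55.3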

The reason the Crucial M4 is showing 0 and the Samsung 840 Pro isn't showing Raw Read Error Rate at all is so that consumers like you don't panic unnecessarily when they see huge numbers.
 
Thanks. What software did you use? How come the first drive doesn't show the read error rate?

smartmontools is the only software I use to do SMART testing and diagnostics.

It appears my Samsung drive doesn't keep track of raw read error rate. There are tons of possible SMART attributes, every drive model and manufacturer uses a different set, and some use the same attributes slightly differently.

You can see most of the possible SMART attributes here.

http://en.wikipedia.org/wiki/S.M.A.R.T.#ATA_S.M.A.R.T._attributes
 
When you see the extremely large numbers for the RAW_VALUE for Raw_Read_Error_Rate, it isn't doing what you think it is. It's not counting errors, it's counting the number of sectors that have been read. The number of sectors that have been read is compared to the number of errors, which is reported elsewhere, to calculate the normalized value.

SandForce SF-2000 SSDs calculate the value of Raw Read Error Rate with this equation:
10 × log10(BitsRead / (ReadErrors + 1))
BitsRead is the RAW_VALUE of Raw Read Error Rate, ReadErrors is (maybe, probably) the raw value of Reported Uncorrectable Errors, and the result of the equation is what goes into the normalized VALUE of Raw Read Error Rate.

http://www.users.on.net/~fzabkar/HDD/Seagate_SER_RRER_HEC.html

The reason the Crucial M4 is showing 0 and the Samsung 840 Pro isn't showing Raw Read Error Rate at all is so that consumers like you don't panic unnecessarily when they see huge numbers.

So are you saying I've got nothing to worry about based on the data I posted for my Corsair?
 
So are you saying I've got nothing to worry about based on the data I posted for my Corsair?

Correct. You have nothing to worry about.

From the current readout you got from HD Tune Pro:
ID Current Worst Threshold Data Status
(01) Raw Read Error Rate 92 92 50 170512371 ok

From the readout you got two years ago:
ID Current Worst Threshold Data Status
(01) Raw Read Error Rate 79 79 50 46161847 ok

The Current, Worst, and Threshold are normalized values. It's like saying "On a scale of 1 to 10 you scored an 8", only in this case the scale is calculated in a way that is not obvious to us and probably very complicated. Earlier in the page I linked to, you see this:

The following table correlates the normalised SER against the actual error rate:

90 — <= 1 error per 1000 million seeks
80 — <= 1 error per 100 million
70 — <= 1 error per 10 million
60 — <= 1 error per million
50 — 10 errors per million
40 — 100 errors per million
30 — 1000 errors per million
20 — 10 errors per thousand


A drive that has not yet recorded 1 million seeks will show 100 and 253 for the Current and Worst values. I believe this is because the data are not considered to be statistically significant until the drive has recorded 1 million seeks. When this target is reached, the values drop to 60 and 60, assuming there have been no errors.

That's for Seek Error Rate, not Raw Read Error Rate, but it probably works in a very similar way. Note that what's being counted is not errors, but errors per x operations, where x is a large number that keeps growing.
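
That table is just a log scale. Here's a sketch of the mapping, assuming the normalized value is -10 × log10(errors / operations), which reproduces every row above:

Code:
import math

def normalized(errors: int, operations: int) -> float:
    # Seagate-style normalization: -10 * log10(error rate), so every
    # 10-point step is a tenfold change in the error rate.
    return -10 * math.log10(errors / operations)

print(normalized(1, 1_000_000_000))  # 90.0 -> 1 error per 1000 million
print(normalized(1, 1_000_000))      # 60.0 -> 1 error per million
print(normalized(10, 1_000_000))     # 50.0 -> 10 errors per million
print(normalized(10, 1_000))         # 20.0 -> 10 errors per thousand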

So when you said earlier in the thread, "Well, the read error rate is at 94 now; it seems to fluctuate between 92 and the mid-90s depending on the workload at the time," the number is probably fluctuating in a way that seems odd because it has zero errors to calculate against. The threshold is 50, meaning if the value were less than that you would have a problem; but it's much higher than that, in the 90s, meaning you may have done an extremely large number of raw reads with no errors at all.

In other parts of your SMART data, you have:
(05) Reallocated Sector Count 100 100 3 0 ok
That means you have had zero reallocated sectors over the lifetime of your drive, meaning that no sectors have ever gone bad.
(BB) Reported Uncorrectable Errors 100 100 0 0 ok
That means that no uncorrectable errors have ever been detected.

The Raw Read Error Rate's current value is probably being calculated with zero errors ever seen. The raw value of Raw Read Error Rate is how many raw reads have been read since it last flipped over to zero, which probably happens at 250 million, and may happen fairly often. It's possible that if you take enough readings, you'll get a sense of how often the raw value will flip over, and see smaller numbers after that happens.
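
If the raw counter really does wrap like that (the 250 million figure is my guess, not documented anywhere), successive readings would behave like this quick sketch:

Code:
# Model a raw read counter that wraps at an assumed 250 million.
WRAP = 250_000_000  # assumed rollover point (a guess, not documented)

def reported_raw(total_reads: int) -> int:
    # The drive would only ever report the remainder after wrapping,
    # so a big number can suddenly be followed by a small one.
    return total_reads % WRAP

for total in (170_512_371, 249_999_999, 250_000_000, 600_000_000):
    print(total, "->", reported_raw(total))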
 
Excellent post evilsofa, sorry for the late reply but that fully explains it. Now I understand. Thanks a million mate, and to everyone else who helped me understand these numbers. Much appreciated.
 