Copied 1.5TB from 8TB Seagate to 4TB Evo 860. Just checked and it's all gone - how?

DoubleTap - 2[H]4U - Joined Dec 16, 2010 - Messages: 2,990
I have an 8TB Barracuda - it's maybe 6 months old and barely used. It won't fit in my new ITX system. It mostly has family archives and backups - photos, etc.

Just got a 4TB Samsung Evo 860 SSD

I manually copied the data from the HDD to the SSD about a week ago (using Windows Explorer). When I was done, I compared the file count and total size - it matched up.

Didn't think about it for a week.

Now, when I look at the 4TB SSD, it's basically empty.

There was one folder on the drive (DATA/Apple Backup 2017/Apple Computer), and it contains 0 bytes.
There should be about a dozen folder structures.

Yes, I suppose I could have used a backup program, but it's just a bunch of file structures - it took half the day and I did it in chunks and kept manually checking to make sure each chunk copied over. Nobody else has access to this system and everything else seems fine.

Fortunately, I left the original data on the spinner, but this has me a little unnerved and worried about the functionality of the 4TB Evo.

Any ideas on what could cause this?
 
Did you try to access any file that had been copied across before they vanished?

Have you tried to use the new SSD to create/use/delete files at all to see if even the most basic functionality is there?

1. I did look at a couple old pictures, but I can't remember if it was from the original source or off the new drive.

2. Yesterday I copied 1.5 TB from another drive just to see if it would go - it did and I can look at those files.

If a regular user told me this story, I'd assume they just screwed up and were mistaken about something, but this is important family data. I spent all day doing it, I reviewed file counts and data sizes when I was done (using folder properties in Windows), and now it's gone.

That "Z Drive Backup" is just a few empty, zero byte folders:

(screenshot: empty, zero-byte folders)
 
Can't give you an explanation, but some things I would check:

1. Run full/extended checks with the Samsung Magician software
2. Copy a full 3-4TB of data to the drive and compare checksums (rough script below)
3. If you can do it safely, open the physical drive up and make sure you didn't get a counterfeit or something weird
4. Copy some data to the drive, then unplug it, let it sit for a week or two, and check it again
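
For #2, a quick-and-dirty way to do the checksum compare is a short Python script that hashes every file under the source and the copy and prints anything missing or mismatched. Rough sketch only - the two paths are just placeholders, point them at wherever the data actually lives:

Code:
import hashlib, os

def hash_tree(root):
    """Walk a directory tree and return {relative_path: sha256} for every file."""
    hashes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            h = hashlib.sha256()
            with open(full, "rb") as f:
                for chunk in iter(lambda: f.read(1024 * 1024), b""):
                    h.update(chunk)
            hashes[rel] = h.hexdigest()
    return hashes

# Placeholder paths - point these at the HDD source and the SSD copy.
src = hash_tree(r"E:\Archive")
dst = hash_tree(r"F:\Archive")

missing = sorted(set(src) - set(dst))
mismatched = sorted(p for p in src if p in dst and src[p] != dst[p])

print(f"{len(src)} source files, {len(missing)} missing, {len(mismatched)} mismatched")
for p in missing:
    print("MISSING  ", p)
for p in mismatched:
    print("MISMATCH ", p)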
 
It's a non-zero possibility that your 860 EVO 4TB is counterfeit and only pretends to be 4TB in size. If it's fake, the hacked firmware would likely loop writes back over themselves after 32-64 GB or so, meaning if you wrote 64 GB and then another 64 GB, the first set would be gone. This is all masked behind the firmware, so you won't know it's happening until you go looking for the initial set of data.
 

I considered this possibility - it was a problem with SD cards for a while and could easily be an issue with a $600 SSD...

1. (con) The packaging has a piece of packing tape over one of the seals on the box - I cut this to open it. Could be extra tape from the warehouse or whatever; my recollection is that I inspected the factory seal, because I was originally sent a 250GB drive with a 4TB Amazon warehouse sticker by mistake.
2. (pro) The Samsung Magician software reports the drive as the right size, healthy and the serial number in firmware matches the drive.

I'm about 95% sure it's not a counterfeit - it's not easy to get to right now - it's inside my Ncase and I have to remove the chassis fans to get to it...

Will run some copies and update the firmware and not trust it for a while.

 
Then I'll side with others here; never trust the drive again. I hope you're inside your "return it to the store you bought it from" window, because Samsung's RMA process is a piece of trash.

Yeah, I just got it from Amazon, I can get it replaced.

Had to call them, but a new one will be here Tuesday.
 
AnandTech has had issues with Samsung 4TB drives sometimes not being detected in the BIOS on power-up (I don't think they had problems with actual data loss).

Total loss of all data would indicate NAND failure or a fake SSD (the firmware update would likely fail if it's a fake; you have to open it up to really verify it's real). You should also see problems in SMART if any write or read errors were happening, or if the drive was unplugged without being ejected and NTFS decided to wipe everything (I use HDD Sentinel).

A tool I have used in the past for testing HDDs, but which works for SSDs as well, is h2testw (ftp://ftp.heise.de/pub/ct/ctsi/h2testw_1.4.zip). It writes a verifiable data set to the drive, verifies the test data once it's finished, and later on you can just press verify again to see if any of the data has changed (it's old but works very well, very simple).
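
If you'd rather not trust an old FTP download, the same basic idea is easy to roll yourself: fill the drive with files of known, reproducible content, then read everything back and compare. If the drive were a capacity-faking counterfeit that loops writes around as described above, the earliest test files would come back corrupted. Rough Python sketch only - the folder, file size, and repeated-digest filler are all just example choices:

Code:
import hashlib, os, shutil

TARGET = r"F:\h2test"            # example folder on the drive under test
FILE_SIZE = 1024 * 1024 * 1024   # 1 GiB per test file
CHUNK = 1024 * 1024              # write/verify in 1 MiB pieces

def chunk_data(file_index, chunk_index):
    # Deterministic filler: repeat a SHA-256 digest of the two indexes out to 1 MiB.
    # Cheap to regenerate, and unique per chunk, so looped/overwritten data shows up.
    digest = hashlib.sha256(f"{file_index}:{chunk_index}".encode()).digest()
    return digest * (CHUNK // len(digest))

def fill():
    os.makedirs(TARGET, exist_ok=True)
    i = 0
    while shutil.disk_usage(TARGET).free > 2 * FILE_SIZE:   # leave a little headroom
        with open(os.path.join(TARGET, f"test_{i:04d}.bin"), "wb") as f:
            for c in range(FILE_SIZE // CHUNK):
                f.write(chunk_data(i, c))
        print(f"wrote test_{i:04d}.bin")
        i += 1

def verify():
    bad = 0
    for name in sorted(os.listdir(TARGET)):
        idx = int(name[5:9])                 # pull the index back out of test_NNNN.bin
        ok = True
        with open(os.path.join(TARGET, name), "rb") as f:
            for c in range(FILE_SIZE // CHUNK):
                if f.read(CHUNK) != chunk_data(idx, c):
                    ok = False
                    break
        print(("OK  " if ok else "BAD ") + name)
        bad += not ok
    print(f"{bad} corrupted file(s)")

if __name__ == "__main__":
    fill()
    verify()    # re-run verify() on its own later; it only reads, so no extra wear

Re-running verify() after the drive has sat unplugged for a week or two only reads the files back, so it adds no wear.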
 
It writes a verifiable data set to the drive, verifies the test data once it's finished, and later on you can just press verify again to see if any of the data has changed (it's old but works very well, very simple)

Is this writing to every sector, or perhaps a random subset? Between work and home I have used badblocks at least a thousand times on hard drives, but never on SSDs, because writing to every single sector seems like more wear than I want considering the chance of a failure is so low.
 

I'm wondering if there could be an issue with the 8TB HDD.

If it wasn't working, I don't think the copy process would appear to work, so that seems unlikely, but I'm anxious to get that data backed up to two other locations.
 
I'd also recommend using something like TeraCopy when moving/copying large amounts of important documents. It can be set to automatically do a checksum validation upon completion of a copy/move action. Might also consider setting up an old computer as a FreeNAS box.
 
Is this writing to every sector, or perhaps a random subset? Between work and home I have used badblocks at least a thousand times on hard drives, but never on SSDs, because writing to every single sector seems like more wear than I want considering the chance of a failure is so low.
It's only done once; it leaves the *.h2t files on the SSD/HDD/SD card so you can re-verify that the data is still intact at a later date (one drive fill won't harm it unless the drive is broken to begin with).

Not sure if there is a problem when testing on NTFS-formatted drives - it might fail on the last test file (if so, just delete the last .h2t file and then press verify). Note that verify only reads the files to make sure the hashed data is still the same, so it won't harm the drive no matter how many times you verify it.

I'd also recommend using something like TeraCopy when moving/copying large amounts of important documents. It can be set to automatically do a checksum validation upon completion of a copy/move action. Might also consider setting up an old computer as a FreeNAS box.
TeraCopy is nice for that (once it has finished copying the files you can save the checksums so you can test later on). You can also go into the settings and make it verify the files on every copy; I have mine set to always verify and expand the panel (by default you have to press Test).
 

Thanks for the advice. I re-copied the data from the HDD to my new 2TB NVMe, then from there to my 4TB SSD - it seemed to work just fine, and now the data is on 3 drives (plus the old 2TB HDD that it was on), so I'm not so edgy about it now.

Plan is to get a Synology to hold the data, then do incremental backups to an external USB drive as a backup. Maybe 2 USB backups....
 

My only caution to you is that your current plan does not protect you from bit rot. The OS/drives are perfectly happy copying corrupt data back and forth. I don't believe Synology has that functionality (bit rot protection), or if it does, it only tells you it has happened; you'd have to restore a good copy from somewhere else.
 

Yeah, that is a concern - the Synology boxes do support the BTRFS (?) file system, which guards against bit rot, and I plan to create the Synology store from my 8TB HDD which, while not the oldest source, is the oldest of the new sources and likely has the best copy of the data from the aging 2TB HDD.
 
I think BTRFS just tells you it has detected corruption but doesn't fix the issue (i.e., you have to restore from backup)... It sounds like you are aware, but just making sure. I remember reading some unflattering discussions of BTRFS several years ago, so I'm not too current on its status now.
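
Whatever file system ends up underneath, a low-tech way to catch bit rot is to keep a checksum manifest next to the archive and re-verify each copy against it every so often - anything that silently changes shows up in the report, and you know which copy is still good to restore from. Rough Python sketch; the manifest name and the command-line handling are just examples:

Code:
import hashlib, json, os, sys

MANIFEST = "manifest.sha256.json"   # example name, stored in the archive root

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def build(root):
    """Hash every file under root and save the manifest alongside the data."""
    manifest = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name == MANIFEST:
                continue
            full = os.path.join(dirpath, name)
            manifest[os.path.relpath(full, root)] = sha256_of(full)
    with open(os.path.join(root, MANIFEST), "w") as f:
        json.dump(manifest, f, indent=1)
    print(f"hashed {len(manifest)} files")

def check(root):
    """Re-hash the copy and report anything that no longer matches the manifest."""
    with open(os.path.join(root, MANIFEST)) as f:
        manifest = json.load(f)
    bad = 0
    for rel, expected in manifest.items():
        full = os.path.join(root, rel)
        if not os.path.exists(full):
            print("MISSING ", rel); bad += 1
        elif sha256_of(full) != expected:
            print("CHANGED ", rel); bad += 1
    print(f"{bad} problem(s) out of {len(manifest)} files")

if __name__ == "__main__":
    # usage: python scrub.py build D:\Archive   |   python scrub.py check D:\Archive
    {"build": build, "check": check}[sys.argv[1]](sys.argv[2])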
 