How to recover a lost partition on a GPT drive?

poee

Limp Gawd
Joined
Nov 15, 2005
Messages
257
My 3TB WD30EZRX has been working fine for the last few weeks I've owned it. I've been using it to backup files from older, smaller disks so I can consolidate and not have to keep 6 HDDs in my tower anymore.

Yesterday I used PerfectDisk 11 Pro to defrag all my drives (like I do every month or two) and this was the first time the 3TB drive had been defragged. Upon rebooting from that overnight task, I discovered that my 3TB HDD is now listed as RAW instead of NTFS in Windows 7 Disk Management. It still has a drive letter assigned to it, but trying to open it gets the error:

"You need to format the disk in drive K: before you can use it. [Format Disk] [Cancel]" -- when I hit "cancel" I get this popup:

"K:\ is not accessible. The volume does not contain a recognized file system..."

So I run EASEUS Partition Recovery 5.0.1, only to discover that it doesn't work with GPT drives, only MBR. It says that I need to use EASEUS Data Recovery Wizard ($70) instead to move the data off that drive to another and reformat. But I believe that all my data is right there and can be restored by whatever the GPT equivalent is of fixing an MBR.

Is there an application that fixes the GPT, similar in functionality to FixMBR?
 
Thanks for the replies!

I tried TestDisk; it scanned the drive and sees it as only a 746.39GiB volume, while Windows Disk Management sees it as 2794.39GiB -- coincidentally a difference of exactly 2048GiB = 2TiB. (A clue!) TestDisk warns that if the drive capacity does not look correct, I should not proceed until I fix that issue.
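That 2048GiB difference is no coincidence: 2^32 sectors of 512 bytes is exactly 2 TiB, which is where a 32-bit sector counter tops out. A quick sketch of the arithmetic (sizes taken from the post; 512-byte sectors assumed):

```python
# A 32-bit LBA counter tops out at 2**32 sectors; at 512 bytes per
# sector that is exactly 2 TiB.
SECTOR = 512
limit_bytes = 2**32 * SECTOR
limit_tib = limit_bytes / 2**40
print(limit_tib)  # 2.0

# Sizes from the post, in GiB: the drive's true size, and what
# TestDisk saw after the counter wrapped around.
true_gib = 2794.39
seen_gib = true_gib - limit_bytes / 2**30  # wraps by exactly 2048 GiB
print(round(seen_gib, 2))  # 746.39
```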

Tried GPT fdisk (gdisk), which returned the following:
C:\Windows>gdisk -l 4:
GPT fdisk (gdisk) version 0.7.2

Partition table scan:

MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.


Warning! Secondary partition table overlaps the last partition by

4294966385 blocks!
You will need to delete this partition or resize it in another utility.
[...]
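That overlap figure is itself a clue: 4294966385 is within about 900 sectors of 2^32, i.e. almost exactly 2 TiB. A rough sketch of where a number like that could come from, using illustrative sector counts (not the exact WD30EZRX geometry):

```python
# Illustrative sector counts; not the exact WD30EZRX geometry.
true_sectors = 5_860_533_168            # ~3 TB at 512 bytes/sector
wrapped_sectors = true_sectors - 2**32  # what a 32-bit driver reports

# The backup GPT occupies the last 33 sectors of the (apparent) disk,
# so once the size wraps, it lands ~2 TiB inside the data partition.
backup_start = wrapped_sectors - 33
partition_end = true_sectors - 34       # last usable LBA on the full disk

overlap = partition_end - backup_start + 1
print(overlap == 2**32)  # True in this idealized layout; gdisk's actual
                         # figure (4294966385) is a few hundred sectors
                         # less because of partition alignment.
```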
Googling around for an answer to this got me deep into Linux territory, in which I am sadly not yet proficient -- though I have gone through a bit of a crash course over the last two days (and nights). I sent an email to the developer of gdisk, who was kind enough to respond with some insight:
My best guess is that the defrag utility you mention replaced some critical driver or system file with one that has a 32-bit limitation. That would make the maximum drive size that the system could handle correctly drop to 2 TiB. Depending on how it worked, a larger disk might show up as being 2 TiB in size or it might "wrap around"... and show a size that's its true size minus 2 TiB. The latter is consistent with the size gdisk is reporting. (3 TB = ~2.72 TiB, so you'd have ~0.72 TiB, or ~740 GiB, showing.) If I'm right, the solution is to restore the drivers that the software replaced.
Anyhow, a 32-bit limitation somewhere in the BIOS/driver/OS chain got me searching on a new track through various forums, which led me to the Intel Matrix Storage driver as a prime suspect. (Though I do not use RAID, the Intel Matrix driver was my AHCI driver.) Turns out that versions of Intel Matrix Storage -- now called Intel Rapid Storage Technology -- earlier than 10.1 do not properly support >2TB physical disk drives. I had v8.x as my AHCI driver. It did not impair my initial formatting of this drive, seeing the correct size in Windows, or writing about 1.6TB worth of media files to it, all of which read back properly (I used TeraCopy with CRC verify). But from what I've gathered so far, I would have run into problems had I tried to write to the disk beyond the 2TB the Intel Matrix driver could work with.

Perhaps when I ran PerfectDisk to defrag the drive, the program attempted to read and/or write to sectors beyond that 2TB limit, even though all my data was in the first 1.6TB of the disk (defragging needs additional space to swap out large files, which is what was on the disk). This in turn caused some error with the AHCI driver that caused the GPT to be rewritten on the next boot to conform to a disk size of 746GiB, trashing the existing 3TB partition so that Windows saw it as RAW.

So I installed the latest version of Intel Rapid Storage Technology (v10.6.0.1022), which supports >2TB disks. I was told I had to uninstall the old Intel Matrix driver first, which I did. This required a reboot. Upon reboot, Windows automatically put the MS generic AHCI driver where the Intel Matrix had been, and then immediately proceeded with chkdsk. My spirits sank as I saw the text roll by while it removed tens of thousands of "records" it assumed were garbage and rebuilt the GPT. Though the process took about 30 minutes, there was no stopping it short of cycling the power, and I figured that would do more harm.

When Windows finally booted up, I saw my 3TB drive was back to its proper size and the format was a healthy NTFS. Only it was empty of my files, of course. Chkdsk did put a 93GB "found.000" folder on the root of the drive, filled with .chk files. Also, this from the gdisk dev (Rod Smith):
One more point: gdisk says that the partition table is completely intact, which means that the backup data appear at what gdisk thinks is the end of the disk (at the 746.5 GiB mark). When you fix the apparent disk size issue, you may need to use the "e" option on gdisk's experts' menu to move the backup data back to where they belong. This issue has another implication: The existing backup GPT data has been written to the middle of your data partition. If you're lucky, it's just written over some empty sectors and no real harm has been done; however, it's possible that it's overwritten a data file or even some filesystem data structures. Therefore, after you fix the size issue, I recommend running CHKDSK on the disk to locate and correct any data structure problems. If a file's been trashed, you'll just discover that the next time you access it, I'm afraid.
Chkdsk ran before I intended it to (prior to my being able to run gdisk again to fix the GPT with an AHCI driver reporting the proper size), and it did indeed find data structure problems, and they were severe enough that none of my files appear on the volume anymore. I'm currently running EASEUS Data Recovery Pro (yeah, I sprang for it) and I'm reasonably sure I can get a lot of my "lost" files off of it. Though that 93GB found.000 folder surely overwrote some data as well. When the Data Recovery program finishes its deep scan (3 hours in, 3 more to go) we'll see how it goes.

Sorry for this long post, but I thought others might want to see some of this if they run into similar issues. I have a feeling that a lot of people grabbing the new 3TB drives are going to encounter various issues unless they are running the latest hardware with UEFI or have been diligent about updating their BIOS and drivers. (Just speaking for data disks here; obviously boot disks that large must be on a UEFI platform anyway.) There are several points along the chain from the HDD platters to the OS's UI, and if any of them is 32-bit limited in any way (cannot see more than 2TB, etc.), or if people use a disk utility with the same limitation or one that isn't properly GPT-aware, then they are going to have problems if they think all they need to worry about is OS compatibility with >2TB drives.
 
Thanks for the replies!
I tried TestDisk, ......
.......Sorry for this long post, but I thought others might want to see some of this if they run into similar issues.
Damn! I'm impressed!

Your diligence and articulate feedback gets an A++ from me.

I know it doesn't mean much but it's all I have. :)

Please stick around and contribute more to the forum.

Thanks for the info!

EDIT....How's the recovery going?
 
Thanks! The recovery program has detected about 800GB of files that it says it can recover. Since I don't happen to have that much free space on another volume, I am currently awaiting another 3TB HDD to complete the process. I'm pleased at this point that any of the data is recoverable, let alone half of it! (It contained about 350 of my DVDs ripped to HDD, a process I'd rather not have to repeat, but far better than losing irreplaceable data.)

I'm looking at Newegg and Amazon (since I'm in Cali, Amazon is cheaper for me w/ no sales tax) and I'm trying to decide between getting another WD30EZRX (4-platter) to match up with my current one, or go for the Hitachi Coolspin 3TB (5-platter). After dealing with this formatting issue, I cannot really put a lot of stock in the bad user reviews all over both sites for 3TB drives, since most seem to be either vague "It don't work" type of comments, or people who are coming up against various 32-bit limitations (OS, drivers, USB enclosures, etc.) and blaming it on the HDD manufacturer.

There are definitely more issues to contend with in setting up these >2TB devices, especially when dealing with non-UEFI systems, but if any "blame" is to be thrown around, how about MS and the whole Windows platform (and all the hardware and software builders supporting that market) being so sluggish in committing to UEFI and GPT, when everyone could see for years where things were heading? As usual, there was a "cross that bridge when we get there" kind of attitude, and this significantly lengthens the bumpy "early adopter" period. But I feel a lot more confident with 3TB drives, now that I know about some great tools like gdisk and testdisk and, as always, helpful forums like [H]! Also, I always intended to learn Linux... just kept putting it off for another time. Now it doesn't seem quite so daunting after all.
 
Hitachi 5K3000 3TB is what you want to go with. Forget user reviews, as 99% of those people aren't qualified to give them -- it's always either a 1- or 5-star rating depending on their emotional state, whether they liked the packing job, or 100 other things that have nothing to do with the merits of the drive. Not sure why, but hard disks really bring out the crazies in the review sections.

And just like someone stated earlier, the first thing I also do in a recovery operation is a bit-for-bit copy of the source (patient) disk; then I perform the recovery work on the copy disk(s). And stay away from TestDisk: it's somewhat legacy now, hasn't been updated to deal with larger disk sizes, and I wouldn't recommend it for the data recovery novice -- very easy to make the situation worse.
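If you want to script that bit-for-bit copy yourself, here's a minimal sketch: a chunked raw copy that returns a checksum you can use to verify the clone. The paths are placeholders; on real hardware you'd point it at the unmounted block devices -- or better, use a purpose-built tool like GNU ddrescue, which also retries and logs bad sectors.

```python
import hashlib

def clone_raw(src_path, dst_path, chunk=1024 * 1024):
    """Copy src to dst byte-for-byte; return a SHA-256 of the data copied."""
    digest = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(chunk)
            if not block:
                break
            dst.write(block)
            digest.update(block)
    return digest.hexdigest()
```

Hash the destination afterwards and compare: if the digests match, you have a faithful clone to experiment on.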

If you value your data and time, then once you get your data recovered to the new drive, run an Error Scan operation in HDTune (or, if WD has a utility to scan the entire surface of the disk, use that) to make sure the WD drive is good before putting it back into service as a backup drive for the new drive that's holding your recovered data. And begin replicating your files on a schedule, ideally with archival copies. Two independent drives, one holding archival copies of the other, beats RAID1 when it comes to backup and avoiding this situation happening again.
 
@odditory: Great advice, thanks! I will definitely be doing a full surface scan of the drive when I have recovered what I can. Will use the WD utility for that, then use gdisk to make a backup of the GPT. Oh, and I will never use quick-format on such large volumes! I want every sector on those platters tested thoroughly before committing so much data to them.

I've been steadily increasing my HDD space over time for exactly the purpose you advise: archival backups. I have Acronis TI 2011 that I use for regular automated backups, and have archival backups of all my data except for most of the DVDs I ripped, since it wasn't until now that I had the space to duplicate all those (and I reasoned I had "hard copies" of those anyway, but even if data is replaceable, so much time and effort is not). Yes, 3TB is way too many eggs in one basket, so the new 3TB will be a backup volume for this current one, though not a RAID1, for the reason you mentioned. If I had started with two of these in RAID1 the problem would have corrupted both drives and left me no better off.
 
so the new 3TB will be a backup volume for this current one, though not a RAID1, for the reason you mentioned. If I had started with two of these in RAID1 the problem would have corrupted both drives and left me no better off.
You've learned a valuable lesson on why RAID1 isn't a back-up. ;)
 
Oh, and I will never use quick-format on such large volumes! I want every sector on those platters tested thoroughly before committing so much data to them.

Except a Windows long-format isn't really the best test of the health of the drive; the only thing it tests is whether it can complete the format. And during the format, if the controller on the drive encounters bad sectors and remaps them on the fly, the format process won't know about it. I like a tool that shows me a map of the disk sectors, so if it encounters a bad one I can see it.

So my preference after purchasing a new drive is: #1, screenshot the SMART stats; #2, full surface scan with a vendor tool or third-party utility; #3, compare the SMART stats to before the surface scan and see if reallocated sectors increased. If the increase is significant, you might want to replace the drive within the 30-day return window.

Once you've done that, quick-format is fine. Technically you have nothing to lose with long-format except time, but I haven't used long-format in more than a decade (I'm usually formatting RAID arrays anyway, which it also doesn't make sense for).
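That before/after SMART comparison is easy to automate. A sketch, assuming the attributes have already been parsed (e.g. from smartctl output) into plain dicts of raw values; the attribute names and numbers here are illustrative:

```python
# Attributes worth watching for increases after a surface scan
# (names follow smartctl's conventions).
WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable")

def smart_deltas(before, after, watch=WATCH):
    """Return the watched attributes whose raw value increased."""
    return {name: after[name] - before[name]
            for name in watch
            if after.get(name, 0) > before.get(name, 0)}

# Hypothetical snapshots taken before and after the scan.
before = {"Reallocated_Sector_Ct": 0, "Current_Pending_Sector": 0,
          "Offline_Uncorrectable": 0}
after = {"Reallocated_Sector_Ct": 8, "Current_Pending_Sector": 0,
         "Offline_Uncorrectable": 0}
print(smart_deltas(before, after))  # {'Reallocated_Sector_Ct': 8}
```

An empty result means nothing got remapped during the scan; a nonzero delta is the cue to consider using that return window.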
 
I've had the same issue 3-4 times, about once per 3TB drive I own. I'm still looking for a solution, as I just lost one a few days ago after I thought I was in the clear. I've updated firmware and drivers, and Windows has been rebuilt many times during this period.

Are you using a SAS system as well? It's one thing I thought to blame for this happening so much (LSI 3442e controller, HP expander, Norco box).

http://hardforum.com/showthread.php?t=1617748

By the way, odditory, you warn against using TestDisk because it's old, but you don't recommend an alternative? I've never had any luck with it anyway, but what do you use?
 
Hello to the community, and very best holiday wishes to all!

After four days and nights I found this forum.
My WD30EZRX suddenly disappeared from the system and showed as unformatted after a restart. I've spent days with a dozen recovery programs and gotten nothing.
EASEUS Data Recovery Wizard Professional reports only 746.39GB, but finds 3 partitions with the proper size:
RAW, starting at 0 and running to the end
NTFS, starting a few megabytes in and running to the end
another NTFS, starting a few gigabytes in and running to the end.
I guess the first (RAW) is what the manufacturer produced, the second (NTFS) is what was shipped, and the last one is my own format command.

R-Studio as well as GetDataBack seem to find all the occupied clusters (R-Studio's visual representation of the scan process looks promising), but as a result -- nothing.

I have a modern mobo (ASUS Maximus IV) with the latest firmware and drivers installed PRIOR to the incident.
My drive was initially part of a WD My Book; it disappeared while connected over USB 3. Then I took it out of the box, connected it to the mobo, and was prompted to format it.

I am preparing myself for gdisk, although I'm an absolute noob at Linux. Mainly, I'm wondering what gdisk is going to give me in the end. Fix what? Structure to do what?
Any help please.
Cheers.
 
I just experienced a disk loss, like you did.
I'm running the server with Windows 2011 Essentials.
4x 2TB SATA disks as a RAID 10 array connected to the onboard ROMB system (RAID on Motherboard), so to the OS it looked like one 4TB disk. I used GPT because it is said to be capable of handling big drives and "safe".
The server is still running, as this was not the system disk (which must be smaller than 2TB for Windows Server -- a limitation that is not mentioned in the sales brochure ;)).
Then suddenly in the middle of the night (21:00) it stopped working and sent an alert email to me, right before the daily backup started. One day lost!
:confused: What do we have in common?
Operating system?
Big drives (2TB+)?
GPT?
It's not SAS vs. SATA, as we have both systems affected.

I'm in the middle of analyzing the disks with TestDisk (the quick scan has been running for more than a day now and is at 75%; I wonder how long the detailed scan will take). I will report the results when I'm done.
 
Read this thread and started down the same path to resolve my identical issue (3TB drive on an ICH8R controller on v8... Intel AHCI drivers, wrote over 2TB to a 3TB disk, rebooted and lost GPT).

I thought I was being cunning in not removing the Intel drivers prior to updating to 10.1... but on reboot Check Disk ran, and I am now looking at a stream of 'Deleting orphan file record segment 12345' messages. Not the prettiest thing in the world. That will teach me for working on this via VNC instead of diligently waiting with the Esc key ready to prevent it from running.

Hopefully data is recoverable.
 
After reading this thread, I'm very, very cautious about getting any 3+ TB drive. :( Very, very cautious. :eek: Methinks this issue should be summarized into a sticky. :)

I would normally volunteer to do it myself, but I'm a complete noob about drives > 2.2 TB. :eek:
 
A good backup saved my life. Afterwards I kicked out the Windows server and dramatically reduced the time I spend on server management.
My data was not recoverable. I tried several tools and spent a lot of time.
What I learned is that a daily backup is not enough, unless losing one day's work is acceptable to you. I run several incremental backups during the day now.
 
I used GPT because it is said to be capable of handling big drives and "safe".

Well, GPT should not be blamed here; after all, it's just a table of partition start addresses, and it's required if you use a disk above the 2 TB boundary. With MBR you could not run into the same problems, because it would not let you use the whole disk in the first place.

I plan to upgrade a computer with an Intel S3420GPLC server board to a 3 TB HDD. The board supports UEFI, but seems not to fully support GPT and 3 TB drives. At least the Intel ESRT (the name for host RAID on server boards) is said not to support them. I could not get any information on whether the board supports booting from 3 TB drives in plain AHCI mode. In any case, I have to test it thoroughly before using the system with important data. I hope this problem will not persist with new hardware, but if I remember the 128GB/LBA48 issues, those stayed around for several years. At that time HDDs larger than 128 GB were really expensive, so consumers did not encounter the problem often.
 
Why not blame GPT?
I just can't accept to buy a product that holds all the company data and due to some little fault in software the data is gone. Maybe others don't mind?!
A server product must protect my data from getting lost - the windows server failed to do so.
 
GPT is not a "product". There are clear specifications for how it is to be implemented and what it is capable of. What is to blame here is the BIOS, the driver, or the software on top of it. If you assemble a system yourself, you are responsible for compatibility. While there are a lot of standards that should ensure compatibility out of the box, history shows that this is often not the case. Just look at PCI: there is a several-hundred-page spec, and it is very detailed, yet you could find many devices that did not work together correctly. If you buy a complete system, however, you can blame the vendor for incomplete validation. That is why storage servers cost a fortune compared to the bare hardware.
 
Maybe I should say the GPT implementation of Windows Server is to blame. But I'm not deep enough into the specs to tell you. I would never use this combination again, though.
 
Maybe I should say the GPT implementation of Windows Server is to blame. But I'm not deep enough into the specs to tell you. I would never use this combination again, though.

Maybe the real issue is the "ecosystem," meaning all the tools we use to address some issue of system design and management. If you buy a pre-built system, presumably you get a system where the tools and the design are compatible. For homebrew systems, which we all like to do, we are assuming that risk. Sometimes that's OK, sometimes it's not so OK. Good backups are critical here, and so is some measure of patience and diligence.
 
Maybe I should say the GPT implementation of Windows Server is to blame. But I'm not deep enough into the specs to tell you. I would never use this combination again, though.

I doubt GPT is the source of the problem. Probably the onboard RAID or its driver is the culprit.
 
Hello,

I have encountered the same problem with two Seagate 3TB drives I bought to back up my family archive, and I have lost loads of home movies and pictures I desperately need to get back.

Both drives encountered problems once they got to around 2TB of data, causing the drives to reconfigure to the same arrangement that you had, i.e. a 746.39GiB volume while Windows Disk Management sees it as 2794.39GiB.

I have followed your thread, and I also discovered I was using an old Intel Rapid Storage Technology driver, which I have now upgraded to version 11.6.

My question is where do I go next? Please bear in mind I am a luddite with very little computer expertise or programming knowledge. I should add that one disk sits inside the machine connected via eSATA, and the second is in an external Blizzard Icybox which is supposed to be certified for 3TB+ SATA disks; anyhow, both disks have the same problem and appear identically in Windows. At the present time both are disconnected.

From your post it is not clear whether I should run gdisk or buy EASEUS Data Recovery Pro, and it seems as soon as I connect the disk, chkdsk will run anyhow? In your post you give the impression that I need to run gdisk to put the file structure back in place, but then it seems that chkdsk did it anyway?

So suffice to say my lack of knowledge meant I got a little lost in your post, and I would be grateful if you could do a simple step-by-step guide to what you did, so I can follow you and recover my files. I would note that you do not actually say in your post whether you fully restored all your data, but I am assuming you did.

When I lost my data I spoke to Seagate, who were useless, and a couple of so-called experts, and no one could help me; your post has given me some hope. So thanks for the original post, and many thanks for any help you can give me now.

Best Regards,

GlennM
 
I had a similar problem.

3TB WD30EZRX in a external enclosure over esata (marvell controller).

It was split into two 1.8ish TB partitions (A & B). When I shrank one of the partitions (A) to make a third, exFAT one (C), using EaseUS Partition Master 9, I lost the other partition (B).

Windows couldn't see the unfortunately liberated unallocated space, maybe because of the enclosure (when I first formatted the drive I had to connect it internally, because my enclosure has issues with >2TB drives/partitions), so I plugged it in internally.

Now that the machine could see the unallocated space, Active Partition Recovery for Windows was able to recover the lost partition.

So yeah, Active Partition Recovery seems to do the trick with GPT drives. I hope it works for others too.
 

Sorry to hear of your troubles, GlennM. Alas, I was unable to recover ANY of my data from that debacle. I paid for EaseUS Data Recovery Pro and ran it for days trying to glean something usable from the partition. All I got in the end was a large quantity of heavily nested folders with hex numbers for names, all empty but for some log files with unreadable strings. It "recovered" about 12MB of these useless bits of stray data out of over 900GB lost. So, no, I do not recommend that recovery program. Why pay $70 for something that doesn't work when there are so many free options that offer a similar user experience? ;)

However, this experience has not deterred me from >2TB HDDs. It is just a matter of ensuring a compatible chain of hardware, software, firmware, BIOS, and OS. (That's all! ;)) I am currently running 2x 3TB WD30EZRX drives and 2x 3TB Hitachi 5K3000 drives (not counting my <=2TB drives), and have had no problems since I updated that Intel Rapid Storage Technology driver over a year ago. I just couldn't save any data that had been lost during the time that the incompatible, 32-bit-limited Intel storage driver was installed. Oh, and I run nightly backups of my entire system.

At this point I price HDDs at double the advertised price and buy in pairs (not for RAID1, but for backups). There is no substitute for regular backups; I don't care how much you pay for a storage device or how long the warranty lasts. Every single one of them will fail at some point, taking your data with it to oblivion. I think of all digital storage as volatile, not just RAM! HDDs just take a bit longer than RAM to lose your data after the power goes off. (If you really value your data, a little paranoia can go a long way... :rolleyes:)
 
I just wanted to thank Poee for your response, I am grateful.

I still have my drives unmolested by any recovery software and so I still seek a solution to recover my data.

I still use >2TB drives with the upgraded Intel software, and likewise no problems have been encountered to date. You are right about backups and data loss; it is a risk we all face.

Thanks for your response, and all the best.

BR

Glenn M
 
Hi Glenn,

I read your story after trying to post-mortem the same thing happening to me. Did you ever work out how to recover your data?

In my experience, it is possible to do a disk clone where you copy the entire contents of your drive as raw data from one hard drive to another. It takes a long time (~24 hours), as you copy the contents of every sector on the drive whether or not it contains data. From there you have a consequence-free drive to experiment on. TestDisk is a free and powerful, although not particularly intuitive, program. I found it was unable to replace my partition table, but in the past I've used it to recover video files and photos from deleted partitions using its partner program PhotoRec, which is easier to use. You don't get any file names or the directory structure, but it's better than nothing.
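For the curious, the kind of signature carving PhotoRec does can be sketched in a few lines. This toy version just scans a raw image for JPEG start/end markers; a real carver handles fragmentation, sector alignment, and many more formats:

```python
JPEG_SOI = b"\xff\xd8\xff"  # start-of-image marker
JPEG_EOI = b"\xff\xd9"      # end-of-image marker

def carve_jpegs(image: bytes):
    """Yield (offset, blob) for each JPEG-looking region in a raw image."""
    pos = 0
    while True:
        start = image.find(JPEG_SOI, pos)
        if start < 0:
            return
        end = image.find(JPEG_EOI, start)
        if end < 0:
            return
        yield start, image[start:end + 2]
        pos = end + 2
```

This is also why carving loses file names and directory structure: it reads only the raw bytes, never the filesystem metadata.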

The important thing -- which you appear to understand, but I'll say again for the benefit of future readers -- is not to write anything to the drives in the meantime. Which means only attaching the drive once the computer has started up, for fear of Check Disk stepping in and trying to fix things.

If you've learnt anything helpful since your last post, I'd be very interested to hear it.

Thanks
 
Wow, I've had 3 similar failures over the course of a year. I chalked the failures up to a bad controller until I saw this thread. It turns out that I believe I have the same issue! The latest driver for my ICH9R was version 9.x, which clearly wouldn't support 4TB drives.

After copying files last night to back up my system to a 4TB drive, I went over the 2TB threshold and the drive went RAW. I did a bunch of searching and found this thread.

I believe my MFT must be corrupted... I believe what happens is that sector 1 gets overwritten, via wraparound, by data written to the disk past the 2TB limit, and that nukes the disk.
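That wraparound is simple modular arithmetic: if the driver truncates LBAs to 32 bits, a write aimed just past the 2 TiB mark (with 512-byte sectors) lands back at the start of the disk, right on top of the protective MBR at LBA 0 and the primary GPT header at LBA 1. A sketch:

```python
def wrapped_lba(lba, bits=32):
    """Where a write actually lands if the driver truncates LBAs to `bits`."""
    return lba % (1 << bits)

two_tib_sectors = 1 << 32  # 2 TiB worth of 512-byte sectors

print(wrapped_lba(two_tib_sectors))      # 0 -> protective MBR
print(wrapped_lba(two_tib_sectors + 1))  # 1 -> primary GPT header
print(wrapped_lba(12345))                # 12345 -> below the limit, unaffected
```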
 