How to screw up your HDD in three easy steps


_Sin_

http://www.theinquirer.net/?article=14597

I call shens, but I don't have any spare drives at the moment. (I do have some 160GB drives I'd test it out on, though, if people are lucky.)

Unused space on hard drives recovered?

Hidden partitions revealed

By INQUIRER staff: Tuesday 09 March 2004, 14:33
READER WILEY SILER has sent us a method, which he says was discovered by Scott Komblue and documented by himself, which they claim can recover unused areas of a hard drive in the form of hidden partitions.

We haven't tried this here at the INQUIRER, and would caution readers that messing with your hard drive is done at your own peril and very likely voids your warranty. Here is what Wiley and Scott did. µ

* UPDATE Does this work? We're not going to try it on our own machine, thank you very much. Instead, we're waiting for a call from a hard drive company so we can get its take on these claims.

Required items
Ghost 2003 Build 2003.775 (be sure not to allow patching of this software) and 2 x hard drives (an OS must be installed on both). For the sake of clarity, we will call the drive we are trying to expand (T) in this document (T for the target of partition recovery). The drive you use every day, which I assume you want to keep as master with your current OS and data, will be the last drive we install in this process and will be called (X), as it is your original drive.

1. Install the HDD you wish to recover the hidden partitions on (hard drive T) as the master drive in your system, with a second drive as a slave (you can use hard drive X if you want to). Any drive will do as a slave, since we will not be writing data to it; however, Ghost must see a second drive in order to complete the following steps. Also, be sure hard drive T has an OS installed on it. You must ensure that the file system type is the same on both drives (NTFS to NTFS, FAT32 to FAT32, etc.).

2. Install Ghost 2003 build 2003.775 to hard drive T with standard settings. Reboot if required.

3. Open Ghost and select Ghost Basic. Select Backup from the list of options. Select C:\ (this is the partition on hard drive T that we want to recover space from) as our source for the backup. Select our second drive as the target (no data will be written, so worry not). Use any name when requested, as it will not matter. Press OK, Continue, or Next until you are asked to reboot.

Critical step
4. Once the reboot begins, you must shut down the PC prior to the loading of DOS or any drivers. The best method is to power down the PC manually the moment you see the BIOS load and your HDDs show as detected.

5. Now that you have shut down prior to allowing Ghost to do its backup, you must remove the HDD we are attempting to expand (hard drive T, which we had installed as master) and replace it with a drive that has an OS installed on it. (This is where having hard drive X is useful: you can use your old hard drive to complete the process.) Place hard drive T as a secondary drive in the system. Hard drive X should now be the master, and you should be able to boot into the OS on it. The best method for this, assuming you need to keep data from an old drive, is:

Once you boot into the OS, you will see that the second drive in the system is the one we are attempting to expand (hard drive T). Go to Computer Management -> Disk Management

You should see an 8MB partition labeled VPSGHBOOT or similar on the slave HDD (hard drive T), along with a large section of unallocated space that did not show before. DO NOT DELETE VPSGHBOOT yet.

6. Select the unallocated space on our drive T and create a new primary or extended partition. Select the file system type you prefer and format with quick format (if available). Once formatting completes, you can delete the VPSGHBOOT partition from the drive.

7. Here is what you should now see on your T drive.

a. Original partition from when the drive still had hidden partitions
b. New partition of space we just recovered.
c. 8MB of unallocated space.

8. Do you want to place drive T back in a PC and run it as the primary HDD? Go to Disk Management and set the original partition on T (not the new one we just formatted) as the active partition. It should be bootable again if no data corruption has occurred.

Caution
Do not try to delete both partitions on the drive so you can create one large partition. This will not work. You have to leave the two partitions separate in order to use them. Windows Disk Management will show erroneous data, in that it will report the drive size as the manufacturer's stated drive size, while the available size will equal ALL the available space, recovered partitions included.

This process can cause a loss of data on the drive that is having its partitions recovered, so it is best to make sure the HDD you use is not your current working HDD with important data on it. If you do this on your everyday drive rather than a new drive with just junk on it, you do so at your own risk. It has worked completely fine with no loss before, and it has also lost the data on the drive before. Since the idea is to yield a huge storage drive, it should not matter.

Interesting results to date:
Western Digital 200GB SATA
Yield after recovery: 510GB of space

IBM Deskstar 80GB EIDE
Yield after recovery: 150GB of space

Maxtor 40GB EIDE
Yield after recovery: 80GB

Seagate 20GB EIDE
Yield after recovery: 30GB

Unknown laptop 80GB HDD
Yield: 120GB
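
Before taking yields like those at face value, it's worth cross-checking what Windows actually reports, since the caution above already admits Disk Management shows erroneous totals. Here's a rough Python sketch of that cross-check (my own addition, not from the article: it assumes a Windows box with the wmic tool and Python available, and the disk index is a placeholder you'd adjust for the drive you modified):

Code:
# Rough sketch: compare a disk's reported physical size with the sum of its
# partition sizes, to see whether the "recovered" space is just the same
# sectors counted twice. Assumes Windows with wmic on the PATH.
import subprocess

DISK_INDEX = 0  # placeholder: the physical disk you ran the Ghost trick on

def wmic_sizes(where_clause):
    """Run 'wmic <class> where <clause> get Size /value' and return the numbers."""
    cmd = ["wmic"] + where_clause + ["get", "Size", "/value"]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    sizes = []
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("Size=") and line[5:].isdigit():
            sizes.append(int(line[5:]))
    return sizes

disk_size = wmic_sizes(["diskdrive", "where", f"Index={DISK_INDEX}"])[0]
part_sizes = wmic_sizes(["partition", "where", f"DiskIndex={DISK_INDEX}"])

print(f"Physical size reported by the drive : {disk_size / 2**30:8.1f} GiB")
print(f"Sum of partition sizes              : {sum(part_sizes) / 2**30:8.1f} GiB")
# If the partitions add up to more than the physical size, the "new" space
# is almost certainly an accounting glitch, not real capacity.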
 
It seems pretty wacky.
I do remember wayyyyy back when you used to be able to hook an MFM drive to an RLL controller and get 50% more space out of it, but it was usually unstable as hell.
[We are talking back in the 20/40 meg days, mid to late 80s.]
 
This looks very intriguing... I have hundreds of drives I could test it out on here at work, but not a copy of Ghost in sight!
 
m2c4u just did this on a Samsung SP1604N and gained 7 gigs from it. He also ran HD Tach and didn't see any decrease in performance.
 
I know I have an old WD 10 gigger somewhere around the house... maybe that'll be my next project.
 
I shot off an email to Kevin Rose at The Screen Savers to see if he could do it in their lab. I've got two 13 gig drives to play around with. I really want to do it to my 250 gig. I'm not real sure I understand why this works, or whether the extra space is actually there and usable, or if it's just a table glitch.
 
I'm wondering if it's just a matter of space being incorrectly reported. For those of you giving it a whirl, make sure to grab some big movie files or some such, try to fill up all this "new space", and play them back.
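
For anyone running that kind of test, here's a rough fill-and-verify sketch in Python (the drive letter, directory, and file size are placeholders I picked, not anything from the posts): it writes pseudorandom files until the partition is full, then reads them back and compares checksums.

Code:
# Rough fill-and-verify sketch: write pseudorandom files until the disk is
# full, then read them back and compare MD5 checksums. Any mismatch means
# the "recovered" space is not really storing your data.
import hashlib, os

TEST_DIR = r"E:\filltest"      # assumption: a directory on the new partition
CHUNK = 64 * 1024 * 1024       # 64 MB per file, arbitrary

os.makedirs(TEST_DIR, exist_ok=True)
checksums = {}

# Phase 1: fill the partition with random data, remembering each file's hash.
i = 0
while True:
    data = os.urandom(CHUNK)
    path = os.path.join(TEST_DIR, f"fill_{i:05d}.bin")
    try:
        with open(path, "wb") as f:
            f.write(data)
    except OSError:            # disk full (or a write error): stop filling
        break
    checksums[path] = hashlib.md5(data).hexdigest()
    i += 1

# Phase 2: read everything back and verify.
bad = 0
for path, expected in checksums.items():
    with open(path, "rb") as f:
        actual = hashlib.md5(f.read()).hexdigest()
    if actual != expected:
        bad += 1
        print("MISMATCH:", path)

print(f"Wrote {len(checksums)} files, {bad} came back corrupted.")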
 
Reminds me of Stacker that I used to run on my 286. lol
 
Czar.. Comments? :D

Sounds good, but nothing is free. There has to be a tradeoff. If not performance, then stability or life expectancy.
 
I've got a 60GB WD at home that's collecting dust (it was acting up a bit), so I'll give this a whirl when I get off of work. It's past its warranty date, so it doesn't really matter if it breaks or not.
 
Really interested in this. I've got a 40GB Western Digital drive in mine at the moment, with 500MB free :x

If all looks good I might back it up to my housemate's PC and give it a go. I need the space and I'm skint ;)
 
Does this count as overclocking?..... I knew it was only a matter of time before we'd see posts in the overclocking forum about hard drives. My only question now is whether power supplies or floppy drives are next... seeing that we've figured out how to overclock pretty much everything else under the sun ;)

In all seriousness though, the geometry of the drive has not changed; there is still the same number of bytes per square inch as determined by the drive engineers. Something's gotta give, there's gotta be some catch. It just smells like an exploit of some type; it's just a matter of what's being exploited.

If there truly are no performance, reliability, or physical repercussions for doing this, then this is truly awesome. I have a copy of the right build and a 20GB WD drive I'm going to experiment on atm. I'm just really interested to see how accurately the drive reads & writes when the 'hidden' partition gets utilized.
 
Stacker was a realtime data compression utility, and it hampered performance a bit.
I'm just wondering what this can be; apparently a table error, like someone said before.
It's hard to believe that HD companies would leave that kind of space unused if this were true.
Why the hell launch a 150GB drive if it can have 250 to 300GB???
Well, let's see what it does....
Waiting for the results.
 
Just got 9GB extra on the 10GB drive. I will now test it out by writing a bunch of data to it and see how those programs/movies run on both.
 
heh n1 ice, post back in here soonish :)

Honestly considering going home after work tonight and knocking this out on my drive; I really need some more MP3 storage...

I also can't see how it would work. There must be caps on the physical amount of space and data that can be used, and I don't see why HDD companies wouldn't have used it ages ago if it was there. If it's writable and usable with little performance hit, this is a classic! :)
 
Originally posted by SKiTLz
Czar.. Comments? :D

Sounds good, but nothing is free. There has to be a tradeoff. If not performance, then stability or life expectancy.


Hmmmm..... I'm researching.

Off the top of my head, there are several areas of a HDD that are typically inaccessible (like the boot sector, the MBR, the area where dynamic drive overlays are written, the drive architecture, servo bursts, "spare" sectors to replace those that go bad, etc.), but I don't think all of that would account for this much extra space.

But seriously, doubling the HDD space?
Interesting results to date:
Western Digital 200GB SATA
Yield after recovery: 510GB of space

Something ain't right.
 
There are several things that could be going on with this. Not saying specifically any of these are the cause.

Many drive vendors sell the same model of drive with different capacities. Lots of times, the smaller capacity drives will be exactly the same as the bigger capacity drives. The extra space is just disabled somehow. I'm not sure what mechanism the manufacturers use to do this, but I wonder if this hack is bypassing that on some drives.

Also, hard drives have many spare sectors reserved to be remapped when a usable sector goes bad. I have no idea how much space they actually have set aside for this, but maybe this is somehow being converted into usable space. (which leaves me to wonder what would happen when a sector goes bad and the drive would need to remap one into this 'new' space)

Just my theories right now, I'm sure we'll find out soon enough why this works...
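
If the spare-sector theory is in play, one thing worth watching is the drive's SMART reallocated-sector count before and after exercising the "new" space. A rough sketch below, my own addition rather than anything from the post: it assumes smartmontools is installed, and the device name is a placeholder.

Code:
# Rough sketch: read the SMART Reallocated_Sector_Ct attribute via smartctl
# (smartmontools). Run it before and after filling the "recovered" space;
# a jump in reallocations would suggest the drive is eating into its spares.
import subprocess

DEVICE = "/dev/hda"   # assumption: substitute the device node for your drive

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    if "Reallocated_Sector_Ct" in line:
        # The last column of the attribute table is the raw value.
        print("Reallocated sectors (raw):", line.split()[-1])
        break
else:
    print("Reallocated_Sector_Ct not found; is SMART enabled on", DEVICE, "?")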
 
I was thinking about the model lockouts as well
In the past, some manufacturers locked out sections not just to limit the capacity (in support of a price/marketing structure) but also to place the "spec" of the HDD in the "sweet spot": either close to the spindle for fast access times or at the outer edge of the platter for better sustained transfer.

If that is the case.......
 
Originally posted by UICompE02
There are several things that could be going on with this. Not saying specifically any of these are the cause.

Many drive vendors sell the same model of drive with different capacities. Lots of times, the smaller capacity drives will be exactly the same as the bigger capacity drives. The extra space is just disabled somehow. I'm not sure what mechanism the manufacturers use to do this, but I wonder if this hack is bypassing that on some drives.

Also, hard drives have many spare sectors reserved to be remapped when a usable sector goes bad. I have no idea how much space they actually have set aside for this, but maybe this is somehow being converted into usable space. (which leaves me to wonder what would happen when a sector goes bad and the drive would need to remap one into this 'new' space)

Just my theories right now, I'm sure we'll find out soon enough why this works...

That first theory of yours is completely wrong. Why would a manufacturer pay the money to develop a higher capacity drive, and then instead of using that same technology to build lower capacity drives, just "disable" the extra space and sell it as a smaller drive? That's contrary to all common sense business practices. True that businesses do this with other products - such as processors - in order to reduce costs / increase profits, but if a hard drive manufacturer were to do this it would be the exact opposite.
 
http://www.pcguide.com/ref/hdd/op/media_Size.htm

Platter size @ PC Guide

Improved Seek Performance: Reducing the size of the platters reduces the distance that the head actuator must move the heads side-to-side to perform random seeks; this improves seek time and makes random reads and writes faster. Of course, this is done at the cost of capacity; you could theoretically achieve the same performance improvement on a larger disk by only filling the inner cylinders of each platter. In fact, some demanding customers used to partition hard disks and use only a small portion of the disk, for exactly this reason: so that seeks would be faster. Using a smaller platter size is more efficient, simpler and less wasteful than this sort of "hack".

finalgt, I wouldn't necessarily rule out the "limiting" that is often done.
They do sell identical HDDs with different capacities: both are made for $X but sell at different price points, one just has a fatter profit margin. Even selling the "limited" one generates profit, and it's cheaper to do that than to tool up for a different model.
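
A tiny back-of-the-envelope sketch of the seek-distance point in that PC Guide quote: for random seeks spread uniformly over N cylinders, the average head travel is roughly N/3, so confining data to a fraction of the cylinders shrinks the average seek proportionally. The cylinder count below is made up purely for illustration.

Code:
# Back-of-the-envelope: the expected distance between two uniform random
# points on [0, N] is N/3, so the average seek distance over a span of N
# cylinders is about N/3. Using only a fraction of the cylinders
# ("short-stroking") shrinks that average proportionally.
TOTAL_CYLINDERS = 60_000        # hypothetical figure, not a real drive spec

for fraction in (1.0, 0.5, 0.25):
    span = TOTAL_CYLINDERS * fraction
    avg_seek = span / 3
    print(f"Using {fraction:4.0%} of the cylinders -> "
          f"average seek distance ~ {avg_seek:,.0f} cylinders")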
 
Originally posted by finalgt
That first theory of yours is completely wrong. Why would a manufacturer pay the money to develop a higher capacity drive, and then instead of using that same technology to build lower capacity drives, just "disable" the extra space and sell it as a smaller drive? That's contrary to all common sense business practices. True that businesses do this with other products - such as processors - in order to reduce costs / increase profits, but if a hard drive manufacturer were to do this it would be the exact opposite.

Uhhh.... how would this be the exact opposite? You argue someone is completely wrong, yet you have no supporting reasons.

One product line would be much cheaper than multiple ones. The large markup on your high capacity drives could cover research and development and the smaller markup on your lower capacity drives would be profit. Not everyone is going to shell out $100s for the highest capacity drive, so instead of marketing only to one group of consumers, market to many. This model seems to work for many other products.
 
Originally posted by IceDigger
Just got 9GB extra on the 10GB drive. I will now test it out by writing a bunch of data to it and see how those programs/movies run on both.

has he posted back yet? Maybe his machine died...or exploded or something...
 
Originally posted by finalgt
That first theory of yours is completely wrong. Why would a manufacturer pay the money to develop a higher capacity drive, and then instead of using that same technology to build lower capacity drives, just "disable" the extra space and sell it as a smaller drive? That's contrary to all common sense business practices. True that businesses do this with other products - such as processors - in order to reduce costs / increase profits, but if a hard drive manufacturer were to do this it would be the exact opposite.

You assume that what we are charged for Hard disk space is directly proportional to what it costs the manufacturers...

In reality it is based on demand, not production costs...
 
Someone gimme the side effects dammit.. :D I'm a negative person by nature... I'm just having real trouble getting my head around doubling your HDD space so easily.

Would be good to see a press statement from an HDD manufacturer on this, but if it is true I suspect they will keep as quiet as possible.
 
Originally posted by Ice Czar
http://www.pcguide.com/ref/hdd/op/media_Size.htm
I wouldn't necessarily rule out the "limiting" that is often done.
They do sell identical HDDs with different capacities: both are made for $X but sell at different price points, one just has a fatter profit margin. Even selling the "limited" one generates profit, and it's cheaper to do that than to tool up for a different model.

Heck, IBM does this with their servers now :) You buy an 8-way server, and you get a 16-way with 8 procs disabled. Decide you need more power? Just call up IBM and they'll connect to your system and enable 8 more procs. It means that IBM really only has to ship a few different designs, and a small firmware change here and there can enable all sorts of different configs. Less work for IBM, it's a great marketing tool, and the customers are eating it up like crazy.

I can see this saving HD manufacturers a ton of money. Why design and build 10 different models of hard drives when you could design one and just enable/disable features/capacity with some firmware changes? Sure, the margins on the lower-end drives aren't as good as they could be, but they save a shit-ton (technical term :)) on factory tooling and setup costs.
 
Ok, just got done benchmarking the "old" space and the "new" space

old - 12308kB/s

new - 11785kB/s

Not much of a difference between the 2.

So far nothing is wrong, no errors.

Going to copy a DVD to the HD, play that via Alcohol on the new space, and see what happens.
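
For anyone wanting to repeat that comparison without HD Tach, here's a very rough sequential-throughput sketch in Python. The directories are placeholders for a folder on the original partition and one on the recovered partition; it times a large write and read through the OS cache, so treat the numbers as ballpark only.

Code:
# Very rough sequential throughput check: time a large write and read in a
# given directory. Point it at a folder on the "old" partition and one on
# the "new" partition and compare. Not an HD Tach replacement.
import os, time

def throughput(test_dir, size_mb=256):
    """Return (write_MBps, read_MBps) for a temp file in test_dir."""
    os.makedirs(test_dir, exist_ok=True)
    path = os.path.join(test_dir, "bench.tmp")
    chunk = os.urandom(1024 * 1024)           # 1 MB of random data

    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                  # push the data to the disk
    write_mbps = size_mb / (time.time() - start)

    start = time.time()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    read_mbps = size_mb / (time.time() - start)

    os.remove(path)
    return write_mbps, read_mbps

# Placeholders: directories on the original and the "recovered" partition.
for label, test_dir in (("old", r"C:\bench"), ("new", r"E:\bench")):
    w, r = throughput(test_dir)
    print(f"{label}: write {w:.1f} MB/s, read {r:.1f} MB/s")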
 
What model server are you speaking of? I work for IBM; I can look into that, sounds kinda cool.
 
Originally posted by IceDigger
Ok, just got done benchmarking the "old" space and the "new" space

old - 12308kB/s

new - 11785kB/s

Not much of a difference between the 2.

So far nothing is wrong, no errors.

Going to copy a DVD to the HD, play that via Alcohol on the new space, and see what happens.

:D
 
Originally posted by finalgt
That first theory of yours is completely wrong. Why would a manufacturer pay the money to develop a higher capacity drive, and then instead of using that same technology to build lower capacity drives, just "disable" the extra space and sell it as a smaller drive? That's contrary to all common sense business practices. True that businesses do this with other products - such as processors - in order to reduce costs / increase profits, but if a hard drive manufacturer were to do this it would be the exact opposite.

No, his theory is a sound one; the reasoning behind it is that CPU manufacturers do the same thing with their CPUs. Look at the Celerons and the newer Durons: the main thing missing is cache. Why make a completely different CPU when you can take the "failures" and sell them as something else?
 
Originally posted by ScratchMan
Uhhh.... how would this be the exact opposite? You argue someone is completly wrong yet you have no supporting reasons.

One product line would be much cheaper than multiple ones. The large markup on your high capacity drives could cover research and development and the smaller markup on your lower capacity drives would be profit. Not everyone is going to shell out $100s for the highest capacity drive, so instead of marketing only to one group of consumers, market to many. This model seems to work for many other products.

I can't make a definitive argument without a greater understanding of how hard drive technology is developed, but the following is based on the assumption that hard drive manufacturers need only develop one technology, which they can then apply to multiple drive models (different capacities).

A hard drive manufacturer doesn't necessarily have to design every drive model from the ground up just because it's a different capacity, does it? For example, once WD has the technology to create a 120GB 7200 RPM 8MB drive, it follows that they'll easily be able to create 40GB, 60GB, 80GB and 100GB variations of that drive without having to completely design the thing from the ground up, correct? If that's true, why would they manufacture 80GB drives, then "disable" 40 gigs and market them as 40GB drives? Their profit margins would be far lower than if they just manufactured a regular 40GB drive.

Again, that's based on my limited comprehension of the hard drive manufacturing process. Feel free to correct me if I'm wrong.
 
I want to test this on my 120GB WD drives. Has anyone else tested it on larger drives?

IBM Atlanta eh? GSSC? LOL!!! I am your BO in Raleigh!!!
 
While copying files over from my networked server to the "new" space, Windows restarts. When Windows comes back on, I get a driver error with the network card.

"\systemroot\system32\drivers\el2k_2k.sys device driver could not be loaded"

This could be a genuine error, as I have not installed any drivers other than the network one for the Win2k installation, or any fixes.

Going to take the driver out in safe mode and see what happens.

Cross your fingers everybody.
 