Which is Worse for an HDD? (Folder Structuring)

TorxT3D
Gawd · Joined: Apr 30, 2006 · Messages: 649
Which is more stressful on an HDD?

- 2000-3000 folders in the root directory of the partition
or
- breaking those 2000-3000 folders up into subdirectories of genres, if you will


The reason I ask is that when there are a lot of folders in the root of the partition, Windows seems to take forever to scan and display them, and/or causes bad thrashing on the HDD during the scan. Anyone know what I'm talking about?

Well, I've also tried splitting the folders up into categories and it's still the same issue.
Any remedies? I've already turned off and disabled indexing on the drives...
 
Using NTFS as the filesystem, I guess? I haven't hit that sort of limitation. I have 1,200 image files in one folder and it loads immediately. Making a thousand subdirectories, each with a thousand child directories, takes about four seconds to list each subdirectory. That's a million directories, though, so I guess it's not too bad.
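A scaled-down sketch of the test described above (the counts are reduced from 1000 x 1000 so it runs quickly, and all the paths here are made up):

```shell
# Create OUTER subdirectories, each with INNER child directories,
# then time listing one of them. The real test used 1000 x 1000.
OUTER=50
INNER=50
root=$(mktemp -d)

i=0
while [ "$i" -lt "$OUTER" ]; do
    j=0
    while [ "$j" -lt "$INNER" ]; do
        mkdir -p "$root/sub_$i/child_$j"
        j=$((j + 1))
    done
    i=$((i + 1))
done

# Redirect to /dev/null so terminal scrolling doesn't skew the timing.
time ls "$root/sub_0" > /dev/null

children=$(ls "$root/sub_0" | wc -l)
rm -rf "$root"
```

Scaling OUTER and INNER back up to 1000 reproduces the million-directory layout, at the cost of a much longer setup phase.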
 
He said in the root directory of the partition soooooo... I'd say that's a terrible idea, actually.

Create some subfolders in the root directory of the partition/drive itself and then go from there populating those with even more, but 3K directories in root? Man, talk about choking a horse... geez :)

Yes, I know what you're talking about hence my suggestion. Give it a shot and see how it works for you.
 
Well, I've also tried splitting the folders up into categories and it's still the same issue.
Any remedies? I've already turned off and disabled indexing on the drives...

Create some subfolders in the root directory of the partition/drive itself and then go from there populating those with even more, but 3K directories in root? Man, talk about choking a horse... geez :)


He tried that, didn't help.
 
It doesn't matter where in the filesystem the folders are; they're all just virtual addresses. Folders at the root of the drive aren't somehow different from folders in subdirectories; they're just like any other folder. The only spot on the drive that is really unique is the MBR.

I wrote a simple bash script to create 10,000 uniquely named files in a single directory on my 400 MHz Pentium II server; ls-ing the directory took maybe 2-3 seconds (it was mostly slowed down by the console's scrolling speed, I presume).

Anyway, Google recently published an in-depth look at hard drive reliability and found that usage patterns had no impact on drive lifespan.
 
It doesn't matter where in the filesystem the folders are; they're all just virtual addresses. Folders at the root of the drive aren't somehow different from folders in subdirectories; they're just like any other folder. The only spot on the drive that is really unique is the MBR.
That depends on the filesystem.
I wrote a simple bash script to create 10,000 uniquely named files in a single directory on my 400 MHz Pentium II server; ls-ing the directory took maybe 2-3 seconds (it was mostly slowed down by the console's scrolling speed, I presume).
So redirect the output to /dev/null and see how fast it goes.
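Combining the two posts above into a quick sketch: create 10,000 uniquely named files and time the listing with output sent to /dev/null, so console scrolling stays out of the measurement (the count and paths here are arbitrary):

```shell
# Create 10,000 uniquely named empty files in a scratch directory.
dir=$(mktemp -d)

i=0
while [ "$i" -lt 10000 ]; do
    : > "$dir/file_$i"
    i=$((i + 1))
done

# List with output discarded, as suggested, so only the filesystem
# and readdir cost are being timed.
time ls "$dir" > /dev/null

files=$(ls "$dir" | wc -l)
rm -rf "$dir"
```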
 
Hi,

Windows? Well, what type of files?

Windows inspects each and every file in detail if it thinks it can extract extra information for the user (photos, videos, music, etc.). This can take almost forever with many large video files, for example. And Windows doesn't only check the file header: if you have one large video file in a folder and try to delete it right after opening that folder, you can't, because "somebody" still has access to that file.

Hans-Jürgen
 
It doesn't matter where in the filesystem the folders are; they're all just virtual addresses. Folders at the root of the drive aren't somehow different from folders in subdirectories; they're just like any other folder. The only spot on the drive that is really unique is the MBR.

I wrote a simple bash script to create 10,000 uniquely named files in a single directory on my 400 MHz Pentium II server; ls-ing the directory took maybe 2-3 seconds (it was mostly slowed down by the console's scrolling speed, I presume).

Anyway, Google recently published an in-depth look at hard drive reliability and found that usage patterns had no impact on drive lifespan.

That's all fine and dandy, but he's running Windows, and even though I'm a hardcore Windows user for many reasons, I know it simply can't handle disk operations like other OSes can. Windows simply can't tolerate extreme numbers of directories in the root of any drive, regardless of the filesystem. Breaking the directories into subdirs can help, but it's still going to choke when reading through all the gunk.

Try creating 10K uniquely named files in a single directory on a Windows machine; it'll take a lot more than 2-3 seconds for most operations. I have one directory on an NTFS partition with 2,230+ pics in it, all JPGs, and listing it either in Explorer (god-awful slow) or the Command Prompt (slow, but faster than Explorer by a wide margin) still takes a lot more than 2-3 seconds on a 7200 RPM drive with an 8 MB buffer.
 
Hi,

I have one directory on an NTFS partition with 2,230+ pics in it, all JPGs, and listing it either in Explorer (god-awful slow) or the Command Prompt (slow, but faster than Explorer by a wide margin) still takes a lot more than 2-3 seconds on a 7200 RPM drive with an 8 MB buffer.

That's exactly what I wrote about. Windows scans the full EXIF data (and probably more, e.g. watermarks) of each image file for those annoying popups.

Hans-Jürgen
 
Ahh... yeah, some good information here.

So yeah, I'm using XP. Is there any substantial difference between XP and Vista as far as this goes?
I'm assuming a Unix-based operating system will handle this situation a lot better? (Would be no surprise, lol)
 
#1: Make one subfolder to hold all of these folders. I presume these are frequently used folders. Put a ( at the beginning of the folder name so it shows up at the top of the list of folders on the disk for quick and easy access. You may want to create several folders like that.
#2: Make a list of how and where you would sort these folders into subfolders.
#3: Name the subfolders in a manner that easily identifies the folders they hold. Some subfolders may contain many folders and some fewer. The idea is quick access.
#4: Get a copy of PerfectDisk 8. When it defrags the drive, it will also put all of the folders in one compact area, which makes opening them a lot quicker.
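A hypothetical sketch of steps #1-#3: the category names and the ( prefix convention here are invented examples, not a prescription.

```shell
root=$(mktemp -d)

# Stand-ins for a few of the 2000-3000 folders sitting in the drive root.
mkdir "$root/Action" "$root/Ambient" "$root/Blues" "$root/Rock"

# Step 1: a ( prefix sorts the category folder to the top of listings.
mkdir "$root/(Music A-B" "$root/(Music C-Z"

# Steps 2-3: file each top-level folder under its category,
# here simply by first letter.
for d in "$root"/[A-B]*/; do
    mv "${d%/}" "$root/(Music A-B/"
done
for d in "$root"/[C-Z]*/; do
    mv "${d%/}" "$root/(Music C-Z/"
done
```

After this, the drive root holds two category folders instead of thousands of entries, and each category stays small enough to list quickly.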
 
With any modern filesystem, that doesn't make a difference for the HDD itself.

When it comes to performance, keeping directories down to small numbers of entries is a good idea. Even with the hashing modern OSes do on directory entries, you pay memory for the hash tables.
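One common way to keep per-directory entry counts small is to fan files out into buckets keyed by a short hash prefix, so no single directory grows unbounded. A sketch with made-up file names, using the first two hex characters of an md5sum (up to 256 buckets):

```shell
root=$(mktemp -d)

# Place a file into a bucket directory derived from its name's hash.
place() {
    name=$1
    bucket=$(printf '%s' "$name" | md5sum | cut -c1-2)
    mkdir -p "$root/$bucket"
    : > "$root/$bucket/$name"
}

place "movie_collection"
place "vacation_photos"
place "tax_records"
```

With 3,000 folders spread over 256 buckets, each bucket averages only about a dozen entries, which any filesystem lists quickly.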
 