How does a file that isn't fragmented get fragmented??

videobruce

I tried doing a search, but that was a waste of time. Just basic "should I defrag" and "when should I defrag" questions.

Situation:
Large video files (500 MB to 2 GB) on a non-O/S partition that were defragged previously and NOT moved or changed in any way.

After running the defrag program, it reports no files as fragmented. Weeks later, checking the fragmentation of the same partition, numerous files are all of a sudden reported as fragmented when they were not before.

The question is, if a file ISN'T fragmented and it's not a file that was 'touched' (at least by me in any way), how the hell can it get fragmented???

If it matters, I'm running Win7 Pro using Piriform Defraggler.
 
Why do you worry about fragmentation so much? If you need better I/O switch to an SSD. They're not that expensive anymore.
 
https://www.howtogeek.com/115229/htg-explains-why-linux-doesnt-need-defragmenting/

This isn't a "look, Linux is better" post... although yes, FAT and NTFS are terrible file systems. ;) Take a minute and read this How-To Geek article though; it does a not-bad job of explaining why FAT and NTFS fragment.

Also, as Boonie says, if you move to an SSD it's not going to be an issue on Windows either.

Basically, if you think of a hard drive as a file cabinet... partitioning it with a file system is like setting up a bunch of hanging folders, all numbered and ready to hold your files.

FAT just started adding files one after the other. So if you edited a document later and made it a bit bigger, that extra bit would no longer fit. In the first drawer there is a table of contents listing where everything is... so FAT would just add the new bits to the end and edit the table of contents, creating a fragmented file. So FAT, as you can imagine, was always fragmented to some degree, and the more you edited and appended data to files, the worse it got.

NTFS, introduced by MS roughly 25 years ago... basically lifted some smart (for the time) ideas from IBM's HPFS, which IBM developed for their joint OS/2 project. (This is where Heatle chimes in and tells us of the fantastic MS engineers... but come on, HPFS and NTFS even use the same partition ident code (07); MS didn't even bother to change it.)
Anyway, the point is NTFS allows for a bit of a buffer after each file. So again, if you think of it like a cabinet with hanging folders... inside each hanging folder there is now a manila folder with enough room for a few files; it means if you need to add a bit more onto that file later you can take it out, add to it and put it back. As long as it doesn't grow too much you don't have to store data elsewhere and fragment. The issues of course are that some files will grow larger than the space allotted and still fragment anyway... also, because the drive is constantly leaving a bit of free room, as the drive fills up the reported free space isn't contiguous space. This is why Windows on spinning discs with less than 10-20% free space performs like shit. Every new file you write is going to end up being scattered into the free space left after other carefully placed records. So better than FAT, but still pretty shit.
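To make the "manila folder with some spare room" idea concrete, here's a small toy simulation (my own sketch, not NTFS's real allocator; the cluster counts and the 2-cluster slack are made-up values) showing how a file stays in one run while it grows into its spare room, and picks up a second run the moment it outgrows it:

```python
# Toy model (NOT how NTFS is actually implemented): a disk is a row of cluster
# slots, every file gets a contiguous run plus a couple of spare clusters
# behind it. Append within the spare room and the file stays in one piece;
# append past it and a second run appears.
DISK_CLUSTERS = 64
SLACK = 2                      # spare clusters reserved behind each file (toy value)
FREE, RESV = None, "~"         # free cluster / reserved "grow room"

disk = [FREE] * DISK_CLUSTERS
extents = {}                   # file name -> list of (start, length) runs

def find_free_run(length):
    """Return the start of the first contiguous run of `length` free clusters."""
    run = 0
    for i, owner in enumerate(disk):
        run = run + 1 if owner is FREE else 0
        if run == length:
            return i - length + 1
    raise RuntimeError("no contiguous free run")

def write(name, clusters):
    start = find_free_run(clusters + SLACK)
    for c in range(start, start + clusters):
        disk[c] = name
    for c in range(start + clusters, start + clusters + SLACK):
        disk[c] = RESV                     # leave a little room to grow
    extents[name] = [(start, clusters)]

def append(name, clusters):
    start, length = extents[name][-1]
    tail = start + length
    while clusters and tail < DISK_CLUSTERS and disk[tail] == RESV:
        disk[tail] = name                  # grow in place while grow room lasts
        tail += 1
        clusters -= 1
    extents[name][-1] = (start, tail - start)
    if clusters:                           # out of room behind the file:
        new = find_free_run(clusters)      # allocate a second run elsewhere
        for c in range(new, new + clusters):
            disk[c] = name
        extents[name].append((new, clusters))

write("A", 8); write("B", 8); write("C", 8)
append("B", 1)                  # fits in the spare room -> still one run
print(extents["B"])             # [(10, 9)]
append("B", 4)                  # overflows the spare room -> second run
print(extents["B"])             # [(10, 10), (30, 3)]  <- the file is now fragmented
# With SLACK = 0 this toy behaves like the FAT picture above: any append fragments.
```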

EXT4, the main Linux file system, is designed to not fragment... if you think of it as a cabinet: like NTFS it leaves space between files, the difference is it leaves massive amounts of space, spreading files from the start out over the disc. This has advantages for disc life as well as fragmentation. EXT4 also has the ability to "defrag" itself... not to get super technical, but Unix file systems use an inode system which makes it very easy to write a complete new version of a file and simply point the name at the new inode. (It's why Linux and other *nix OSs update in minutes, not hours... write new file, swap inode, done.) EXT4 will fragment if you purposely design loads to do it (but you have to basically be trying), or if you fill a drive past 95% or so... at that point pretty much every file system on a magnetic disc is going to fragment.
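For the "write a complete new version and swap the pointer" bit, here's roughly what that pattern looks like from userspace (a minimal sketch only; the filesystem's own journaling and extent allocation are obviously more involved, and the helper name and file name here are made up for the example):

```python
import os
import tempfile

def rewrite_atomically(path, new_bytes):
    """Write a fresh copy next to `path`, then swap the name over to it."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)      # brand-new file, fresh blocks
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(new_bytes)
            f.flush()
            os.fsync(f.fileno())                   # make sure the copy hit the disc
        os.replace(tmp, path)                      # atomically repoint the name
    except BaseException:
        os.unlink(tmp)                             # clean up the half-written copy
        raise

# Usage: instead of growing the old file in place (and maybe fragmenting it),
# the old blocks are freed in one go and the new version lands wherever the
# allocator finds a nice contiguous spot.
rewrite_atomically("notes.txt", b"a complete new version of the file\n")
```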

As to why your big video files are fragmenting even when you're not doing much else on the drive... that is Windows for you. (I joke, slightly.) If the drive is more than 75% full there is a good chance you will see fragmentation on even large files as NTFS tries not to run out of space. (Remember, reported free space and free contiguous space are not the same thing.) It's also possible system-level things are adding or changing metadata, index data, etc... all the hidden stuff Windows tends to keep on drives takes up space as well. Anyway, I would assume if you had a large drive with a few large files and lots of empty space, fragmentation shouldn't happen... if the drive is over half full, though, it seems pretty common for Windows and NTFS to start fragmenting larger files at least a little bit.

After typing all that though... it shouldn't really be a major issue unless you're talking massive amounts of fragmentation. A fragmented video file isn't going to play back any worse than one that isn't, unless you have a 20-year-old 4000 RPM hard drive or something. Unless something crazy is happening and the file is really scattered across the drive (which it should not be), you really shouldn't notice. Even for OS files, for the most part, a bit of fragmentation isn't going to make much difference in reading... where it makes Windows run like crap is in writing new data and being forced to scatter it.
 
if you need to add a bit more onto that file later you can take it out, add to it and put it back. As long as it doesn't grow too much you don't have to store data elsewhere and fragment.
Sorry if I don't quite follow but by that do you mean this?
It's also possible system-level things are adding or changing metadata, index data, etc...
or in other words, if a file is 670 502 914 bytes in size and something is added it will not be the same file again, so obviously you are talking about something else.

As I understand it, what the OP is asking is how a file's physical data on a platter can change location over time so that it becomes fragmented, and how what was previously reported as one contiguous space on an HDD now reports as 15 fragmented pieces? Is it because "inside each hanging folder there is now a manila folder with enough room for a few files", so that some data can be written inside this "one contiguous file space, as reported by the defragmenter", thus causing the avi file to get fragmented? But even so, will it cause even the smallest performance degradation if the physical file data on the platter is not moved and the read heads need to move to the same exact locations before and after "fragmentation" to read that avi file?

P.S. As for defragmentation applications, they use different algorithms to report total fragmentation. I have seen an app reporting close to zero fragmentation after a defrag job while Windows still reported more than 50% of files as fragmented, or vice versa. So if you use one particular defrag app for measuring fragmentation, you should use the same app later on to get comparable results.
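Just to illustrate that point (the file list below is entirely made up), two perfectly reasonable definitions give very different numbers for the same disc layout:

```python
# Made-up layout: extent counts and sizes are illustrative only.
files = {
    "movie1.avi": {"size_mb": 1800, "extents": 1},
    "movie2.avi": {"size_mb": 1500, "extents": 2},
    "thumbs.db":  {"size_mb": 1,    "extents": 5},
    "index.dat":  {"size_mb": 2,    "extents": 7},
}

fragmented = {name: f for name, f in files.items() if f["extents"] > 1}

pct_files = 100 * len(fragmented) / len(files)
pct_data = 100 * sum(f["size_mb"] for f in fragmented.values()) \
               / sum(f["size_mb"] for f in files.values())

print(f"fragmented files: {pct_files:.0f}%")   # 75% -- sounds terrible
print(f"fragmented data:  {pct_data:.1f}%")    # ~45.5% -- same disc, different story
# A tool that counted excess fragments per file, or free-space fragmentation,
# would give yet another number -- which is why two programs rarely agree.
```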
 
You'd need to ask Piriform what they consider fragmented before an answer can be given.
In the old days some defraggers would take into account consolidation of the empty space.

So let's say, on a freshly defragged drive, File B has File A in front of it and File C trailing it.
You delete File A. File B is still untouched but now has empty space in front of it, so it's "fragmented" according to the program because it is not in its "optimal" location, where it should be for all empty space to be consolidated.

File B itself has of course not changed at all.


Also why use defragmentation tools?
Windows 7 does it on its own
 
Windows does not do it as efficiently, and even then only when it is scheduled to do so; if your PC is not turned on long enough each day it may not happen any time soon, as it only runs when the PC is idle (and that's leaving aside the question of when, or if, it is required to defrag an HDD at all, as for modern fast HDDs the performance increase will be negligible).
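If anyone wants to see what the built-in defragmenter is actually scheduled to do, something like this works from an elevated prompt (a rough sketch: the scheduled-task name and defrag.exe switches shown here are what I'd expect on Windows 7/8/10, but they can vary between versions, so treat them as assumptions):

```python
import subprocess

# Show the built-in scheduled defrag task: is it enabled, when did it last run?
subprocess.run([
    "schtasks", "/Query",
    "/TN", r"\Microsoft\Windows\Defrag\ScheduledDefrag",
    "/V", "/FO", "LIST",
])

# Ask the built-in tool to *analyze* (not defrag) the media partition, e.g. D:
subprocess.run(["defrag", "D:", "/A", "/V"])   # /A = analyze only, /V = verbose
```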

As for Piriform Defraggler, aren't there separate "free space defragmentation" and "file defragmentation" measures, so that Piriform doesn't report this file as fragmented but rather reports free-space fragmentation?

In one case I can see good use for a defragmentation app: if you installed the OS on an almost-full data HDD and the system files reside at the end of the drive. You have all seen those HD Tune charts where the read speed slowly decreases from 200 MB/s to 80 MB/s or so at the end of the drive; you don't want that 80 MB/s to be where your system files and apps sit, right?
 
Sorry if I don't quite follow but by that do you mean this?
or in other words, if a file is 670 502 914 bytes in size and something is added it will not be the same file again, so obviously you are talking about something else.

As I understand it, what the OP is asking is how a file's physical data on a platter can change location over time so that it becomes fragmented, and how what was previously reported as one contiguous space on an HDD now reports as 15 fragmented pieces? Is it because "inside each hanging folder there is now a manila folder with enough room for a few files", so that some data can be written inside this "one contiguous file space, as reported by the defragmenter", thus causing the avi file to get fragmented? But even so, will it cause even the smallest performance degradation if the physical file data on the platter is not moved and the read heads need to move to the same exact locations before and after "fragmentation" to read that avi file?

P.S. As for defragmentation applications, they use different algorithms to report total fragmentation. I have seen an app reporting close to zero fragmentation after a defrag job while Windows still reported more than 50% of files as fragmented, or vice versa. So if you use one particular defrag app for measuring fragmentation, you should use the same app later on to get comparable results.

Well, it's not really a file cabinet... it's a disc broken up into sectors. A video file of 600 MB would take up thousands of sectors. The file system interfaces with the hardware, and the hardware controls the disc. Every HDD has a microcontroller of its own and decides where to really put data. The file system dictates how many extra sectors to allocate to files, etc.; often the inner workings of those microcontroller systems are pretty unclear. I doubt it's the OP's case, but some more advanced drives do self-correct and will move data. (Not a feature on most PC drives currently.)

Having said that, the file system does keep metadata on files... things like security settings, etc. All that data is kept on the drive; no, it's not appended to the actual file, but the OS may well be writing it close by. (I believe NTFS metadata files tend to be near the beginning of the drive, but the OS can move them around... they can and will grow, and they do get fragmented like any file... and if the FS and OS write metadata into clusters where file X or Y is, that file may also be considered fragmented depending on the way you read "fragmentation".) So I know it's really nebulous... however it is possible for a drive with nothing but 10 files on it to become fragmented even if nothing "new" is written. Just reading the data changes metadata files on the drive. (For instance, the system knows when you last accessed a file, which user did, etc.; that data is written to the drive. Also, if you search files on the drive, all those $volume things people may see if they look... that's metadata, and it is stored on the drive and can cause fragmentation.)
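A tiny illustration of the "just reading changes metadata" point (sketch only; the file name is a placeholder, and many Windows systems have last-access updates disabled for performance, in which case the two timestamps will simply match - the point is that when this bookkeeping is on, it gets written back into the volume's metadata files even though the video's own data never moves):

```python
import os

path = "somevideo.avi"              # placeholder: any existing file on the data partition

before = os.stat(path).st_atime     # last-access timestamp before the read

with open(path, "rb") as f:
    f.read(1024 * 1024)             # read a bit of the file; change nothing

after = os.stat(path).st_atime      # ...and after

print("last-access time changed:", after != before)
# If it changed, that change lives in the filesystem's metadata on the drive.
```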

As for read performance being degraded: on any reasonably recent spinning HDD (the last 10 years at least), unless a file is severely fragmented, read performance will be largely unaffected. Drives have RAM caches for a reason... if you're reading or writing data it goes into the cache first, where the drive's microcontroller decides how best to write the data, or it fills it as it reads. (So perhaps you see a small, around 1% slowdown if you were to copy the entire file at SATA speed... but if you're reading the data, playing it back or editing it, etc., the performance bottleneck will be elsewhere.)

Even the cheapest current drives have more than enough cache to smooth out read performance. As far as system performance goes, in terms of reading, fragmentation isn't that big a deal unless it's severe; the main issue with fragmentation is writing. Writing takes more head time, and honestly even that on most newer drives isn't that big an issue, as the drive's microcontroller is supposed to figure out how best to write the data... it will decide to write X, Y and Z on platters 1, 2 and 3 in the same physical sectors... then move to A, B and C on the same platters further in, etc. Point being, it fills the cache with write-ahead data and then writes it to the drive. Of course with Windows (and any OS really), if a drive is close to full you're putting more pressure on the HDD controller and cache.
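Rough numbers on why read fragmentation barely matters for playback (every figure below is an assumption - plug in your own drive's specs):

```python
FILE_MB       = 2000    # a 2 GB video file
SEQ_READ_MBPS = 120     # assumed sustained read of a typical 7200 RPM drive
SEEK_MS       = 12      # assumed average seek + rotational latency per jump
FRAGMENTS     = 15      # the "15 pieces" mentioned earlier in the thread

sequential_s = FILE_MB / SEQ_READ_MBPS           # time to read it contiguously
extra_seek_s = (FRAGMENTS - 1) * SEEK_MS / 1000  # extra head jumps between runs

print(f"contiguous read:  {sequential_s:.1f} s")
print(f"extra from seeks: {extra_seek_s:.2f} s "
      f"({100 * extra_seek_s / sequential_s:.1f}% overhead)")
# ~16.7 s vs ~0.17 s of extra seeking: about 1%, before the drive's cache and
# read-ahead even get involved -- and playback only needs a few MB/s anyway.
```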

PS, not directly related... but speaking of microcontrollers: Western Digital is moving to on-board RISC-V controllers for all their drives. They want more processing power so they can use AI-like software to put the most important data in the best places. The future of seemingly silly ideas like hybrid drives might actually be pretty interesting. Of course I can't imagine "defrag" software will be of any use at that point. The MC will be in charge of where things really go on the disc... that will be the point.
 
Just use the built in defragger that comes with Windows 7 and forget about it. Personally, I think the program you are using is either misreporting or trying to do something it should not.
 
If the OP bought a slow storage type drive to keep the videos, no amount of defragging is going to make them fast. Even worse if he selected an 'ECO' model.
 
Windows does not do it as efficiently, and even then only when it is scheduled to do so; if your PC is not turned on long enough each day it may not happen any time soon, as it only runs when the PC is idle (and that's leaving aside the question of when, or if, it is required to defrag an HDD at all, as for modern fast HDDs the performance increase will be negligible).

As for Piriform Defraggler, aren't there separate "free space defragmentation" and "file defragmentation" measures, so that Piriform doesn't report this file as fragmented but rather reports free-space fragmentation?

In one case I can see good use for a defragmentation app: if you installed the OS on an almost-full data HDD and the system files reside at the end of the drive. You have all seen those HD Tune charts where the read speed slowly decreases from 200 MB/s to 80 MB/s or so at the end of the drive; you don't want that 80 MB/s to be where your system files and apps sit, right?


I assumed you are "answering" me, from the context of your message, but it is hard to say as you give no reference to whom you are actually replying. Please correct me if I am wrong.

1: Do you have any kind of metrics on this efficiency? I don't work with guesswork or gut feelings claimed as facts. Unless you can prove this claim is correct, I will consider it invalid and not worth spending any more time on.
2: If your PC is turned on long enough to run a defrag program, it's turned on long enough to run a defrag program, whether it's software A or B (give or take).
3: Why are you asking me how the Piriform defragger works? I clearly stated to the OP that he will need to ask Piriform about the software's indication; I don't know how that software works.
 
Why do you worry about fragmentation so much? If you need better I/O switch to an SSD. They're not that expensive anymore.
Very simple answer: these are media files, an SSD won't provide any improvement; one reason DVRs use 5400 RPM HDDs, not 7200s.
Then there is the fact that I have 6-8 folders with between 10 GB and 286 GB in each folder. That should answer your question.
 
Very simple answer: these are media files, an SSD won't provide any improvement; one reason DVRs use 5400 RPM HDDs, not 7200s.
Then there is the fact that I have 6-8 folders with between 10 GB and 286 GB in each folder. That should answer your question.

If performance is not your problem then you don't need to defrag with ape force rage either. What exactly was your problem?
 
Anyway, the point is NTFS allows for a bit of a buffer after each file. So again, if you think of it like a cabinet with hanging folders... inside each hanging folder there is now a manila folder with enough room for a few files; it means if you need to add a bit more onto that file later you can take it out, add to it and put it back. As long as it doesn't grow too much you don't have to store data elsewhere and fragment. The issues of course are that some files will grow larger than the space allotted and still fragment anyway... also, because the drive is constantly leaving a bit of free room, as the drive fills up the reported free space isn't contiguous space. This is why Windows on spinning discs with less than 10-20% free space performs like shit. Every new file you write is going to end up being scattered into the free space left after other carefully placed records. So better than FAT, but still pretty shit.

If the drive is more than 75% full there is a good chance you will see fragmentation on even large files as NTFS tries not to run out of space. (Remember, reported free space and free contiguous space are not the same thing.) It's also possible system-level things are adding or changing metadata, index data, etc... all the hidden stuff Windows tends to keep on drives takes up space as well. Anyway, I would assume if you had a large drive with a few large files and lots of empty space, fragmentation shouldn't happen... if the drive is over half full, though, it seems pretty common for Windows and NTFS to start fragmenting larger files at least a little bit.
I understand that editing a specific file, say a document, then saving it will/can cause it to be fragmented, but these video (media) files are only 'watched' (read), not edited. As for spare empty space within the clusters: if that gets occupied by parts of another, unrelated file, why or how would that cause the media file to become fragmented?

If the OP bought a slow storage type drive to keep the videos, no amount of defragging is going to make them fast. Even worse if he selected an 'ECO' model.
My drives are 7200 RPM and not "ECO" models. I just mentioned DVRs as an example.

As I understand it, what the OP is asking is how a file's physical data on a platter can change location over time so that it becomes fragmented, and how what was previously reported as one contiguous space on an HDD now reports as 15 fragmented pieces? Is it because "inside each hanging folder there is now a manila folder with enough room for a few files", so that some data can be written inside this "one contiguous file space, as reported by the defragmenter", thus causing the avi file to get fragmented? But even so, will it cause even the smallest performance degradation if the physical file data on the platter is not moved and the read heads need to move to the same exact locations before and after "fragmentation" to read that avi file?
P.S. So if you use one particular defrag app for measuring fragmentation, you should use the same app later on to get comparable results.
Correct on the understanding. And I only use one defrag program.
 
Also why use defragmentation tools?
Windows 7 does it on its own
Since when does M$ do ANYTHING correctly??? :p
That's why there are so many 3rd party offerings for every so called 'program' that M$ has built in.

If performance is not your problem then you don't need to defrag with ape force rage either. What exactly was your problem?
I don't believe I ever mentioned "performance". But, to respond, I hear the drive 'thrash' around when playing these files that it shouldn't/wouldn't if the file was in one piece.

The "ape force rage" is a bit over the top here. :rolleyes:
 
Since when does M$ do ANYTHING correctly??? :p
That's why there are so many 3rd party offerings for every so called 'program' that M$ has built in.

I don't believe I ever mentioned "performance". But, to respond, I hear the drive 'thrash' around when playing these files that it shouldn't/wouldn't if the file was in one piece.
Oh, I recall "the click of death" and the Deathstar.
 
I assumed you are "answering" me, from the context of your message, but it is hard to say as you give no reference to whom you are actually replying. Please correct me if I am wrong.

1: Do you have any kind of metrics on this efficiency? I don't work with guesswork or gut feelings claimed as facts. Unless you can prove this claim is correct, I will consider it invalid and not worth spending any more time on.
2: If your PC is turned on long enough to run a defrag program, it's turned on long enough to run a defrag program, whether it's software A or B (give or take).
3: Why are you asking me how the Piriform defragger works? I clearly stated to the OP that he will need to ask Piriform about the software's indication; I don't know how that software works.
1. If you actually run it, it does its job. However, it is not advanced enough to move, for example, big non-system files (or any other files for that matter, like your pictures or videos) to the end of the HDD, leaving more room at the beginning of the HDD, which has faster read and write speeds, for more frequently accessed data.
2. Not exactly what I was referring to. Someone said you don't need to run a separate defrag app because Windows does it on its own. I don't think Windows' built-in defragmenter is scheduled to run (and thereby slow down the HDD) while there is user activity at the PC. That's what I meant: in some cases (or exclusions, if you wish) it could take the built-in defragmenter longer to do its job properly if a user constantly interrupts it.
3. Sorry, didn't notice the OP mentioned it in his last sentence.
 
Fragmentation happens, on every file system, and it cannot ever be prevented or stopped 100%. It's just not possible, regardless of what companies and people working on creating more efficient file systems may claim. Yes, it's possible to mitigate a lot of the potential fragmentation before it happens by caching data until you have enough to write out in a space where it'll fit without fragmenting, but as files are created/deleted/replaced/moved/etc. on storage media, fragmentation always happens sooner or later.

Are some file systems better at that mitigation? Absolutely.

Can any file system prevent it from happening? Not on your life.
 
If the noise bothers you, a more effective approach is to buy a sound-insulated drive cage instead of fighting the futile fight of keeping all the files contiguous.
 
I understand that editing a specific file, say a document, then saving it will/can cause it to be fragmented, but these video (media) files are only 'watched' (read), not edited. As for spare empty space within the clusters: if that gets occupied by parts of another, unrelated file, why or how would that cause the media file to become fragmented?

Technically they may not be. The software you're using may well report them as such; when you perform a scan, the quick scan types would be the worst for this, as they would simply see more than one piece and report the file as fragmented.

Or it's possible your drive's microcontroller moved bits slightly when it wrote the metadata, or some other hidden files such as shadow copies or something of that nature. This is highly likely if the drive is more than half full... and almost for sure the case if it's more than 3/4 full.

One thing you may want to check is your disk health. I honestly don't know the best way to do that in Windows... I imagine there is a FOSS Windows SMART monitor, or one from your HDD manufacturer. In Linux I just use the smartctl command to check SMART logs. SMART is just a monitoring system... if the drive says it got too hot a few times in its life, that doesn't mean it's going to melt or anything (if it says it's over temp all the time, find a better placement for it), but if it's reporting failures or anything, it would be good to know. Chances are there is nothing wrong... Windows simply writes a lot of data all the time, and NTFS itself isn't exactly the most modern FS anymore. (I mean, really, it's the only major FS left that needs defragmentation at all.)
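For what it's worth, here's the sort of thing I'd run with smartmontools (a sketch: the device name is an assumption - it's /dev/sda-style on Linux, and the Windows build of smartmontools has its own device naming, so run `smartctl --scan` first to see what's there):

```python
import subprocess

DEVICE = "/dev/sda"     # assumption: replace with the drive holding the videos
                        # (`smartctl --scan` lists the devices on your system)

subprocess.run(["smartctl", "-H", DEVICE])   # overall health: PASSED / FAILED
subprocess.run(["smartctl", "-A", DEVICE])   # attribute table: keep an eye on
# Reallocated_Sector_Ct, Current_Pending_Sector and the temperature attributes
```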
 
I don't believe I ever mentioned "performance". But, to respond, I hear the drive 'thrash' around when playing these files that it shouldn't/wouldn't if the file was in one piece.

The "ape force rage" is a bit over the top here. :rolleyes:

It's possible, depending on the placement of the file, the size of the drive, the number of platters, etc., that the file is simply stored on more than one platter. NTFS is not a very forward-thinking file system. It doesn't say, look, I have a 600 MB file to write and there is only 200 MB left on this platter; it will simply write the 200 MB and then put the 400 MB elsewhere. It won't skip that 200 MB and write the data on a different platter. It's very possible for an HDD to have to make a few jumps even on data that isn't fragmented as we would think of it. Also, it's my understanding that some drive manufacturers' microcontrollers consider head position, meaning if it has, say, 4 platters, the first sector of each platter would be the first 4 sectors... which is the smart way of doing it but requires more CPU power on the drive, so some cheaper drives do not do this, which can mean more seeking.

I guess I'm saying, depending on the drive, it may well "thrash" no matter what you do. If you look at a line of drives from, say, WD... they make blue, at one time green, black, yellow and red drives. The difference doesn't look that obvious to most people... 7200 RPM, check; 64 MB cache, check; X size... 3+ drives could look identical spec-wise. Yet one is super fast and loud as hell, one is just OK... and one is slow as shit. The difference is the microcontroller and how smart it is... more expensive drives have more complicated controllers capable of using more tricks to get you more performance. I am not claiming you have a cheapo drive... the opposite, in fact; the higher-performance drives tend to make more noise as they use better, faster microcontrollers and data-storage algorithms. You could always benchmark your drive and get a better idea how it stacks up vs. other drives, something like the rough check below.
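Something like this gives a very rough single-file read speed (sketch only: the path is a placeholder, and the number is only honest if the file isn't already sitting in the OS cache - test right after a reboot or on a file much bigger than your RAM; HD Tune or similar does this properly):

```python
import time

PATH = "D:/videos/somevideo.avi"    # placeholder: point at one of the big files
CHUNK = 8 * 1024 * 1024             # read in 8 MB chunks

total = 0
start = time.perf_counter()
with open(PATH, "rb", buffering=0) as f:
    while True:
        chunk = f.read(CHUNK)
        if not chunk:
            break
        total += len(chunk)
elapsed = time.perf_counter() - start

print(f"{total / 1e6:.0f} MB in {elapsed:.1f} s -> {total / 1e6 / elapsed:.0f} MB/s")
```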

The ape force rage bit may be over the top... but it is funny. :) What I think Boonie was trying to say, in his way, is that Windows and any drive it uses are always going to be fragmented to some degree. So don't worry about it unless you're talking heavy 30%+ fragmentation. (At that point the drive is either quite old and hasn't been defragged in a long time, or you have way too much data on it... and I would suggest picking up more storage. As I explained earlier, any file system, even the more modern ones used by macOS, Linux or BSD, will fragment files if they are running at max capacity.)
 
Fragmentation happens, on every file system, and it cannot ever be prevented or stopped 100%. It's just not possible, regardless of what companies and people working on creating more efficient file systems may claim. Yes, it's possible to mitigate a lot of the potential fragmentation before it happens by caching data until you have enough to write out in a space where it'll fit without fragmenting, but as files are created/deleted/replaced/moved/etc. on storage media, fragmentation always happens sooner or later.

Are some file systems better at that mitigation? Absolutely.

Can any file system prevent it from happening? Not on your life.
Not quite true... If a drive is never being written to, it should not fragment, no matter what filesystem. This doesn't include the metadata, just the raw data.

So if this large file was defragmented and is now fragmented, then either

1) the original software lied
2) the original software was accessing it while something was writing to it (say ... the Windows defragmenter service)

Think about how fragmentation works and ask yourself: did anything that causes fragmentation occur?
 
"What is the sound of one hand clapping?" is the thought that comes to mind after reading that post above, seriously. :whistle:

If you've got a storage media drive of any kind in a computer system and it's powered on and it's never used you're doing something wrong. :D
 
"What is the sound of one hand clapping?" is the thought that comes to mind after reading that post above, seriously. :whistle:

If you've got a storage media drive of any kind in a computer system and it's powered on and it's never used you're doing something wrong. :D

Once you fill the drive with porn it's only read not written. Simple. What is the sound of one hand fapping?
 
"What is the sound of one hand clapping?" is the thought that comes to mind after reading that post above, seriously. :whistle:

If you've got a storage media drive of any kind in a computer system and it's powered on and it's never used you're doing something wrong. :D
and the point of this reply is purely to stroke your ego...
re-read what the OP wrote....
re-read what I wrote
go read what causes fragmentation...

A defragmented drive with no moving/deleting will not spontaneously fragment. So now take my post along with what fragmentation is and what the OP is querying, and then ask yourself WHAT exact value your reply is adding...
 
My guess is that Piriform went and defragged, then Windows did its automated defrag process and "fixed" it back up.

Could also be something to do with indexing.
 
As I read it, if the only problem is that the OP hears the head moving during file load, then get a damn insulated drive cage.
 