Can I fill a storage drive to capacity?

jordan12

So for drives that are used for storage only, can I fill them up, say leaving only 1 GB free? Does it hurt them?
 
I have a separate SSD for my boot drive and I have over 100 GB free on it. Just wasn't sure in regards to secondary storage drives.
 
If you fill a disk more than 90% it will be very, very slow (less than 10 MB/sec or so) - this is because all filesystems have great problems once they are more than 90% full. The slowness disappears as soon as you delete some files. So you can fill the disk until everything gets really, really slow, delete some files, and it will be quick again. But the disk will not be damaged, no. Everything will be very slow, that is the only problem.
 
He is asking about a storage drive, not an OS drive. I will presume you are using Win7 or 8 and there is no defragmenting that needs to be done. You will have no problems with filling the drive, and there will be no speed reduction or any other problems.
 
He is asking about a storage drive, not an OS drive. I will presume you are using Win7 or 8 and there is no defragmenting that needs to be done. You will have no problems with filling the drive, and there will be no speed reduction or any other problems.

That is what I was hoping for. Thank you.

And yes, it is media files only, so they are very large. No defragging that I need to do.
 
Just one note, and only because nobody has mentioned this.

If your drive is an SSD and you need performance, you want to keep some free space on the drive, especially if it is a TLC drive.

If it is a platter drive with no SSD-associated speed increase happening (the X drives from Maxtor, or an SSD caching drive in use for it), then you are fine filling it to max capacity.

Though if you are hitting capacity issues.. storage is cheap. Shuffle your pron to a different drive. ;) Media files indeed.
 
If you fill a disk more than 90% it will be very, very slow (less than 10 MB/sec or so) - this is because all filesystems have great problems once they are more than 90% full. The slowness disappears as soon as you delete some files. So you can fill the disk until everything gets really, really slow, delete some files, and it will be quick again. But the disk will not be damaged, no. Everything will be very slow, that is the only problem.

I have not found this to be true at all.

I regularly rsync data from my large ZFS array onto 2TB, 3TB, and 4TB NTFS backup disks, and they write at around 100MB/s even during the last 1GB of the drive's free space, right up until rsync dies because the disk literally fills up with maybe a couple KB of free space left. Then I just delete the last file rsync tried to copy (since it's a partial file) and move on to the next disk.

I have never encountered any issue filling drives completely full, at least not with NTFS.

It is my understanding that different file-systems handle being filled up differently.

But I think the problem would normally come from free-space fragmentation as you delete files and add new files. If you have 10GB free but it's in 10,000 1MB chunks, then yes writing a single 10GB file will probably be slower.

However, I am sequentially filling my drives from start to end in one copy process and I never delete a file from the disk so it has no problem writing until the very end of the disk at full speed the whole time.

I haven't seen any problems reading at full speed from a completely full drive either, which is all my backup disks need to do when I run a checksum on all its contents every so often.
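
In case anyone wants to reproduce that verification pass, here's a rough Python sketch of the idea - the mount point is just a placeholder and the hash algorithm is arbitrary; it simply walks the full disk and hashes every file:

Code:
import hashlib
import os

MOUNT_POINT = "/mnt/backup1"  # placeholder path for the full backup disk

def sha256_of(path, chunk_size=1024 * 1024):
    # Stream the file through SHA-256 in 1 MiB chunks so large media files
    # never need to fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

for root, _dirs, files in os.walk(MOUNT_POINT):
    for name in files:
        full = os.path.join(root, name)
        print(f"{sha256_of(full)}  {full}")

Reading every file back like this is also a decent way to confirm that a packed disk still reads at full sequential speed.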
 
No, there is nothing stopping you from filling it up as much as possible, although it's not recommended. The general rule is once you fill the drive to 50-60% of its max capacity, you're going to start to see performance degradation. Doesn't matter if it's an SSD or HDD. They'll both be affected. Once you get 75% full it gets ridiculous. However, so long as you're maintaining a constant defrag up to 75% you shouldn't be affected too much. It'll be noticeable, but manageable. I believe you need between 10-15% free space to run defragmentation software effectively. It's pointless after that point. You need free space to move files around and organize them.
 
I will repeat, there is no speed reduction at all (none) if you are using Win 7 or 8 on a storage only drive or even a partitioned storage area. Many years ago it was recommended for the OS portion of a drive to have 50% free space. Why? Mainly because Windows kept all of the updates added to the OS as time went by and you needed the added space for that.
 
I have not found this to be true at all.

I regularly rsync data from my large ZFS array onto 2TB, 3TB, and 4TB NTFS backup disks, and they write at around 100MB/s even during the last 1GB of the drive's free space, right up until rsync dies because the disk literally fills up with maybe a couple KB of free space left. Then I just delete the last file rsync tried to copy (since it's a partial file) and move on to the next disk.

I have never encountered any issue filling drives completely full, at least not with NTFS.

It is my understanding that different file-systems handle being filled up differently.

But I think the problem would normally come from free-space fragmentation as you delete files and add new files. If you have 10GB free but it's in 10,000 1MB chunks, then yes writing a single 10GB file will probably be slower.

However, I am sequentially filling my drives from start to end in one copy process and I never delete a file from the disk so it has no problem writing until the very end of the disk at full speed the whole time.

I haven't seen any problems reading at full speed from a completely full drive either, which is all my backup disks need to do when I run a checksum on all its contents every so often.

Actually ZFS is known to have serious performance issues when the zpool starts getting full, and it's recommended to only ever be 80% (I think it was) full.

Also, to the guy commenting about SSDs... that only matters if the SSD and your drivers/OS support TRIM.
 
AFAIK the only reason ZFS has issues when it starts filling up is fragmented metaslabs, which is basically the same reason NTFS slows down when it's filled.

BUT. As people above mentioned, this only affects fragmented filesystems. If you never delete files, or only have very large files, or make sure your drives are defragmented (impossible if you're using ZFS) then you will have no issues.
 
I will repeat, there is no speed reduction at all (none) if you are using Win 7 or 8 on a storage only drive or even a partitioned storage area. Many years ago it was recommended for the OS portion of a drive to have 50% free space. Why? Mainly because Windows kept all of the updates added to the OS as time went by and you needed the added space for that.


Yeah? Because I've got quite a few Windows 7 storage drives that say otherwise. Both SSD and mechanical. HDDs are worse, since writes move toward the slower inner tracks as the drive fills up.
 
Actually ZFS is known to have serious performance issues when the zpool starts getting full, and it's recommended to only ever be 80% (I think it was) full.

Also, to the guy commenting about SSDs... that only matters if the SSD and your drivers/OS support TRIM.

I never mentioned anything about filling up ZFS, just NTFS.

Though now that you mention it, I did fill up a ZFS pool a while back to 99% full and saw no performance impact.

Again, I believe it to be related to free-space fragmentation, which is never a problem if you are just using the disk or array to store large files that you don't delete.

The average file size in my 16TB of data is like 1GB because it's media files. So even if I were to delete files, they would leave 1GB holes, which are fine for performance.
 
I will repeat, there is no speed reduction at all (none) if you are using Win 7 or 8 on a storage only drive or even a partitioned storage area.
This is very strange. I have dabbled with computers for 25 years and have never seen a full disk without a speed penalty. There is ALWAYS a speed penalty when disks are full. This applies to ALL filesystems, not just ZFS. My friend had this speed penalty problem on NTFS; I told him to delete some files and the storage disk was fast again. I myself had the speed penalty on ZFS on a storage disk, until I deleted some files.

What you are saying is against common wisdom. You are wrong.


I have not found this to be true at all.
...
I have never encountered any issue filling drives completely full, at least not with NTFS.
...
It is my understanding that different file-systems handle being filled up differently.
Again, ALL filesystems get a speed penalty when they are full. If you never had a speed penalty on full disks, that is strange. It is like saying "yes, my PC started paging because I loaded a 10GB workload onto my 4GB RAM PC - and it was not slower". If your PC starts to swap, performance WILL decrease drastically. That is a fact. Implying otherwise is strange and against common wisdom. Same with full disks - they WILL be slow.


Actually ZFS is known to have serious performance issues when the zpool starts getting full, and it's recommended to only ever be 80% (I think it was) full.
There are several threads where people complain of bad ZFS performance, and after some discussion it turned out their raidz was >95% full. After deleting some files, performance was good again. And this applies to ALL filesystems, including ZFS, NTFS, ext4, etc. If your PC starts to swap RAM out to disk it WILL be slow - that applies to all OSes, including Windows, Solaris, Linux, etc. Same with full disks.
 
The only time you have issues is if parts of large files are written on both the slow and fast portions of the drive.
My 2TB storage drives are not 100% full but have 40-60GB free on them and run just fine.

This is a shot from when I had smaller drives in the machine:
[Screenshot: drives.jpg]
 
My media server is either Windows 7 or XP.

The 4TB drives have 600+ 4GB files and nothing else. Sorting the files in Windows Explorer takes forever (10-20 seconds or more).

I don't think it has anything to do with the hard drives being full - 60% is not full. So perhaps the configuration of the computer has something to do with it.
 
Any new drive I buy, I completely fill with random data 3 or 4 times before I actually use it for my data. Never seen this crazy slowdown that you all are talking about.
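
For what it's worth, here's a rough Python sketch of that kind of fill pass (the target path is a placeholder, and tools like dd or badblocks do the same job) - it just writes random blocks until the filesystem reports it's out of space:

Code:
import errno
import os

TARGET = "/mnt/newdrive/burnin.bin"  # placeholder path on the drive being tested
BLOCK = 8 * 1024 * 1024              # write 8 MiB of random data at a time

written = 0
try:
    with open(TARGET, "wb", buffering=0) as f:
        while True:
            f.write(os.urandom(BLOCK))
            written += BLOCK
except OSError as e:
    if e.errno != errno.ENOSPC:  # anything other than "disk full" is a real error
        raise
print(f"wrote about {written / 1024**3:.1f} GiB before the filesystem filled up")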
 
Again, I believe it to be related to free-space fragmentation, which is never a problem if you are just using the disk or array to store large files that you don't delete.

As far as full filesystems slowing down, you are, of course, correct. There is minimal slowdown due to a filesystem being full if fragmentation is minimal (except, perhaps, for certain write or delete operations in COW filesystems, where there could be some issues on a full filesystem).

But all HDDs have lower throughput on the inner cylinders. Most 2+ TB HDDs will have sequential throughput of about 50% at the innermost cylinders (as compared to the outermost cylinders).

So, you can expect a slowdown of as much as 50% if you fill up a typical HDD and then read from the files on the innermost cylinders. Of course, if you read from the files on the outermost cylinders (even on a full HDD) then you will see full speed (unless there is significant fragmentation).

Since the post you were replying to was talking about massive slowdowns, on the order of 10-20% of full-speed, you were correct in pointing out that such slowdowns do not actually occur unless the drives are significantly fragmented (and even then it would have to be a specific type of fragmentation to cause such a severe slowdown).
 
This is very strange. I have dabbled with computers for 25 years and have never seen a full disk without a speed penalty. There is ALWAYS a speed penalty when disks are full. This applies to ALL filesystems, not just ZFS. My friend had this speed penalty problem on NTFS; I told him to delete some files and the storage disk was fast again. I myself had the speed penalty on ZFS on a storage disk, until I deleted some files.

What you are saying is against common wisdom. You are wrong.

You can talk to me all day about theoreticals or how something "should be". But I have factual evidence and first-hand experience that when writing to my NTFS drives until they are completely full, I do not suffer any performance impact. Meaning, the copy process remains at the same speed as when the drive is empty.

I guarantee you that rsync writes at 80-120MB/s write speed (according to rsync) until the drive is completely full; even during the last 1GB of free space it's still writing at 80-120MB/s throughput.

Then when I read all the data from the completely full disk, I get 80-120MB/s read speeds across the entire disk when calculating the checksums I keep.

I do this process every couple weeks and this is always how it behaves.

You need to provide evidence contrary to mine before you call my experience blatantly wrong.

Please try it and see for yourself.

To be clear about what I am doing, I am taking an 8TB directory (filled with ~10GB files) on my ZFS array and rsyncing it to a 4TB NTFS HDD until the rsync pipe dies because the disk reaches 100% capacity.

I can provide a video of the file transfer throughput while the disk approaches and reaches 100% capacity if you want evidence of this behavior beyond my word.
 
As far as full filesystems slowing down, you are, of course, correct. There is minimal slowdown due to a filesystem being full if fragmentation is minimal (except, perhaps, for certain write or delete operations in COW filesystems, where there could be some issues on a full filesystem).

But all HDDs have lower throughput on the inner cylinders. Most 2+ TB HDDs will have sequential throughput of about 50% at the innermost cylinders (as compared to the outermost cylinders).

So, you can expect a slowdown of as much as 50% if you fill up a typical HDD and then read from the files on the innermost cylinders. Of course, if you read from the files on the outermost cylinders (even on a full HDD) then you will see full speed (unless there is significant fragmentation).

Since the post you were replying to was talking about massive slowdowns, on the order of 10-20% of full-speed, you were correct in pointing out that such slowdowns do not actually occur unless the drives are significantly fragmented (and even then it would have to be a specific type of fragmentation to cause such a severe slowdown).

Yeah, I meant there was no slowdown other than the inherent cylinder performance. You can easily see what speeds to expect by running HD Tune, which will show you speeds across the whole HDD.

I was saying too that I see between 80-120MB/s when filling my drives completely full and the fluctuation is certainly down to the cylinder. I'm still seeing at least 80MB/s even for that last 1GB of the transfer.
 
It all has to do with access patterns, ignoring speed variations due to head position on the platter for the moment.

If you start with an empty drive, and write it until it is full, you will have good write speeds all the way down to the last sector.

The performance penalty for having a nearly full drive that most people are familiar with comes from fragmentation of the filesystem and has nothing to do with the hardware itself. Even then, general comments are difficult at best, because how the performance deteriorates, if at all, is completely dependent on how files were written and deleted in the past.
 
I guarantee you that rsync writes at 80-100MB/s write speed (according to rsync) until the drive is completely full; even during the last 1GB of free space it's still writing at 80-100MB/s throughput.

Now that is bizarre. Either something is throttling your speed on the outermost HDD cylinders (but not the innermost cylinders), or somehow the writes are being randomly distributed among the cylinders rather than filling the outermost cylinders first and the innermost cylinders last.

There is a plethora of empirical evidence (as well as a solid conceptual explanation) that the throughput of HDDs decreases from the outermost to the innermost cylinders. A typical 5K rpm 2TB HDD would have sequential throughput of say 120 MB/sec (100-150 depending on platter density) on the outermost cylinders, but decreases to around 60 MB/sec (50-75, depending) on the innermost cylinders.

A higher density platter (such as in a 4TB HDD), or a 7K rpm HDD, could be around 160 MB/sec on the outermost cylinders and around 80 MB/sec on the innermost cylinders.
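
Just to put rough numbers on that: assuming (purely for illustration) that throughput falls off linearly from the outer to the inner cylinders, the average speed of a complete start-to-finish fill works out to the logarithmic mean of the two endpoint speeds, which lands much closer to the whole-drive averages people report above than the worst-case inner speed does:

Code:
import math

# Illustrative endpoint figures for a 7K rpm, high-density drive (see above),
# not measurements.
v_outer = 160.0  # MB/s at the outermost cylinders
v_inner = 80.0   # MB/s at the innermost cylinders

# With a linear speed falloff over the LBA range, the average throughput of a
# full sequential fill is the logarithmic mean:
#   avg = (v_outer - v_inner) / ln(v_outer / v_inner)
avg = (v_outer - v_inner) / math.log(v_outer / v_inner)
print(f"average fill speed ~ {avg:.0f} MB/s")  # roughly 115 MB/s

Real drives use stepped recording zones rather than a smooth falloff, so this is only a ballpark figure.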
 
It all has to do with access patterns.

If you start with an empty drive, and write it until it is full, you will have good write speeds all the way down to the last sector.

The performance penalty for having a nearly full drive that most people are familiar with comes from fragmentation of the filesystem and has nothing to do with the hardware itself. Even then, general comments are difficult at best, because how the performance deteriorates, if at all, is completely dependent on how files were written and deleted in the past.

Yep, and to be exact it's free-space fragmentation in particular that causes slow write speeds when filesystems fill up.

I was listening to a ZFS lecture the other day from one of the founders of ZFS (George Wilson), and he was talking about ZFS performance and fragmentation. He said that as long as the free metaslab fragments were at least about 12MB each, you would not see a performance slowdown even at 99% capacity.

It's also true that there is no magic % where performance will degrade, even with some free-space fragmentation. On a really large filesystem, for example, 1% could still mean there is 1TB free, and as long as there are enough contiguous free spaces of a dozen or so MB each, there should be no meaningful impact on write performance.
 
Now that is bizarre. Either something is throttling your speed on the outermost HDD cylinders (but not the innermost cylinders), or somehow the writes are being randomly distributed among the cylinders rather than filling the outermost cylinders first and the innermost cylinders last.

There is a plethora of empirical evidence (as well as a solid conceptual explanation) that the throughput of HDDs decreases from the outermost to the innermost cylinders. A typical 5K rpm 2TB HDD would have sequential throughput of say 120 MB/sec (100-150 depending on platter density) on the outermost cylinders, but decreases to around 60 MB/sec (50-75, depending) on the innermost cylinders.

A higher density platter (such as in a 4TB HDD), or a 7K rpm HDD, could be around 160 MB/sec on the outermost cylinders and around 80 MB/sec on the innermost cylinders.

I'm pretty sure the maximum speed is getting throttled. The NTFS-3G driver is not known for its amazing performance.

I'm using 1TB platter disks and I rarely see it drop under 80MB/s when filling them.

I'm not sure if rsync writes it completely sequentially or not, but that's what I'm using. I just filled 6 disks in the last few days as I am redoing my backup completely. So maybe the rsync throughput number is wrong? But I can also divide the size of each file by the time between file timestamps and see that the speed it reported was accurate.

And the total completion times have been leading to an average of around 100MB/s to fill my disks.

I can at least say with certainty that I am not seeing any "significant" impact on read or write speeds to a completely full NTFS disk other than cylinder speed or standard deviation fluctuations (due to system resources) under my use case.
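
For anyone who wants to do that same size-over-timestamp cross-check, here's a rough Python sketch. It assumes the destination timestamps are copy completion times (i.e. rsync is not preserving source mtimes with -a or -t), and the destination path is just a placeholder:

Code:
import os

DEST = "/mnt/backup1"  # placeholder: the rsync destination disk

# Collect (mtime, size, path) for everything on the backup disk.
entries = []
for root, _dirs, files in os.walk(DEST):
    for name in files:
        full = os.path.join(root, name)
        st = os.stat(full)
        entries.append((st.st_mtime, st.st_size, full))

# For a single sequential copy, the gap between consecutive mtimes approximates
# each file's copy time, so size / gap approximates the write throughput.
entries.sort()
for (prev_time, _, _), (mtime, size, path) in zip(entries, entries[1:]):
    elapsed = mtime - prev_time
    if elapsed > 0:
        print(f"{size / elapsed / 1e6:7.1f} MB/s  {path}")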
 
Some "rules" need to be taken as warnings of possible problems that apply in certain situations, not just accepted as truths that never should be questioned. If I stopped filling my zfs pools once they are 80% full I would have 10-15 TB lost space in addition to parity for each pool. Some rules really need to be tested for your particular use and setup, a 120 GB SSD OS mirror and a 80 TB storage pool don't necessarily behave the same.
 
It is really strange that you do not experience slowdowns with a full disk. I have experienced it several times: as soon as I delete files, performance goes up again. But if you say that you don't experience slowdowns - you don't. I don't believe you are lying.

This means there is something we don't understand here. Sometimes performance will go down, other times not.
http://tuxera.com/forum/viewtopic.php?f=2&t=7780

Maybe we should start an investigation: when will full disks slow down, and in what circumstances will they not slow down? Is it dependent on fragmentation?

http://forums.theregister.co.uk/forum/1/2012/01/17/windows_8_server_refs/#c_1286646
"... We have an HP D2D (StorOnce) VTL which runs (effectively) a rebadged RedHat with ext3 - we get access to 80 or 90% of the filesystem, when I was talking to the designers about it they specifically said that if you fill ext3 up to over 90% it is totally crippled in terms of performance, due to fragmentation and not defragmentable if there wasn't enough free space. NTFS isn't defragmentable, either if you don't have enough free space..."
 
Fragmentation can be a huge problem or irrelevant depending on how you use the drives. If all you do is fill a drive from scratch to 99.99% without deleting anything and without running more than one transfer at a time there is no good reason for files to become fragmented enough for it to affect speeds significantly.

Transferring to/from the inner cylinders compared to the outer will lead to a slower STR when you reach the end of the disk, but before I switched to LTO tape I used HDDs for backup and filled them as close to 100% as possible without adverse effects. Coincidentally I used ext3; I set the reserved space to 0 and filled them until I (or rather my script) couldn't find a file small enough to fit in the remaining space. Speeds were good both while writing and reading the data, and I did this on hundreds of HDDs.
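
The fill logic in that kind of script is simple enough that a rough Python sketch of it fits in a few lines (paths are placeholders, and a real version would preserve directory structure and track what has already been backed up): copy the largest remaining file that still fits in the free space, skip anything that doesn't, and stop when nothing fits.

Code:
import os
import shutil

SRC = "/pool/media"       # placeholder: directory being backed up
DEST = "/mnt/backup_hdd"  # placeholder: mount point of the backup disk

# List every candidate file, largest first.
pending = []
for root, _dirs, files in os.walk(SRC):
    for name in files:
        full = os.path.join(root, name)
        pending.append((os.path.getsize(full), full))
pending.sort(reverse=True)

copied = 0
for size, path in pending:
    if size > shutil.disk_usage(DEST).free:
        continue  # too big for what's left; a smaller file later in the list may still fit
    shutil.copy2(path, DEST)  # flat copy into the destination directory
    copied += 1

print(f"copied {copied} files; {shutil.disk_usage(DEST).free} bytes of free space that nothing fits into")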
 
It is really strange that you do not experience slowdowns with a full disk. I have experienced it several times: as soon as I delete files, performance goes up again. But if you say that you don't experience slowdowns - you don't. I don't believe you are lying.

This means there is something we don't understand here. Sometimes performance will go down, other times not.
http://tuxera.com/forum/viewtopic.php?f=2&t=7780

Maybe we should start an investigation: when will full disks slow down, and in what circumstances will they not slow down? Is it dependent on fragmentation?

http://forums.theregister.co.uk/forum/1/2012/01/17/windows_8_server_refs/#c_1286646
"... We have an HP D2D (StorOnce) VTL which runs (effectively) a rebadged RedHat with ext3 - we get access to 80 or 90% of the filesystem, when I was talking to the designers about it they specifically said that if you fill ext3 up to over 90% it is totally crippled in terms of performance, due to fragmentation and not defragmentable if there wasn't enough free space. NTFS isn't defragmentable, either if you don't have enough free space..."

To be clear, the performance penalty is related to fragmentation not used capacity. Filesystems that are nearly full are more prone to fragmentation, but one does not necessarily follow the other.

If a nearly full filesystem has a duty cycle that involves deleting, writing, and modifying files often, then read/write performance will suffer as fragmentation occurs. If you are simply writing a drive until it is full, such as when doing a backup, performance will be fine both during the initial write and when reading those files back.
 
Here is a screenshot from the tail end of my rsync copy.

[Screenshot: cnJY9zh.jpg]


I was copying from sdp1 (99% full) to sdm1, and it looks like it went about 76MB/s (10MB/s slower) when it was writing to the last 707MB of the disk, and now the disk has 0 bytes available (it couldn't even write 4 bytes). But even the last 7GB went at 85MB/s, which is full speed given the cylinder speed at that point in the disk.
 
To be clear, the performance penalty is related to fragmentation not used capacity.
I have shown links that when a disk is full, performance will suffer. And as soon as you delete files, performance will be normal again. This is also my experience.

If this slowdown were related to fragmentation, the disk should be slow even after deleting files. Because: disk gets full -> slows down -> is fragmented (hypothesis) -> delete files (disk is still fragmented), so performance should not go back up. Right?

Ergo, something is strange here. Clearly we don't really know what causes the slowdown. It should not be fragmentation, because deleting files does not defragment a disk. It would be interesting to understand this.
 
I have shown links that when a disk is full, performance will suffer. And as soon as you delete files, performance will be normal again. This is also my experience.

If this slowdown were related to fragmentation, the disk should be slow even after deleting files. Because: disk gets full -> slows down -> is fragmented (hypothesis) -> delete files (disk is still fragmented), so performance should not go back up. Right?

Ergo, something is strange here. Clearly we don't really know what causes the slowdown. It should not be fragmentation, because deleting files does not defragment a disk. It would be interesting to understand this.

"Deleting files" is not enough detail to draw a conclusion from. Deleting large files or contiguous small files on a nearly full, fragmented disk, will improve future write performance as you now have non-fragmented free space. Read performance of existing files cannot be affected by deleting other files alone, though read performance would be improved on future files due to contiguous writes. Some filesystems may use newly free space to defrag, which would improve read performance on existing files.

I assure you, the disk doesn't care that it is full; it simply reads and writes the sectors it is instructed to. While the filesystem does track used/empty extents, efficiently allocating them is what becomes more difficult when it is full, and this often, but not always, leads to fragmented allocations.
 
We need some credible research papers, or articles by experienced filesystem developers. It is obvious our experiences differ.
 
We need some credible research papers, or articles by experienced filesystem developers. It is obvious our experiences differ.

You haven't outlined your experiences in enough detail to offer conclusions about them. I posit that if you have experienced poor read or write performance on a nearly full filesystem in the past, it was either fragmentation of files (read/modify (HDD only) performance degradation) or fragmentation of free space (write performance degradation).

One of the quotes you made above concerning EXT3 put forth exactly that. EXT is claimed to be fragmentation-resistant due to the way it spreads out allocated extents for files across the filesystem, among other things. However, when it gets near full, even it will begin to fragment writes, which causes performance issues for the affected files.
 
My experience, and my friends' experience, is that we fill a storage disk (not the system disk) and performance slows to a crawl, and then we delete files and performance goes back to normal again. This was on ZFS and NTFS disks. If fragmentation were the cause, as you suspect, then does deleting files count as defragmentation? Or what? Maybe only the last files got fragmented, and when we deleted them, the fragmentation was gone again? But that is unlikely, as our storage disks are very likely fragmented from copying lots of files back and forth.

And if you read around the net, there are lots of testimonies and recommendations not to fill up a disk, because otherwise it will be slow.
 
My experience, and my friends' experience, is that we fill a storage disk (not the system disk) and performance slows to a crawl, and then we delete files and performance goes back to normal again. This was on ZFS and NTFS disks. If fragmentation were the cause, as you suspect, then does deleting files count as defragmentation? Or what? Maybe only the last files got fragmented, and when we deleted them, the fragmentation was gone again? But that is unlikely, as our storage disks are very likely fragmented from copying lots of files back and forth.

And if you read around the net, there are lots of testimonies and recommendations not to fill up a disk, because otherwise it will be slow.

Deleting files *can* result in stretches of contiguous free space, which can improve future write speeds and, by extension, reads of those future writes. It would not improve read performance on existing fragmented files. It depends on what the overall fragmentation of the filesystem is at the time and how the files that were deleted were originally written (are they contiguous, are they fragmented, etc.).
 
ZFS is one of the file systems that suffers the most from performance degradation when the free space approaches zero. There are a couple of settings that can be changed to reduce the problem in certain usage scenarios but these come with their own drawbacks.
 