Does the ~50% free space rule apply to SSDs?

thecrafter

I don't know if this is fact or myth, but in the past it was said that it's best to keep your hard drive below a 50% limit (I think it was 50) or else it will slow down significantly. Is this true for SSDs as well?

I.e., will there be a performance impact if an SSD is, say, 90% full, or will it operate the same as if it were pretty much empty?
 
As far as I know, the speed of an HDD depends on which tracks you're writing to (outer tracks are faster than inner ones), so throughput drops as you get toward the middle of the platter, but this is not the case with an SSD.
 
I always heard 80%, but that had to do with rotational latency and leaving enough space for defragmentation, two things we supposedly don't have to worry about with an SSD. That said, I saw a post from someone whose SSD was 60% fragmented; he ran PerfectDisk and his benchmark scores improved significantly.
 
Most defrag software, such as Diskeeper, used to tell you that you needed 20% free space to defragment properly and maintain system performance.
 
I believe it depends on the filesystem. Most Linux filesystems write new files into the middle of the free space on the drive, so when you hit about 50% capacity you're left with a bunch of small free blocks between files and the OS has to start fragmenting new files. NTFS, on the other hand, writes files one right after another, which gives you better performance while things stay unfragmented (compared to having free-space gaps between files, as on Linux) but a higher chance of fragmentation in the long run, since moving and deleting files leaves assorted smaller gaps near the beginning of the drive (hence, more defragmentation software for Windows than for Linux).

But fragmentation doesn't really matter when it comes to SSDs, since they can access multiple blocks simultaneously. (I think?)
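To make the allocator difference concrete, here's a quick Python toy (my own simplification, not how ext4 or NTFS actually implement allocation) comparing "first gap from the start" with "middle of the largest gap":

```python
# Toy model of the two placement strategies described above. Real ext4 and
# NTFS allocators use extents, block groups, delayed allocation, the MFT
# zone, etc.; this only shows the shape of the free-space patterns each
# approach leaves behind.

def first_fit(gaps, size):
    """NTFS-like: place the file at the start of the first gap that fits."""
    for i, (start, length) in enumerate(gaps):
        if length >= size:
            if length == size:
                gaps.pop(i)
            else:
                gaps[i] = (start + size, length - size)
            return start
    return None  # no single gap is big enough, so the file gets fragmented


def middle_of_largest(gaps, size):
    """Roughly ext-like: place the file in the middle of the largest gap,
    leaving room on either side for files to grow."""
    i, (start, length) = max(enumerate(gaps), key=lambda g: g[1][1])
    if length < size:
        return None
    pos = start + (length - size) // 2
    remainder = []
    if pos > start:
        remainder.append((start, pos - start))
    if pos + size < start + length:
        remainder.append((pos + size, start + length - pos - size))
    gaps[i:i + 1] = remainder
    return pos


for name, place in (("first fit", first_fit),
                    ("middle of largest gap", middle_of_largest)):
    gaps = [(0, 100)]            # one empty 100-block "disk"
    for file_size in (10, 10, 10, 10):
        place(gaps, file_size)
    print(f"{name:>22}: free gaps left = {gaps}")
```

Running it, first fit leaves one big contiguous free region (until deletes start punching holes in it), while placing files in the middle of the largest gap fragments the free space much sooner but leaves each file room to grow, which is the trade-off described above.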
 
Am I misunderstanding this part? According to Wikipedia: http://en.wikipedia.org/wiki/Solid-state_drive#Comparison_of_SSD_with_hard_disk_drives

SSD write performance is significantly impacted by the availability of free, programmable blocks. Previously written data blocks that are no longer in use can be reclaimed by TRIM; however, even with TRIM, fewer free, programmable blocks translates into reduced performance.[25][61][62]

Sounds like performance IS affected by how much free space there is.
 
All TRIM does is let the drive wipe the space when you delete a file, rather than having to clear it when you later write to that previously used space. If the "availability of free, programmable blocks" amounts to "a few here, a few there, and the rest over there", the OS is going to fragment the file. The I/O cost of jumping from fragment to fragment on an SSD is vastly lower than on a mechanical drive, but I suppose, technically, it still has to do it.
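To put a rough shape on the Wikipedia claim, here's a toy Python calculation (an assumed, very simplified garbage-collection model, not any specific controller's algorithm) of how write amplification grows as reclaimed blocks hold more live data:

```python
# Assume the controller garbage-collects an erase block in which a fraction
# `valid` of the pages still hold live data: those pages must be copied
# elsewhere before the block can be erased, so the flash performs extra
# writes for every write the host asked for. Real controllers are far more
# sophisticated; this only shows the shape of the effect.

def write_amplification(valid_fraction):
    """Flash writes per host write when reclaimed blocks are this full."""
    return 1.0 / (1.0 - valid_fraction)

for valid in (0.10, 0.50, 0.80, 0.90):
    print(f"reclaimed blocks {valid:.0%} valid -> "
          f"~{write_amplification(valid):.1f} flash writes per host write")
```

The fuller the drive is, and the less the controller knows about deleted data (i.e. without TRIM), the more valid pages each reclaimed block tends to contain, so the more copying it does per host write and the slower sustained writes get.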
 
It's always been 15% (meaning you can go as high as 85% capacity), and with the changes to NTFS in Vista/Windows 7 it's now around 5% just to be safe, meaning you can throw whatever you want on the partition until it's about 95% full. That's about as far as I'd ever dare go, and I wouldn't even go that far myself, but YMMV.
 
Sounds like performance IS affected by how much free space there is.

It is, but unfortunately an SSD can only be filled up "once". An SSD keeps filling as you work with it, until it has written 60GB (on a 60GB SSD) and then it's full, even if all you've actually done is rewrite a single 10MB file 6,000 times (a hard drive, after rewriting a single 10MB file 6,000 times, will still show 99% free space). After that, the SSD is considered full for performance purposes, and that's "OK".

What the wiki is referring to is that yes, you will see performance degradation, but the only way to fully reset an SSD once it's full is to send its underlying electronics a "secure erase" command, which drops the flash mapping table, i.e. lets the SSD treat all blocks as empty whether they hold data or not.

But if you keep 80% of the filesystem full on an SSD, you will be putting a heavier strain on the remaining 20% or so of the flash blocks, and they will wear faster than the parts of the disk you aren't writing to daily, creating an uneven-wear scenario that can kill the SSD sooner than its maximum lifetime.
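Here's that uneven-wear arithmetic spelled out as a small Python sketch. The 80% figure and the "no static wear leveling" assumption are mine; real controllers generally do relocate cold data to spread wear, so treat this as a worst case:

```python
# Rough arithmetic for the uneven-wear point above, assuming a worst case
# where the controller never relocates cold data (no static wear leveling).

static_fraction = 0.80           # 80% of blocks hold data that never changes
writable_fraction = 1 - static_fraction

# Every host write now lands on the remaining blocks, so each of them sees
# this many times the program/erase cycles it would see if wear were spread
# evenly across the whole drive:
wear_multiplier = 1 / writable_fraction
print(f"Hot blocks wear ~{wear_multiplier:.0f}x faster than with even wear")
```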
 
Just as an FYI, the two reasons you aim for 50% are:
1. The platter surface moves much faster under the head at the outer edge than at the inner edge. Think about how much more distance passes under the head in one revolution at the outer edge versus the inner (there's some rough math on this below).
2. Minimizing head travel from track to track. The more distance a drive head travels between reads and writes, the more time is lost not reading or writing. Also, more data is stored toward the outer edge, so filling only the first 50% of the disk reduces this travel by more than 50% (hope that makes sense).

It's very common in traditional enterprise storage to add spindles to arrays that will only be partially used, just to gain IOPS.
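For anyone who wants rough numbers on point 1, here's a quick Python back-of-the-envelope. The radii and sectors-per-track values are assumptions picked to be plausible for a 7200 RPM 3.5" drive, not specs for any real model:

```python
# Back-of-the-envelope numbers for the "outer edge is faster" point above.

rpm = 7200
revs_per_sec = rpm / 60

outer_radius_mm = 46.0   # assumed usable outer radius
inner_radius_mm = 20.0   # assumed usable inner radius

# At a fixed RPM, the surface moves under the head proportionally faster
# at a larger radius.
print(f"outer edge moves {outer_radius_mm / inner_radius_mm:.1f}x "
      f"faster under the head")

# With zoned bit recording the linear bit density is roughly constant, so
# outer tracks hold more sectors and deliver more data per revolution.
sector_bytes = 512
zones = {"outer zone": 2300, "inner zone": 1000}   # assumed sectors per track

for zone, sectors_per_track in zones.items():
    mb_per_s = sectors_per_track * sector_bytes * revs_per_sec / 1e6
    print(f"{zone}: ~{mb_per_s:.0f} MB/s sequential")
```

Since LBA 0 usually maps to the outer edge, the first half of capacity also sits on the fastest tracks, which is part of why short-stroking helps both throughput and seek distance.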
 
But this is a solution to a completely different problem.

The OP's question was about the performance impact of going past a 50% fill rate on a drive. Understanding why and where it's common practice to only fill drives that far is part of the problem at hand, no?
 
Yes, but spindles are added to arrays to increase IO. It's not a question of losing performance as the disks fill up; it's a question of high IO loads (often with shared storage), different IO patterns for different services, and so on.

Generally, for such an environment you'll try to use more, smaller disks, which gives you more IOPS. It's not a question of disk usage so much as reducing disk-access latency.

So again, it's a solution to another problem.
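For what it's worth, the spindle math behind this is simple; here's a Python sketch with the usual rule-of-thumb per-disk figures (ballpark assumptions, real numbers depend on the drive and workload):

```python
# Rough per-spindle random IOPS figures; treat these as ballpark only.
per_disk_iops = {
    "7.2K RPM SATA": 80,
    "10K RPM SAS": 130,
    "15K RPM SAS": 180,
}

spindles = 12   # assumed size of a RAID group

for drive, iops in per_disk_iops.items():
    print(f"{spindles} x {drive}: ~{spindles * iops} random read IOPS")
```

Random IOPS scale with spindle count rather than with how full each disk is (ignoring RAID write penalties), which is why arrays end up with more, smaller disks even when much of the capacity goes unused.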
 
SSD write performance might be impacted if the drive is too full, depending on the implementation of TRIM, wear leveling, etc.

But a better reason not to fill it (the common numbers you hear are 70-80% full, although it's pretty much a guessing game) is wear and tear: with more space available, wear leveling works better, and less data means fewer writes.

Write volume is also worth considering. I used to run BOINC (distributed computing) on my SSD and didn't realize it was writing more than 10GB per day; it killed the SSD (a Crucial M225 64GB).
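For a rough sense of scale, here's a back-of-the-envelope endurance estimate in Python; every figure in it (P/E rating, write amplification) is an assumption, not data from the drive in question:

```python
# Very rough endurance estimate for a workload like the one above.
capacity_gb = 64             # e.g. a 64 GB drive
pe_cycles = 5000             # assumed MLC program/erase rating
host_writes_gb_per_day = 10  # roughly the BOINC write rate mentioned above
write_amplification = 3.0    # assumed; small random writes on a nearly full,
                             # early-generation drive can push this much higher

total_flash_budget_gb = capacity_gb * pe_cycles
flash_writes_per_day_gb = host_writes_gb_per_day * write_amplification

days = total_flash_budget_gb / flash_writes_per_day_gb
print(f"~{days:,.0f} days (~{days / 365:.1f} years) of rated P/E budget")
```

On paper a 10GB/day workload shouldn't burn through a rated P/E budget for a very long time, so a drive that dies after a year of it likely saw much higher write amplification, an optimistic rating, or a controller/firmware failure rather than simple wear-out.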
 

Can you share how long it took to fail at that rate?
 
1 year.

I kept around 15GB free and didn't use the drive much aside from that (for example, I would never put a big video file on it); it was mainly a system drive.

It started failing by corrupting files, which is typical.
 