Anybody defrag their SSD?

% fragmented is a fairly useless number on its own; the figure that matters is the total number of fragments. A 42% figure just means 42% of files have one or more fragments, which doesn't tell you much. I had a drive at one point that was around 50% fragmented with 50,000 total fragments, which is not horrible, while another drive was in the millions of fragments and was obviously problematic.
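To illustrate why the two metrics diverge, here's a rough Python sketch with made-up per-file fragment counts (not real tool output): two drives can both look "50% fragmented" while one has a harmless fragment total and the other has millions.

```python
# Hypothetical per-file fragment counts (1 = fully contiguous file).
# These numbers are invented purely to illustrate the two metrics.
drive_a = [1] * 50_000 + [2] * 45_000 + [3] * 5_000   # tens of thousands of extra fragments
drive_b = [1] * 50_000 + [60] * 50_000                # millions of extra fragments

def summarize(per_file_fragments):
    fragmented = [f for f in per_file_fragments if f > 1]
    pct_fragmented = 100 * len(fragmented) / len(per_file_fragments)
    # Count only the "excess" fragments beyond the one extent every file needs anyway.
    excess_fragments = sum(f - 1 for f in fragmented)
    return pct_fragmented, excess_fragments

for name, drive in (("A", drive_a), ("B", drive_b)):
    pct, total = summarize(drive)
    print(f"Drive {name}: {pct:.0f}% of files fragmented, {total:,} excess fragments")
```

Both drives report 50% of files fragmented, but the excess-fragment totals differ by a factor of roughly fifty.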
 
Because of wear leveling, what appears contiguous to the OS is not actually physically contiguous on the flash. Defragging a lot will needlessly reduce the life of the SSD, though if some Samsung drives still have the bug where old data becomes very slow to read, a periodic rewrite might keep the data from going stale in that particular case. Generally, though, make sure scheduled defragmentation is disabled for SSDs and forget about it.
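If you want to double-check that from a script rather than the Optimize Drives UI, here's a minimal sketch, assuming a stock Windows install where the built-in task lives at \Microsoft\Windows\Defrag\ScheduledDefrag and schtasks is on the PATH:

```python
import subprocess

# Assumed default task path on stock Windows installs; adjust if yours differs.
TASK = r"\Microsoft\Windows\Defrag\ScheduledDefrag"

result = subprocess.run(
    ["schtasks", "/Query", "/TN", TASK, "/FO", "LIST"],
    capture_output=True, text=True,
)
if result.returncode != 0:
    print("Could not query the task (missing task or insufficient rights).")
else:
    # The LIST output includes a "Status:" line such as Ready or Disabled.
    print(result.stdout)
```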
 
I defrag mine in alphabetical order every 8 months or so. I like to 'turn the soil' and 're-energise' the data every so often. Re-do the file location table etc.

Not like I'm going to be using these drives for 10 years... I'll be buying another in 18 months' time with double the size at half the cost. Use it and abuse it.
 
I defragment my SSDs (with a proper SSD defragger, MyDefrag's flash disk script) fairly often (several times a year) because with NTFS compression enabled on some folders that compress well, I get thousands and thousands of fragments when the apps get updated. I obviously avoid NTFS compression on files that are frequently written to, but for a file written to once a month or less it can be totally worth it; I've saved many GB on my SSDs thanks to that. Without NTFS compression, though, I'd worry a lot less about it, maybe just the OS drive once in a while.
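As a rough illustration of that workflow, the sketch below (Python, shelling out to Windows' built-in compact.exe; the folder path is just a placeholder) compresses a rarely written folder and then prints compact's summary so you can see the savings:

```python
import subprocess

# Placeholder path: point this at a folder that compresses well and is rarely written to.
FOLDER = r"D:\Apps\RarelyUpdatedApp"

# /C = compress, /S:<dir> = recurse into subdirectories, /I = continue past errors.
subprocess.run(["compact", "/C", "/S:" + FOLDER, "/I"], check=False)

# Running compact with just the folder lists files and prints overall compression ratios.
summary = subprocess.run(["compact", "/S:" + FOLDER],
                         capture_output=True, text=True)
print("\n".join(summary.stdout.splitlines()[-3:]))  # the last few lines hold the totals
```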

It certainly doesn't do any harm, though, contrary to popular belief. After about 5 years of heavy use (including swapping* and defragging), my main OS drive (a 40GB Intel 320) has lost barely 4% of its write endurance (one way to read that figure yourself is sketched after the footnote below). That's on a machine running 24/7/365.

*since, to quote someone else (http://blogs.msdn.com/b/e7/archive/2009/05/05/support-and-q-a-for-solid-state-drives-and.aspx) "In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD."
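To check that kind of wear figure on your own drive, here's a rough Python sketch, assuming smartmontools is installed and the drive exposes Intel's Media_Wearout_Indicator SMART attribute; the device path is just a placeholder:

```python
import subprocess

DEVICE = "/dev/sda"  # placeholder; smartmontools on Windows also accepts /dev/sdX-style names

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    # Intel SSDs expose remaining endurance as the normalized value of this attribute.
    if "Media_Wearout_Indicator" in line:
        value = line.split()[3]  # fourth column is the normalized VALUE field
        print(f"Media wearout indicator: {value} (100 = new, counts down toward 0)")
```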
 
However, if you look at Microsoft's tech pages regarding the pagefile, even they state it's largely legacy and is now used mainly for crash-dump logging.

Not really worth the space anymore. I just set a 128MB one just in case anything old looks for it.
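If you just want to see what your machine is currently configured for, a small read-only sketch like this (reading the standard PagingFiles registry value; nothing is modified) prints it:

```python
import winreg  # Windows only

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    # REG_MULTI_SZ; entries typically look like "C:\pagefile.sys 128 128"
    # (initial/maximum size in MB), or "?:\pagefile.sys" when system-managed.
    paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")
    for entry in paging_files:
        print(entry)
```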
 
I defragment my SSDs (with a proper SSD defragger, MyDefrag's flash disk script) fairly often (several times a year) because with NTFS compression enabled on some folders...
Looks like MyDefrag is done as of October 1, 2015. https://en.wikipedia.org/wiki/MyDefrag

On June 23, 2015, the author of MyDefrag announced that the official website, including the forum, would be shut down on October 1, 2015.[7]

In the announcement, the author said it was unlikely that he would pick up development of MyDefrag again, and that he had decided not to release the source of the software because several clients had previously bought a license to the sources. He said he would need to keep the source code closed in order to protect those clients' investment, which means that MyDefrag will remain frozen unless the author decides to revive the project.[8]
 
For 99% of the population it isn't necessary. However, in some cases on a system with more than 80% fragmentation you can run into an issue where some database applications (like SQL and Exchange) will not be able to keep track of the large number of fragments. Then, you start to have issues. :)

While this isn't an SSD specific issue, they would be more susceptible to the issue due to the lack of a regular defrag.

Granted, this is an extreme edge case.

Riley
 
For 99% of the population it isn't necessary. However, in some cases on a system with more than 80% fragmentation you can run into an issue where some database applications (like SQL and Exchange) will not be able to keep track of the large number of fragments. Then, you start to have issues. :)

Are you saying that those database applications bypass the Operating System file IO routines -- reading (and writing) the filesystem metadata themselves and doing their own sector-level IO?

If that is true, it is rather scary if you have anything else besides database files on the filesystem. But I wonder if it is really true, or if you are confused about the way they work. The main reason I doubt the accuracy of your implication is that if it were true, I would expect those same DB programs to do their own defragmenting.
 
Are you saying that those database applications bypass the Operating System file IO routines -- reading (and writing) the filesystem metadata themselves and doing their own sector-level IO?

If that is true, it is rather scary if you have anything else besides database files on the filesystem. But I wonder if it is really true, or if you are confused about the way they work. The main reason I doubt the accuracy of your implication is that if it were true, I would expect those same DB programs to do their own defragmenting.

Here is the relevant Microsoft KB article. https://support.microsoft.com/en-us/kb/967351. We ran into this issue a couple months ago and it took a small Exchange 2010 server offline.

I don't think the issue is that these applications access the disk directly and bypass the filesystem; it's more that they use the filesystem differently than other programs do.

The DB apps are more or less a filesystem within a filesystem, much the same way a virtual disk (VHD, VMDK, etc.) resides on a host filesystem. The disk image can have zero fragmentation internally, but its physical layout on the disk can be very heavily fragmented, which is what causes the issue.
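For anyone who wants to see how many extents NTFS is tracking for a particular database file before it gets anywhere near that limit, here's a rough sketch using fsutil's queryExtents subcommand (needs an elevated prompt and a reasonably recent Windows; the file path is just a placeholder, and the exact output format may vary):

```python
import subprocess

# Placeholder path: point this at the database file you're curious about.
DB_FILE = r"D:\ExchangeDB\Mailbox01.edb"

# Each extent is printed on its own line; the formatting can differ slightly
# between Windows versions, so this just counts lines mentioning an LCN.
result = subprocess.run(["fsutil", "file", "queryExtents", DB_FILE],
                        capture_output=True, text=True)
extents = [line for line in result.stdout.splitlines() if "LCN" in line]
print(f"{DB_FILE}: roughly {len(extents)} extents")
```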

Riley
 
Some SSDs slow down once they get over a certain percentage full; I remember some OCZ drives started slowing down at just 50% full. Maybe you have one of those drives? Whenever I defrag my platter drives, I clean up the drive first so I'm not defragging files or programs I don't want or need. If you did that too, then you may have dropped the drive below whatever threshold triggers the slowdown, which would account for the drive 'feeling faster'.
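Checking where a drive sits relative to that kind of threshold is easy to script; here's a minimal Python sketch (the 50% threshold is just the example figure from the OCZ anecdote, not a universal number):

```python
import shutil

DRIVE = "C:\\"        # or "/" on Linux/macOS
THRESHOLD = 50        # example figure from the OCZ anecdote above; varies by drive

usage = shutil.disk_usage(DRIVE)
percent_full = 100 * usage.used / usage.total
print(f"{DRIVE} is {percent_full:.1f}% full")
if percent_full > THRESHOLD:
    print("Above the example threshold; clean up before blaming the drive.")
```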
 
Here is the relevant Microsoft KB article. https://support.microsoft.com/en-us/kb/967351. We ran into this issue a couple months ago and it took a small Exchange 2010 server offline.

Okay, thanks for the reference link and clarification. I think the misunderstanding was a result of your statement that the "database applications will not be able to keep track of the large number of fragments".

From my reading of the Microsoft report on the issue, it is not really the database programs that are failing, but rather the NTFS filesystem itself.
 
I know you are not supposed to defrag SSDs, but I was having sluggish performance from mine.
Reading or writing?

Defragging an SSD will move data around and, with TRIM, possibly allow blocks to be cleared, which may help write performance. But it puts additional write-cycle wear on the SSD, which means it wears out faster.

Also, many SSDs move data around on their own with garbage collection to help with wear leveling, so defragging one is self-defeating. Compare that to a USB flash drive: if you copy a large file to it, the drive never moves it around, so only the unused space keeps getting overwritten, which makes those drives fail much faster.
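If you want to confirm TRIM is actually enabled, and ask Windows to retrim free space instead of defragging it, a quick sketch like this covers it (both commands are built into Windows, but run them from an elevated prompt):

```python
import subprocess

# DisableDeleteNotify = 0 means TRIM (delete notifications) is enabled, 1 means disabled.
subprocess.run(["fsutil", "behavior", "query", "DisableDeleteNotify"])

# Ask Windows to send TRIM for all free space on C: ("retrim") rather than defrag it.
subprocess.run(["defrag", "C:", "/L"])
```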
 
Okay, thanks for the reference link and clarification. I think the misunderstanding was a result of your statement that the "database applications will not be able to keep track of the large number of fragments".

From my reading of the Microsoft report on the issue, it is not really the database programs that are failing, but rather the NTFS filesystem itself.

No problem. I can see how my statement was misleading. What I was trying to say was that database-type applications are affected by the issue, while normal file operations seem to continue working. The problem is how those apps try to allocate additional extents and/or the number of extents they request.

Ultimately, though, it's NTFS failing due to the high degree of fragmentation (as you said).

Riley
 
My guess is that it triggered the drive's garbage collection to run, or something along those lines.
 