Over-provisioning a consumer-grade SSD

Hi,
I just learned about over-provisioning from an Intel whitepaper.
It says one way to over-provision an SSD is to limit the logical volume capacity when partitioning it in the OS.
Does this mean that if my SSD is 128GB and I only partition 100GB of it, I am over-provisioning the SSD and will enjoy the benefits of a longer lifetime and faster 4K performance?
 
That is correct, though the faster 4K performance is less likely. These days an SSD is probably going to outlive the rest of your system anyway. Over-provisioning was more popular when SSDs first came out, because early drives weren't that good and wore out a lot faster. Current drives are rated for hundreds of terabytes of writes or more.
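For a rough sense of what that endurance means in practice, here's a quick back-of-the-envelope sketch in Python; the TBW rating and daily write volume below are made-up example figures, not the specs of any particular drive.

Code:
# Rough lifetime estimate from a drive's rated endurance (TBW).
# Example numbers only, not specs for any real drive.

def years_until_tbw(tbw_rating_tb: float, gb_written_per_day: float) -> float:
    """Years of use before reaching the rated terabytes-written figure."""
    tb_per_year = gb_written_per_day * 365 / 1000
    return tbw_rating_tb / tb_per_year

# e.g. a small consumer drive rated for 75 TBW, writing 20 GB every day:
print(f"{years_until_tbw(75, 20):.1f} years")   # ~10.3 years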
 
MrGuv already went over why over-provisioning in this day and age is not important, or at least not as important (depending on the device and your workload). As far as 4K (and sequential) performance goes, over-provisioning can actually diminish performance depending on how the controller allocates the space. For example, if you have 8 chip packages and over-provision by 20%, the controller could either cut down on a percentage of each die or cut out a die entirely, which would reduce the number of concurrent channels available and therefore hurt your sequential and possibly your random performance as well.
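To make that die-count point concrete, here's a toy calculation; the per-die throughput and die count are invented round numbers, not measurements from any real drive.

Code:
# Sequential throughput scales roughly with how many dies the controller
# can stripe writes across. Purely illustrative numbers.
per_die_mb_s = 400
dies_full = 8
dies_after_op = 7   # hypothetical case where OP removed a whole die from the stripe

print(dies_full * per_die_mb_s)       # 3200 MB/s
print(dies_after_op * per_die_mb_s)   # 2800 MB/s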
 
I look at it this way. On a fresh Windows install you're already over-provisioning roughly 100 GB dynamically, and that shrinks as you fill up the drive. Whatever free space you have is effectively the over-provisioning, so I don't even bother with it.
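As a sketch of that "free space is your over-provisioning" idea, here's a simple calculation; the capacities are hypothetical, and it assumes TRIM is working so the controller actually knows the free space is free.

Code:
# Effective over-provisioning = space the controller can shuffle data into,
# i.e. unwritten (trimmed) user space plus any unpartitioned space.
# Hypothetical capacities.

def effective_op_percent(user_capacity_gb, used_gb, unpartitioned_gb=0.0):
    spare = (user_capacity_gb - used_gb) + unpartitioned_gb
    return 100 * spare / user_capacity_gb

print(effective_op_percent(1000, 100))       # fresh install: ~90% spare
print(effective_op_percent(1000, 900))       # nearly full: ~10% spare
print(effective_op_percent(900, 850, 100))   # 100 GB left unpartitioned: ~17% spare even when nearly full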
 
The whitepaper is from 2018.

Modern SSDs have OP built in; it is accessible only to the controller, reserved exactly for that purpose, and invisible to the OS and software. You do not need to manually add additional OP. Manual OP was useful and necessary on older-generation SSDs that didn't have this hidden spare area.
 
On my cache drives I leave about 10% unpartitioned just to be safe. I don't know whether it actually increases the life of the drive or does anything at all, but I haven't had one fail yet, and my cache drives are constantly writing and reading, as is the nature of caches.
It's also the only way to make sure they never hit 100% full, since the caching is automated.
 
I was going to say this. Over-provisioning is already built into the drives; you haven't needed manual OP since the early 2010s, and even back then it was probably pointless. We had a bunch of 830 and 840 EVO SSDs, some used with OP and some without, and they all still work fine, with no difference in drive health for the ones that had OP.
 
Is it possible to optimize an SSD's performance by short-stroking it so that only the faster SLC cache is used? For example, say an SSD has 2TB of capacity and around 600GB of SLC cache. By short-stroking the only partition to 600GB, would the drive exclusively use SLC and therefore have its read/write speed near the theoretical maximum, or is that not how things work? I'm guessing it's the latter, but it would be interesting to know what is happening behind the scenes.
 
If it doesn't work that way, maybe make a 1.4TB dummy file, and then you should only be using the SLC portion.
But how often would you be writing such massive amounts of large files that you'd need max speed for the entire write?
 
The "SLC" cache in MLC/TLC/QLC drives is not actual SLC silicon. It is the same MLC/TLC/QLC silicon, but that small area is treated as SLC by the controller (i.e., it only stores 1 bit per cell instead of the multiple bits it is capable of). Instead of wasting most of a 2TB SSD just to live inside the speedier "SLC" area (which on a 2TB drive is NOT 600GB; it is usually more like 30-40GB), you could put your money into faster hardware, more RAM, etc.
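As a toy model of what that means for a big transfer: writes burst at cache speed until the pseudo-SLC region fills, then drop to the native TLC/QLC rate. All speeds and sizes here are illustrative guesses, not measurements.

Code:
# Time to complete one large sequential write on a drive with an SLC-mode cache.
# Illustrative numbers only.

def write_time_seconds(write_gb, slc_cache_gb, slc_speed_gbps, native_speed_gbps):
    in_cache = min(write_gb, slc_cache_gb)
    overflow = max(write_gb - slc_cache_gb, 0)
    return in_cache / slc_speed_gbps + overflow / native_speed_gbps

# 200 GB write, 40 GB cache, 3.0 GB/s burst, 0.5 GB/s sustained:
print(f"{write_time_seconds(200, 40, 3.0, 0.5):.0f} s")   # ~333 s, vs ~67 s if it all fit in cache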
 
Just stop. Let the drive and OS do their thing. You are not going to do anything to help.
 
All SSDs have additional space that is invisible to the user, just like mechanical drives. I would guess a 480 GB drive actually has 512 GB of physical flash, with part of the invisible 32 GB used as an SLC cache and some used for reallocation of failing areas. That's just a guess, I have no clue to be honest. Would leaving unpartitioned space help the drive? No idea. There are so many layers upon layers nowadays (the physical layout of the NAND, the flash translation layer, partitions, logical volume management, filesystems) that it's hard to get a grasp of where and what is happening.

I think TRIM is the part that might help, since it lets the filesystem tell the SSD firmware "hey, I don't need this area, you can treat it as empty!" Otherwise it doesn't matter that your filesystem is nearly empty with only 1GB used on a 1TB drive: if you previously filled it up, the SSD firmware still thinks it's full and has to struggle with its internal housekeeping, like erasing a large area (something like 4MB, as far as I know) whenever it needs to overwrite a block, even if you're only writing a few KB. I still find swap / pagefile on SSDs to be a nightmarish idea.
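That 480 GB / 512 GB guess is roughly how the math is usually assumed to work: NAND comes in binary (GiB) sizes while the label is decimal GB, and the difference, plus any extra held-back flash, is the built-in spare area. The figures below just write that same guess out; they are not the layout of any specific drive.

Code:
# Built-in spare area from the GiB-vs-GB gap. Hypothetical drive.

def built_in_op_percent(nand_gib, advertised_gb):
    nand_gb = nand_gib * 1024**3 / 1000**3   # raw flash expressed in decimal GB
    return 100 * (nand_gb - advertised_gb) / advertised_gb

print(f"{built_in_op_percent(512, 480):.1f}%")   # ~14.5% spare on a "480 GB" drive
print(f"{built_in_op_percent(512, 512):.1f}%")   # ~7.4% spare even on a "512 GB" drive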
 
I still do it on new drives, just in case a customer fills their SSD (it has happened). I don't reserve much.

Used to be: 120GB - 1GB, 250GB - 2GB, 500GB - 5GB, 1TB - 10GB.

Doesn't really hurt either way. Plus most of us rip 'em out and replace 'em well before any issues occur. Cheap tech on the whole.
 
Personally I would like the option to reformat a cheap 4-bit drive to 1-bit (the entire drive, no cache). Then you could use it at 1/4 capacity for applications that need endurance or consistent write speed, without the exorbitant cost of a special SKU.
 
Unfortunately, that is not an option that is open to you. In all honesty, in all but the most extreme cases, or for people with very narrowly tuned workloads, you would see no benefit to doing that. Instead of paying for 4x the NAND only to use 1/4 of it, put the money you would be wasting into something that benefits you all the time (more RAM, a faster processor, etc.) rather than shoehorning in a drive that wasn't made for (or, more specifically, not 100% tuned for) your use case. Or just purchase an enterprise SLC or MLC drive in the first place.
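The cost argument in rough numbers (the price here is an invented placeholder, not a real market figure):

Code:
# Paying for 4 bits per cell but only using 1 quadruples the cost per usable TB.
# Invented price, for illustration only.

def cost_per_usable_tb(price, raw_tb, bits_used, bits_native=4):
    usable_tb = raw_tb * bits_used / bits_native
    return price / usable_tb

print(cost_per_usable_tb(150, 2, 4))   # 2 TB QLC used normally: $75 per usable TB
print(cost_per_usable_tb(150, 2, 1))   # same drive forced to 1 bit/cell: $300 per usable TB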
 
Unfortunately, that is not an option that is open to you.

It should be, is the point.

In all honesty, in all but the most extreme cases, or for people with very narrowly tuned workloads, you would see no benefit to doing that. Instead of paying for 4x the NAND only to use 1/4 of it, put the money you would be wasting into something that benefits you all the time (more RAM, a faster processor, etc.) rather than shoehorning in a drive that wasn't made for (or, more specifically, not 100% tuned for) your use case. Or just purchase an enterprise SLC or MLC drive in the first place.

I don't want a drive tuned for my needs. That would be expensive. I want cheap.
 