Are there any "mainstream" 4TB+ NVMe drives in the pipeline?

The SLC cache on that 8TB QLC SSD is 2TB in size when empty; even filled up it should still have a good 200GB of SLC cache. (Personally I'd probably just cap the partition size to maybe 7000-6500 GB for OP, but it's likely not needed; the actual available space is 7154 GB, which is where 8TB converted to GiB really starts to show, lol.)
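If anyone wants to sanity-check those capacity figures, here's a quick back-of-envelope sketch (Python, purely illustrative; whether that 7154 is GB or GiB, and how much of the gap is factory overprovisioning, are my assumptions rather than anything from a spec sheet):

```python
# Quick unit check on the capacity figures quoted above (illustrative only).
TB  = 1000**4        # decimal terabyte, how the drive is marketed
GiB = 1024**3        # binary gibibyte, what Windows labels "GB"

print(f"8 TB = {8 * TB / GiB:,.0f} GiB")        # ~7,451 GiB before any factory reserve
# If the reported 7154 figure is GiB, the gap versus ~7,451 GiB would be
# capacity the controller keeps back for itself (factory overprovisioning).
print(f"7154 GiB = {7154 * GiB / TB:.2f} TB of user-visible flash")     # ~7.68 TB
# Capping the partition at 7000 (in the GiB units Windows shows) leaves extra dynamic OP:
print(f"Left unpartitioned: {7154 - 7000} GiB = {(7154 - 7000) * GiB / 1e9:.0f} GB")
```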

What I'd like to see is what happens when a mass TRIM command is triggered (does it halt the system until the TRIM has finished?). Open Defrag/Optimise Drives and press Optimize, then see if disk load hits 100%, or if the Optimise hangs around 80-90%, or just goes straight to finished.
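For the curious, this is roughly how you could script that test instead of eyeballing Task Manager. It's a Linux sketch built around `fstrim` (on Windows the equivalent trigger is the Optimize button, or `Optimize-Volume -ReTrim` in PowerShell); the mount point, write size and poll interval are all placeholder choices:

```python
#!/usr/bin/env python3
"""Sketch of the test described above: fire a filesystem-wide TRIM and check
whether other I/O on the same drive stalls while it runs. Linux only (fstrim);
run as root and point MOUNTPOINT at the SSD under test."""

import os
import subprocess
import threading
import time

MOUNTPOINT = "/mnt/qlc_ssd"                      # placeholder path
PROBE_FILE = os.path.join(MOUNTPOINT, "trim_probe.bin")
stop = threading.Event()
latencies = []

def probe_writer():
    """Keep issuing small synced writes and record how long each one takes."""
    buf = os.urandom(1024 * 1024)                # 1 MiB per write
    with open(PROBE_FILE, "wb") as f:
        while not stop.is_set():
            t0 = time.perf_counter()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())
            latencies.append(time.perf_counter() - t0)
            time.sleep(0.1)

writer = threading.Thread(target=probe_writer, daemon=True)
writer.start()

t0 = time.perf_counter()
subprocess.run(["fstrim", "-v", MOUNTPOINT], check=True)   # the mass TRIM
trim_secs = time.perf_counter() - t0

stop.set()
writer.join()
os.remove(PROBE_FILE)

print(f"fstrim took {trim_secs:.1f} s")
if latencies:
    print(f"write latency during TRIM: avg {sum(latencies)/len(latencies)*1000:.0f} ms, "
          f"max {max(latencies)*1000:.0f} ms")
```

If the probe writes during the TRIM spike from a few milliseconds into the hundreds, that's your "does it halt the system" answer.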

It does use full-drive SLC caching like the E16-based drives, which is a mixed blessing. The way it works and the associated algorithms are actually fairly complicated. Generally the amount of SLC available will be approximately one-fourth of the flash remaining, and that remaining flash can also include reserved/overprovisioned space; that is, with dynamic SLC the goal is to have some SLC available even when the drive is completely full. Keep in mind that any space left free on the drive acts as dynamic overprovisioning, thanks to the aggressive GC/TRIM on modern drives.
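To put some rough numbers on that one-fourth rule, here's a toy calculation (the 7% factory reserve is an assumed figure for the example, not a spec for any particular drive):

```python
# Toy illustration of "available SLC ~= remaining flash / 4" for a QLC drive,
# including reserved space, and why free user space acts as dynamic OP.
DRIVE_GB   = 8000      # marketed capacity
FACTORY_OP = 0.07      # assumed hidden reserve; varies by model

def est_slc_gb(fill_fraction, factory_op=FACTORY_OP):
    """Rough dynamic-SLC estimate at a given fill level (QLC: 4 bits/cell)."""
    user_free = DRIVE_GB * (1 - fill_fraction)
    reserved  = DRIVE_GB * factory_op
    return (user_free + reserved) / 4

for fill in (0.0, 0.5, 0.9, 1.0):
    print(f"{fill:4.0%} full -> ~{est_slc_gb(fill):5.0f} GB of SLC")
```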

With dynamic SLC, the wear zone for SLC and QLC is shared, and that includes GC/TRIM. So you end up with a scenario where some QLC blocks in SLC mode have more or less wear than other QLC blocks in SLC mode; this information is tracked as part of wear-leveling. Normal drives without full-drive SLC caching will rotate SLC based on QLC wear, while a drive with some or all static SLC has a separate wear zone for that SLC. In any case, Phison's algorithms can include: one, determining the workload and adjusting the performance tier to reduce power usage, that is, you could go direct-to-QLC; and two, building a behavioral profile over time to better adjust and use the SLC based on usage. This can be problematic and inconsistent (as many people have complained about with E16 drives), and in some cases you still hit the third/worst performance tier.
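If it helps to picture the shared wear zone, here's a toy model of the idea (names and numbers made up; real FTLs and the actual Phison algorithms are far more involved and not public):

```python
# Toy model of the "single wear zone" idea: with full-drive dynamic SLC, the
# same physical QLC blocks rotate between SLC and QLC duty, and the controller
# picks the least-worn free block regardless of which mode it will be used in.
from dataclasses import dataclass

@dataclass
class Block:
    id: int
    erase_count: int = 0
    mode: str = "free"      # "free", "slc", or "qlc"

blocks = [Block(i) for i in range(16)]

def allocate(mode: str) -> Block:
    """Dynamic SLC: one shared pool, least-worn free block wins."""
    candidates = [b for b in blocks if b.mode == "free"]
    victim = min(candidates, key=lambda b: b.erase_count)
    victim.mode = mode
    return victim

def retire(block: Block) -> None:
    """Erase and return the block to the shared pool; wear is tracked per block."""
    block.erase_count += 1
    block.mode = "free"

# A burst of SLC-cached writes still spreads erases across the whole pool,
# because SLC and QLC share the same wear zone.
for _ in range(8):
    retire(allocate("slc"))
print(sorted(b.erase_count for b in blocks))
```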

That worst performance state, when you're bottlenecked by folding, is related to how the drive handles GC, because GC is effectively merging. With folding you compress four SLC blocks into one QLC block (block status is tracked, but so is page status). There are actually multiple ways to handle folding, for example hybrid blocks (a block flagged as SLC that can also be folded into QLC), and you can pull partial blocks, among other things, but in general that is the methodology. There are multiple types of merges, but GC inherently involves combining data/pages from two or more blocks and writing a new replacement block. Obviously this new block has to be free and erased first, and erasing has high latency. With folding, though, you also convert SLC to QLC, which is how you ensure free blocks, and as stated a drive with dynamic SLC caching has a single GC/wear zone.
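As a mental model of folding (again a toy with made-up page counts; real controllers track validity at page granularity and have many merge variants beyond this):

```python
# Minimal sketch of "folding": four blocks written in SLC mode get merged into
# one block programmed in QLC mode (4 bits/cell), which first requires a free,
# erased destination block. Page sizes, counts and layout here are invented.
PAGES_PER_BLOCK = 4          # tiny numbers to keep the example readable

def fold(slc_blocks, free_block):
    """Fold four SLC blocks' valid pages into one erased QLC-mode block."""
    assert len(slc_blocks) == 4, "QLC stores 4 bits/cell, so 4 SLC blocks fold into 1"
    assert all(p is None for p in free_block), "destination must be erased first (slow!)"
    valid_pages = [p for blk in slc_blocks for p in blk if p is not None]
    # In QLC mode the destination holds 4x the data of an SLC-mode block.
    qlc_block = valid_pages + [None] * (4 * PAGES_PER_BLOCK - len(valid_pages))
    erased = [[None] * PAGES_PER_BLOCK for _ in slc_blocks]   # old SLC blocks reclaimed
    return qlc_block, erased

slc = [[f"data{i}-{j}" for j in range(PAGES_PER_BLOCK)] for i in range(4)]
slc[2][1] = None                      # a stale (invalid) page gets dropped in the merge
qlc, reclaimed = fold(slc, [None] * (4 * PAGES_PER_BLOCK))
print(len([p for p in qlc if p]), "valid pages now live in one QLC block")
print(len(reclaimed), "SLC blocks freed for reuse")
```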

I actually have a drive with full-drive SLC caching so I suppose I could test it real quick... (edit some time later: it worked fine)