Cost advantage of HDDs over SSDs will last more than a decade

Considering how much faster data delivery is nowadays, large slow hard drives are losing their appeal. When it took a week to download something large, it was smart to keep a backup of it locally. Now that we can grab anything within 30 minutes or less, it just doesn't seem as necessary to have large slow drives.
 
There are so many different use cases for large HDDs, or I should say, large amounts of storage. DrLobotomy has one use case. Mine is different. I have well over 1 TB of digital photos, which I need to access "on demand." And that's before I start to scan all my slides. Those scans will clock in at about 100 MB apiece.
 
I use HDDs for storage only.

The OS and all programs are on SSDs for the most part. My games are installed on an HDD with a 256GB SSD set up as a cache.

Even my retro systems, all the way back to a 486, are using SSDs. As I set up even older systems, they will use some sort of SSD as well.
 
HDDs will hold their cost advantage for several years at least.

Every time you add a bit to a NAND cell the gains get smaller: MLC doubled capacity over SLC, but the gain for TLC was only half that, and it shrinks again for QLC.

Samsung's QVO came to market at the same price as its EVO drive.

I expect what's fuelling the price drops on QLC is more that these drives often carry lower rated specs, which allows lower-quality NAND to be used (lower rated endurance and speed), because the cost savings from QLC's extra density over TLC will only be about 12.5%, assuming its yields are the same as TLC's.
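
Rough numbers on the density side, assuming identical die cost and yields (this is the ceiling; QLC's extra ECC and spare-area needs eat into it, which is why I put the realistic saving lower):

```python
# Back-of-the-envelope: capacity gain and per-bit cost saving for each
# added bit per NAND cell, assuming identical die cost and yields.
bits_per_cell = [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]

for (prev, pb), (cur, cb) in zip(bits_per_cell, bits_per_cell[1:]):
    gain = (cb - pb) / pb * 100       # capacity gain over the previous type
    saving = (1 - pb / cb) * 100      # per-bit cost saving vs the previous type
    print(f"{prev} -> {cur}: +{gain:.0f}% capacity, {saving:.0f}% lower cost/bit")
```

So even the raw TLC-to-QLC saving is only 25% before any of QLC's overheads are counted.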

Also, some have predicted it's about increasing profit margins: take a TLC drive at a specific price point, then introduce a QLC drive a bit below it so price-conscious and less technical buyers pick it up. Eventually TLC volumes drop below QLC; at that point, raise the price of TLC and position it as a premium product, then either keep QLC at that slightly lower price point or bring it back up to what TLC used to cost. We're not suddenly going to see NAND reach the price per gig of spindles through higher bit density, as the cost savings don't allow for it.

The rated endurance levels are all unproven at this point as well: my 850 Pro died last month, despite being rated for multiple thousands of erase cycles and 150 TBW. It died at just 45 erase cycles and 20 TB written.
My older 830s are still going, but the one in my laptop, which I benchmarked, has a massive performance drop from brand new; it's under 50% of its rated performance, with multiple hundreds of erase cycles on it currently. Even with this drop, though, its performance still comfortably beats any of my spindles, especially the 2.5-inch 5400 RPM HDD it replaced.
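
For reference, the sanity check I'm doing (the 512 GB capacity is assumed because it fits my numbers, and the write amplification factor is a guess):

```python
# Compare rated endurance against what my 850 Pro actually survived.
# Assumed: 512 GB capacity (fits 20 TB host writes at 45 average erase
# cycles) and a write amplification factor of ~1.2 (a guess).
capacity_tb = 0.512
write_amplification = 1.2

def implied_host_writes_tb(pe_cycles):
    """Host TB written implied by an average P/E cycle count."""
    return pe_cycles * capacity_tb / write_amplification

print(f"Rated, 3000 cycles: ~{implied_host_writes_tb(3000):.0f} TB of host writes")
print(f"Mine at 45 cycles:  ~{implied_host_writes_tb(45):.0f} TB of host writes")
```

The 150 TBW figure is just the warranty rating, far below what the cycle rating implies, and the drive didn't even reach that.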

The issue I have with HDDs right now is that capacities are too high, and I no longer like 3.5-inch devices; they seem too dated, too noisy, too big. If you have drives in a RAID and one fails, and they are large multi-TB drives, then with the unrecoverable read error (URE) rates on current consumer drives you have a reasonable chance of the rebuild failing. For spindles I feel we need to move to a system of more but smaller devices, which obviously means boards with many more SATA ports, and cases adapted to have more native 2.5-inch bays in place of 3.5-inch bays. I would much rather have four 1 TB drives than one 4 TB drive. A modern OS like Windows 10 should then have software RAID support for these devices (RAID 5, 6, 1, 10); Linux/BSD already has this covered, especially with ZFS. This all needs a new approach from industry though: there is still no momentum pushing desktops to 2.5-inch drives, and if anything SATA ports will be reduced in future, not increased.
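
The textbook URE math behind that rebuild worry (consumer drives are typically specced at one unrecoverable error per 1e14 bits read; real drives often beat spec, so treat this as a pessimistic model):

```python
# Chance a rebuild hits an unrecoverable read error (URE), using the
# classic spec-sheet model for consumer drives: 1 URE per 1e14 bits.
URE_PER_BIT = 1e-14

def rebuild_failure_chance(tb_to_read):
    bits = tb_to_read * 1e12 * 8
    return 1 - (1 - URE_PER_BIT) ** bits

for tb in (1, 4, 12):   # data that must be read back to rebuild one drive
    print(f"read {tb:>2} TB during rebuild -> {rebuild_failure_chance(tb):.0%} chance of a URE")
```

Which is part of why I'd rather spread capacity across more, smaller drives.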
 
Since SSDs are semiconductors, pricing will probably follow the typical semiconductor volume/yield/price curves. Years ago, when I was close to this issue, I remember seeing curves showing how yields typically improve over time, and costs drop as production volumes increase. Combine these two factors with competition for a pure commodity product, and you get dramatic price decreases over time. I don't have those numbers handy, and the shape of the curves may have changed, but I expect that SSD price declines will be bigger than HDD price declines. At some point, "low capacity" SSDs will be cheaper in absolute terms than HDDs, more so if power consumption is also counted. For "medium capacity" drives, the SSD premium becomes small enough that most of us will no longer buy new HDDs.
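
As a toy projection only; every number below is an illustrative assumption, not data:

```python
# Toy crossover model: how long until SSD $/GB drops below HDD $/GB?
# Starting prices and decline rates are assumptions for illustration.
ssd_price = 0.10    # assumed SSD $/GB today
hdd_price = 0.025   # assumed HDD $/GB today
ssd_decline = 0.25  # assumed annual SSD price decline
hdd_decline = 0.05  # assumed annual HDD price decline

years = 0
while ssd_price > hdd_price:
    ssd_price *= 1 - ssd_decline
    hdd_price *= 1 - hdd_decline
    years += 1

print(f"SSDs undercut HDDs per GB after ~{years} years under these assumptions")
```

Steeper or shallower decline rates move the crossover by years in either direction, which is the whole argument here.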
 
Yep, eventually they will meet several years down the line, but my point was that this won't be down to QLC.
 
This all needs a new approach from industry though: there is still no momentum pushing desktops to 2.5-inch drives, and if anything SATA ports will be reduced in future, not increased.
I agree, but for SATA to decrease, a new standard needs to be implemented. SATA Express never took off, and U.2, while great for me, isn't something everybody likes or implements; PCIe lanes are growing, but not to the point where we can go all-in on them. So something has to replace SATA III, maybe a SATA IV with a new approach.
 
For us sane people who don't feel the need to hoard data, it's not an issue. It hasn't been for several years.
 
You can get used enterprise 4 TB HDDs for ~$40 each. SSDs can't come close to this price point yet, but everything has its use.
 
I think SSDs and HDDs can co-exist in the market. I'll never use an HDD for my OS or games again, but the densities HDDs offer, combined with a sane price/GB curve as density goes up, mean I'll always use them for storage and backup.
 
As I've said before, the issue for storage is not price or size or speed. All those criteria have been met.

The issue that everyone has their head in the sand over is the trend of software requiring tens of thousands of performance-killing micro files. This really needs to be addressed, and it needs a radical new solution. Sure, I can transfer 500 GB of 4K video in minutes, but copying an 8 GB AppData folder can take hours.
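
A crude model of why; the per-file overhead below is a placeholder, standing in for the real per-file costs (open/close, metadata updates, antivirus hooks):

```python
# Crude model: copy time = per-file overhead + bytes / throughput.
# The overhead constant is a placeholder, not a measurement.
def copy_time_min(total_gb, n_files, throughput_mb_s=500, per_file_s=0.02):
    streaming = total_gb * 1024 / throughput_mb_s  # bulk transfer seconds
    overhead = n_files * per_file_s                # per-file costs in seconds
    return (streaming + overhead) / 60

print(f"500 GB in 50 big files:  {copy_time_min(500, 50):6.1f} min")
print(f"8 GB in 200,000 files:   {copy_time_min(8, 200_000):6.1f} min")
```

Same disk, same interface, and the small-file job loses badly despite moving a fraction of the data.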
 
I'm in the process of building a new ZFS system and will be implementing a special vdev for small files and metadata; I hope that alleviates this problem.
 
Except with the slowing of Moore's Law, "typical" curves are no longer typical. Gains are coming more from increasing bits/cell and thus lowering performance and durability.

With current materials, shrinking the node cannot continue for much longer. 3D stacking is where the gains are now, though that will likely run into limitations as well.

Hard drives will likely keep a price-per-TB advantage as long as they continue to grow in size and remain at least somewhat relevant for consumers. If they go enterprise-only, this could reduce their price advantage.
 
3D stacking is a packaging advantage, but it really doesn't improve cost/bit in a significant way. You still need to fab the same amount of wafer as if you weren't stacking at all.
 
My mistake, I was thinking of stacking individual dies, whereas 3D NAND uses deposition techniques to add integrated layers.

Each layer is still another process step, though, so it will incur more cost. Obviously cheaper than fabbing another wafer, but still some unknown cost increment per layer.
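
In model form, roughly; every constant here is a made-up placeholder, the point is just the shape of the curve:

```python
# Toy cost-per-bit model for 3D NAND layer counts. All constants are
# placeholders; the point is the diminishing-returns shape.
BASE_WAFER_COST = 1.00   # processing cost before any array layers
COST_PER_LAYER = 0.03    # incremental deposition/etch cost per layer
BITS_PER_LAYER = 1.00    # capacity added per layer (normalised)

def cost_per_bit(layers):
    wafer_cost = BASE_WAFER_COST + COST_PER_LAYER * layers
    return wafer_cost / (BITS_PER_LAYER * layers)

for layers in (32, 64, 96, 128):
    print(f"{layers:3d} layers -> relative cost/bit {cost_per_bit(layers):.3f}")
```

Cost/bit keeps falling with layer count, but each extra layer buys less, and the per-layer increment sets a floor it can never drop below.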
 
Oh boy, I paid $180 for a 30GB OCZ SSD way back when. Now there are cases that don't even have 3.5" drive bays. With the cost of SSDs being so low, there's no compelling reason for me to go back to mechanical drives even if the cost/GB is lower.
 
You can get used enterprise 4 TB HDDs for ~$40 each. SSDs can't come close to this price point yet, but everything has its use.
That's only looking at capacity, though... take performance, noise, and power consumption into account, and hard disks really can't compete.
Also, comparing used vs. new isn't exactly fair...
 
HDD prices aren't really moving; their price per GB just fluctuates, while SSD prices keep falling. I don't expect we'll find HDDs in most consumer devices by the end of 2020.
 
What do you consider the boundary between consumer devices and business/enterprise devices? Guys here routinely talk about drives of 8 TB and more. By the end of 2020, say, do you think that 8 TB SSDs will be available, and "affordable"? I'm not too sure, but I would welcome the demise of 3.5-inch HDDs up to 10 TB. I might even get a mid-tower case to replace my hulking Corsair 800D monster.
 
You're paying a premium for a single-drive solution, for sure. You can build a 30 TB SSD RAID array for around $4k.
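
Back-of-the-envelope; the per-drive price is an assumption about typical 4 TB consumer SSD street pricing:

```python
# Rough cost of a ~30 TB SSD array, assuming ~$500 per 4 TB drive
# (an assumed street price) and one drive's capacity lost to parity.
drive_tb, drive_price, n_drives = 4, 500, 8

raw_tb = n_drives * drive_tb
usable_tb = (n_drives - 1) * drive_tb   # single-parity layout
total_cost = n_drives * drive_price
print(f"{n_drives} drives: {raw_tb} TB raw / {usable_tb} TB usable for ${total_cost}")
```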
 
Sure, but if you are an enterprise, "density" can be an issue, especially if you are renting a cage or a rack from an ISP. To be sure, that's not MY issue. :ROFLMAO:
 
You can get used enterprise 4 TB HDDs for ~$40 each. SSDs can't come close to this price point yet, but everything has its use.
I would take that "deal" with a bag of salt: used enterprise disks have most likely been run hard 24/7 and have quite a bit of wear and tear.
As long as they are used in RAID 1 or a similar redundant array for personal use, the low cost will hopefully beat out the failures, but yikes...
 
24 disks, in 4 x 6-disk RAIDZ2 vdevs, so yes, double parity protection. New hard drives fail as well as old ones, and with a proper burn-in (in either case) you can catch a lot of problems early. I've used second-hand drives for several years with no problems. My only failure so far has been during burn-in, which takes about 200 hours of badblocks and SMART testing.
 
To follow up: I just had another drive die today, but it was a refurb (not a used pull, but a 2018 refurb from Amazon), and it failed during burn-in. It accumulated 4000 uncorrectable errors over 48 hours, and the drive somehow partitioned itself AND changed device names in Linux. Clearly a bad egg, but covered by warranty. Not looking good for refurb drives, as this is one of two I bought to test, haha. Second one going in now...
 