SSD for NAS: do you think high-capacity SSDs will replace HDDs?

Looking at today's SSD prices, it's safe to say that before long we won't see any new computer without an SSD.

But the question is: will prices keep falling until high-capacity SSDs are widely available and affordable?
 
It will happen. Pricing on NAND, and thus on SSDs, has been falling gradually and will continue to do so, as has happened with all storage over time.

The real question is when SSDs will be as cheap per TB as mechanical HDDs, and there is no definitive answer, since HDD manufacturers keep releasing bigger-capacity drives. I do think it will happen, though, probably within the next 10 years.
 

I thought SSDs could handle a significantly lower number of read/write cycles compared to normal HDDs.
 
SSDs already way out-volume and out-density spinning rust; economy of scale and depreciation are the only things keeping HDDs viable. HDDs are also plenty fast for so many applications that most of the benefits of SSDs are just not yet needed across the board.

Basically, as fabs are built and production volume increases, spinning rust is done. We're already at the point where no single 'node', be it a laptop or a server, should be running off of an HDD for primary storage. For most consumer devices that aren't compute-heavy, we're likely to see something that looks more like a scaled-up mobile SoC with all components in a single package. Operating systems come to look more like firmware, etc.

I thought SSDs could handle a significantly lower number of read/write cycles compared to normal HDDs.

They do; however, whether that matters depends on the specification and the workload. To be fair, the cheapest HDD is going to have far better sustained write performance than the cheapest SSD. But in situations where sustained write performance matters, there are SSD solutions available.

The bigger point is that sustained writes at anywhere approaching the maximum write speed of the SSD are a very niche use case. Slower or more sporadic writes are simply not going to stress a modern SSD, and generally speaking an SSD is still going to outlast an HDD: HDDs are prone to mechanical failure, and on average the SSD will still be functional when the HDD starts to die.

Last point- since flash is more expensive today, if you do need sustained write performance, don't have the budget for SSDs (or the right kind), and don't necessarily need the read latency and speed advantages of SSDs, HDDs make sense. For consumers, this can be portable drives, backup drives, or NAS drives, for example. For enterprise, the use of 'tiered' storage where data is intelligently placed on HDD or SSD is pretty common.
 
New laptop 240GB SSDs are already under $30, which makes them about the same price as a new mechanical drive.
(I don't count 120GB SSDs, as that's not enough storage for most people.)
SSDs in desktops are cheap enough that I've been gradually replacing all the spinners in the office over the past couple of years.

It's the higher-capacity drives (1TB+) that are still 3-4 times the price, and it will likely be a few more years before you see 1TB SSDs for the same price as mechanical 1TB drives.

With server-level drives, it will take longer.
Server/data center SSDs are 4-8 times the price of a spinner, depending on how many writes they can handle.
However, due to the higher reliability and much faster speeds of SSDs, I've been able to use RAID 5 instead of the RAID 10 I've been using with mechanical drives on some of my servers. Fewer drives make the cost more competitive.
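As a rough sketch of why the drive count drops, here is the arithmetic with illustrative numbers (an 8TB usable target and 2TB drives, assumed only for the example):

    # Back-of-the-envelope drive counts for the same usable capacity (illustrative numbers only)
    usable_tb=8; drive_tb=2
    raid10_drives=$(( 2 * usable_tb / drive_tb ))   # mirrored stripes need 2x raw capacity -> 8 drives
    raid5_drives=$(( usable_tb / drive_tb + 1 ))    # striped data plus a single parity drive -> 5 drives
    echo "RAID 10: ${raid10_drives} drives, RAID 5: ${raid5_drives} drives"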
 
With respect to SSD longevity, they're supposed to generally manage five years of normal use without a problem now. I used to worry about making drives last longer than that, so I've traditionally made certain my drives led very cushy lives, came from the best families (largely Western Digital, because I have had to RMA way too many Seagates), and had the best living conditions possible (fans and filtered intakes, just because).

That having been said, there's currently not a single drive in my house that's five years old that I would consider re-deploying if the chassis it was in died, because they're just too small to bother with. What use am I gonna have for a 120GB drive other than maybe a temporary boot volume? At that scale I might as well boot off a 16GB thumbdrive and just do the rest of the storage over the network to the boat anchor full of more sizeable disks. Available bus connectors are more precious than Ben Franklins. Even I'm switching to SSDs for anything but bulk storage (>1TB per unit) because the speed difference is downright ludicrous. It's only going to be a short span of time now before we see the storage market resegment into faster flash-based SSDs and distinctly larger spinning disks that fully emphasize economy over speed, with little overlap between the two.
 
If and when the SSD chip manufacturers decide that they need to create chips for server drives, that will greatly benefit all of us consumers. I would be thrilled to get a 4-8 TB SSD at a "reasonable" price. Then I would use my spinners only in a NAS build (if I ever do that) or for backup. Aside from noise, power, and heat issues, I would need less space in my case for multiple drives.

It's only a matter of time. The issue is just how much time. And by that point, spinners might be up to 24-30-36 TB, more than I can possibly use.

x509
 
My latest build, 86 TB usable, is 20-30 TB short of the end-goal needs. Additional parts are already ordered.
Chip-based memory tech grows fast, but the devices creating content aren't falling behind either :)
....and a 16x12TB HGST 4Kn HDD RAID 10 array has lovely performance :)
 
With respect to SSD longevity, they're supposed to generally manage five years of normal use without a problem now.

Note that while MTBF's are still in play, with respect to any flash-based device, you're looking at read / write cycles for longevity. The more an SSD is written to, the less 'life' it has in it, and SSDs vary greatly in terms of write endurance.
 
With QLC SSD speeds in the lower-capacity models, I can see HDDs being faster than SSDs for the first time :) . Well, at 2TB and above they are on par for sequential transfers once the SLC cache is exhausted, and these high-capacity drives are used for large files and transfers anyway.
On one hand HDDs are getting ridiculously large (larger than SSDs, too) these days, but on the other, most consumers don't even need such capacities. Unless I stored every movie or series I ever watched, I can't see what use I'd have for 10TB besides virtual machines, which currently take 1-2TB including snapshots and backups.
The time may come when the minimum HDD price, for the lowest-capacity models of the day, almost matches that of SSDs of similar capacity and quality, because I wouldn't trust a crappy no-name manufacturer with my data just because 1TB costs $30 or so. And if consumer needs don't advance so fast, then 2-3TB SSDs may become widespread for storage too.
 
QLC is far too slow to be usable in a NAS, especially since as you start to fill up the storage you lose your SLC cache and you will never get anything even close to decent speeds. Unless they can start making 10+ TB SSDs with MLC flash or some new tech that is equivalent or faster in speed, for under $500 then they will never replace mechanical drives in NAS or SAN systems.
 
OK, call me uninformed. What is a QLC SSD and why is it slower than a spinning HDD?

x509
 

SLC is 1 bit per cell.
MLC is 2 bits per cell.
TLC is 3 bits per cell.
QLC is 4 bits per cell.

Multiple bits per cell are achieved by storing different voltage levels in the cell.
The more bits per cell, the longer it takes to write or erase a block of data.
More bits per cell also generally means fewer writes before the cell wears out.

As for slower than a spinner, that depends.
Reads on a QLC drive will be much faster than a spinner.
Small writes should also be faster.

Large sustained writes are the problem, especially on an almost full drive.
Even on the older TLC drives, large sustained writes can end up slower than a spinner if the drive has to stop to erase unused blocks.
QLC is even worse on the slowdown.

However, 4 bits per cell gives roughly a third more storage than TLC for the same cost, lowering the price of the SSD.
Most disk activity is reads, so most users won't notice performance difference.
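A quick way to see the density trade-off, using nothing but the bits-per-cell figures above:

    # Capacity per die area relative to TLC, from bits per cell alone
    for entry in SLC:1 MLC:2 TLC:3 QLC:4; do
        name=${entry%:*}; bits=${entry#*:}
        awk -v n="$name" -v b="$bits" \
            'BEGIN { printf "%s: %d bit(s)/cell -> %.0f%% of TLC capacity in the same silicon\n", n, b, b/3*100 }'
    done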
 
QLC is far too slow to be usable in a NAS, especially since as you start to fill up the storage you lose your SLC cache and you will never get anything even close to decent speeds. Unless they can start making 10+ TB SSDs with MLC flash or some new tech that is equivalent or faster in speed, for under $500 then they will never replace mechanical drives in NAS or SAN systems.

If the NAS is used like a SAN, i.e. there are bootable shares on it or there are large, live databases on it (or both), then yeah, QLC (and most TLC) is a bad idea.

If the NAS is just used as mass storage, then QLC would be great- if it weren't more expensive than spinning storage. That's what really makes it less suitable for general consumer and most small office NAS, as just about any NAS these days can saturate 1Gbit, and nearly any eight-drive array of spinners can saturate 10Gbit. For general network mass storage, flash doesn't really present meaningful advantages like it does in desktop scenarios.
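Rough arithmetic behind the "spinners can saturate 10Gbit" point; the per-drive throughput here is an assumed typical figure, not a quote:

    # Aggregate sequential throughput of an eight-drive spinner array vs a 10GbE link
    drives=8
    mb_s_each=160                    # assumed sequential MB/s per modern 3.5-inch HDD
    ten_gbe_mb_s=1250                # 10 Gbit/s is roughly 1250 MB/s
    echo "$(( drives * mb_s_each )) MB/s aggregate vs ${ten_gbe_mb_s} MB/s for 10GbE"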
 
Note that while MTBF's are still in play, with respect to any flash-based device, you're looking at read / write cycles for longevity. The more an SSD is written to, the less 'life' it has in it, and SSDs vary greatly in terms of write endurance.

That is precisely why I said "normal use". Please don't pointlessly muddy the waters like this. Users generally know if their use is "normal" or not, and it's not that hard to figure out how many writes a day someone who isn't a "normal" user is doing and extrapolate longevity from there. Multiple sites have done SSD longevity tests now (feel free to Google around) and the prevailing result appears to be that if anything, vendors are being rather conservative about the volume of writes their units will tolerate before expiring, and those writes are still a pretty big number.

By example, the Samsung 860 QVO is promising 350TB of writes for the 1TB unit, which is a number that the kiddies are bound to get hyperbolic about and shout "OMG it'll be ded next week LOL", except that's enough to go ~950GB per day, every day, for an entire year before reaching the expected EOL. In reality it's quite likely to keep chugging along for some time more, just like spinning disks mysteriously manage to live way past their MTBF. An early failure will probably happen for someone with a serious torrenting problem, but they should darned well know their use is atypical (and will probably be purchasing much larger spinning disks anyway). For everyone else (including software developers, but excluding lunatic DBAs) their daily writes are going to be quite a bit less than the 191 GB per-day-every-day necessary to burn the thing up in five years.
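For anyone who wants to check that arithmetic, using only the 350TB endurance figure above:

    # Daily writes needed to hit the warrantied 350 TBW in one year vs five years
    tbw=350
    echo "one year:   $(( tbw * 1000 / 365 )) GB/day"          # ~958 GB/day
    echo "five years: $(( tbw * 1000 / (5 * 365) )) GB/day"    # ~191 GB/day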
 
Users generally know if their use is "normal" or not

They really, really do not. Users just use the product. When they do something that they consider normal (how nebulous is that?) but is not what the product is designed for and the product subsequently fails to perform or fails outright, they blame the product. Further, the point of the clarification is that MTBF is nearly useless for SSDs.

vendors are being rather conservative about the volume of writes their units will tolerate before expiring, and those writes are still a pretty big number.

I don't disagree at all- however, there is absolutely no reason to believe that this will continue to be the case and to end one's product search without considering what the manufacturer actually says and warranties for. This could change in the middle of a production cycle and manufacturers are under no obligation to provide any notification of the change when the specifications don't change.

Products performing out of spec can be a nice bonus, but it's not something that should be relied upon nor significantly factored into recommendations.
 
If the NAS is used like a SAN, i.e. there are bootable shares on it or there are large, live databases on it (or both), then yeah, QLC (and most TLC) is a bad idea.

If the NAS is just used as mass storage, then QLC would be great- if it weren't more expensive than spinning storage. That's what really makes it less suitable for general consumer and most small office NAS, as just about any NAS these days can saturate 1Gbit, and nearly any eight-drive array of spinners can saturate 10Gbit. For general network mass storage, flash doesn't really present meaningful advantages like it does in desktop scenarios.


The write speed on large QLC SSDs is slower than 5400RPM HDDs for most of their range though (160MB/s at best), and smaller QLC drives are just abysmal. No way they will replace mechanical drives when their capacity is smaller, their speed is slower, their endurance is lower (0.3 drive writes per day....), and their cost is higher.
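For reference, here is roughly what a 0.3 DWPD rating works out to in total writes; the capacity and warranty length below are assumptions for the arithmetic only, since the post gives just the DWPD figure:

    # Converting DWPD to total TB written over the warranty (capacity and warranty are assumed examples)
    dwpd=0.3
    capacity_tb=2
    warranty_years=3
    awk -v d="$dwpd" -v c="$capacity_tb" -v y="$warranty_years" \
        'BEGIN { printf "~%.0f TB written over the warranty period\n", d*c*y*365 }'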
 

If you need to write to the entire range regularly, sure, but that isn't a common consumer use-case- and 160MB/s isn't really that slow in terms of actual usage.

Now, I'm not arguing against cases where such write speeds actually are useful, but for day to day stuff, including running an operating system, launching applications and games, and media storage and playback? It's a non-issue.

If you're looking for a scratch drive for editing high resolution and / or high bitrate video, you'd probably do well to steer clear of TLC drives in addition to QLC drives. If you're looking for read and write caching for a NAS, you should definitely steer clear of TLC and QLC, same for large database applications and so on.
 
Further, the point of the clarification is that MTBF is nearly useless for SSDs.

Okay, so why are you "clarifying" this at me when I've said not one thing about MTBF? I was very clear about a volume of writes over a period of time, with that period of time being how long people generally expect their drives to last.

I don't disagree at all- however, there is absolutely no reason to believe that this will continue to be the case and to end one's product search without considering what the manufacturer actually says and warranties for. This could change in the middle of a production cycle and manufacturers are under no obligation to provide any notification of the change when the specifications don't change.

And what does this have to do with anything? More importantly, what does it have to do with the question posed by the thread? Also, the last time I checked, little details like warranty duration and warrantied volume of writes were considered to be "specifications" by just about everyone. Switching to cheaper and less reliable methods on a whim and failing to uphold warranties as stated at the time of purchase is how one winds up being Seagate's next acquisition.

On the basis of cost and, more importantly, greed*, I don't foresee SSD/flash replacing high-volume-per-unit storage anytime soon. SSD/flash is quantifiably faster than spinning disks (trying to compare them to large-cache 10k SAS disks to claim otherwise because of some overlap is just being thick), so there's a valid use case for streamlining computing operations with them, but the medium is still new enough that there's going to be a "luxury tax" applied to their costs for a couple of years yet, and they're nowhere near as cost-efficient to manufacture per unit of storage capacity as spinning disks.

* - Let's not forget that these are largely the same actors who've gotten spanked for price-fixing a half dozen times in the last 20 years.
 
For day-to-day stuff, sure. But here we were talking about using them in a NAS scenario, where I would write (and read) very large files. Smaller QLC drives (for example the 1TB Samsung) use fewer channels of their controller (because there are fewer chips) and max out at 80MB/s by their specs once the SLC cache is filled. The 2TB model is the worthwhile minimum, at 160MB/s.
My desktop HDDs reach up to 210MB/s when empty and about 175MB/s when half full. This is all in terms of sequential writes.
 
Okay, so why are you "clarifying" this at me when I've said not one thing about MTBF? I was very clear about a volume of writes over a period of time, with that period of time being how long people generally expect their drives to last.

The part I quoted and responded to with the MTBF point was:

With respect to SSD longevity, they're supposed to generally manage five years of normal use without a problem now.

Volume of writes is expressed in bytes, MTBF is expressed in time, and you referenced time.

the last time I checked, little details like warranty duration and warrantied volume of writes were considered to be "specifications" by just about everyone. Switching to cheaper and less reliable methods on a whim and failing to uphold warranties as stated at the time of purchase is how one winds up being Seagate's next acquisition.

If the new production method meets the warrantied specifications of the old production method, then there's no need to update. You're misquoting me here.

On the basis of cost and, more importantly, greed*, I don't foresee SSD/flash replacing high-volume-per-unit storage anytime soon.

Completely? No. But certainly replacing more and more of that market. Remember that cost is not just unit cost, but also cost to house and cost to power, when considering datacenter applications. For many enterprise environments, unit costs that differ on orders of magnitude may be inconsequential when considered alongside other operating costs.
 
The write speed on large QLC SSDs is slower than 5400RPM HDDs for most of their range though (160MB/s at best), and smaller QLC drives are just abysmal. No way they will replace mechanical drives when their capacity is smaller, their speed is slower, their endurance is lower (0.3 drive writes per day....), and their cost is higher.

This is just aggressively wrong. I happen to have a brand new Samsung 860 QVO (which happens to be QLC) and a new Western Digital 2TB "Enterprise" disk right across the room from me that the nice ninjas at UPS brought this very afternoon. Out of the box on an ASUS Prime motherboard with a Ryzen 7 2700X CPU and 16GB of Corsair LPX (I know how the folks here looove their specs) with Fedora 30 freshly installed, it's easily demonstrated to be wrong. Benching with dd bs=4096 oflag=nocache count=500000 if=/dev/zero of=/dev/whichdisks, the spinning disk is slogging along at ~132MB/s (not bad really) and the SSD is crushing it at >400MB/s. ...and that's pretty much what all the benchmarking sites are showing as well. Pretty much every 1TB SSD on the market right now is showing sustained writes over 300MB/s. Feel free to check userbenchmarks, man. In general, spinning disks are simply not faster than decent SSDs. Not for writes and not for reads, either.

That having been said, people tend to punish NAS in the enterprise with a lot more writes (and IO in general) than a home user would. SSDs would wind up needing replacing much more often than spinning disks, and that's going to annoy the people responsible for managing the racks (and potentially cost more)--especially if they're following ITIL and a bunch of PHBs have to sign off on each and every freakin' replacement. Home NAS is only a tiny little sliver of a market. It's not going to be enough to push SSDs to the top. ...but it is absolutely not because "they're slower". I can think of a couple outfits here in town that are already building out SSD arrays just because they are crunching ridiculous number sets. They're not under a five nines obligation and only care about how fast the numbers crunch, so no PHB-induced drag.
 
Volume of writes is expressed in bytes, MTBF is expressed in time, and you referenced time.

...and yet that still does not make what I said an MTBF. It was an estimation of the expectation users have that a drive (as well as the whole computer) is likely to last about five years, after which they'll probably just replace the whole machine if it doesn't up and die first, combined with typical write loads per day of the average user. Those write loads translate into an SSD no longer being the obvious weak link in the chain. Now stop with the pedantry. If someone has to have those things explained to them, then they're probably not qualified to be involved in this conversation in the first place.
 
Considering I've been building ssd-based NASs and other storage systems for the last 7+ years, it'll eventually be all flash for enterprise stuff.

As a prosumer, I already have 10gig networking and all SSD volumes in my home NAS for certain workloads, and regular HDD for bulk.

You can easily build out fairly large NAS devices with cheap consumer NAND as well. What you have to remember is that while a handful of consumer SSDs will suck for sustained write/wear life, when you have 64+ in an array, that stuff doesn't matter so much.

The wear-life of my latest consumer SSD is 1600TB... make an array of 256 of them and you get 409PBW.
At 1GB/sec (10gige) write speeds, fully saturated, that'll take 113611 hours to wear out.

That's 12 years...
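A quick sanity check of that math, using the same figures:

    # 256 drives x 1600 TBW each, written at a sustained 1 GB/s
    drives=256; tbw_each=1600
    total_tb=$(( drives * tbw_each ))        # 409600 TB, i.e. ~410 PBW
    hours=$(( total_tb * 1000 / 3600 ))      # ~113,700 hours at 1 GB/s (rounding down to 409 PB gives the 113,611 above)
    echo "${total_tb} TB total -> ${hours} hours -> ~$(( hours / 8760 )) years"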

Not saying that is what you would build, but 256 1TB SSDs at $100ea is only $25k, which is just the cost of a single server these days... load up a few jbod boxes and throw gluster/etc on there and you are good to go.


One of my old Infiniflash boxes. 512TB in 3U.
You can do 1-2PB in this form factor today.
 
This is just aggressively wrong. I happen to have a brand new Samsung 860 QVO (which happens to be QLC) and a new Western Digital 2TB "Enterprise" disk right across the room from me that the nice ninjas at UPS brought this very afternoon. Out of the box on an ASUS Prime motherboard with a Ryzen 7 2700X CPU and 16GB of Corsair LPX (I know how the folks here looove their specs) with Fedora 30 freshly installed, it's easily demonstrated to be wrong. Benching with dd bs=4096 oflag=nocache count=500000 if=/dev/zero of=/dev/whichdisks, the spinning disk is slogging along at ~132MB/s (not bad really) and the SSD is crushing it at >400MB/s. ...and that's pretty much what all the benchmarking sites are showing as well. Pretty much every 1TB SSD on the market right now is showing sustained writes over 300MB/s. Feel free to check userbenchmarks, man. In general, spinning disks are simply not faster than decent SSDs. Not for writes and not for reads, either.

That having been said, people tend to punish NAS in the enterprise with a lot more writes (and IO in general) than a home user would. SSDs would wind up needing replacing much more often than spinning disks, and that's going to annoy the people responsible for managing the racks (and potentially cost more)--especially if they're following ITIL and a bunch of PHBs have to sign off on each and every freakin' replacement. Home NAS is only a tiny little sliver of a market. It's not going to be enough to push SSDs to the top. ...but it is absolutely not because "they're slower". I can think of a couple outfits here in town that are already building out SSD arrays just because they are crunching ridiculous number sets. They're not under a five nines obligation and only care about how fast the numbers crunch, so no PHB-induced drag.

As mentioned before, once you fill that small SLC cache in the QLC drives, write speed drops off a cliff. You are only quoting SLC cache speeds, not real drive speeds. You will very quickly run out of any space to use as SLC cache if using the drive for a NAS.
Why are you comparing it to a "2TB mechanical"? Give us a model number at least so we all know what you are talking about. Right now all I can do is guess based on the disk size that it is a very outdated model, given that modern generation drives are 10+ TB.
 
I'd also like to point out that dd is the worst kind of benchmark you can run.

Load up fio, use direct IO, and use a test file at least 2x the size of your cache if you want to benchmark disks.

https://fio.readthedocs.io/en/latest/fio_doc.html
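For instance, a run along those lines might look like this; the device path and size are placeholders, and --size should comfortably exceed the drive's SLC cache so you see steady-state speeds:

    # Steady-state sequential write test with direct IO, bypassing the page cache
    # (writing to the raw device destroys its data; point --filename at a test file on a mounted filesystem to avoid that)
    fio --name=seqwrite --filename=/dev/sdX --rw=write --bs=1M --size=64G \
        --direct=1 --ioengine=libaio --iodepth=16 --numjobs=1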

-- Dave

So if you'd like to actually dispute the results as being significantly different from what I'm saying, go right ahead. ...but considering that the OP appears to have just been making things up (and not even researching to see if their results were being repeated elsewhere), I wasn't really interested in going to the trouble of formal testing with fio and bonnie++. dd was "chosen" because it's practically guaranteed to be on any Linux/Unix install. There's also the matter of literally dozens of websites reporting relatively the same thing. As to "2x the size of your cache", you might want to consider that maybe a block size of 4k and a count of 500,000 (totalling 2GB) was chosen to do just that.
 
2GB is small... a lot of these drives have 20GB+ SLC cache onboard. I like to usually use at least half the drive.

But it doesn't really matter, you're just benching synthetic workloads that don't exist for 99% of users.
 
Dagmar dSurreal Simply ignoring all evidence and saying "I know I'm right, therefore I'm right" doesn't actually make you right. Since you are too lazy to bother taking 5 seconds out of your time to even know what you purchased, here you go:

[attached image: QVO.png]


That SLC cache on the 2TB drive also takes up as much as 312GB of your drive space. As your drive fills up, you can no longer use the SLC cache as much.
 
Where do you get 312GB? The cache is variable between 6GB and 78GB. I don't see how it gets as large as 312GB. I'm not sure there's that much physical SLC on the card.

Also, over time, the SLC cache should be flushed to the QLC to prepare it for new write workloads, or read caching, so it'll always be used in one way or another. It all depends on what your steady-state workload looks like.
 

The 860 QVO only has 6 GB of "real" SLC cache. The rest (the "max" size listed, minus the real) is actually QLC NAND configured dynamically to act as SLC cache.

When TLC/QLC NAND is acting as SLC for caching, its storage efficiency is only a fraction of what it normally would be. Where normally three (TLC) or four (QLC) bits of information would be stored per cell, only a single bit is. In order to get to a given amount of SLC cache, it takes 3-4 times as much TLC/QLC NAND.

So, for the 4 TB model to get the full 78 GB of SLC cache, it's going to reserve 288 GB ((78-6)*4) of the drive's capacity for the purpose. And the amount of cache the unit is able to reserve will decrease as the drive fills with regular data.
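Spelled out with the same figures as above (the 860 QVO 4TB numbers from this post):

    # QLC capacity consumed when the full dynamic SLC cache is in use
    max_cache_gb=78     # maximum SLC cache
    fixed_slc_gb=6      # dedicated SLC portion
    bits_per_cell=4     # QLC cells used as SLC hold 1 bit instead of 4
    echo "$(( (max_cache_gb - fixed_slc_gb) * bits_per_cell )) GB of QLC reserved"    # 288 GB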
 
Aah, we used to call that SMLC back in the day when we were doing that with MLC chips.

I figured folks had moved past that by now.
 
I figured folks had moved past that by now.

To move past it with today's technology, we'd have to go back to using only MLC / two bits-per-cell, at most, and that would cost :).

I'll take QLC for mass storage with high bandwidth, lower latency, less space / noise / power usage than spinning storage.
 
It's less about what it would cost and more about what's happened in the form factors...

2.5in SSDs used to be PACKED with NAND, folded in half sometimes with two PCBs stacked together... now it's just a tiny PCB with 1 or 2 chips on it at most.

With the 96-layer 3d nand, you could fit a shit-ton of MLC capacity inside a 2.5in drive, but then the manufacturers wouldn't be making the margins they are making currently.

"Good enough" wins again.
 
I mean, you can get those for enterprise purposes, but the bare cost of >2TB consumer drives makes the whole proposition not terribly attractive from a manufacturing perspective. You're right, the margins would be terrible at the consumer level.

But they'll come down.

One big challenge is that fast mass storage is simply not in demand at the consumer level. If I could have two M.2 NVMe drives at 16TB each, I could mirror them, and hell, there's my whole NAS. I could probably hang that off of a PoE 10Gbase-T port....

And it'd be stupid expensive, when I can grab a stack of spinners for pennies and for home NAS purposes, have the same basic performance :/.
 
Yeah, I took a hybrid approach to that.

I have 4x480GB SSD in a raidz1 and 8x4TB HDD raidz2 with a 280GB optane 900P SLOG.
All running on freenas on a xeon-d 1541 and wired up with 10gige to my desktop and servers.

That said, I have a dozen fusion-io cards I could make into a real all flash san, but they make for real nice VM stores in my hypervisors.
 
I think it'll happen eventually.

I finally grabbed a 4TB 860 EVO a couple of weeks ago.. $50 off at Dell minus 15% Ebates cash back. It's still high, but I pulled 3 other SSDs to swap this in, so that'll offset some cost and simplify my PC.
 

"high"

I get that there's a cost for convenience, but man...
 