WD Easystore 14TB External Hard Drive $199

Are people cracking these open to throw in a NAS or something? I never understood what kind of normie would spend a decent chunk of money on a single large-capacity spinning drive to put all of their data on.
I don't own this one, but I bought five 8TB drives a few years ago for a NAS. By the time I use up that space, I'm sure there will be 20 or 30TB drives to replace the ones I have (and I rip my 4K discs, as well as non-duplicate Blu-ray discs, to MKV without any additional compression).

By then, I'd need 8K media to fill up more than 2 or 3 disks (not counting parity).
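To put rough numbers on that (just a back-of-the-envelope sketch; the per-rip sizes are assumptions, not measurements):

[CODE=python]
# Rough capacity math for uncompressed rips on a 5-drive array with single parity.
TB = 1_000_000_000_000            # drive makers use decimal terabytes
GB = 1_000_000_000

usable_bytes = 4 * 8 * TB         # 5 x 8TB minus one drive's worth of parity

remux_4k = 70 * GB                # ~70GB per untouched 4K remux (assumption)
remux_bluray = 35 * GB            # ~35GB per 1080p Blu-ray remux (assumption)

print("4K remuxes that fit:   ", usable_bytes // remux_4k)      # ~457
print("1080p remuxes that fit:", usable_bytes // remux_bluray)  # ~914
[/CODE]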
 
You say it "really sucks," but in what way? It's braindead simple and easy to set up with the W10 GUI, the performance is in theory slightly faster than hardware RAID (the low end found on mainstream consumer equipment) since you eliminate protocol transfers between the Windows driver stack and the RAID card, and it's motherboard-agnostic. I've taken a W10 software RAID array and imported it into another computer without issues twice now. I have a feeling you haven't worked with Windows software RAID in quite a while.
Less reliable when compared to a hardware controller. And I'm not talking about Windows software RAID, but the semi-hardware RAID like Intel Rapid Storage.

And I'm sure Windows software RAID works 'good enough' like other software RAID solutions, but if it were so great you would find it in the enterprise alongside things like FreeNAS.
 
Less reliable when compared to a hardware controller.
Do you actually have datapoints to demonstrate that?
And I'm sure Windows software RAID works 'good enough' like other software RAID solutions, but if it were so great you would find it in the enterprise alongside things like FreeNAS.
That's like saying that if a Toyota Camry powertrain were so good for regular consumers, you would see it used in more industrial and commercial vehicles like 18-wheelers. Different market with a different purpose. W10 software RAID has limitations in the types of RAID supported and in how many ports are built into the motherboard, and it doesn't have the built-in caching of higher-end hardware RAID, but it's great for a home NAS with cheap shucked SATA drives like this. IMO it's simpler and better than the cheapo hardware RAID built into mainstream motherboards or the cheap cards on Newegg/Amazon, and it will work great, just like a Camry is great as a runabout for a family. Most enterprises are using SANs now anyway to pool everything, so it's all kind of apples and oranges.
 
I don't own this one, but I bought five 8TB drives a few years ago for a NAS. By the time I use up that space, I'm sure there will be 20 or 30TB drives to replace the ones I have (and I rip my 4K discs, as well as non-duplicate Blu-ray discs, to MKV without any additional compression).

By then, I'd need 8K media to fill up more than 2 or 3 disks (not counting parity).
Yeah, I think these drives, when they go on sale, have made storage cheap and plentiful enough for massive storage repositories. A petabyte at home was unthinkable just a few years ago, and now it's quite possible.
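Rough arithmetic on the petabyte point (a sketch, assuming shucked 14TB drives at this $199 price and ignoring parity, chassis, and power):

[CODE=python]
# Drive count and drive-only cost for a petabyte of raw capacity.
import math

drives_needed = math.ceil(1000 / 14)           # 1 PB = 1000 TB, 14TB per drive -> 72 drives
print("drives needed:", drives_needed)
print("drive cost: $", drives_needed * 199)    # ~$14,328 in drives alone
[/CODE]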
 
Do you actually have datapoints to demonstrate that?

That's like saying that if a Toyota Camry powertrain were so good for regular consumers, you would see it used in more industrial and commercial vehicles like 18-wheelers. Different market with a different purpose. W10 software RAID has limitations in the types of RAID supported and in how many ports are built into the motherboard, and it doesn't have the built-in caching of higher-end hardware RAID, but it's great for a home NAS with cheap shucked SATA drives like this. IMO it's simpler and better than the cheapo hardware RAID built into mainstream motherboards or the cheap cards on Newegg/Amazon, and it will work great, just like a Camry is great as a runabout for a family. Most enterprises are using SANs now anyway to pool everything, so it's all kind of apples and oranges.
Yep, my own experience. That's all the data I need.

Yep, but if you put a commercial transmission in a Camry, it will definitely hit a million miles like the big rigs do (at least on the trans). Most everyone I know with a home lab is running DAS with dedicated controllers and ZFS, which is software, but a totally different animal than the consumer solutions. And the stock Dell PERC and HP controllers, as well as LSI, are popular for a reason.

I'll take a SAS RAID any day over any SATA software solution. Those systems have been shaken down and tested well beyond what my use case would be, so I don't have to worry.
 
Yep, my own experience. That's all the data I need.
So no data, OK.
And the stock Dell PERC and HP controllers, as well as LSI, are popular for a reason.
I'm guessing you also have no data points to show that those hardware controllers are actually more popular than software RAID (quite unlikely), and yes, that matters when your only real argument is "it's better because it's more popular" rather than showing specific performance differences.

Most home users are just throwing together a simple software array on a PC or NAS, and I think if you actually try one out you will find it's simpler and cheaper to set up, there is virtually no performance difference versus sub-$100 RAID controllers with a modern processor, and it's easy to swap between systems as long as you stick with the same operating system. And let's face facts, money does matter, or you wouldn't be using inexpensive shucked white-label 5400rpm Easystores. It's all likely moot anyway, because chances are most of us are accessing this data over a gigabit Ethernet connection, which can easily be saturated.

For example, here's my W10 software RAID benchmark on my ancient Dell T20 ($199 computer purchased in 2014):
[CrystalDiskMark screenshot: disk.PNG]


So even on gigabit, let alone wifi, you'd never notice the difference, so why waste time and money on a hardware RAID array? And a software array is easy to import when/if you move or something breaks.
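Here's the gigabit math behind that, as a sketch (the ~6% protocol overhead figure is a rough assumption):

[CODE=python]
# Why array sequential speed beyond ~120MB/s is invisible over gigabit Ethernet.
raw_MBps = 1_000_000_000 / 8 / 1e6     # gigabit line rate = 125 MB/s
overhead = 0.06                        # rough TCP/IP + SMB overhead (assumption)
usable_MBps = raw_MBps * (1 - overhead)

array_seq_MBps = 300                   # roughly what the software array benches sequentially

print(f"network ceiling: ~{usable_MBps:.0f} MB/s")
print("bottleneck is the wire, not the RAID implementation:", array_seq_MBps > usable_MBps)
[/CODE]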
 
So no data, OK.

I'm guessing you also have no data points to show that those hardware controllers are actually more popular than software RAID (quite unlikely), and yes, that matters when your only real argument is "it's better because it's more popular" rather than showing specific performance differences.

Most home users are just throwing together a simple software array on a PC or NAS, and I think if you actually try one out you will find it's simpler and cheaper to set up, there is virtually no performance difference versus sub-$100 RAID controllers with a modern processor, and it's easy to swap between systems as long as you stick with the same operating system. And let's face facts, money does matter, or you wouldn't be using inexpensive shucked white-label 5400rpm Easystores. It's all likely moot anyway, because chances are most of us are accessing this data over a gigabit Ethernet connection, which can easily be saturated.

For example, here's my W10 software RAID benchmark on my ancient Dell T20 ($199 computer purchased in 2014):
[CrystalDiskMark screenshot: disk.PNG]

So even on gigabit, let alone wifi, you'd never notice the difference, so why waste time and money on a hardware RAID array? And a software array is easy to import when/if you move or something breaks.
I don't need to post the data I have for Internet arguments. You can either understand what I'm saying or don't--I could care less.

I never argued performance, but reliability. Performance-wise, if it's running at the max the hardware can do, then it's maxed out. Whoopdee freaking doo. So you can build a RAID on the cheap that has decent performance. So what? Anyone can. Build one that scales without issues and is enterprise-level reliable, and you'll learn what the people at Backblaze have been learning through their Storage Pod evolution as they keep adding enterprise quality into the mix--enterprise stuff is where reliability is at.

Personally, I use the Easystore drives in stock form as not-to-be-counted-on off-site backups that augment other backups. I wouldn't trust them with my data, as that is not what they are designed for. And I wouldn't trust a RAID system to be reliable either unless it was built for it, especially any type of stripe (5/6/etc.). But hey, it's your data, you can choose how you want to lose it. ;)
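For anyone curious why big stripes make people nervous, here's the classic spec-sheet math as a sketch (the 1-in-10^14-bits URE rating is a pessimistic assumption; real-world rates are usually much better):

[CODE=python]
# Probability of hitting at least one unrecoverable read error (URE) while reading
# the surviving drives back during a RAID 5 rebuild, at the spec-sheet URE rate.
ure_per_bit = 1e-14          # vendor spec for many consumer drives (worst-case assumption)
drive_tb = 14
surviving_data_drives = 4    # example: 5-drive RAID 5 rebuilding after one failure

bits_read = surviving_data_drives * drive_tb * 1e12 * 8
p_clean = (1 - ure_per_bit) ** bits_read
print(f"chance of at least one URE during the rebuild: {1 - p_clean:.0%}")
[/CODE]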

This isn't a money thing--I can build enterprise stuff on the cheap as well. You don't have to spend big money for SAS and LSI controllers. In fact, many times they are actually cheaper than consumer SATA.

And that T20 setup has ridiculously low 4k performance--not sure what is going on there...
 
I don't need to post the data I have for Internet arguments.
You don't have any. You already cited "personal experience" as if that could be at all valuable given how few systems you have likely built and managed lately, especially if you're already so biased against software RAID that you likely haven't even used it in the last decade. Software RAID used to suck; my whole point is to open your eyes, but you're being rather pigheaded considering you have this entrenched position based on, well, nothing.
I never argued performance, but reliability.
Without the ability to demonstrate that inexpensive hardware RAID is more reliable than software RAID.
This isn't a money thing--I can build enterprise stuff on the cheap as well. You don't have to spend big money for SAS and LSI controllers. In fact, many times they are actually cheaper than consumer SATA.
Link a build and some data to show its higher performance and reliability and not "trust me, I say so, I don't care if you believe me" as that's not an argument.
And that T20 setup has ridiculously low 4k performance--not sure what is going on there...
What are you smoking? :confused: 1) That's way above average performance for archival platter drives, especially for their age, and especially since you can see those platters are almost full. 2) Seek time is not improved by RAID, derp. 3) It's not really relevant to common-use archival file storage; streaming (large videos) is, and that will max out around 100MB/s for most, which means this is already well over network capacity, which is my point.
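A quick sketch of the streaming math (the bitrates are assumptions, roughly in line with UHD and Blu-ray remuxes):

[CODE=python]
# Simultaneous video streams a gigabit link can carry vs. what the array can feed.
gigabit_usable_mbps = 940       # practical gigabit throughput (assumption)
array_seq_mbps = 300 * 8        # ~300 MB/s sequential from the array = 2400 Mbit/s

for name, mbps in [("4K remux", 70), ("1080p remux", 35)]:   # assumed bitrates
    print(f"{name}: {gigabit_usable_mbps // mbps} streams over gigabit, "
          f"{array_seq_mbps // mbps} streams from the array itself")
[/CODE]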
 
You don't have any. You already cited "personal experience" as if that could be at all valuable given how few systems you have likely built and managed lately, especially if you're already so biased against software RAID that you likely haven't even used it in the last decade. Software RAID used to suck; my whole point is to open your eyes, but you're being rather pigheaded considering you have this entrenched position based on, well, nothing.

Without the ability to demonstrate that inexpensive hardware RAID is more reliable than software RAID.

Link a build and some data to show its higher performance and reliability and not "trust me, I say so, I don't care if you believe me" as that's not an argument.

What are you smoking? :confused: 1) That's way above average performance for archival platter drives, especially for their age, and especially since you can see those platters are almost full. 2) Seek time is not improved by RAID, derp. 3) It's not really relevant to common-use archival file storage; streaming (large videos) is, and that will max out around 100MB/s for most, which means this is already well over network capacity, which is my point.
This is typical Internet troll behavior. I have over 50 systems running right now on 3 different sites. One of the sites is a commercial operation that until recently had a system with the PITA Intel RAID in it. I don't think you've built anything other than a Windows software RAID, and hence that's what you keep touting.

Let's get down to some fundamentals about computers--anything that can be done in hardware can be done in software and vice-versa. This isn't something I made up; this was part of the core concept behind the computer engineering degree at a major university. Extending this to RAID, it is obvious that RAID can be implemented either way. The benefit and drawback of software is that it can be changed. This is also the strength and weakness of a hardware implementation. The core drawback with a software-based RAID (any of them--and this even includes hardware RAID systems that store the configuration only in the controller or on the drives) is that any inadvertent bit changes can render the RAID useless. In my experience, this happens an order of magnitude more on 'software' RAIDs than on hardware ones. That's my point, and if you don't agree then fine. Like I said, I could care less.
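To illustrate the 'either way' point, the parity at the heart of RAID 5 is the same XOR whether a dedicated controller or the host CPU runs it. A minimal sketch, not any particular implementation:

[CODE=python]
# RAID 5-style parity: XOR the data blocks to get parity, XOR the survivors
# plus parity to rebuild a lost block. Same math in hardware or software.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]               # three data blocks in one stripe
parity = xor_blocks(data)                        # written to the parity drive

rebuilt = xor_blocks([data[0], data[2], parity]) # drive holding data[1] has died
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)
[/CODE]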

I don't have to demonstrate reliability--Backblaze has documented that very well. Just go through their pod designs and see the points of failure. I haven't had any failures with any of the hardware RAID systems I've implemented, going back to the mid-1990s.

I'm not here to argue, but to share what I know. If you want to play Internet debate, find someone else to troll. My Dell R710 was under $200 with drives. Reliability is definitely higher because this stuff is old and still works. Performance, as I've mentioned before, will be the same when you're maxing out things--read.

So that T20 setup is what you would call running correctly? No one building a proper setup will ever accept 3MB/sec unless you have massive IOPS; it doesn't matter what you're doing with it. Frankly, those numbers are embarrassing.
 
This is typical Internet troll behavior.
Anyone who disagrees with you and points out that you haven't even attempted to show any data to back your claims is a troll in your book? I've remained on topic and demonstrated more than adequate performance for any mainstream home network using free software RAID.
I have over 50 systems running right now on 3 different sites.
So the translation is that you don't have any recent experience with software RAID in a home use environment, the intended market for shucking a mainstream consumer drive like this.
The core drawback with a software-based RAID (any of them--and this even includes hardware RAID systems that store the configuration only in the controller or on the drives) is that any inadvertent bit changes can render the RAID useless. In my experience, this happens an order of magnitude more on 'software' RAIDs than on hardware ones. That's my point, and if you don't agree then fine. Like I said, I could care less.
With no datapoints to reinforce your opinion, you just keep repeating "trust me". Virtually every home use NAS on the market today is also implementing software RAID, and there are no widespread reports of RAID failures.
backblaze has documented that very well.
Are you willing to share that documentation?
going back to the mid 1990s.
Exactly my point: data points (that you haven't shared) about software RAID from the mid-1990s are irrelevant because both the hardware and the software have changed so much.
Reliability is definitely higher because this stuff is old and still works.
And I have four software RAID systems operating with demonstrably adequate performance without fault running continuously for six years now.

The problem I have is that you are making unsubstantiated outdated claims that will cost people time and money. So far your only justification is "OMG it is because I say so, how dare you question that you troll!" which doesn't help anyone.
No one building a proper setup will ever accept 3MB/sec
Speaking of troll behavior, I have already explained why sequential performance is the relevant benchmark for large-array home use, typically 10GB+ video files, and that benchmark is already at three times the capacity of a gigabit LAN, far more than is needed. If you have a bunch of tiny files that you need to serve as well, they can fit on my 2TB SSD that's holding the OS, and as was already pointed out, no RAID array, software or hardware, is going to improve the seek times, so again, what are you smoking? You're not even making a good-faith argument at this point.

Edit: tl;dr version: Why not keep things simple and show us what hardware RAID system you are using at your house (we'll estimate costs), run CrystalDiskMark on it real quick (takes minutes), and post a quick screengrab.
 
The problem is that you're trying to play a 'right and wrong' game here. We are actually waaaay the F off-topic and honestly you are going to get this thread locked if you keep spouting off. Stop your trolling and leave it alone. You don't agree--fine.

No translation or interpretation or manipulation of what I said needed--I was very clear in what I stated.

Stop your Internet troll right/wrong arguing. :meh: Yes, today's NAS units are primarily software-based, but they are highly specialized and rigorously tested as commercial products. And there are only a few brands that have gotten this right and become the 'standard' (Synology, QNAP). The others have had issues, and data retrieval when these units mess up has not been trivial.

You know how to search--you keep touting it all the time--so go find it. o_O

You really don't understand stuff very well. Let me spell it out for you more since you're having trouble--everything I have built from the 1990s until recently (just a month ago) is still working. If something worked back then and is still working, the methodology is still relevant. Geez, you sound like one of these dumb millennials who think the world started when they were born. :ROFLMAO:

Because hardware has improved in reliability over time in certain areas, you can achieve much better reliability today than in the past. But that being said, 6 years in a home environment is nothing compared to 10 years in a 24x7 data center. Your RAID would have been destroyed in that environment. And remember, this point pertains to reliability.
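To put a hedged number on why fleet size and duty cycle matter (the ~2% annualized failure rate is an assumption in the ballpark of published drive-stats reports, not anyone's exact figure):

[CODE=python]
# Chance of at least one drive failure, assuming an annualized failure rate (AFR)
# of ~2% per drive and independent failures (both simplifying assumptions).
AFR = 0.02

def p_any_failure(drives, years):
    return 1 - (1 - AFR) ** (drives * years)

print(f"5-drive home array over 6 years:  {p_any_failure(5, 6):.0%}")   # ~45%
print(f"60-drive storage pod over 1 year: {p_any_failure(60, 1):.0%}")  # ~70%
[/CODE]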

Awesome, you've admitted you have a problem. Please go seek help with that and stop fighting here.

No one in their right mind is going to read a single statement either you or I have written and go, 'oh I will do just what is said here without using my brain'. If so, fool and money parted. And the point you're missing is that there is no right and wrong to any of this because it is up to an individual's situation. Someone with their ripped collection of movies off of owned media potentially cares a lot less about data loss than someone who has 100 years of scanned family photos to lose.

People who know what they're doing choose the right tool for the job. I'm not saying that a cheap Windows RAID doesn't fulfill some requirements out there, but it's absolutely trivial how little a proper hardware RAID that brings reliability to the game actually costs. Honestly, if anyone puts any important data on the Windows RAID that you're recommending, they're playing with fire IMO and risking their data unnecessarily. But I'll leave that up to the reader to decide, not force a decision down their throat with an Internet bibliography to 'prove' my point.

And I've already reiterated how 3MB/sec outside of high-IOPS use is completely messed up. You can defend that with 'use case' arguments as to why this deficiency doesn't matter, but it exists when it shouldn't, and a proper setup wouldn't have this issue. I've never seen such skewed results from CrystalDiskMark before--usually the biggest drop-off in speed is in the last test, not straight from test 2. If you want to ignore that, then that's fine. But anyone reading this just has to compare your CrystalDiskMark result with the millions of others out there to see something is wrong.
 
The problem is that you're trying to play a 'right and wrong' game here. We are actually waaaay the F off-topic and honestly you are going to get this thread locked if you keep spouting off. Stop your trolling and leave it alone. You don't agree--fine.
I find it interesting that you can offer this advice that you can't seem to take yourself.
But that being said, 6 years in a home environment is nothing compared to 10 years in a 24x7 data center. Your RAID would have been destroyed in that environment. And remember, this point pertains to reliability.
6 years and going, and hardware like this has planned obsolescence. You don't seem to understand where you are or what product you're looking at. I keep trying to keep you on topic to understand that this is a CONSUMER white-label inexpensive drive; this is not enterprise hardware that will be used in some data center.
And I've already reiterated how 3MB/sec outside of high-IOPS use is completely messed up.
300+MB/sec, learn to read. Take three minutes, far less time than it takes you to keep regurgitating the same unsubstantiated claims, and take a screenshot of any of your home RAID setups' performance in CrystalDiskMark.

Or, let me guess, you don't even have one... *facepalm*
 
I keep seeing this thread at the top, hoping people are commenting because the deal is back in stock, as I'm interested in taking advantage of this deal.

Nope. Just two manbabies that are both right about different use cases measuring their e-peens.
 
Hardware RAID is just software RAID running on dedicated hardware. :-p

I used to be all about hardware RAID too, but even I can admit times change. Today, all I need to know is that if my company can trust a petabyte of data to software RAID, it has to be good enough for my movies and home videos.
 
Hardware RAID is just software RAID running on dedicated hardware. :-p

I used to be all about hardware RAID too, but even I can admit times change. Today, all I need to know is that if my company can trust a petabyte of data to software RAID, it has to be good enough for my movies and home videos.
This is my take too. I just put them on there. I have a parity drive just in case one dies. If 2 die, I also have a separate backup of the videos (never mind that I have all the original Blu-rays and 4K discs, and generally another copy of any videos I took). I just don't get the need for expensive hardware (including drives) for a home system. If you're really that worried, buy extra drives and back the entire thing up every week. If you're really paranoid, back it up and move the drive(s) offsite (and bring the previous backup back onsite for the next backup). If you need more security than that, it's time to back up everything to the cloud.
 
Hardware RAID cards are dirt cheap, so price isn't even an issue. It's just another solution to a problem. For the average home use it really doesn't matter which way you go, but for a corporation there could be a big difference in capability.
 
This is my take too. I just put them on there. I have a parity drive just in case one dies. If 2 die, I also have a separate backup of the videos (never mind that I have all the original Blu-rays and 4K discs, and generally another copy of any videos I took). I just don't get the need for expensive hardware (including drives) for a home system. If you're really that worried, buy extra drives and back the entire thing up every week. If you're really paranoid, back it up and move the drive(s) offsite (and bring the previous backup back onsite for the next backup). If you need more security than that, it's time to back up everything to the cloud.

Wait, what? Did you say that if you need more SECURITY to back up to the CLOUD!? "The Cloud" is called "The Cloud" because you have no idea how exactly it works internally, who has access to it, how it is archived there, and if the security of your connection to it has been compromised in any way or not. This is by definition LESS SECURE than hardware that you have under your complete control and supervision. The only things "The Cloud" has going for it is that it is offsite, cost deferred over dedicated hardware, and relatively convenient IF you have a good internet connection.
 
Just wanted to provide a data point that I do use shucked Easystores for my arrays, with a more expensive controller: https://www.newegg.com/areca-arc-1883ix-16-2g-sas/p/N82E16816151167 I did get it open box for $530, back in 2017. Prior to that, I had used an Areca controller purchased in 2008, I believe it was the ARC-1220, which maintained an array from 2008 to 2017 across Windows Vista, 7, and 10. Here's hoping 10 hangs around a while, although if it does change and the software RAID setup changes, everyone using it will be stuck on 10. The old controller still works; I just updated to a newer one to add a second array for the Easystores. I was able to carry the array from the old controller over to the new one easily. Another big point is the port capacity you're adding by using a hardware RAID controller, since you will end up needing to add ports anyway.

I'm not sure CrystalDiskMark is a great tool to measure performance; it's just raw numbers that don't necessarily reflect your actual use case (lots of large contiguous files, or millions of tiny ones, lots of reads but few writes, etc.). Here are some tests of the 8x 8TB RAID 6 array using the same settings, one from 2017 and one I just did:
[CrystalDiskMark screenshot: Areca 1883, 10-29-2017]
[CrystalDiskMark screenshot: Areca 1883, 04-01-2020]

As you can see, these numbers mean little. I do actually have 10Gbit fiber between the server and my PC, which was cheap too with some Mellanox cards. Realistically I get about 350MB/sec on large transfers over the fiber between the arrays and my NVMe.

I think software RAID can be fine for some people. The level of reliability you want is really up to you. I really don't want the headache of restoring from backup, so I chose a more robust solution that also offers me great expandability and performance. I also back up everything to Backblaze, since I have the bandwidth for it. My own past experience with a lesser Areca card was my data point for it being reliable. I had previously used Seagate 1TB drives that had that famous firmware bug, and one of course died on me. When the warranty replacement showed up, it rebuilt into the array fine.
 
You're surely measuring the up-to-8GB of cache on that $1000+ RAID card, which apparently draws as much power as some GPUs and, as was mentioned, is not going to be found on the sub-$100 RAID cards typical for home use, making such an investment (a cheap RAID card) a waste of time/money/slots unless your mobo lacks ports. If your motherboard dies or becomes obsolete, importing to another W10 system is simple, whereas with the failure of a $1K RAID card, sourcing another can be quite expensive and a challenge most people don't need. Most home users are going to be worried about sequential read/write performance over a gigabit wired (or worse, wireless) network, which is more than saturated by a simple software RAID setup even on the most basic of cheap systems like my example. Any numerous tiny files can be accessed from the SSD OS drive, in my case 2TB, rather than the platter drives (pics, Plex thumbnails, etc.), and while that is still benchmarking higher than my SSD, even the 4K performance was at gigabit network saturation. Thus, no, software RAID does not "suck" for home use, and it is usually the simplest and cheapest option for mainstream use.
 
You're surely measuring the up-to-8GB of cache on that $1000+ RAID card, which apparently draws as much power as some GPUs and, as was mentioned, is not going to be found on the sub-$100 RAID cards typical for home use, making such an investment (a cheap RAID card) a waste of time/money/slots unless your mobo lacks ports. If your motherboard dies or becomes obsolete, importing to another W10 system is simple, whereas with the failure of a $1K RAID card, sourcing another can be quite expensive and a challenge most people don't need. Most home users are going to be worried about sequential read/write performance over a gigabit wired (or worse, wireless) network, which is more than saturated by a simple software RAID setup even on the most basic of cheap systems like my example. Any numerous tiny files can be accessed from the SSD OS drive, in my case 2TB, rather than the platter drives (pics, Plex thumbnails, etc.), and while that is still benchmarking higher than my SSD, even the 4K performance was at gigabit network saturation. Thus, no, software RAID does not "suck" for home use, and it is usually the simplest and cheapest option for mainstream use.
Sure, just wanted to provide some data points. Absolutely it's grabbing the cache; I just posted it to show how useless a metric it is. I never said software RAID sucked. Truth be told, for home use most will simply not use RAID at all, software or otherwise. Just a few plain disks, and hopefully some kind of backup implementation.
 
I'm beginning to wonder if a hardware RAID card somewhere kicked somebody's puppy... :)
 
Can we move this thread to the RAID discussion forum rather than get my hopes up for cheap drives in the hot deals subforum?
Yeah, it's getting off topic, you are right. Currently, the 12TB Easystore is available for $180.
 
Wait, what? Did you say that if you need more SECURITY to back up to the CLOUD!? "The Cloud" is called "The Cloud" because you have no idea how exactly it works internally, who has access to it, how it is archived there, and if the security of your connection to it has been compromised in any way or not. This is by definition LESS SECURE than hardware that you have under your complete control and supervision. The only things "The Cloud" has going for it is that it is offsite, cost deferred over dedicated hardware, and relatively convenient IF you have a good internet connection.
It's a backup. If your house burns to the ground tomorrow, you can get your data back. If you're really worried about the man looking at your info, encrypt it before sending it up.
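A minimal sketch of the 'encrypt it before sending it up' part, assuming the cryptography package and a hypothetical backup.tar archive (whole-file encryption is fine as a sketch; a real backup tool would chunk or stream):

[CODE=python]
# Encrypt a backup archive locally before it ever leaves the house.
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # lose the key, lose the backup -- store it safely offline
fernet = Fernet(key)

with open("backup.tar", "rb") as f:          # hypothetical archive name
    ciphertext = fernet.encrypt(f.read())

with open("backup.tar.enc", "wb") as f:      # upload this file, not the plaintext
    f.write(ciphertext)

with open("backup.key", "wb") as f:
    f.write(key)
[/CODE]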
 
It's a backup. If your house burns to the ground tomorrow, you can get your data back. If you're really worried about the man looking at your info, encrypt it before sending it up.

I'm not worried about The Man looking at my data, so much as I am offended that you recommended a cloud backup as a valid method for those seeking "more security." The Cloud and everything it represents is the polar opposite of "more security."
 
I'm not worried about The Man looking at my data, so much as I am offended that you recommended a cloud backup as a valid method for those seeking "more security." The Cloud and everything it represents is the polar opposite of "more security."
Then you need to explain what you mean, because I don't follow you at all. I don't know anyone whose house has burned to the ground, but I know several people who have had catastrophic failures, and they've retrieved all their data from the cloud every single time.
 
I'm not worried about The Man looking at my data, so much as I am offended that you recommended a cloud backup as a valid method for those seeking "more security." The Cloud and everything it represents is the polar opposite of "more security."
There's a difference between security against losing data and security against someone else getting your data. You need an offsite backup for anything important, and a well-secured cloud solution is the easiest way to keep out anyone without a subpoena.
 