Best Buy 12TB WD easystore External HDD $179.99

mls1995

2[H]4U
Joined
Jan 20, 2007
Messages
3,411
Picked up one of these yesterday. Moved 6TB to it and now doing a sector scan. So far so good!
 
Joined
Jun 4, 2008
Messages
2,607
Just picked up one today and put it in my unRAID server. Running preclear on it; it's going to be my parity drive. Almost bought 3, but I really don't need the space at the moment. Sitting at 42TB, and I still have 8 hot-swap bays left.
 

RanceJustice

Supreme [H]ardness
Joined
Jun 9, 2003
Messages
5,796
All of the 8-12TB easystores appear to be the same HGST HE10 drives (I would think it's a safe guess the 14s are too).

It all depends on your storage speed/space needs and redundancy. Keep in mind that if you're doing a RAID setup with parity, the larger the drive, the longer the rebuild will take (and the greater the likelihood another drive will fail during the rebuild).
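To put rough numbers on the rebuild point, here's a back-of-envelope sketch in Python (the sustained 150MB/s is an assumption, and optimistic for an array still serving data during a rebuild):

def rebuild_hours(capacity_tb, speed_mb_s=150):
    # A rebuild has to touch every sector, so time scales with capacity.
    return capacity_tb * 1e12 / (speed_mb_s * 1e6) / 3600

for tb in (4, 8, 12):
    print(f"{tb}TB: ~{rebuild_hours(tb):.0f} hours")  # ~7, ~15 and ~22 hours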
I have room in my cases for 20 drives; I only have half the slots filled with 8s and still have free space.
One of my coworkers is using 12s since he only has space for 7 drives; we have the same amount of usable space.

So the first questions are: what RAID form do you need, what's your max drive count, and what's the max usable space you think you need?
Ahh, so they're all HGST HE10 based now?! Does this mean they can run at up to 7200rpm when connected via SATA? I might be wrong, but I seem to recall that the "WD Red" style capped out at 5400rpm (which isn't really a problem for NAS usage, of course), while the HGST HE10 models could run up to 7200 along with some other benefits, though these may have been firmware-locked or otherwise downgraded in these externals.

I'm considering a long-overdue rebuild of my home server/NAS box, so the blueprint is somewhat open. I have a single shuckable 8TB I picked up a while ago, but beyond that things are open. Given this is a server box, I'm not limited space-wise when it comes to expanding drives in the long run (if I reuse an old case, it has room for 4 drives in a backplane, 3-4 more in other mounts, and if necessary bay devices can be added later), though I'll probably "start" it with just a few drives depending on RAID requirements.

As for RAID, I'm not sure what the best option is anymore. The ancient thing I have is basically running as JBOD, but it would be nice to have some redundancy. I don't know if it's worth going with a hardware RAID card these days, but even with a software setup I was thinking RAID 5 originally (maybe RAID 6?) to get a little redundancy. There's also the ZFS/BTRFS filesystem way of handling RAID; I understand the ZFS/BTRFS/unRAID approach even allows for mixed drive sizes, if I recall correctly.

Max space is hard to judge, but these days even a single 8TB drive is a significant storage increase for me in terms of minimum requirements, so that will be an improvement. To date I've been limiting what I save and what I task the server with, thanks to its limitations in both space and computing power (it's a Core 2 Quad era platform and does little except Samba/NFS shares for that reason), but that will be a big change with the rebuild.
 

Spartacus09

[H]ard|Gawd
Joined
Apr 21, 2018
Messages
1,254
Ahh, so they're all HGST HE10 based now?! Does this mean they can run at up to 7200rpm when connected via SATA? I might be wrong, but I seem to recall that the "WD Red" style capped out at 5400rpm (which isn't really a problem for NAS usage, of course), while the HGST HE10 models could run up to 7200 along with some other benefits, though these may have been firmware-locked or otherwise downgraded in these externals.
[screenshot: unRAID identifying a white-label 8TB EMAZ drive as an HE10]

Here's one of my white-label 8TB EMAZ drives, which unRAID identifies as an HE10. Just because it's based on that model doesn't mean the specs are the same: these are 5400 RPM drives, like the regular WD Red (the WD Red Pro is 7200).
You're getting into rumor territory; it has long been suspected that they are relabeled versions of the WD drives that failed initial QA and are gimped in size, speed, and/or features.
To my knowledge there has been no confirmation, but they seem to be exceptionally fast and reasonably reliable drives, especially for the cost.

I'm considering a long-overdue rebuild of my home server/NAS box, so the blueprint is somewhat open. I have a single shuckable 8TB I picked up a while ago, but beyond that things are open. Given this is a server box, I'm not limited space-wise when it comes to expanding drives in the long run (if I reuse an old case, it has room for 4 drives in a backplane, 3-4 more in other mounts, and if necessary bay devices can be added later), though I'll probably "start" it with just a few drives depending on RAID requirements.

As for RAID, I'm not sure what the best option is anymore. The ancient thing I have is basically running as JBOD, but it would be nice to have some redundancy. I don't know if it's worth going with a hardware RAID card these days, but even with a software setup I was thinking RAID 5 originally (maybe RAID 6?) to get a little redundancy. There's also the ZFS/BTRFS filesystem way of handling RAID; I understand the ZFS/BTRFS/unRAID approach even allows for mixed drive sizes, if I recall correctly.

Max space is hard to judge, but these days even a single 8TB drive is a significant storage increase for me in terms of minimum requirements, so that will be an improvement. To date I've been limiting what I save and what I task the server with, thanks to its limitations in both space and computing power (it's a Core 2 Quad era platform and does little except Samba/NFS shares for that reason), but that will be a big change with the rebuild.
You are correct: hardware RAID is less recommended since the controller can fail, whereas unRAID/ZFS use JBOD disks with a software RAID solution, which makes them hardware-agnostic and allows different disk sizes.
That said, with different disk sizes ZFS sizes every member to the smallest disk. For example, 3TB + 6TB + 8TB + 8TB in a RAIDZ1 (one disk of redundancy) would treat every disk as 3TB, for 9TB usable total (you could later upgrade the 3TB to a 6TB and they would all count as 6TB).
On the flip side, unRAID pools all the disks together, so the same 3+6+8+8 with one parity drive gives 17TB usable (3+6+8); the parity drive has to be equal to or larger than the largest array drive. (See the capacity sketch below.)
However, there are performance differences between them: ZFS is going to perform more like a traditional hardware RAID array in terms of speed (about 350MB/s for that 4-disk array), while unRAID is generally limited to the speed of an individual disk (about 150MB/s).
Performance probably isn't a huge concern unless you have or plan on getting 10-gig networking, or you're running VMs/databases on your main data array.
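To make the capacity math above concrete, here's a quick sketch in Python (function names are mine; single parity assumed on both sides):

def zfs_raidz1_usable(disks_tb):
    # RAIDZ1 sizes every member to the smallest disk, minus one disk for parity.
    return min(disks_tb) * (len(disks_tb) - 1)

def unraid_usable(disks_tb):
    # unRAID dedicates the largest disk to parity and pools the rest as-is.
    return sum(sorted(disks_tb)[:-1])

disks = [3, 6, 8, 8]
print(zfs_raidz1_usable(disks))  # 9 (TB), as above
print(unraid_usable(disks))      # 17 (TB): 3+6+8, with an 8TB on parity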

The biggest drawback IMO to ZFS is that to expand the pool you either have to replace all of the smallest disks, or add an entirely new vdev (which generally means a minimum of 4 new drives purchased).
unRAID allows adding individual drives to the array.

Lastly there's cost: ZFS is free in general, while unRAID costs $60-130 depending on how many drives you have (you do get about 2 months of free trial, and keep in mind that if you plan on adding SSDs for cache, those drives count toward the device total for licensing).

Depending on your growth expectations, a 4-8 bay QNAP/Synology NAS might be a better fit cost-wise, with your server used solely for compute, but that's up to you.

The biggest thing I wanted was the ability to expand my array one disk at a time, so unRAID is what I ended up going with. Previously I had a Drobo and a QNAP NAS, both of which allowed expanding by one drive as well (they use their own custom RAID OSes behind the app portals).
 

nwrtarget

Gawd
Joined
Aug 10, 2010
Messages
879
I will offer some thoughts on hardware RAID controllers.
Modern controllers store the array configuration on the disks, not on the controller. If the controller fails, you replace it with an equal or newer generation from the same manufacturer and it reads the configuration back in from the disks. Some controllers might ask you to confirm it, but generally recovery is trivial.
Most controllers allow you to add disks to existing arrays. Some even allow you to migrate between RAID types: for example, you could start with a RAID 1 pair, migrate to RAID 5 when you add two more disks, and then migrate to RAID 6 when you add the next batch. You do these disk additions and migrations with CLI tools while the system is online, and once complete, you add the newly available space to the formatted space (how that works is OS-specific).

Caveat to the above: most controllers require some amount of onboard cache to do many of these functions.

I had an Adaptec RAID controller fail last year (my fault; I didn't have enough airflow over it). I bought another one on eBay, plugged everything in, and with a single click in the controller BIOS everything was back online.

I just wanted to be sure that everyone understands that losing a controller isn't a big deal. I am not saying it is the best for every use case, but I feel like it doesn't get a fair shake sometimes.

I have an Adaptec RAID 71605 16-port 6Gb SAS PCIe 3.0 card running my arrays inside my server. I see on eBay it is presently $85, which includes the 1GB of cache, the capacitor-based backup battery, and even the cables. For platter SATA drives this controller is very fast. Hardware RAID controllers were much slower before SSDs came to the server space; now that they have to be capable of calculating parity at SSD throughputs, they easily handle spinning-platter throughputs.
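For anyone curious what the controller is actually computing: single-disk parity is just an XOR across the stripe. A toy sketch in Python (illustrative only, not the controller's firmware logic):

from functools import reduce

def xor_blocks(blocks):
    # RAID-5-style parity: XOR the blocks column by column, so any single
    # lost block can be rebuilt by XOR-ing the survivors.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = xor_blocks(data)

# "Lose" the middle block, then rebuild it from parity plus the rest.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]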
 

jordan12

[H]F Junkie
Joined
Dec 29, 2000
Messages
9,576
I just went ahead and bought 10 to replace the 10 x 10TB I bought four months ago. Guess I'm going to have one hell of an eBay sale on 10TB drives. Love these Synology NASes; they make you so addicted to storage.

I would also be interested in a couple.
 

Spartacus09

[H]ard|Gawd
Joined
Apr 21, 2018
Messages
1,254
I just wanted to be sure that everyone understands that losing a controller isn't a big deal. I am not saying it is the best for every use case, but I feel like it doesn't get a fair shake sometimes.
True, but unless you keep a spare controller around, it'll take some time to get a replacement in, and I don't want that much downtime.
I can pull spare hardware out of my closet and hook it up should something go awry; it may be performance-gimped, but it'll function.
 

SamirD

2[H]4U
Joined
Mar 22, 2015
Messages
3,368
True, but unless you keep a spare controller around, it'll take some time to get a replacement in, and I don't want that much downtime.
I can pull spare hardware out of my closet and hook it up should something go awry; it may be performance-gimped, but it'll function.
This is why, if you're using any setup where the data can't be pulled off a drive just by hooking it up to another system, you need a redundant system as well. Otherwise, downtime can be a real issue.
 

Silentbob343

[H]ard|Gawd
Joined
Aug 2, 2004
Messages
1,782
The biggest drawback IMO to ZFS is that to expand the pool you either have to replace all of the smallest disks, or add an entirely new vdev... unRAID allows adding individual drives to the array. Lastly there's cost: ZFS is free in general, while unRAID costs $60-130 depending on how many drives you have...
There is SnapRAID, which is free. It requires the parity drive to be at least as large as the largest data drive, and it allows adding or replacing drives.
 

nwrtarget

Gawd
Joined
Aug 10, 2010
Messages
879
True, but unless you keep a spare controller around, it'll take some time to get a replacement in, and I don't want that much downtime.
I can pull spare hardware out of my closet and hook it up should something go awry; it may be performance-gimped, but it'll function.
If downtime is that expensive, keeping a spare $80 controller on hand doesn't seem that daunting. If you are running large arrays, you likely exceed the onboard connectivity of most mainboards anyway.
 

Spartacus09

[H]ard|Gawd
Joined
Apr 21, 2018
Messages
1,254
It's not expensive; I just have a heavy disdain for downtime.
 

Spartacus09

[H]ard|Gawd
Joined
Apr 21, 2018
Messages
1,254
This ^. The Best Buy 12 Days of Deals usually has them on sale, as does the New Year's sale that comes after.
 

SamirD

2[H]4U
Joined
Mar 22, 2015
Messages
3,368
This ^. The Best Buy 12 Days of Deals usually has them on sale, as does the New Year's sale that comes after.
Yeah, it's crazy how many times the 8/10/12TB have been on sale this year. Does anybody keep track of these somewhere? It would be interesting to analyze for patterns.
 

AP514

Limp Gawd
Joined
Feb 15, 2006
Messages
426
So what is the difference between the 12TB drives I shucked?

WD120EMFZ

WD120EMAZ
 

Luke M

Limp Gawd
Joined
Apr 20, 2016
Messages
423
So what is the difference between the 12TB drives I shucked?

WD120EMFZ

WD120EMAZ
The 12TB EMFZ is similar to the 14TB. 512MB cache, missing side screw hole. Possibly a defective 14TB drive.
 

CruisD64

2[H]4U
Joined
Mar 6, 2007
Messages
2,139
Bit on the 14TB. Time to upgrade my ZFS pool from 4TB drives to 8TB drives and this will let me do it. Thanks!
 

AP514

Limp Gawd
Joined
Feb 15, 2006
Messages
426
The 12TB EMFZ is similar to the 14TB. 512MB cache, missing side screw hole. Possibly a defective 14TB drive.
So does the EMAZ have 512MB cache or just 256MB?
Reason I ask is I have 3 x 12TB:
2 are the EMAZ
1 is the EMFZ
 

Spartacus09

[H]ard|Gawd
Joined
Apr 21, 2018
Messages
1,254
The only whites that aren't 256MB are the EMFZ 14TB; those have been reported to be 512MB, as noted.
 

coalfired

n00b
Joined
Jan 6, 2020
Messages
2
CrystalDiskMark for the WD120EMAZ 12TB I received shows the same read speed as the WD Red, which confirms suspicions and shows it's not SMR.

[screenshots: CrystalDiskMark results]


Samsung benchmark shows:

Sequential Read 200 MB/s
Sequential Write 199 MB/s
Random Read 408 IOPS
Random Write 1708 IOPS
 

SamirD

2[H]4U
Joined
Mar 22, 2015
Messages
3,368
Samsung benchmark shows:

Sequential Read 200 MB/s
Sequential Write 199 MB/s
Random Read 408 MB/s
Random Write 1708 MB/s
Wait, that's what the Samsung benchmark shows on the WD 12TB for random read and random write? Those numbers don't make sense. No platter drive can transfer faster randomly than sequentially.
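For rough context, the mechanics alone cap uncached, queue-depth-1 random IOPS far below the sequential numbers; a quick Python sketch (the ~8ms average seek is an assumption, and command queuing plus the drive's write cache are what push benchmark figures higher):

rpm = 5400
avg_rotation_ms = 0.5 * 60_000 / rpm  # ~5.6ms: half a revolution on average
avg_seek_ms = 8.0                     # assumed typical for a 3.5" drive
print(round(1000 / (avg_rotation_ms + avg_seek_ms)))  # ~74 IOPS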
 

coalfired

n00b
Joined
Jan 6, 2020
Messages
2
Wait, that's what the Samsung benchmark shows on the WD 12TB for random read and random write? Those numbers don't make sense. No platter drive can transfer faster randomly than sequentially.
Good spot, sorry, it was meant to be IOPS; I have amended!
 

deyer

n00b
Joined
Jun 13, 2006
Messages
37
Dang, too bad I needed hard drives a few months ago. I would have paid an extra $20 for 2TB more. However, the 10TBs have been working great, even with a faulty UPS that made the NAS lose power twice.
 

Joust

2[H]4U
Joined
Nov 30, 2017
Messages
3,306
Watching for when it's back, though I would really prefer the 14TB @ $200. I'll certainly be jumping on something soon.
 