Discussion in '[H]ot|DEALS' started by LurkerLito, Nov 7, 2019.
Picked up one of these yesterday. Moved 6TB to it and now doing a sector scan. So far so good!
Picked up 4 of these for my new NAS at home. Can't wait!
Just picked up one today, put it in my unraid server. Running preclear on it. Going to be my parity drive. Almost bought 3 but I really don’t need the space at the moment. Sitting at 42TB. Still have 8 hot swap bays left.
Ahh, so they're all HGST HE10-based now?! Does this mean they can run at up to 7200 RPM when connected via SATA? I might be wrong, but I seem to recall that the "WD Red" style capped out at 5400 RPM (which isn't really a problem for NAS usage, of course), while the HGST HE10 models could run up to 7200 RPM along with some other benefits, though these may have been firmware-locked or otherwise downgraded in these externals.
I'm considering a long-overdue rebuild of my home server/NAS box, so the blueprint is somewhat open. I have a single shuckable 8TB I picked up a while ago, but besides that things are open. Given this is a server box, I'm not limited space-wise when it comes to expansion of drives in the long run (if I reuse an old case, it has room for 4 drives in a backplane, 3-4 more in other mounts, and if necessary bay devices can be added later), though I'll probably "start" it with just a few drives depending on RAID requirements.
As for RAID, I'm not sure what the best option is anymore. The ancient thing I have is basically running as JBOD, but it would be nice to have some redundancy. I don't know if it's worth going with a hardware RAID card these days, but even with a software setup I was thinking RAID 5 originally (maybe RAID 6?) in order to get a little redundancy. There is also the ZFS/BTRFS file-system way of handling RAID; I'm given to understand that the ZFS/BTRFS/unRAID approach even allows for mixed drive sizes, if I recall.
Max space is hard to judge, but these days even a single 8TB drive is a significant storage increase for me in terms of minimum requirements, so that will be an improvement. To date I've been limiting what I save and what I task the server with, thanks to its limitations in both space and computing power (it's got a Core2Quad-era platform and does little except SAMBA/NFS shares for that reason), but the rebuild will be a big change.
Here's one of my white-label 8TB EMAZ drives; unraid identifies it as an HE10. Just because it's based on that model doesn't mean the specs are the same: these are 5400 RPM drives. The regular WD Reds are 5400 RPM; the WD Red Pros are 7200 RPM.
You're getting into rumor territory; it has long been suspected that these are relabeled WD drives that failed initial QA and are gimped in size, speed, and/or features.
To my knowledge there has been no confirmation, but they seem to be exceptionally fast and reasonably reliable drives especially for the cost.
You are correct: hardware RAID is less recommended since the controller can fail, whereas unraid/ZFS use JBOD disks with a software RAID solution, which makes them hardware-agnostic and allows different sizes of disks.
That being said, with different disk sizes ZFS treats every member of the array as the smallest disk. For example, 3TB, 6TB, 8TB, and 8TB drives in a raid-z (one disk of redundancy) would all count as 3TB, for 9TB usable total; you could upgrade the 3TB to a 6TB and then they would all count as 6TB.
On the flip side, unRAID pools all the disks together, so the same 3+6+8+8 with one parity drive would be 17TB usable (3+6+8); the parity drive has to be equal to or larger than the largest array drive.
However, there are performance differences between them: ZFS is going to perform more like a traditional hardware RAID array in terms of speed (about 350MB/s for that 4-disk array), while unRAID is generally limited to the speed of an individual disk (about 150MB/s).
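Not from the thread, just a quick sketch of the capacity math above, using the same 3+6+8+8 drive mix:

```python
# Hypothetical capacity sketch: usable space for the same mixed drives
# under ZFS raid-z1 vs an unRAID-style parity pool. Sizes in TB.
disks_tb = [3, 6, 8, 8]

# ZFS raid-z1: every member is sized to the smallest disk; one disk's
# worth of capacity goes to parity.
zfs_usable = min(disks_tb) * (len(disks_tb) - 1)

# unRAID-style: the largest disk becomes parity; the remaining disks
# pool their full individual capacities.
drives = sorted(disks_tb)
unraid_usable = sum(drives[:-1])  # everything except the (largest) parity drive

print(zfs_usable)    # 9  (matches the 9TB figure above)
print(unraid_usable) # 17 (3 + 6 + 8)
```

Same four drives, nearly double the usable space under the unRAID model, which is the trade-off against ZFS's striped-read speed.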
Performance probably isn't a huge concern unless you have/plan on getting 10 gig networking or if you're running VMs/databases on your main data array.
The biggest drawback IMO to ZFS is that to expand the array you either have to replace all of the smallest disks, or add an entirely new vdev (which generally means a minimum of 4 new drives purchased).
unRAID allows individual drive additions to the array.
Lastly there is cost: ZFS in general is free, while unraid costs $60-130 depending on how many drives you have; you do get about 2 months of free trial though (keep in mind that if you plan on adding SSDs for cache, those drives count towards the device total for licensing).
Depending on your growth expectations, a 4-8 bay QNAP/Synology NAS might be the more cost-effective route, with your server used solely as compute, but that's up to you.
The biggest thing I wanted was the ability to expand my array one disk at a time, so unRAID is what I ended up going with. Previously I had a Drobo and a QNAP NAS, both of which allowed expanding by one drive as well (they use custom ZFS OSes behind the app portals).
I will offer some thoughts on hardware raid controllers.
Modern controllers store the array configuration on the disks, not on the controller. If the controller fails, you replace it with an equal or newer generation from the same manufacturer and it reads the configuration in from the disks. Some controllers might ask you to confirm it, but generally it is trivial to recover.
Most controllers allow you to add disks to existing arrays. Some even allow you to migrate between types of RAID. For example, you could start out with a RAID 1 pair, migrate to RAID 5 when you add two more disks, and then migrate to RAID 6 when you add the next bunch of disks. You do these disk additions and migrations with CLI tools while the system is online, and once complete you add the newly available space to the formatted space (how that works is OS-specific).
Caveat to the above, most controllers require you to have some amount of cache on the controller to do many of these functions.
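To make the migration example concrete, here is a rough sketch of the usable capacity at each stage, assuming equal-size disks (the 8TB size is illustrative, not from the post):

```python
# Hedged sketch of usable capacity for the RAID 1 -> 5 -> 6 migration path.
def usable_tb(n_disks, size_tb, level):
    """Usable capacity for equal-size disks at a given RAID level."""
    if level == 1:
        return size_tb                   # mirror: one disk's worth
    if level == 5:
        return (n_disks - 1) * size_tb   # one disk spent on parity
    if level == 6:
        return (n_disks - 2) * size_tb   # two disks spent on parity
    raise ValueError("unsupported RAID level")

print(usable_tb(2, 8, 1))  # RAID 1 pair: 8
print(usable_tb(4, 8, 5))  # add two disks, migrate to RAID 5: 24
print(usable_tb(6, 8, 6))  # add two more, migrate to RAID 6: 32
```

Each migration step adds capacity while keeping (or increasing) the redundancy level, which is the appeal of the online-migration feature.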
I had an Adaptec RAID controller fail last year (this was my fault, I didn't have enough air flow over it) and I bought another one on ebay and plugged everything in and with a single click on the controller BIOS everything was back online.
I just wanted to be sure that everyone understands that losing a controller isn't a big deal. I am not saying it is the best for every use case, but I feel like it doesn't get a fair shake sometimes.
I have an Adaptec RAID 71605 16-Port 6Gb SAS PCI-E 3.0 running my arrays inside my server. I see on ebay it is presently $85, which includes the 1 gig of cache, the capacitor-based backup battery, and even the cables. For working with platter SATA drives this controller is very fast. Hardware RAID controllers were much slower before SSDs came to the server space; now that they have to be capable of calculating parity at SSD throughputs, they can more easily handle spinning-platter throughputs.
I would also be interested in a couple.
True, but unless you keep a spare controller around it'll take some time to get a replacement in; I don't want that much downtime.
I can pull spare hardware out of my closet and hook it up should something go awry; it may be performance-gimped, but it'll function.
This is why if you're using any type of setup that will not allow the data to be pulled off a drive just by hooking the drive up to another system, you need to have a redundant system as well. Otherwise, downtime can be a real issue.
There is SnapRAID, which is free. It requires the parity drive to be at least as large as the largest data drive. It allows adding drives or replacing them.
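The parity-sizing rule (same one unRAID follows) boils down to a single check; drive sizes here are made up for illustration:

```python
# Sketch of the SnapRAID/unRAID parity rule: the parity disk must be
# at least as large as the largest data disk. Sizes in TB, hypothetical.
data_tb = [3, 6, 8]
parity_tb = 8

parity_ok = parity_tb >= max(data_tb)
print(parity_ok)  # True: an 8TB parity drive covers an 8TB largest data drive
```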
If downtime is that expensive, keeping a spare $80 controller on hand doesn't seem that daunting. If you are running large arrays, you likely exceed the on board connectivity of most mainboards around.
It's not expensive; I just have a heavy disdain for downtime.