Seagate IronWolf Pro 16TB SATA NAS Hard Drive, 7200 RPM, 2-Pack $560 shipped @Adorama

lifanus (Gawd) | Joined: Aug 26, 2008 | Messages: 870
Here is the Deal link from Adorama

Haven't posted for a while, but I just bought the 2-pack for my own NAS. It's the cheapest price I could find that comes with a full warranty from a legit retailer.

A single drive is usually $299. This deal works out to $17.50 per TB; buyers in some states may also owe sales tax. Cheers!

[Attached image: set16000ne02.jpg]
 
Hardly a deal.... Or is inflation really this high now?
It's a decent price per TB for nearly full-blown enterprise drives. You really can't compare these to the Easystore drives that are typically in the $14-15/TB range, as these have over 2x the warranty and design life. But lately even the all-out enterprise drives have been seeing deals like this, such as the 2-pack of WD Gold for $5xx a few months back, putting them in the same price range as these while being a notch better in quality.

There are also other deals out there, like this one, that push the price envelope even lower:
https://serverpartdeals.com/collect...6gb-s-512e-4kn-sed-3-5-recertified-hard-drive

This particular drive is at $12/TB with only a 2-year warranty because it is recertified by Seagate. But compare the two: an enterprise-design drive with a 2-year warranty at $12/TB versus an Easystore shuck, a consumer drive with a supposedly enterprise-derived design and a 2-year warranty at $14+/TB. It's a no-brainer which is the better deal.
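The per-TB math in the thread is easy to sanity-check. A quick sketch: only the Adorama 2-pack price ($560 for 2x16TB) comes from this thread; the other two prices are hypothetical round numbers chosen to match the $/TB figures quoted above.

```python
# Price-per-TB comparison. Only the Adorama 2-pack price comes from the
# thread; the other entries are assumed round-number prices matching the
# quoted $/TB figures.
deals = {
    "IronWolf Pro 16TB 2-pack (Adorama)": (560, 32),
    "Exos 16TB recertified (ServerPartDeals, assumed)": (192, 16),
    "Easystore 14TB shuck (typical sale, assumed)": (200, 14),
}

for name, (price_usd, capacity_tb) in deals.items():
    print(f"{name}: ${price_usd / capacity_tb:.2f}/TB")
```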
 
What is the difference between that drive, and this one? https://serverpartdeals.com/product...b-3-5-fastformat-manufacturer-recertified-hdd

The only thing I can see is "SED", but I am unsure what that is.


EDIT: Self-Encrypting Drive.
 
I’ve had one in my media server for a few years; the first one died within a month and was replaced by Seagate. It’s been great since then.

Only thing is these Seagates are LOUD compared to WD NAS drives and the white label drives from enclosures.
 
That's the case with any enterprise-class drive from any maker. It's typically a low 'grumble' that comes from them being built for reliability, not to be featherweights like their consumer cousins. If that's a problem, then definitely don't get enterprise drives.
 
No these are next level loud. I’m talking 10K Velociraptor loud. Like gerbil running amok in your walls loud.

I have WD Red Pro, white-label NAS, and even 10TB Black Gaming 7200RPM drives in my media server and Synology 1618+, and this IronWolf Pro is louder than all of them combined.

It’s so loud that it can be heard on the other side of the wall from my office in the family room since I have the shelving rack secured to the wall to avoid tipping over. I’ve added rubber dampeners but short of adding rockwool I’ll still be able to hear it.
 
So you've probably got a super power because there's no way I could hear even an entire raid of hard drives through a wall. Half the time I can't even hear my wife when she calls for me. :ROFLMAO:

Depending on a person's sensitivity to noise, enterprise drives may be over their limit. However, the tradeoff is reliability. No enterprise drives are silent, aside from SAS SSDs.

Personally, I'll take reliability over noise any day. So much so that I set every fan in a system or nas to 100% and it doesn't bother me--especially knowing I can push it and not worry about longevity or heat issues.
 
Lol maybe, I could live with hearing it in the office but even she asked what the sound was when we were watching TV.

The main reason we could hear it through the walls is that the storage rack is secured to the wall, so the sound and movement of the drive was like someone tapping a finger on the wall. It really is that loud and intense. It’s actually the actuator arm slamming around, and it’s only really bad during write-intensive or random I/O on small files, like loading photos.

I’m not too worried about drive reliability, tbh; that’s what the NAS is for. Running RAID6, I can have 2 of 6 drives fail with dual parity and still hot-swap and rebuild. And as NAS drives they already have good reliability compared to consumer drives.

The IronWolf is over 3 years old and one of the first samples, so maybe they’ve updated them to not be so aggressively loud. Just saying, my experience is that they are way louder than anything I’ve used or heard since the Raptor X, and that includes enterprise-grade WD Golds.
 
Well, the specific link you gave is for "manufacturer recertified" drives, which is a refurb any way you spell it. That said, Exos drives are better than IronWolf in general: in theory more durable, built for a longer life and a heavier workload, but probably not something the average home server needs to worry about.
My link and the link SamirD gave were for identical drives (Both Exos, both manufacturer recertified), except for the SED. I didn't know what the SED was.
 
I'd stick the NAS on a mouse or gaming pad to isolate the whole thing from the wall unit. Even something as simple as a flattened cardboard box would probably do the trick.

NAS drives started out as consumer drives with a firmware revision to keep them from causing a RAID volume to fail. And while they've had some improvements over the years, that's still their niche: quiet consumer hardware tweaked with some enterprise features to increase reliability. They're basically bottom-up, versus drives that are more top-down from the enterprise line, like your IronWolf Pro.

And don't for a second think that even RAID6 will save you in the event of drive failures. I've been working with RAID since the 1990s and abandoned RAID5 back in that era; the rebuild stress from today's drive sizes will almost certainly make a volume fail unless it's RAID1 or 0+1. RAID is for continuous availability, not redundancy or backup, so you'll need a complete backup somewhere else to be truly safe from data loss due to drive failures.
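For anyone who wants numbers behind the rebuild-risk claim, here's a rough back-of-envelope using the nonrecoverable-read-error (URE) rates printed on typical spec sheets: 1 in 10^14 bits read for consumer drives, 1 in 10^15 for enterprise. Real-world rates are generally better than spec, so treat this as a pessimistic sketch, not a prediction.

```python
# Rough back-of-envelope: expected unrecoverable read errors (UREs) while
# reading an entire degraded array during a rebuild, using spec-sheet URE
# rates (1 error per N bits read). These are worst-case vendor figures.
def expected_ures(read_tb, ure_rate_bits):
    bits_read = read_tb * 1e12 * 8        # TB -> bits
    return bits_read / ure_rate_bits

# Rebuilding a degraded RAID5 of 6x16TB means reading ~80TB from the
# five surviving drives:
print(expected_ures(80, 1e14))   # consumer-class spec: ~6.4 expected UREs
print(expected_ures(80, 1e15))   # enterprise-class spec: ~0.64
```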

The Seagates have a different frequency for their noise than the HGST/WDs (lower frequency I think), and because of that they can sound louder or less loud depending on one's sensitivity.
 
Honestly, with drive sizes anymore, JBOD + parity drive(s) are probably the best, IMO.
 
Ya I returned mine, due to noise, and went with WD. Normally use Seagate. Not so much anymore...
 
I just do JBOD with a manual mirror. I don't think you can do parity drives per se with JBOD, but many modern NAS units use some form of ZFS to deal with bit rot.
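The "JBOD + parity drive" idea (SnapRAID and similar tools work roughly this way) boils down to XOR parity across independent disks. A toy sketch of just the XOR property, not any real tool's on-disk format:

```python
# Toy illustration of "JBOD + parity drive": the parity block is the XOR of
# the data blocks, so any single lost block can be rebuilt from the
# survivors. Real tools work per-block with checksums; this only shows the
# core XOR property.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

drives = [b"AAAA", b"BBBB", b"CCCC"]   # three "data drives"
parity = xor_blocks(drives)            # the "parity drive"

# Drive 1 dies; rebuild it from the remaining drives plus parity:
rebuilt = xor_blocks([drives[0], drives[2], parity])
assert rebuilt == drives[1]
```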
 
It's all about what works for you. While there isn't much difference in performance with the enterprise drives, there are small subtle differences that may make one preferable over another.
 
Media server is in a Corsair Crystal 280X that has rubber feet, but that’s a good idea; I’ll try a mouse pad for further vibration dampening.

As for the WD Reds, I was under the impression they went top-down after the HGST acquisition, when they started using 8+TB helium drives for the Red/Pro and then the white-label enclosure drives?

I’m not super worried about data loss on the NAS. RAID6 was more about losing 1/3 instead of 1/2 of my total storage while keeping some resiliency and repair capability. There’s nothing on it I can’t recover with some effort, and effort is mostly what I’m trying to avoid for the time- and labor-intensive stuff like music and Blu-ray rips. For music I’ve already replaced most of my dependency on MP3s with streaming services, movies are going that direction too, and there’s a live copy on my media server as well. The more important stuff, like pictures and videos of the kids, is on source media, cloud, an external SSD, and a 2TB SSD RAID1 on my PC.

I’ve been in charge of large amounts of raw medical imaging data stored across multiple enterprise NAS racks and in that case I’d definitely agree RAID10 with enterprise drives is the way to go but I’m not spending my own money in that case. :)

We’re hosted in Azure now so I no longer have to worry about replacing drives, rebuilding arrays or noisy disks in the data center.
 
I bought the NAS and drives all at the same time so they’re all the same size and spec with no plans to add or accommodate diff drive sizes. Only replace if needed.

Synology does offer Synology Hybrid Raid though which is basically JBOD+parity.
 
I love that case--nice and big. :) But I bet the feet are there to keep it solidly planted versus cushioning the shock so I'm sure some dampening won't hurt. If you have an old piece of carpet or carpet padding that stuff will work wonders for vibration isolation. :)

They did move some enterprise tech into the nas line, but none of the heavy duty stuff that changes the noise factor since that must have been a large enough feature of the drive to not touch it. I believe the Red Pros are just one notch down from full enterprise since the warranty and MTBF nearly match (and the noise). I've got several of these too I think--I've got so many drives and NAS units now it's hard to keep up, lol.

You definitely know what you're doing in terms of data retention and storage, so no worries there. I just hope you never get into a loss situation where you wish you would have gotten a used enterprise rack vs your consumer setup. Most of us that have been through this only get burned once, lol.

The cloud has definitely changed the way local storage works. I use rsync.net for backup for some web work, and looking at the pricing recently I think I could even afford it now for our main storage--which is amazing as just a few years ago it was over a grand a year (and hence why we stayed local).
 

That has not been my experience. I'm running a pair of RAID60 arrays with 24x WD Red 6TB and 24x WD Red 8TB drives. Some are white label, others regular red labels. Mostly from BB during sales in the WD Easystore and MyBook or whatever they call them these days. Here's the RAID layout:

[Screenshot: areca_raid_set.jpg]


These are the drives:

[Screenshot: areca_drives.jpg]


Detailed stats about them:

[Screenshot: drive_stats.jpg]


While the 8TB drives have been reliable, the older 6TB drives started failing at about 2 years in. In fact, I just had one fail a few days ago. I probably lose one about once every other month or so (11 lost so far). I've never had an issue with losing a 2nd drive during the rebuild process after replacing a failed drive. I would be able to survive losing a 2nd drive in a set during a rebuild, but not a 3rd. I do have a friend with an almost identical setup, and we sync data between our systems, so if one of us lost it all, the other person would still have the data.

[Screenshot: areca_events.jpg]


I do run volume checks every 2 weeks:

[Screenshot: areca volume check.jpg]


All that said, I'm now down to 2 spare 6TB and 3 spare 8TB (haven't had to use them yet) and I have started the planning phase to migrate all my data to TrueNAS, probably using 16TB or so drives, in order to fit all my data in a single 24 bay chassis.
 
This has also been the experience of a lot of people. It all depends on workload and what one is comfortable with. Enterprises won't take the risk and I've basically found that I'm in the same boat.

Just out of curiosity, how are you syncing with your friend? With such a large volume even a small change will be a lot of data to sync.
 
SED is nice if you want to quickly wipe your data: instead of your system doing the encryption, the drive does it itself and holds the key onboard, so when you tell it to erase itself it just deletes the key and all the data becomes unreadable. That's versus a traditional setup where, if someone still had the key from your manual encryption, they could recover the data. There may be some way to recover the key off an SED drive, I have no idea, but that's basically the point of them: the erase is unrecoverable.
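A toy illustration of the crypto-erase idea, using a SHA-256 keystream purely for demonstration (real SEDs do AES in hardware per the TCG Opal spec; nothing here reflects actual drive firmware):

```python
# Toy sketch of why a SED's "instant secure erase" works: data is stored
# encrypted under an on-drive media key, so destroying the key renders the
# ciphertext unreadable. SHA-256 keystream used only for illustration.
import hashlib
import os

def keystream_xor(key, data):
    out = bytearray()
    for offset in range(0, len(data), 32):
        ks = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(c ^ k for c, k in zip(chunk, ks))
    return bytes(out)

media_key = os.urandom(32)                 # lives only on the drive
stored = keystream_xor(media_key, b"secret data on the platters")

# Normal operation: the drive decrypts transparently.
assert keystream_xor(media_key, stored) == b"secret data on the platters"

media_key = None  # "secure erase": drop the key; ciphertext is now junk
```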
 

My buddy wrote a PowerShell script that we both run to generate a list of all the files each of us has. I then send him my list, and he generates two more scripts from the file listings. Both are xcopy batch files: one for what I need to send him, and another for what he needs to send me. We shuffle an 8TB drive back and forth with the files missing on either side. Granted, this is mainly for our media archives, not databases or a bunch of small files.

Been thinking about automating the process and doing it across the Internet (we both have 1G fiber), but he's hesitant as he thinks Verizon FIOS will give him a hard time about the bandwidth usage.
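The file-listing half of that workflow is simple enough to sketch. Here's a hypothetical Python equivalent of the PowerShell script's logic; the D:\media and E:\media roots are made up, and in practice one side's listing would arrive as a text file rather than a local walk.

```python
# Sketch of the two-sided sync-list workflow: walk each side's media root,
# compare relative paths, and produce the two "missing on the other side"
# lists that would drive the xcopy batch files. Roots are hypothetical.
import os

def file_list(root):
    files = set()
    for dirpath, _, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            files.add(os.path.relpath(full, root))
    return files

mine = file_list(r"D:\media")    # hypothetical local root
theirs = file_list(r"E:\media")  # friend's list, here walked locally

send_to_friend = sorted(mine - theirs)
get_from_friend = sorted(theirs - mine)
```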
 
Gotcha. Reminds me of how I would do syncs back in the day on DOS 3.3--it always worked well. :)

So here's how I would do replications between the sites based on what I currently do for off-site backup:
  • First you need to set up an IPsec VPN tunnel and make sure both of your network configurations are adapted for this change (different subnets, etc.). You can even configure the tunnel to only work for the specific NAS IP (I've done that for one of my sites).
  • Next, confirm each side can see the other side's volume using normal file access tools like explorer and command prompt. You should even be able to use the powershell script to generate the batch files from either side now. :)
  • One option is to then run the same xcopy batch files, but with the destinations changed to the actual targets over the VPN tunnel instead of the 8TB drive you guys are shuffling. Keep in mind that xcopy, or any SMB operation, is quite 'chatty' and will usually not use the full bandwidth; SMB3 might be much better at this, but I haven't personally tested it.
  • Depending on how well xcopy /d works, you can just simply have an xcopy /f/r/e/s/h/k/d/c run every day from each side using the other as a source. This will copy any new files, but leave old ones alone. This will cause the volumes to grow on both sides since moved/deleted files are not accounted for. When I have space to spare, I like this method as the lack of deletions and moves keeps 'spares' of files that have had action on them where the probability is higher for a mistake than on a static file.
  • You can also use robocopy to do the same type of copies, but robocopy has the added /mt switch that can make the copy much faster; if your NAS units can handle 10GbE traffic, something like /mt:128 will absolutely pound them for maximum IPsec tunnel bandwidth utilization.
  • Taking robocopy one step further, you could use the /MIR switch, which will delete files from the destination if they don't match the source. The danger with this is that if for some reason something happens to the source, I have seen robocopy interpret that as files being deleted and then it goes on a delete rampage on the destination. This is probably fine for a backup, but I wouldn't want this to happen to my friend's source if it was the destination in my robocopy session.
  • Every few months (years?), run a comparison between the source and destination using WinMerge so any file corruption can be corrected. You will need a third known-good copy, though, to determine which file in a failed comparison is the bad one.
  • There are probably also some good sync agents/programs that can do replication in real-time if just left running on the volumes.
  • As far as bandwidth management, it all depends on how much data you have to sync each month. The max transfer you'll be getting across the link is 100MB/sec one way, so it might not even make sense if it will take half a month 24x7 to sync. Something else to consider.
Hope this gives you some ideas. (y)
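One way to put numbers on that last bullet: transfer time for a given backlog at a sustained rate. A quick sketch assuming ~100 MB/s of usable throughput on the 1G links (an IPsec tunnel will usually net less):

```python
# Time to move a backlog at a sustained rate. 100 MB/s is roughly what a
# saturated 1Gbps link delivers before tunnel overhead.
def transfer_days(data_tb, rate_mb_s):
    seconds = data_tb * 1e6 / rate_mb_s   # TB -> MB, then divide by MB/s
    return seconds / 86400

print(f"One 8TB drive's worth: {transfer_days(8, 100):.1f} days")
print(f"A 100TB initial seed:  {transfer_days(100, 100):.1f} days")
```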
 
Before we moved to Azure, transparent encryption with self-encrypting drives was a requirement for our data.

The encryption was tied to the host platform’s TPM, coupled with Secure Boot. Meaning that if you tried to access the drive without securely booting into the TPM-secured host OS, the data on it was inaccessible.

Just make sure your host, hypervisor and OS support SED before paying more for it. For spinning disks the premium probably isn’t that much, but for SSDs we had to pay a roughly 30% premium for the feature.
 