Seagate Unveils Massive 10TB Hard Drives To Meet All Storage Needs

This. RAID-Z3 survives the loss of any three drives in the array. That's the highest it goes, as far as I'm aware.

SnapRAID supports up to six levels of parity, but it's not real-time. It's better suited to a bunch of large files that don't change.
 
Now you can lose 10 TB in a catastrophic hard drive failure instead of some lesser amount!
 
Just another spin on the Western Digital Color Schema:

Blue = mid-grade, useful for desktops
Green = slow, energy saver
Red = NAS grade

I predict they'll be just like WD's drives. All the same actual hardware.
 
Maybe, just maybe, it'll put a little thump into the HDD market. The drives have been stagnant price-wise for quite some time. 4TB externals are only about $30 cheaper than they were two years ago, and 5TB externals took the prior spot (so price/GB went up).

If you look at the charts (Storage Price Trends - PCPartPicker), you can see that prices historically flatlined around 2014. Which is odd, because SSD prices rapidly declined and caught up. You'd think HDD prices would need to go lower to stay relevant. Hopefully, now that much larger drives are coming out, it'll push the price/GB down.
 
For those of you who have a lot of archival drive failures: do you actually have programs or an OS installed on those drives, or do lots of reads/writes?

For me, all that stuff goes on an SSD, and then gets copied to an archival drive where it's occasionally accessed. Perhaps that's why I never have drive failures. *shrugs*

My cases also incorporate good cooling and vibration reduction, and the drives aren't transported or shut down while running.
 
At work (where we have ~200 drives spinning 24/7), drives fail at random, with no clear connection to usage or temperature. At our peak, 10 to 20 drives were RMA'd in a year; this year I've only done 3 RMAs.
 
This. RAID-Z3 survives the loss of any three drives in the array. That's the highest it goes, as far as I'm aware.

There are higher, just not as resilient as ZFS. FlexRAID can have a near-infinite number of parity drives, but it is SNAPSHOT-based, not real-time. But about what I said earlier regarding "custom solutions": Backblaze actually "shards" its data (i.e., a file is broken up across different machines, 20 to be exact, with three of them being parity). They checksum EACH shard as well, to tell if it is corrupted, and shards are scrubbed/checked on a loop. I wouldn't be surprised if they move to 16/4 or some other shard ratio as drives get bigger. Sharding is really the only way to handle a MASSIVE data system.

--> Backblaze Vaults: Zettabyte-Scale Cloud Storage Architecture
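The sharding idea is easy to sketch. Here's a toy Python version, splitting a blob into 17 data shards with a per-shard checksum so a scrub can spot corruption. (The three parity shards in the real system are built with Reed-Solomon erasure coding, which is omitted here; all names are illustrative, not Backblaze's actual code.)

```python
# Toy sketch of Backblaze-style sharding: split a blob into 17 data
# shards, attach a per-shard checksum so corruption is detectable on
# read. Real parity shards use Reed-Solomon erasure coding (not shown).
import hashlib

DATA_SHARDS = 17  # Backblaze vaults use 17 data + 3 parity shards

def shard(blob: bytes):
    size = -(-len(blob) // DATA_SHARDS)  # ceiling division
    shards = [blob[i * size:(i + 1) * size] for i in range(DATA_SHARDS)]
    # each shard carries its own checksum so a scrub can spot bit rot
    return [(s, hashlib.sha1(s).hexdigest()) for s in shards]

def verify(shards):
    return [hashlib.sha1(s).hexdigest() == h for s, h in shards]

pieces = shard(b"some large file contents" * 1000)
assert all(verify(pieces))

# corrupt one byte in shard 3: its checksum no longer matches,
# so that one shard (and only that one) can be rebuilt from parity
s, h = pieces[3]
pieces[3] = (b"X" + s[1:], h)
assert verify(pieces)[3] is False
```

The point of the per-shard checksum is exactly what the post describes: corruption is localized to a single shard, which can then be regenerated from the others instead of failing the whole object.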
 
At 10TB it really needs to be reliable. That's a lot of info to lose.
10TB is a lot of data to lose should the drive fail.
How much data would you like to lose today.
Anybody that runs a 10TB drive by itself is asking to lose tons of information.
Now you can lose 10 TB in a catastrophic hard drive failure instead of some lesser amount!

20MB: "Derp that's a lot of data to loooze"
100MB: "Derp that's a lot of data to loooze"
1GB: "Derp that's a lot of data to loooze"
4GB: "Derp that's a lot of data to loooze"
9GB: "Derp that's a lot of data to loooze"
15GB: "Derp that's a lot of data to loooze"
100GB: "Derp that's a lot of data to loooze"
160GB: "Derp that's a lot of data to loooze"
250GB: "Derp that's a lot of data to loooze"
500GB: "Derp that's a lot of data to loooze"
1TB: "Derp that's a lot of data to loooze"
2TB: "Derp that's a lot of data to loooze"
3TB: "Derp that's a lot of data to loooze"
4TB: "Derp that's a lot of data to loooze"
5TB: "Derp that's a lot of data to loooze"
6TB: "Derp that's a lot of data to loooze"
8TB: "Derp that's a lot of data to loooze"
10TB: "Derp that's a lot of data to loooze"
 
It's worth mentioning (for those not in the know) that it's a "Seagate" 10TB.
It's going to be a long, long time before I can trust them.
 
RAID 6 is no longer viable (by the math) either with many drives of this class. Consumer file systems are currently very much behind the times; big industry has fairly good systems, but we don't have access to them. Even then, a URE is pretty much guaranteed with a drive this size if you read the whole thing. We really need file systems with strong enough hash/encoding schemes (e.g., enhanced parity) that a failed read on a block is repairable from the data around it, rather than dumping the whole array.
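The "pretty much guaranteed" claim can be sanity-checked with a back-of-envelope calculation. A minimal Python sketch, assuming the commonly quoted consumer spec of one unrecoverable read error (URE) per 1e14 bits read (vendors state this as a maximum, and real-world rates vary):

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while reading a drive end to end, assuming the commonly quoted
# consumer spec of 1 URE per 1e14 bits read. Illustrative only.
URE_PER_BIT = 1e-14
TB = 1e12  # bytes

def p_ure_full_read(capacity_bytes, ure_per_bit=URE_PER_BIT):
    bits = capacity_bytes * 8
    # P(at least one URE) = 1 - P(every bit reads cleanly)
    return 1 - (1 - ure_per_bit) ** bits

for tb in (2, 4, 10):
    print(f"{tb:2d} TB full read: {p_ure_full_read(tb * TB):5.1%} chance of a URE")
```

By that spec, a full read of a 10TB drive has better-than-even odds of at least one URE, which is why per-block repair (instead of whole-array failure) matters at this capacity.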

I don't really agree with you. It depends on the size of the array, the error rate of the drive, and whether your controller supports read scans to mark out bad blocks.

If you plan to put 12 of them in a RAID 6, then you might be pushing your luck, but that's what backups are for.
With RAID 5, you would be almost guaranteed to fail the rebuild.

However, even large RAID 5s have their place. I run disk-to-disk-to-tape backups at the office.
Initially we used individual drives or RAID 0, since the backup data is duplicated to tape.
However, as the amount of data/drives grew, the possibility of errors also grew.
I now use large RAID 5s (one is 36TB) for the backup data.
If the RAID fails, it's faster to just replace the failed drive, blank the array, and start the backups again.
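The RAID 5 rebuild point can be sketched the same way: during a degraded rebuild, every surviving drive must be read end to end, so the odds of hitting a URE scale with the total data read. A rough Python sketch, again assuming the 1-per-1e14-bit consumer URE spec (illustrative only):

```python
# Rough odds that a degraded RAID 5 rebuild hits a URE on one of the
# surviving drives (all of which must be read in full). Assumes the
# 1-per-1e14-bit consumer URE spec; figures are illustrative only.
def p_rebuild_ure(n_drives, drive_tb, ure_per_bit=1e-14):
    surviving_bits = (n_drives - 1) * drive_tb * 1e12 * 8
    return 1 - (1 - ure_per_bit) ** surviving_bits

# 12 x 10TB in RAID 5: 110TB must be read cleanly to rebuild
print(f"12x10TB RAID 5 rebuild: {p_rebuild_ure(12, 10):.1%} URE risk")
```

That's the sense in which a 12-drive RAID 5 of 10TB disks is "almost guaranteed" to stumble during a rebuild; RAID 6's second parity stripe is what absorbs that URE.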
 
Seagate green drive vs. Western Digital green: not a good combo. I think they should have gone with orange for the 7200 RPM desktop drive color; green is the color for low power and low spin, not really low price. I'd be willing to give Seagate a chance as long as the price was lower than Western Digital's.
 
Do Seagate drives really have failure rates that bad? I haven't had to buy a hard drive in quite a long time, but I remember them being extremely reliable once.
They're not as bad as they're made out to be. I've got 48 of the "bad" 2TB drives up and running right now, and one sitting around as a spare. Power-on time is a couple of years, IIRC. We had several hundred of them at work. I think I replaced one or two in the year I was working with them.
 
Lost me at Seagate.

Currently using Toshiba 6TBs that I shucked out of USB enclosures ($25 cheaper that way vs. a bare drive; I swear I don't get it), running a 4-drive RAID 10 (12TB, double mirror). Honestly, I would use RAID 5, but my stupid mobo RAID controller doesn't support it.
 
Ever heard of backups? Or are you the "data loss will never happen to me" type?
I make backups (2, actually; less than 2TB), but if a drive dies in a RAID, the array has to rebuild, and that takes time.
In a work environment, it could also end up costing money just from the downtime.
Maybe they should work on reliability first?
 
I have 2 Seagate Backup Plus 4TB desktop drives that I used for backup. I brought one on a trip to Ecuador after making a duplicate copy on the one I left at home. After a year and a half of light usage there, the drive began to hang when reading data and then unmount from the system, repeatedly. When I came back to the US last month, I thought I was safe with the other drive. The stored, unused drive began the copy fine, but after a couple of gigabytes it started to have read errors... *sigh* The sad part is that I purchased another 4TB Seagate Expansion drive to replace the failed drive that I took to Ecuador. I hope the new drive lasts longer than two years... but I'm afraid I don't have good luck with Seagate.
 
The Barracuda Pro and the IronWolf are available now at the suggested prices of $534.99 (£406, AU$714 converted) and $469.99 (£357, AU$627), respectively.

fuck that.

I want people to move prices down. I got a 3TB about 5 years ago and prices haven't moved that much since then.


Say what you will about Nvidia or the GPU game as a whole, but at least the newer GPUs shifted the performance up or prices down.

Also, the weird thing is that these have only a 64MB cache, whereas the older 8TB models have 256MB and the 6TB have 128MB.



Also: I think Seagate has even more new drives on the market. These are sold by Amazon and have 0 reviews, so they seem new? I also hadn't heard of the FireCuda line before:

Amazon.com: Seagate 2TB Firecuda (Solid State Hybrid) SATA 6GB/s 64MB Cache 3.5" Internal Bare Drive ST2000DX002: Computers & Accessories
 
Do Seagate drives really have failure rates that bad? I haven't had to buy a hard drive in quite a long time, but I remember them being extremely reliable once.

The 4TB are very reliable; the 3TB, not so much, according to Backblaze. I've used both 3TB and 4TB in my arrays without issue for years. I've had more problems with an old Areca 12-port dropping disks. At the time I blamed the drives, till I finally switched that POS card out; I haven't dropped a drive since the switch.
 
For all of you referencing the Backblaze stats, remember this: Backblaze is taking consumer hard drives designed to be used in a desktop environment, and putting them into a datacenter. Backblaze usage is not even close to normal, intended usage for these drives. For your usage, which is probably quite different than how Backblaze is using them, the drives that Backblaze reports high failure rates may very well be extremely reliable.
 
For all of you referencing the Backblaze stats, remember this: Backblaze is taking consumer hard drives designed to be used in a desktop environment, and putting them into a datacenter. Backblaze usage is not even close to normal, intended usage for these drives. For your usage, which is probably quite different than how Backblaze is using them, the drives that Backblaze reports high failure rates may very well be extremely reliable.

That is true, but it also goes to show just how reliable the drives are that fare significantly better. With most of the HGST/Hitachi kicking ass in those environments, people feel that much safer using them at home.
 
I don't really agree with you. It depends on the size of the array, the error rate of the drive, and whether your controller supports read scans to mark out bad blocks.

If you plan to put 12 of them in a RAID 6, then you might be pushing your luck, but that's what backups are for.
With RAID 5, you would be almost guaranteed to fail the rebuild.

However, even large RAID 5s have their place. I run disk-to-disk-to-tape backups at the office.
Initially we used individual drives or RAID 0, since the backup data is duplicated to tape.
However, as the amount of data/drives grew, the possibility of errors also grew.
I now use large RAID 5s (one is 36TB) for the backup data.
If the RAID fails, it's faster to just replace the failed drive, blank the array, and start the backups again.
The problem with blanking your backups is that you then have no backups (though maybe your double-backup scheme helps with that).
On my backup storage, I'm using RAID 10: much quicker and less error-prone rebuilds should a drive go bad, and drives are cheap enough to counteract the loss in storage space.
 
For all of you referencing the Backblaze stats, remember this: Backblaze is taking consumer hard drives designed to be used in a desktop environment, and putting them into a datacenter. Backblaze usage is not even close to normal, intended usage for these drives. For your usage, which is probably quite different than how Backblaze is using them, the drives that Backblaze reports high failure rates may very well be extremely reliable.
My machine is on 24/7 with no drive power saving.
The lag waiting for a drive to spin up ruins the experience of a very fast PC, so mine are permanently spinning.
Backblaze's testing fits my use pattern very well and in some ways is more extreme.
I'm more than happy to recommend drives based on Backblaze results until there's a consensus that they don't follow the trend of the general user experience.

I also go by experiences reported on forums by respectable folk.
Generally, HGST are very good; Seagate have a lot more issues.
There isn't a need to research further atm, but I keep an eye out for new issues.
 
I dunno what all the fuss is about.

I can fill up 10TB really fast with uncompressed raw video footage.
 
Reliability is solved by buying the fuckers in pairs or more. I have 2x8TB for this reason alone. I'd had no issues with Seagate over 16 years, but then had my first 3TB fail. Got most of the data off it, though, and had most of it backed up... but yeah, I went and got another 8TB after that one, lol.
 
I dunno what all the fuss is about.

I can fill up 10TB really fast with uncompressed raw video footage.

I'm looking at a raw professional cam too... poor storage system!!
 
Just a small sample size, but here at our company, since we've switched from WD Purples to Seagate Surveillance, we've noticed a substantial decrease in RMAs. I'd like to get our hands on these, but our customers don't need that kind of storage.
 
mnewxcv said: 10TB is a lot of porn to lose should the drive fail.
How long would it take to watch 10TB of porn?

Just another spin on the Western Digital color schema:
Blue = mid-grade, useful for desktops
Green = slow, energy saver
Red = NAS grade
hey heY HEY!!! BLACK drives matter!

They were the Deathstars of another generation.
Seems every manufacturer has its bad streak. Wasn't IBM's whole disk division sold off to Hitachi (HGST), which is now the best choice?
What goes around comes around.
 
The FireCuda SSHD is still rocking a pathetic 8GB of NAND. They made sure not to mention that anywhere except the buried spec-sheet PDF.
 
It'll still be many years till a 5TB SSD has a sub-$200 price tag.

Microcenter sells 5TB Toshiba drives for $149 and 6TB for $189. I don't recommend them for a NAS, however, as they run hot. I was hoping that the archive drives would push prices down further, but these prices and $450 for an IronWolf killed that dream.
 
Do Seagate drives really have failure rates that bad? I haven't had to buy a hard drive in quite a long time, but I remember them being extremely reliable once.

Seagate went to shit right around the Barracuda 8 or 9 series. I lost quite a few in one year after years of perfect service. I used to build pre-press Macs with Barracuda arrays in the mid-'90s and wouldn't use any other drive. Then it all went bad.
 
Various top review sites state the Pro is helium-based, but there's no mention of that at all on Seagate's own site or in the PDF.
The Enterprise 10TB does say it's helium-based.

Which is correct?
 