Seagate's Roadmap Includes 14TB, 16TB Hard Drives Within 18 Months

Megalith

…and within the next few years, we may even be getting 20TB hard drives from the company. I am liking this news a lot as someone who is looking for a way to keep all of his backups in one place (no, that’s not as stupid as it sounds, since I already have a pretty stable cloud solution). These drives should also be good for future 4K rips; I see that the biggest release, Batman v. Superman, weighs in at 89 GB with a 63 Mbps bitrate.

Seagate is getting closer to reaching its goal of making 20TB hard drives by 2020. Over the next 18 months, the company plans to ship 14TB and 16TB hard drives, company executives said on an earnings call this week. Seagate's hard drive capacity today tops out at 10TB. A 12TB drive based on helium technology is being tested, and the feedback is positive, said Stephen Luczo, the company's CEO. The demand for high-capacity drives comes mostly from enterprises and from consumers who can afford them; the drives are mostly used in NAS configurations and storage arrays. Seagate is also rolling out more 10TB hard drives, priced starting at around US$400.
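As a rough sanity check on the 4K rip figures above, here is a minimal sketch in Python; only the 89 GB size and 63 Mbps bitrate come from the post, and the ~183-minute runtime is an assumption.

```python
# Back-of-the-envelope check of the 4K rip figures above. Only the 89 GB size and
# 63 Mbps bitrate come from the post; the ~183-minute runtime is an assumption.
runtime_s = 183 * 60                                 # assumed runtime, in seconds
bitrate_mbps = 63                                    # average bitrate, megabits/s
est_size_gb = bitrate_mbps * runtime_s / 8 / 1000    # decimal gigabytes

rips_per_14tb = 14_000 / 89                          # 89 GB rips on a 14 TB drive

print(f"Estimated rip size: {est_size_gb:.0f} GB (quoted: 89 GB)")
print(f"89 GB rips that fit on a 14 TB drive: {rips_per_14tb:.0f}")
```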
 
I'll stick with my 6X4TB RAID set. My personal data hasn't grown in 7 years now, and even my OS and game installs haven't grown in the last 3 years. It looks like Microsoft finally got a leash on their runaway bloat. Windows installation requirements haven't grown since Windows Vista (16GB), and Office hasn't grown at all since 2010 (3GB), having only grown by 1GB between 2007 and 2010. Games have grown, but most not to a level that causes concern. Is there even a reason to want more than 512GB for the OS/app/games drive and 2TB for document storage?
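For what it's worth, a quick back-of-the-envelope sum of those install sizes; the Windows and Office figures are from the post above, while the game-library size is a made-up assumption.

```python
# Quick sum of the install sizes quoted above (all in GB). Windows and Office
# figures are from the post; the game-library size is a hypothetical assumption.
windows_gb = 16          # Windows install requirement since Vista, per the post
office_gb = 3            # Office 2010 and later, per the post
games_gb = 300           # assumed library: roughly six modern titles at ~50 GB each

used_gb = windows_gb + office_gb + games_gb
print(f"OS + Office + games: {used_gb} GB, about {used_gb / 512:.0%} of a 512 GB drive")
```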
 
Good god, don't archive Batman vs Superman; the sooner that travesty of a movie is lost to history, the better.
 
Seagate wants to release 20TB hard drives in the next three years


36 months is hella long, and I don't see 10TB dropping in price to $200 for another 12 months.
 
I was thinking about just going 5TB for Steam games. 14TB seems like a big risk even for a datacenter. Why doesn't Seagate work on the failure rate of the drives before going bigger?
 
I was thinking about just going 5TB for Steam games. 14TB seems like a big risk even for a datacenter. Why doesn't Seagate work on the failure rate of the drives before going bigger?

Because their failure rate is incredibly low given the speed they spin, the data density, and the mechanical clearances they operate with. Not to mention they survive rough shipping and handling from manufacture to install. They're pretty damn reliable given all those factors, and the engineering required to significantly change that number would be astronomical (or actually impossible; it would require changing to all solid state instead).
 
I just wish that 8TB archive drives would drop well under $200. I've been wanting to upgrade my storage server with those for over a year, but I still can't justify paying $500 for two of them. Capacities have increased over the past two years, but for the most part prices have been stagnant.
 
Because their failure rate is incredibly low given the speed they spin, the data density, and the mechanical clearances they operate with. Not to mention they survive rough shipping and handling from manufacture to install. They're pretty damn reliable given all those factors, and the engineering required to significantly change that number would be astronomical (or actually impossible; it would require changing to all solid state instead).

"But-but I read the report on Backblazes and they said they were shit! They buy external drives from Craigslist and crack the cases open to perform unscientific reliable tests -- they're the last word in hard drive statisticals!"
 
"But-but I read the report on Backblazes and they said they were shit! They buy external drives from Craigslist and crack the cases open to perform unscientific reliable tests -- they're the last word in hard drive statisticals!"
Do you mean this one?
[Backblaze Q1 2016 cumulative hard drive failure rates by model]
[Backblaze Q1 2016 failure rates by manufacturer]

Why would this info be crap?
 
Do you mean this one?
[Backblaze Q1 2016 cumulative hard drive failure rates by model]
[Backblaze Q1 2016 failure rates by manufacturer]

Why would this info be crap?

Because the variables change constantly in that report.

It doesn't tell you that their original storage pods had really bad vibration issues, which they fixed up as time went on. Notice how all the failure rates are decreasing? It's because of better-designed enclosures.

Look, bad drives happen, but all the manufacturers are on par with each other in reliability. Are there bad runs? Sure, but they all have them. I have 7 Seagates, 3 WDs, 2 HGSTs and a Toshiba all running happily, and I believe they're all beyond warranty. Luck has a lot to do with it.
 
I don't know, you tell us...is there?

Actually, I have barely scratched the surface of that 6X4TB RAID 10 set. I have 1.3TB of personal data and ~1TB of VMs that I use for training, so far. The rest is yet to be used, and I've had them for over 18 months. That's when I realized I really have no use for that much storage.
 
Because the variables change constantly in that report.

It doesn't tell you that their original storage pods had really bad vibration issues, which they fixed up as time went on. Notice how all the failure rates are decreasing? It's because of better-designed enclosures.

Look, bad drives happen, but all the manufacturers are on par with each other in reliability. Are there bad runs? Sure, but they all have them. I have 7 Seagates, 3 WDs, 2 HGSTs and a Toshiba all running happily, and I believe they're all beyond warranty. Luck has a lot to do with it.
Even if all the pods had bad vibration issues and all the drives' reliability increased over time, that still doesn't change the overall differences between manufacturers.

The takeaway I get from this is that, without looking at the minute details, Seagate is the worst and HGST is the best.
 
Do you mean this one?<snip>Why would this info be crap?

Because the casual observer looks at the conflated graphs and concludes "Seagate is unreliable", when in reality there was one model - the ST3000DM001 - that was a defective product and dragged all the other models down. Not to excuse Seagate, as they knew about it and did nothing, and probably should have eaten a class action for it. But Backblaze admitted that after that one problematic Seagate model, they've been running Seagates with a relatively normal failure rate.
 
Do you mean this one?

Why would this info be crap?

There is basically zero consistency in any of the data. Even within the same model the failure rates vary greatly, as does the quantity tested. Then just trying to lump all of the data together into one bar is laughable at best. There are certain models with a < 1% failure rate lumped with other models with a > 20% failure rate. Just average that out, and obviously that means Brand A is better than Brand B. While we're at it, we'll just throw Brand C in there with no data whatsoever.


Hypergreatthing: That's basically what they want you to take away from that. Yet both manufacturers had drives with a < 1% failure rate and other drives with a > 20% failure rate. If they had simply flipped the quantities they purchased for the ST4000 and the HD5S 3TB, the "outcome" would look a lot different than it does. One of the few patterns I can see in their data is that the greater the quantity, the smaller the failure rate tends to be. Within the same model of drive, they show fewer failures when they have more of them. That leads me to believe their confidence in the data is still pretty low even with a few hundred drives. Once you get to a few thousand or more, it irons out issues likely related to batches and where they came from (the same drive can be made at different locations).

About the only consistent data I see is when they approach > 2,000 drives. The failure percentage is reasonably consistent and much lower than everything else that has a smaller quantity of drives tested.
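To put that sample-size point in concrete terms, here is a minimal sketch of how wide the uncertainty on an observed failure rate is at small versus large drive counts. The failure and drive counts used are hypothetical illustrations, not Backblaze's actual figures.

```python
import math

def failure_rate_ci(failures, drives, z=1.96):
    """Rough 95% confidence interval (normal approximation) for an observed failure rate."""
    p = failures / drives
    margin = z * math.sqrt(p * (1 - p) / drives)
    return max(p - margin, 0.0), p + margin

# Hypothetical counts chosen only to illustrate the sample-size point above;
# these are not Backblaze's actual numbers.
for failures, drives in [(3, 45), (60, 2000)]:
    lo, hi = failure_rate_ci(failures, drives)
    print(f"{failures} of {drives} failed: observed {failures / drives:.1%}, "
          f"95% CI roughly {lo:.1%} to {hi:.1%}")
```

With 45 drives, a handful of failures swings the observed rate by many percentage points; with 2,000 drives the interval tightens to within about a point, which matches the pattern described above.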
 
Even if all the pods had bad vibration issues and all the drives' reliability increased over time, that still doesn't change the overall differences between manufacturers.

The takeaway I get from this is that, without looking at the minute details, Seagate is the worst and HGST is the best.

I understand your view, but the 1.5TB Seagates were the only drives in the Gen 1 boxes; everything else went in Gen 2 or higher, if I'm not mistaken. They were also predominantly shucked drives, bought as externals and pulled out of their casings to be used in the servers. Who knows what kind of damage may have occurred in that process.

The easiest way to look at the data would be to lop off anything smaller than 3TB. That would eliminate all of Gen 1 and some of Gen 2. You'd notice all the failure rates are within a couple percent of each other, except for HGST; they'd still be pretty low. HGST still ends around 1%, Toshiba around 3%, Seagate around 4% and WD around 5%. Frankly, since this is still not scientific, it's all in the noise.
 
Even if all the pods had bad vibration issues and all the drives' reliability increased over time, that still doesn't change the overall differences between manufacturers.

The takeaway I get from this is that, without looking at the minute details, Seagate is the worst and HGST is the best.

I've got many bad memories of Maxtor & WD drives... Seagate seems to hold up so far, but I've had a few fail... HGST, on the other hand: 0 failures out of 4 drives so far (yes, the sample is small, but it seems to match their findings).
 
But that's how a brand works. It's consistency across all models of units being sold.
If you put your sticker on something and brand it as yours, you own up to the reliability of it.
All hard drives have moving parts and are guaranteed to fail at some point in time. The trick is selecting one that you believe won't become a doorstop ASAP.

The reason that some models show a high rate of failure is that the number of drives used isn't very high, making them statistical noise that looks high but, when averaged into the total drives, keeps the overall percentage low.

Hence the last chart, which compares the different manufacturers against one another, is a good statistical reference based on the brand and not the unit.

I don't really get the external drive reference. It's still manufactured by the company; it's their drive, they own it. I know the warranty on these drives is typically shorter than on other internal drives, but why exactly would that excuse them from producing a decent product?
 
Until you're the person who just assumed that all HGST drives were fine and bought a HM5S B version of the 4TB drive in 2014. You're going to be in for a bad time. The trick is to KNOW that a Pontiac Aztek and a Corvette are actually different vehicles, despite the fact they can both be lumped under the GM brand.
 
Until you're the person who just assumed that all HGST drives were fine and bought a HM5S B version of the 4TB drive in 2014. You're going to be in for a bad time. The trick is to KNOW that a Pontiac Aztek and a Corvette are actually different vehicles, despite the fact they can both be lumped under the GM brand.
The only issue I have with this analogy is that there really aren't as many metrics for comparing different HDD models as there are sources for basing reliability on with car models.
I've never seen manufacturers publish real failure rates (maybe due to warranty submissions, etc.) for their drives. I only see some studies based on data center reports where this comes to light.
 
The only issue I have with this analogy is that there really aren't as many metrics for comparing different HDD models as there are sources for basing reliability on with car models.
I've never seen manufacturers publish real failure rates (maybe due to warranty submissions, etc.) for their drives. I only see some studies based on data center reports where this comes to light.

Yup, I would agree. But I also wouldn't want anyone to take away from this data that HGST rules and Seagate drools, and go buy a laptop HDD based upon this data without realizing that it has absolutely no bearing on what they are purchasing. That's the danger of just lumping everything together by a brand name. The best data you can get is from reading reviews after people have had the drives for a while. It was well known that certain 7200.11 Seagates were terrible drives, as are some WD Greens. You might be able to have slightly higher confidence that purchasing an HGST will be fine, but you need to know the specifics. If it's the first generation of a new technology, then it's anyone's guess as to whether or not it will be reliable.
 
I was thinking about just going 5TB for Steam games. 14TB seems like a big risk even for a datacenter. Why doesn't Seagate work on the failure rate of the drives before going bigger?

Not a big risk for a data center that'd buy this type of drive. They'd likely be used in highly redundant RAID arrays, so a drive failure won't hurt them, yet they save massively on $/GB, power usage, and physical space.

Backup storage places like Backblaze built their business model on deploying the consumer version of drives and not the expensive enterprise versions. It allows them to put massive amounts of redundant storage into their cloud arrays for way less than a proper enterprise solution.

Just like with graphics cards, you have a good, better, best product line. Sometimes a lower-quality part (passes most tests, fails others) will be binned to the lowest tier. Other times the yields will be so good that all three product lines will only differ by the sticker and warranty they come with (hence why a lot of GPUs can be firmware-unlocked). It's also why you usually see product launches of the low/mid (good/better) tiers first, because the manufacturing hasn't matured enough to have sufficient high-tier (best) yields.
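As a rough illustration of the density argument above, here is a small sketch of how many drives (and roughly how much power and rack space) 1 PB of raw capacity takes at different capacity points. The per-drive wattages and the 45-bay pod size are assumptions for the sake of the example, not figures from the thread.

```python
import math

# Rough illustration of the density argument: drives and idle power needed for
# 1 PB of raw capacity. Per-drive wattages and the 45-bay pod size are assumptions.
target_tb = 1000  # 1 PB raw

for cap_tb, watts in [(4, 6.0), (10, 7.0), (14, 7.5), (20, 8.0)]:
    drives = math.ceil(target_tb / cap_tb)
    pods = math.ceil(drives / 45)
    print(f"{cap_tb:>2} TB drives: {drives:>3} drives, ~{drives * watts:.0f} W, {pods} 45-bay pods")
```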
 
G'damn. And my homeboy was mocking my suggestion of a T20 Dell home server because it can only hold four 3.5" drives... pfft, 64 TB.
 
And I am sitting here with a .5TB M.2 SSD and a 1TB spinner giving zero fucks about bigger hard drives....
Don't you people know how to delete and uninstall shit you don't need anymore?
 
I want bigger/faster 2.5" drives. I can cram 4-6 in the space of a single drive, giving me ridiculous performance. I've stopped buying 3.5" drives and moved to 2.5" drives wherever feasible.

The new 5TB Seagates are 7,200 RPM, 6 Gbps, and were $170. Phenomenal performance in such a little drive.
 
I want bigger/faster 2.5" drives. I can cram 4-6 in the space of a single drive, giving me ridiculous performance.
I don't think 2.5" drives beat the data density of these 3.5" ones when all is said and done, though. The biggest 2.5" drive I've seen is 5TB, whereas here they are talking about 20TB. You also have to factor in the cost and cabling of having so many drives, plus a motherboard or another SATA controller PCIe card for them.

Most of these mega drives are for archival purposes, and, for example, my four 8TB drives in RAID0 already more than saturate my gigabit LAN transfer speed for large files. For OS/app drives, a single SSD typically suffices and allows the spinning platters to idle down when not in use.
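A quick sketch of the saturation claim above; the ~180 MB/s per-drive sequential figure is an assumed typical number for large 7200 RPM drives, not one from the thread.

```python
# Quick check of the claim that a four-drive RAID 0 array saturates gigabit Ethernet.
# The ~180 MB/s per-drive sequential figure is an assumption, not a thread figure.
per_drive_mb_s = 180
drives = 4
array_mb_s = per_drive_mb_s * drives     # ideal RAID 0 sequential throughput
gigabit_mb_s = 1000 / 8                  # 1 Gb/s = 125 MB/s before protocol overhead

print(f"Array: ~{array_mb_s} MB/s vs gigabit LAN: ~{gigabit_mb_s:.0f} MB/s "
      f"({array_mb_s / gigabit_mb_s:.1f}x the link)")
```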
 
I don't know if I want an HDD over 3 or 4TB. Afraid to lose everything if it dies.
 
I don't know if I want an HDD over 3 or 4TB. Afraid to lose everything if it dies.
If you bought a 2TB drive and it died, would you be concerned? 2TB is a lot of family photos, important scanned documents, and the like.

Point is, no matter what size drive you get, you still always need redundancy.

If your goal is to have 4TB of storage, then buy 2x4TB and you're good. If your goal is to have 20TB of storage, then buy 2x20TB instead of 20x2TB drives. Now, granted, I just use RAID0, the least safe option, but I have redundant servers that back each other up, so if one explodes into a ball of flame it's a non-issue, and the really important stuff is also on older retired drives that are in ziplock bags with desiccant in a fireproof safe. No matter what size drive you get, you need backups one way or another, so that's a bit moot. Just pick the drive size that coincides with how much data you have.
 
I don't think 2.5" drives beat the data density of these 3.5" ones when all is said and done, though. The biggest 2.5" drive I've seen is 5TB, whereas here they are talking about 20TB. You also have to factor in the cost and cabling of having so many drives, plus a motherboard or another SATA controller PCIe card for them.

Most of these mega drives are for archival purposes, and, for example, my four 8TB drives in RAID0 already more than saturate my gigabit LAN transfer speed for large files. For OS/app drives, a single SSD typically suffices and allows the spinning platters to idle down when not in use.

All of which I"m quite ok with having redundant smaller drives with everything you said. The amount of data that I and other folks I know use benefit from redundancy over massive size.


Yes, 10TB+ will be great for PB archival sets. I would be hesitant to have my primary array be composed of 2-4 large disks, as the rebuild time would be through the roof.
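Rough rebuild-time math behind that concern, assuming an optimistic ~200 MB/s sustained rebuild rate (an assumption; real rebuilds under load are usually slower):

```python
# A rebuild has to read or write every sector, so time scales with capacity.
# The ~200 MB/s sustained rate is an assumption, not a figure from the thread.
rebuild_mb_s = 200

for cap_tb in (4, 10, 14, 20):
    hours = cap_tb * 1_000_000 / rebuild_mb_s / 3600
    print(f"{cap_tb:>2} TB drive: ~{hours:.0f} h best-case rebuild")
```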


SSDs are the future, but we aren't there just yet. By 2020 I can honestly see the 2/4TB drives coming down to the $200-300 range.


It all comes down to the requirements and needs of your project.
 
Looks nice. I have about 24TB across two NAS units and 10 disks. If I can hold that in two disks, and add two in for redundancy, I would be a happy camper. Right now I have RAID-Z going on (yeah yeah, but I have backups, so go sit on a tack), but it would be nice to have a fully redundant setup.

The cost, though... how much will it cost?
 
Yes, 10TB+ will be great for PB archival sets.
Ultimately though, drive technology really isn't designed for us end consumers anyway.

It's the YouTubes, Vimeos, Megas and Netflixes of the internet that drive the market, and they can certainly benefit from inexpensive 20TB drives, as that's less rack space, less cost, and less power draw and heat. We may just incidentally be able to also enjoy the fruits of their enterprises at home, but it's all definitely made for those guys.
 
Hmmm... I use SSDs instead of hard drives in all of my systems now (except one) because of the speed. Having said that, the one exception is a FreeNAS box that I built last year with eight 5TB drives in a RAIDZ2 configuration, simply because of the storage space. Right now, I still have 18.6TB free, so while I like the larger hard drives, I have no need for them.
 
SSDs are the future, but we aren't there just yet. By 2020 I can honestly see the 2/4TB drives coming down to the $200-300 range.

Maybe, but it seems like I've been hearing this for 10 years. "LOL magnetic disks, SSDs will replace them." Won't happen any time soon, if ever. NAND doesn't just keep scaling linearly forever.
 
I remember back in 1998 someone mentioned in a news article that crystals would hold 400GB.

I think Seagate and IBM and all these companies are just milking the consumer,

kinda like what Intel does now with processor advancements.
 
Maybe, but it seems like I've been hearing this for 10 years. "LOL magnetic disks, SSDs will replace them." Won't happen any time soon, if ever. NAND doesn't just keep scaling linearly forever.


NAND has been scaling much faster than hard disks so far. The largest SSDs are larger than the largest HDDs already; AFAIK the largest HDDs now are 10TB while SSDs are at 15TB. If you consider that NAND flash is being used in massive numbers of devices that won't use HDDs, that keeps demand on the industry pretty healthy. As capacity keeps ramping up and scaling continues, SSDs are just going to push HDD manufacturers into too much financial trouble for them to keep investing in scaling. Even if NAND stopped scaling today, it would probably become cheaper to make NAND than HDDs as factories matured at the top end of production, more were brought up to speed, and the investment was paid off.

http://www.computerworld.com/articl...ity-surpasses-hard-drives-for-first-time.html
 
Magnetic disks can't scale forever either.

The high-capacity drives are getting increasingly expensive. As manufacturers try to cram more bits onto each disk, they have had to come up with new technology that only adds to the cost of HDDs.

Even today, 10TB disks are ridiculously expensive, and we see no sign of HDDs getting cheaper. Whatever fancy capacity Seagate is trying to come up with will probably always be beyond what consumers can afford.
 
Even if all the pods had bad vibration issues and all the drives' reliability increased over time, that still doesn't change the overall differences between manufacturers.

The takeaway I get from this is that, without looking at the minute details, Seagate is the worst and HGST is the best.

I've got Seagate drives that are over five years old and still ran fine when I retired them from active service, and I've had several Toshiba drives that failed or started developing errors within six months. I could turn that around and say all Toshiba drives are shit, but that wouldn't tell the whole story: their drives were actually a bad choice for the enclosure they were put in, and they tended to overheat while there. I still buy them, but I know they aren't good for that server I was using. No doubt everyone, including HGST, has had bad batches and models, but life cycles and reliability can be subjective and affected by a ton of variables.

I remember back in 1998 someone mentioned in a news article that crystals would hold 400GB.

I think Seagate and IBM and all these companies are just milking the consumer,

kinda like what Intel does now with processor advancements.

Magnetic areal density is kind of like Moore's law in that, several times when everyone thought it was coming to an end, a breakthrough happened to keep it going a few more years. And while holograms could potentially hold more, almost no one has made much progress in getting write speeds up to hard drive levels. What good is holding 400GB on a DVD-sized platter if you can only write to it at 20Mb/sec?
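For a sense of scale on that objection, here is a quick sketch of fill times, reading the quoted "20Mb/sec" as 20 MB/s (if megabits were meant, it is eight times worse); the ~150 MB/s hard drive rate is an assumption for comparison.

```python
# Time to fill a 400 GB disc at the quoted write speed versus an assumed typical
# hard drive sequential rate.
capacity_gb = 400

for label, mb_s in [("holographic @ 20 MB/s", 20), ("hard drive @ 150 MB/s", 150)]:
    hours = capacity_gb * 1000 / mb_s / 3600
    print(f"{label}: ~{hours:.1f} h to write {capacity_gb} GB")
```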
 