Western Digital Announces 15TB Hard Drive

Megalith

Western Digital introduced the world’s largest hard drive this week, the 15TB Ultrastar DC HC620, which, like its predecessors, utilizes shingled magnetic recording technology for high storage density. Seagate said in 2017 they would have a 16TB drive ready this year, but the timetable has presumably changed.

A 1TB gain in the same 3.5-inch form factor is not only significant on its own, it's especially compelling for rack-scale TCO. For example, at the 15TB capacity point, the raw storage for a 4U, 60-bay SAS or SATA enclosure comes to 900TB. That's an additional 60TB per 4U of rack space over the previous 14TB SMR capacity point.
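For those checking the math, the claim works out (a quick sketch, assuming the 60-bay 4U enclosure cited above):

Code:
# Sanity check of the density figures quoted above (assumed 60-bay 4U enclosure)
BAYS = 60
print(BAYS * 15, "TB raw per 4U at 15 TB per drive")       # 900 TB
print(BAYS * (15 - 14), "TB gained over the 14 TB tier")   # 60 TB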
 
Weren't SanDisk and other major manufacturers touting 16TB mainstream SSDs about two years ago?
 
They need to work on reliability more so than anything. I have drives that are over 10 years old that still work, while five years is a common failure point for modern drives. I can't blame them, since modern drives have so many tiny moving parts nowadays that it's not surprising they are more likely to fail.
 
WD 15TB Drive: Now bundled with Win10 Space Reclamation Edition. Lose more of your data all in one place!

;-)
 
That shingled tech behind this sounds annoying. Reminds me too much of interleaving. I'll avoid it, I think.
 
So I could put 12 of these in my 2U server, giving me a 165TB Raid 5, 150TB Raid 6, or a 90 TB Raid 10.

Based on the drive error rate, having a raid 5 this size successfully rebuild would be like winning the lottery :eek:
Even a Raid 6 rebuild would be questionable.
In either case, it would probably take a full week. Too much data on a single drive with a limited transfer rate.

That leaves Raid 10 (or something more exotic) as the only reasonable usage.

Largest drives I have are 8TB on my backup server for D2D2T backups.
Once on the local drives, the backups are copied to tape, so a drive failure isn't a big deal.

Outside of my backup server, I've never used anything larger than a 4TB drive on any of my servers, and those 1-4TB drives are now being replaced with SSDs.
Doubt I'll ever need anything this big.
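Rough numbers behind the "winning the lottery" comment, for anyone who wants to check my math. The URE rate and rebuild speed are assumptions based on typical enterprise spec sheets, not this drive's datasheet:

Code:
import math

DRIVES = 12
SIZE_TB = 15
URE_BITS = 1e15      # assumed spec-sheet rate: 1 unrecoverable read error per 1e15 bits read
REBUILD_MBPS = 200   # assumed sustained rebuild throughput

print("RAID 5:", (DRIVES - 1) * SIZE_TB, "TB")   # 165 TB usable
print("RAID 6:", (DRIVES - 2) * SIZE_TB, "TB")   # 150 TB usable
print("RAID 10:", DRIVES // 2 * SIZE_TB, "TB")   # 90 TB usable

# A RAID 5 rebuild has to read every surviving drive end to end.
bits_read = (DRIVES - 1) * SIZE_TB * 1e12 * 8
p_clean = math.exp(-bits_read / URE_BITS)              # ~27% chance of no URE at the spec rate
hours = SIZE_TB * 1e12 / (REBUILD_MBPS * 1e6) / 3600   # ~21 h best case, far longer under load
print(f"Clean RAID 5 rebuild odds: {p_clean:.0%}, best-case rebuild time: {hours:.0f} h")

Real drives usually beat the spec-sheet URE figure, but it still shows why RAID 6 or RAID 10 is the sane choice at this capacity.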
 
So I could put 12 of these in my 2U server, giving me a 165TB Raid 5, 150TB Raid 6, or a 90 TB Raid 10.

Based on the drive error rate, having a raid 5 this size successfully rebuild would be like winning the lottery :eek:
Even a Raid 6 rebuild would be questionable.
In either case, it would probably take a full week. Too much data on a single drive with a limited transfer rate.

That leaves Raid 10 (or something more exotic) as the only reasonable usage.

Largest drives I have are 8TB on my backup server for D2D2T backups.
Once on the local drives, the backups are copied to tape, so a drive failure isn't a big deal.

Outside of my backup server, I've never used anything larger than a 4TB drive on any of my servers, and those 1-4TB drives are now being replaced with SSDs.
Doubt I'll ever need anything this big.
These would be RAID 10 (or the ZFS equivalent, for me) only. A high failure rate is assured.
 
That shingled tech behind this sounds annoying. Reminds me too much of interleaving. I'll avoid it, I think.

It's so much worse.

Interleaving was just making sure the next bit of data was starting to pass under the head when the controller was ready to read it.

SMR takes it the other direction and makes the drive rewrite an entire band of overlapping tracks just to change one sector. But as long as you don't overrun the cache, you'll probably never notice how horribly slow these drives are.
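A toy model of that read-modify-write penalty once the staging cache fills. The zone and sector sizes are just illustrative guesses, not this drive's actual geometry:

Code:
ZONE_MB = 256        # assumed shingled zone size; real zones vary by model
SECTOR_KB = 4

# To change one 4 KiB sector inside a shingled zone, the drive has to read
# the whole zone, merge the change, and rewrite the zone sequentially.
moved_mb = 2 * ZONE_MB                          # read + rewrite
amplification = moved_mb / (SECTOR_KB / 1024.0)
print(f"~{amplification:,.0f}x the I/O to update a single sector")

The persistent cache hides that until you overrun it, which is exactly the cliff described above.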
 
It's stupid, but I love my HDDs as much as most people love their GPUs.

The more moving parts, the more excited I get :woot: Looking forward to those multi-actuator Seagate HDDs on the horizon. Double the risk, double the reward.

LOL Seagate

 
So I could put 12 of these in my 2U server, giving me a 165TB Raid 5, 150TB Raid 6, or a 90 TB Raid 10.

Based on the drive error rate, having a raid 5 this size successfully rebuild would be like winning the lottery :eek:
Even a Raid 6 rebuild would be questionable.
In either case, it would probably take a full week. Too much data on a single drive with a limited transfer rate.

That leaves Raid 10 (or something more exotic) as the only reasonable usage.

Largest drives I have are 8TB on my backup server for D2D2T backups.
Once on the local drives, the backups are copied to tape, so a drive failure isn't a big deal.

Outside of my backup server, I've never used anything larger than a 4TB drive on any of my servers, and those 1-4TB drives are now being replaced with SSDs.
Doubt I'll ever need anything this big.

8TB enterprise SSDs are reasonably priced and go forever. 16TB models are currently too costly, but now that the major players have announced their 32TB SSDs, combined with the upcoming drop in memory prices, it can only get better.

These big mechanical drives are basically archival drives at this point. I put them in a decent NAS running on the other side of the building in a secured closet to function as the OMFG backup which, gods willing, I will never have to use.

Edit:
After reading more about the drive: it is designed to store large amounts of data that is accessed frequently but rarely changes, so it's perfect for archives or large DBs.
 
Last edited:
8TB enterprise SSDs are reasonably priced and go forever. 16TB models are currently too costly, but now that the major players have announced their 32TB SSDs, combined with the upcoming drop in memory prices, it can only get better.

I guess our definition of "reasonably priced" is different :p

A 2TB Intel SSD is still a bit of a tough sell at $700 each, when I already have spinners in the server.

Dell's server SSD prices are downright criminal. $1,500 for a 1TB drive, about 4 times the price of the same Intel labeled drive.
 
I guess our definition of "reasonably priced" is different :p

A 2TB Intel SSD is still a bit of a tough sell at $700 each, when I already have spinners in the server.

Dell's server SSD prices are downright criminal. $1,500 for a 1TB drive, about 4 times the price of the same Intel labeled drive.
Maybe, but that Dell 1TB is actually a 1.5TB that runs with its own internal protections. Most commercial SSDs run a redundancy check internally, so if sectors start to fail there is an internal backup and you can continue working while the replacement arrives. Intel and Samsung drives don't offer that at the consumer level, not to mention the 5-year NBD replacement warranty that comes standard. If you want to complain about overpriced SSDs, look at HPE and Cisco.... Jesus, those bastards are brutal.

But hey, I know people who have paid upwards of $60,000 for data recovery after trying to save a few thousand on HDDs (WD Reds), so the definition of expensive storage all comes down to what it would cost to retrieve or replace the data. In my case I have digital records going back to the 60s that were entered into whatever computer system existed at the time, and we recently digitized the older records going back to 1907. At this point the system is sitting pretty at ~9TB, and we have it insured for more money than I will see in my lifetime. I literally can't afford to cheap out on those drives.
 
Last edited:
If the drive is empty, you should get decent write speeds, right? When does it start to slow down, around 75% full maybe?
I'm talking about a continuous write, like if I were copying media from my Plex server to it.
 
So I could put 12 of these in my 2U server, giving me a 165TB Raid 5, 150TB Raid 6, or a 90 TB Raid 10.

Based on the drive error rate, having a raid 5 this size successfully rebuild would be like winning the lottery :eek:
Even a Raid 6 rebuild would be questionable.
In either case, it would probably take a full week. Too much data on a single drive with a limited transfer rate.

That leaves Raid 10 (or something more exotic) as the only reasonable usage.

Largest drives I have are 8TB on my backup server for D2D2T backups.
Once on the local drives, the backups are copied to tape, so a drive failure isn't a big deal.

Outside of my backup server, I've never used anything larger than a 4TB drive on any of my servers, and those 1-4TB drives are now being replaced with SSDs.
Doubt I'll ever need anything this big.

Who uses raid anymore???? Lmao

ZFS ftw
 
Spinners are dead to me. Never going back

For clients? Sure

Unless you are swimming in cash, however, spinners are still the way to go for mass storage, like in a NAS.

I have 12x 10TB Seagate Enterprise drives in mine, and I am very happy with them.
 
ZFS is a form of RAID.

But it's not anywhere close to the shit-bucket failure of RAID. RAID was great back when 1TB was the big boy, but with literally petabytes in the colo, RAID is an absolute disaster.

ZFS is nearly indestructible. No batteries needed or included. No funky-ass 50-bajillion-dollar RAID controllers needed. Etc...
 
But it's not anywhere close to the shit-bucket failure of RAID. RAID was great back when 1TB was the big boy, but with literally petabytes in the colo, RAID is an absolute disaster.

ZFS is nearly indestructible. No batteries needed or included. No funky-ass 50-bajillion-dollar RAID controllers needed. Etc...


You keep using the term RAID as if it were different from ZFS. It is not.

RAID is just the concept of combining drives with some form of redundancy, whether mirroring or parity.

There are many RAID implementations, some in hardware, some in software. ZFS is one of them. ZFS is RAID.

I think you are speaking ill of traditional hardware RAID, and if that is the case, I agree. It is less reliable than ZFS (but also a lot less demanding on system resources).

ZFS and other modern software RAID implementations based on the copy-on-write principle (including Btrfs) are superior to hardware RAID for many reasons, but they are still all in the RAID family.
 
You keep using the term RAID as if it were different from ZFS. It is not.

RAID is just the concept of combining drives with some form of redundancy, whether mirroring or parity.

There are many RAID implementations, some in hardware, some in software. ZFS is one of them. ZFS is RAID.

I think you are speaking ill of traditional hardware RAID, and if that is the case, I agree. It is less reliable than ZFS (but also a lot less demanding on system resources).

ZFS and other modern software RAID implementations based on the copy-on-write principle (including Btrfs) are superior to hardware RAID for many reasons, but they are still all in the RAID family.

Are you saying there is no point in me trying to use this card anymore?
[Attached image: IMG_0437.JPG, photo of an LSI RAID card]
 
Are you saying there is no point in me trying to use this card anymore?
[Attached image: IMG_0437.JPG]

I'm not familiar with that particular LSI RAID card, but there are always applications for everything.

In general, for most applications I would argue a software copy on write based solution is probably superior to any hardware RAID implementation, but it depends.

ZFS is more failure resistant, has the recovery benefit (you can recover your pool using any system that can run Linux/Unix, you don't need special hardware), and has many feature benefits, including the excellent snapshot function, as well as send/recv functions for backup purposes.

There are trade-offs though. If using ZFS you'll need to budget for more RAM and CPU. That said, any basic SAS HBA will do the trick, so you are saving on the RAID controller.

I can't think of a single application where I would prefer a traditional RAID controller over software based ZFS or BTRFS, but that doesn't mean that one doesn't exist.
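For anyone curious about the snapshot and send/recv workflow mentioned above, here's a minimal sketch. The pool and dataset names ("tank/data", "backup/data") are hypothetical, and in practice you'd likely just run the zfs commands directly from a shell or cron job:

Code:
import subprocess

# Take a snapshot of the dataset (point-in-time, nearly free thanks to copy-on-write).
subprocess.run(["zfs", "snapshot", "tank/data@nightly"], check=True)

# Stream the snapshot into a second pool; the same stream could go over ssh to another box.
send = subprocess.Popen(["zfs", "send", "tank/data@nightly"], stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "-F", "backup/data"], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()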
 
Home users likely won't need this any time soon (ever). I work in a datacenter, and I welcome this. Sadly, it's not like I can just buy 3 pallets of these and replace all the XIV & V7K drives; they have to have special firmware from IBM, which means cost x 500.
 
Home users likely won't need this any time soon (ever). I work in a datacenter, and I welcome this. Sadly, it's not like I can just buy 3 pallets of these and replace all the XIV & V7K drives; they have to have special firmware from IBM, which means cost x 500.

Most home users, I agree. Some of us enthusiasts are thrilled about the advancement here, just nervously wondering how much they are going to cost.

In my NAS disk upgrade I swapped out my 12x 4TB WD Reds for 12x 10TB Seagate Enterprise drives. I'm very happy thus far. I always need more storage.

What about that IBM solution requires custom firmware? I'm no enterprise user, but I have never heard of anything like this before!
 
Most home users, I agree. Some of us enthusiasts are thrilled about the advancement here, just nervously wondering how much they are going to cost.

In my NAS disk upgrade I swapped out my 12x 4TB WD Reds for 12x 10TB Seagate Enterprise drives. I'm very happy thus far. I always need more storage.

What about that IBM solution requires custom firmware? I'm no enterprise user, but I have never heard of anything like this before!

It's probably just a string embedded in the details reported by the firmware that tells the controller that the drives came from IBM and are allowed to be used with it.
 
Home users likely won't need this any time soon (ever). I work in a datacenter, and I welcome this. Sadly, it's not like I can just buy 3 pallets of these and replace all the XIV & V7K drives; they have to have special firmware from IBM, which means cost x 500.

Home users won't be able to use these drives at all.
"Regular consumers will not be able to buy the new Ultrastar Hs14 HDDs and will not be able to take advantage of it until mainstream operating systems learn how to 'host-manage' SMR drives."

This was in a review of the previous gen, which probably carries over to this one as well.
Does that mean the drive won't work at all if you hook it up to a Windows 10 machine?
 
Most home users, I agree. Some of us enthusiasts are thrilled about the advancement here, just nervously wondering how much they are going to cost.

In my NAS disk upgrade I swapped out my 12x 4TB WD Reds for 12x 10TB Seagate Enterprise drives. I'm very happy thus far. I always need more storage.

What about that IBM solution requires custom firmware? I'm no enterprise user, but I have never heard of anything like this before!

I looked at one site yesterday; the previous-generation 14TB was a little over $1,000 and the 12TB was a little over $400.
 
I looked at one site yesterday; the previous-generation 14TB was a little over $1,000 and the 12TB was a little over $400.

Yeah, $1000 isn't really worth it unless you absolutely need the storage density.

The value of a drive like this is that it eventually drives down the pricing of the 12TB models and makes them affordable for those of us doing mass storage on our NAS systems.
 