Seagate 8TB Internal HDD

nbat58

Limp Gawd
Joined
Oct 12, 2009
Messages
170
Will the Seagate 8TB HDD work in a NAS serving media? I am contemplating using this drive for my next upgrade. Any comments?
 
What do you mean 'work'? Streaming media is a perfect usage scenario for this drive. I intend to add one into my media server which already has a single 3TB drive, 2x 4TB and 2x6TB when I run out of room (soon). I don't use raid.
 
If you are referring to the 8TB Archive drive, then no. It's not designed for 'active' data, and it's not designed to work in RAID; it's meant for data that is seldom accessed, hence the 'archive' part of its name.

http://www.storagereview.com/seagate_archive_hdd_review_8tb

Would you consider a drive full to capacity with 15-40GB mkv files as 'active' data? The data would be written once and read only when I want to watch a movie. Like an archive, if you will.
 
I was hoping to use this drive in ZFS raidz1 or raidz2. Why would it not work? Shouldn't archived data still be accessible when required?
 
For data which you are going to mostly read and seldom update, these drives would be fine.
 
These archive disks have only one advantage: they are cheap.
For typical NAS use, they are crap.

I would not use them for any performance-sensitive use case, not even as archive disks, as 'archive' means long-term storage where you must do scrubs from time to time and where you must expect a failed disk followed by a disk replacement.

Both can be really slow, as these drives are slow on any random writes.
 
Would you consider a drive full to capacity with 15-40GB mkv files as 'active' data? The data would be written once and read only when I want to watch a movie. Like an archive, if you will.

No. If you want someplace to SERVE media from, DO NOT use these drives. It's not that you CANNOT implement them this way. It's just that it isn't what they're built/spec'ed for.

These drives are built as backup devices. You write data to them and leave them alone except in cases where you need to recover something off the disk. You don't want to be constantly streaming data off them.
 
Thank you all for your help; this has saved me from a lot of headaches in the future. So currently the largest internal HDD for NAS use is the 6TB.
 
Pretty much, yeah. Maybe later this year we'll see bigger, but for right now, 6TB units are where it's at for NAS configs.
 
HGST HUH728080ALE600 8 TB
A real NAS disk, but more than 2x the price of the Seagate
 
No. If you want someplace to SERVE media from, DO NOT use these drives. It's not that you CANNOT implement them this way. It's just that it isn't what they're built/spec'ed for.

These drives are built as backup devices. You write data to them and leave them alone except in cases where you need to recover something off the disk. You don't want to be constantly streaming data off them.

Absolute horseshit :D
 
" Ultimately the Seagate Archive 8TB HDD has a lot of legs in very specific use cases. As a single drive it's fine, if the use case can tolerate slower sustained writes. With burst writes and reads, the drive performs very well. In pooled storage, the drive really belongs in a more sophisticated object store. Traditional software or hardware RAID is simply not recommended due to the sustained write penalty that occurs during rebuild. Admins can also get creative, like our Veeam backup test. Using 8 drives we managed to get 64TB raw backup target, with RAID1-style parity. It would be easy to get even more sophisticated for additional data protection. In such cases where cost/TB is a big driver in the decision process, the Archive drive comes in very handy. "

http://www.storagereview.com/seagate_archive_hdd_review_8tb
 
These are shingled storage. Essentially, they're write once. If you need to re-write anything on the drive, you've got to re-record the whole drive. What commercial file systems support such drives?
 
No. If you want someplace to SERVE media from, DO NOT use these drives. It's not that you CANNOT implement them this way. It's just that it isn't what they're built/spec'ed for.

These drives are built as backup devices. You write data to them and leave them alone except in cases where you need to recover something off the disk. You don't want to be constantly streaming data off them.

Did you even look at the datasheet?

http://www.seagate.com/www-content/...dd/en-us/docs/archive-hdd-dS1834-3-1411us.pdf

"Engineered for 24×7 workloads of 180TB per year"

How does a media server fall outside this use case?

Even if you were streaming 40Mbit BluRay every second of every day for an entire year, you would only be reading 157.7TB which is below the spec.
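For anyone who wants to check that figure, the arithmetic works out; a quick Python sanity check using the 40 Mbit/s rate stated above:

```python
# Back-of-the-envelope check of the 40 Mbit/s streaming figure above.
BITRATE_MBIT = 40                  # Blu-ray-class stream, bits per second (millions)
SECONDS_PER_YEAR = 365 * 24 * 3600

bytes_per_year = BITRATE_MBIT * 1_000_000 / 8 * SECONDS_PER_YEAR
tb_per_year = bytes_per_year / 1e12

print(f"{tb_per_year:.1f} TB/year")  # 157.7 TB/year, under the 180 TB/year spec
```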
 
Did you even look at the datasheet?

http://www.seagate.com/www-content/...dd/en-us/docs/archive-hdd-dS1834-3-1411us.pdf

"Engineered for 24×7 workloads of 180TB per year"

How does a media server fall outside this use case?

Even if you were streaming 40Mbit BluRay every second of every day for an entire year, you would only be reading 157.7TB which is below the spec.

Game, Set and Match Chas. Just accept that you have no idea what you are talking about and we shall all move on.
 
I have 6 of these and love them. I have them in one large drivepool. Yes, write speeds are a bit slow, but I dont care. I start copying and go to something else. No big deal.

I really do love these drives. Seagate hit it out of the park with this drive.
 
Is this true? From what I've read it will re-write the band, not the platter or whole drive. Unless the 8TB drive is different...
Yes, you're correct. Have you heard anything about the band sizes the production drives are using?
 
Yes, you're correct. Have you heard anything about the band sizes the production drives are using?

Not sure; this PDF for the 5TB drive states 36MiB at the outer cylinder.

https://www.usenix.org/sites/default/files/conference/protected-files/fast15_slides_aghayev.pdf

This thread suggests 36MB on outer cylinders for the 5TB drive.

http://lime-technology.com/forum/index.php?topic=36749.165

I assume it's somewhat larger but in the same ballpark. Pure conjecture on my part!
 
I have 6 in Raid 5.

Rebuild time: I did a Verify with Fix (a rebuild without a failed drive) and it took a bit less than 5 days, compared to 8-12 hours on my other 4- and 8-drive 4TB arrays. During that time I could happily watch TV shows (720p) without any issues; if I wrote to the disk at the same time (so rebuild, read and write) everything would tank.

So yes, in an enterprise environment where you need constant high performance they would be terrible. For media storage they are fine.

These are shingled storage. Essentially, they're write once. If you need to re-write anything on the drive, you've got to re-record the whole drive. What commercial file systems support such drives?

If that were true, any write no matter how small would take like 4 days. I have done several writes to my drives (a rebuild writes the whole drive, plus filling it up, etc.) and that was never the case.

Even if you were streaming 40Mbit BluRay every second of every day for an entire year, you would only be reading 157.7TB which is below the spec.

And if you have them in an array, that workload is spread across multiple drives. My array would allow for 800TB of access. (Pretty sure I will not do that much in the next 5-10 years.)


PS, to all of the "RAID 5 is dead" people:

In the past 3 weeks I have done a rebuild on a 4x4TB array, two on an 8x4TB (went to a LAN and some cables got a bit loose and caused issues), and a Verify+Fix on a 6x8TB array, and I have yet to hit an unrecoverable error. The articles people link mention a 50%+ chance of failure on a 12TB array (I have done 108TB of rebuilding). I should play the lotto, I guess.
 
In the past 3 weeks I have done a rebuild on a 4x4TB array, two on an 8x4TB (went to a LAN and some cables got a bit loose and caused issues), and a Verify+Fix on a 6x8TB array, and I have yet to hit an unrecoverable error. The articles people link mention a 50%+ chance of failure on a 12TB array (I have done 108TB of rebuilding). I should play the lotto, I guess.

I can tell you from experience scrubbing 70TB of data each week for years that the guaranteed 1 unrecoverable bit error in every 12TB of data read is complete BS. For me a drive will have a much lower URE rate (several orders of magnitude) when it is tested / working and a much higher rate when it is about to die. This is what I see in the mix of Hitachi Deskstars, WDC black, Toshiba Enterprise and Seagate Enterprise drives I have in my linux software / zfs raid arrays at work.
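For context, that "50%+ per 12TB" figure in the articles comes straight from the spec-sheet URE rate of 1 error per 10^14 bits read, under an independence assumption that (as the experience above suggests) real drives don't actually exhibit. A quick Python calculation of what the spec would predict:

```python
import math

URE_RATE = 1e-14                  # errors per bit read (typical consumer spec)
TB_READ = 12
bits_read = TB_READ * 1e12 * 8

# Probability of at least one URE, IF errors were independent at the spec rate
p_at_least_one = 1 - math.exp(-URE_RATE * bits_read)
print(f"{p_at_least_one:.0%}")    # 62% for a 12 TB read at spec
```

That the spec predicts roughly a coin flip per rebuild, while healthy drives go hundreds of TB without a URE, is exactly the discrepancy described above.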
 
If that were true, any write no matter how small would take like 4 days. I have done several writes to my drives (a rebuild writes the whole drive, plus filling it up, etc.) and that was never the case.


Indeed, it's not true (See posts #22 and #23 above.) In the application where I used these drives, my assertion was much closer to true; but that was an effect of the application and not the drives themselves.
 
I see applications where it can be useful but knowing my usage, often shifting data around, it would be a nightmare.

Now if they were cheaper, around the price of 4TB drives, I could adapt.
 
No. If you want someplace to SERVE media from, DO NOT use these drives. It's not that you CANNOT implement them this way. It's just that it isn't what they're built/spec'ed for.

Wow, that was a load of BS. SMR is perfectly fine for serving data constantly. The "shingled" aspect only comes into play when writing.

These are shingled storage. Essentially, they're write once. If you need to re-write anything on the drive, you've got to re-record the whole drive. What commercial file systems support such drives?
Nooooooooooooooooooooooooooooooooooo. Only the individual band needs to be re-written, and it's handled all on the drive's firmware.

If anything, the inherent nature of home media NAS's is write-infrequently/read-frequently, which lends itself extremely well to shingled drives. RAID is probably not a good idea but software based pooling would be just fine.
 
Nooooooooooooooooooooooooooooooooooo. Only the individual band needs to be re-written, and it's handled all on the drive's firmware.
Indeed, only the band is rewritten. (See posts #22 and #23 above.)

The drive's firmware handles a smaller-than-band write by reading, modifying, and rewriting the band, which is terribly slow compared to simply writing the desired block(s). "Operating system support" would mean an operating system that implements a file system whose access patterns are friendly to this performance characteristic.

A mostly-sequential log file (for example) would be a start; then you'd need a garbage-collection implementation that was aware of bands when cleaning up and compacting.

Are you aware of any such file system implementations?
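To put a number on how painful that read-modify-write path is, here is a toy Python model of the write amplification for a small in-place update, assuming the ~36 MiB band size quoted elsewhere in the thread for the 5TB drive (actual band sizes vary by zone and model):

```python
# Toy model of SMR read-modify-write amplification. Assumes a 36 MiB band,
# the figure cited for the 5TB drive's outer cylinders; real geometry varies.
BAND_MIB = 36

def write_amplification(write_kib: float, band_mib: float = BAND_MIB) -> float:
    """Ratio of data physically rewritten to data logically written,
    for a small update that dirties a single band."""
    band_kib = band_mib * 1024
    return band_kib / write_kib

print(write_amplification(4))      # a 4 KiB update rewrites 9216x the data
```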
 
Wow, was a load of BS. SMR is perfectly fine for serving data constantly. The "shingled" aspect only comes into play when writing.

Nooooooooooooooooooooooooooooooooooo. Only the individual band needs to be re-written, and it's handled all on the drive's firmware.

If anything, the inherent nature of home media NAS's is write-infrequently/read-frequently, which lends itself extremely well to shingled drives. RAID is probably not a good idea but software based pooling would be just fine.

So much misinformation early on in this thread. This guy is right - these drives are great for home media servers where most data sits unchanged and is only read / accessed.

I've got 10 of these drives on order for this exact purpose.
 
I guess time will tell. What raid controllers are people using with these? I am still leery of investing in 12 of these for my Synology NAS, but I wonder if my Areca ARC-1261 would support these?
 
I was thinking whether these SMR drives would be better in RAID4 where the parity stripe is stored on a non-SMR drive (or stored on multiple drives in RAID0 because you can't yet get a single 8TB non-SMR drive). With RAID5 the parity writes are distributed so all of the drives are going to suffer band rewrite penalty during rebuild, but with a RAID4 the bulk of the write operations will be on the parity drives instead where there is no penalty. Writes to the data stripe drives will still suffer the penalty but it should happen less often because there would be fewer writes to those drives.
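To make the RAID 4 idea concrete, here's a minimal XOR-parity sketch (an illustration only, not a real RAID implementation): every small update must rewrite the parity stripe, so with RAID 4's dedicated parity drive those hot writes all land on the one PMR member instead of being spread across the SMR drives as in RAID 5.

```python
# Minimal XOR-parity sketch: the parity stripe is the XOR of the data
# stripes, and any lost stripe can be rebuilt by XOR-ing the survivors.
from functools import reduce

def parity(stripes: list[bytes]) -> bytes:
    """XOR equal-length stripes together byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*stripes))

d0, d1, d2 = b"\x0f\x0f", b"\xf0\xf0", b"\xaa\xaa"
p = parity([d0, d1, d2])
print(p.hex())                       # → 5555

# Rebuild a failed data stripe from the survivors plus parity
assert parity([d1, d2, p]) == d0
```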
 
@eess

I think you may have a point. In fact you could build a topology where the RAID 4 parity drive is in fact a RAID 1 pair; I think these drives would be OK in RAID 1. Heh, RAID 4 might make a comeback. :p EDIT: having read the SR article, maybe RAID 1 isn't such a good idea. RAID 4 sounds good, assuming you can rebuild the parity drive if it fails (I assume so, but I know little of RAID 4). Also, the Hitachi He8 is an 8TB PMR unit.

These drives (well, SMR drives in general) also look like they'd be useful for things like Virtual Tape Libraries, or drive pooling technologies like Greyhole.

@ashman

What size drives do you have connected to that Areca right now?
 
Most of my recent media data is in a snapraid / StableBit pooling combo - 6 drives + 2 for parity. I'm debating whether to get 8 to replicate this setup going forward, or a Frankenstein 6 x 8TB shingled drives + 4 x 4TB "normal" drives in 2 x RAID-0's for the parity drives. Or just stick with the 6TB drives for now!
 
@eess

I think you may have a point. In fact you could build a topology where the RAID 4 parity drive is in fact a RAID 1 pair; I think these drives would be OK in RAID 1. Heh, RAID 4 might make a comeback. :p EDIT: having read the SR article, maybe RAID 1 isn't such a good idea. RAID 4 sounds good, assuming you can rebuild the parity drive if it fails (I assume so, but I know little of RAID 4). Also, the Hitachi He8 is an 8TB PMR unit.

I'm not sure RAID 4 would help. Sure, if the non-SMR drive failed and was replaced with a non-SMR drive, the writes to it would be fast; however, if an SMR drive failed you would still need to write to it. (It's not as if only PMR drives fail and SMR drives never do.)

When writing normally you have to write to all of the drives, so the PMR drive would be bottlenecked by the SMR drives.

What raid controllers are people using with these?

I'm using an Adaptec 51645

Not sure, this PDF for the 5Tb drive states 36MiB at the outer cylinder.

I have a feeling that they don't even have to rewrite the whole band.

From all of the stuff I have seen, the shingles overlap the data below (I'm assuming above as well), so you would only have to rewrite the part of that band that was in that column (assuming rows run in circles and columns are perpendicular).
 
I guess time will tell. What raid controllers are people using with these? I am still leery of investing in 12 of these for my Synology NAS, but I wonder if my Areca ARC-1261 would support these?

You don't want to run these in any striping based raid - meaning obviously hardware raid controllers or zfs. JBOD + SNAPRAID is the answer for SMR drives due to the dedicated parity.

But that's just for parity protection. For the storage strategy you want to employ some type of tiering, like an SSD or regular 7200rpm spinner to act as a landing area, and then dump the data to the SMR drives off-hours at intervals or when the landing drive(s) are x% full. StableBit DrivePool has a couple of plug-ins to do exactly that. Or you could use a scheduled task and robocopy to move files between tiers.
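A rough Python sketch of that landing-tier idea; the 80% threshold, the paths, and the oldest-first policy are illustrative assumptions on my part, not how StableBit's plug-ins actually behave:

```python
# Sketch of a landing-tier migration: once the fast landing drive fills
# past a threshold, move files (oldest first) to the SMR archive tier.
import shutil
from pathlib import Path

THRESHOLD = 0.80  # start migrating once the landing tier is 80% full

def should_migrate(used: int, total: int, threshold: float = THRESHOLD) -> bool:
    """Decide whether the landing drive has filled past the threshold."""
    return used / total >= threshold

def migrate(landing: Path, archive: Path) -> None:
    """Move files, oldest first, from the landing tier to the SMR tier,
    preserving the directory layout."""
    usage = shutil.disk_usage(landing)
    if not should_migrate(usage.used, usage.total):
        return
    for f in sorted(landing.rglob("*"), key=lambda p: p.stat().st_mtime):
        if f.is_file():
            dest = archive / f.relative_to(landing)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(f), str(dest))
```

Run from a nightly scheduled task, this gets SMR the sequential, off-hours writes it tolerates well while fresh writes land on the fast tier.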
 
I have 10 in a raidz2 on a RES2SV240 expander connected to an LSI 9211 HBA.

I'm using them for storage of non-changing data. So far they have behaved very well, no problems with expander/hba compatibility. Speeds are good enough for me, I can transfer data at around 500 MB/s over 10GbE in both directions. Resilvering is a bit slow, but I have backups anyway so I'm not too concerned.

For a general purpose NAS I'd choose other drives though.
 
@ashman

I think the 12xx series is too old to support drives greater than 2TB. I'm sure others can confirm this though.

@Butcher9_9

I'm not sure RAID 4 would help. Sure, if the non-SMR drive failed and was replaced with a non-SMR drive, the writes to it would be fast; however, if an SMR drive failed you would still need to write to it. (It's not as if only PMR drives fail and SMR drives never do.)

From the SR article
SMR drives are designed to work well in short burst write activity. Sustained write performance in this case is a weakness that we see throughout the rest of our tests.

So the obvious question is: is a parity write considered to be a burst write?
 
I'm pretty sure I got confirmation from someone on here before that the 12x series supports at least 3TB, but beyond that, you are probably right.
 