Seagate 8TB Internal HDD

So the obvious question is: is a parity write considered to be a burst write?

Parity data is roughly the same volume as the per-drive volume of data, so if the data drives are being taxed then so is the parity drive. If the data you are writing is in bursts, then the parity is in bursts; if the data is a constant stream, so is the parity.
 
Parity data is roughly the same volume as the per-drive volume of data, so if the data drives are being taxed then so is the parity drive. If the data you are writing is in bursts, then the parity is in bursts; if the data is a constant stream, so is the parity.

Only true with real-time raid, not with snapshot raid. And you probably don't want to be running these in realtime/striped raid.
 
@Butcher9_9, @DPI

Cheers for that. I think that eess's suggestion might have merit: SMR drives in RAID 4 with a PMR drive/array in RAID 0 (or even RAID 10) as the parity. That plus a big enough cache on the RAID controller should alleviate most of the bottlenecking inherent to RAID 4.

The only issue is that such a hybrid topology would have to be implemented in software RAID.
 
So, as a backup drive only, will these be OK set up in raidz1 using OmniOS? And if there is a lot of data, will it take too long to restore, say, 40TB+?
 
Well presumably a user would fill these drives up slowly as they acquired data.

Like sending the changes from the main array to these backup disks every night. It might only be, say, 50GB of changes per day that gets snapshotted and sent to these SMR disks nightly.

That would mean it would take 160 days to fill them up (at 50GB/day), and write speeds are not a big deal.

But then it's nice that you can read fast from them should you need to restore.
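As a quick sanity check on that 160-day figure, here's a minimal sketch (the 8TB capacity and the 50GB/day delta are just the numbers assumed in this post):

```python
# Back-of-the-envelope: days to fill one 8TB archive drive with
# nightly incremental sends (figures quoted in this thread).
capacity_gb = 8 * 1000      # 8TB drive, decimal units
daily_delta_gb = 50         # assumed nightly changed-data volume

days_to_fill = capacity_gb / daily_delta_gb
print(f"~{days_to_fill:.0f} days to fill one drive")   # ~160 days
```

And since nightly sends land as large, mostly sequential appends, that's about the friendliest write pattern an SMR drive can hope for.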
 
@thread

Just throwing this out there - these drives appear well suited to glacier-type data storage applications. Do you think that SMR will become a contender against tape technologies like LTO, assuming appropriate storage protection (e.g. shock protection) was in place? Or MAID storage architectures?
 
Disk storage will never beat tape for price or density. It's more convenient because no manual intervention is required, and SMR does at least improve density over regular drives.
 
Disk storage will never beat tape for price or density. It's more convenient because no manual intervention is required, and SMR does at least improve density over regular drives.

The reason why I put that out there was the data capacity of LTO-6, which is 2.5TB raw, as opposed to 8TB for this drive. Also, the SMR drive itself is $290-300, whereas an LTO-6 drive is ~$2100; LTO-6 cartridges run to ~$35/tape.

I've been out of the tape backup management environment for over a decade, but assuming a 5-day rotation on tapes, the initial outlay would be ~$2275. Assuming a correct replication setup, the drive cost would be ~$600 for a minimum of two SMR drives, and even if you wanted five full LTO-6-sized backups (12.5TB), the drive cost would be ~$1200 for four SMR drives.
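Roughly, those figures work out like this (a sketch using only the prices quoted in this post; chassis, power, and off-site logistics are ignored):

```python
# Working through the prices quoted above (rounded, hardware only).
lto6_drive = 2100           # LTO-6 tape drive
lto6_tape = 35              # per 2.5TB cartridge
smr_8tb = 300               # 8TB SMR drive, top of the quoted range

tape_outlay = lto6_drive + 5 * lto6_tape    # 5-day rotation         -> 2275
smr_pair = 2 * smr_8tb                      # replicated pair        -> 600
smr_five_backups = 4 * smr_8tb              # ~five 2.5TB-sized sets -> 1200

print(tape_outlay, smr_pair, smr_five_backups)
```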

Now admittedly, a hard drive will never be as physically robust as a tape; tape cartridges can be taken off-site, or placed in a fire-proof safe. Off-site hard drive backups have to be pre-sited in a DC or other secure, Internet-connected facility.

However, from my own perspective, these SMR drives are worth a look as a platform for storing backups.
 
Once the price of these comes down a bit to be competitive in GB/$ with the 5TBs, then I'd consider standardizing on them, use SnapRAID for parity, and then either buy a couple of HGST 8TBs to be the parity disks (expensive, but fast writes), or put pairs of old 4TBs in RAID0 volumes, something like three pairs of 4TBs for triple parity.

I know Seagate is prepping 10TB SMR (1.66TB platters) and probably already has them out in their preview program under NDA, so I might wait for those to standardize. But in any case, SMR definitely has its usage scenarios, and with some creativity and a tiering strategy, the slow write speed can be mitigated or at least worked around.
 
The reason why I put that out there was the data capacity of LTO-6, which is 2.5TB raw, as opposed to 8TB for this drive. Also, the SMR drive itself is $290-300, whereas an LTO-6 drive is ~$2100; LTO-6 cartridges run to ~$35/tape.

Tape is $35 per 2.5TB, which is $14 per terabyte. The 8TB SMR drive is $37.50 per terabyte. SMR disk is 2.67 times the cost of tape.

The tape drive costs a couple grand, but amortizes out for meaningful library volumes. If you're only backing up a few tens of terabytes, SMR might be more competitive because of the cost of the drive. But we'd also have to assume you're moving the drives around; if you're keeping them in a chassis, we'd have to consider those costs, too.

The performance question is a bit different. The tapes, IIRC, can only be written to at about 150 megs/second. I'm sure the SMR drives can keep up with that, though they might struggle when using a file system that's SMR-ignorant.
 
Tape is $35 per 2.5TB, which is $14 per terabyte. The 8TB SMR drive is $37.50 per terabyte. SMR disk is 2.67 times the cost of tape.

The tape drive costs a couple grand, but amortizes out for meaningful library volumes. If you're only backing up a few tens of terabytes, SMR might be more competitive because of the cost of the drive. But we'd also have to assume you're moving the drives around; if you're keeping them in a chassis, we'd have to consider those costs, too.

The performance question is a bit different. The tapes, IIRC, can only be written to at about 150 megs/second. I'm sure the SMR drives can keep up with that, though they might struggle when using a file system that's SMR-ignorant.

HDDs are getting cheaper all the time though.

These 5TB disks are $28/TB.
http://www.microcenter.com/product/...s_35_Desktop_Internal_Hard_Drive_PH3500U-1I72

I've seen disks that are $25/TB as the lowest so far.

The break-even point for LTO6 vs HDD for now seems to be about 150TB.
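For what it's worth, that ~150TB figure falls out directly from the per-TB prices above (a rough sketch; it ignores chassis, autoloaders, power, and media lifespan):

```python
# Break-even between LTO-6 and plain HDD on hardware + media cost alone
# (per-TB prices quoted earlier in the thread).
lto6_drive = 2100        # up-front tape drive cost
tape_per_tb = 14         # ~$35 per 2.5TB LTO-6 cartridge
hdd_per_tb = 28          # the 5TB desktop drives linked above

# lto6_drive + tape_per_tb * x == hdd_per_tb * x  ->  x = 2100 / 14
break_even_tb = lto6_drive / (hdd_per_tb - tape_per_tb)
print(f"break-even around {break_even_tb:.0f} TB")   # ~150 TB
```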
 
I was wondering whether these SMR drives would be better in RAID4, where the parity stripe is stored on a non-SMR drive (or on multiple drives in RAID0, because you can't yet get a single 8TB non-SMR drive). With RAID5 the parity writes are distributed, so all of the drives are going to suffer the band-rewrite penalty during a rebuild, but with RAID4 the bulk of the write operations will be on the parity drives instead, where there is no penalty. Writes to the data-stripe drives will still suffer the penalty, but it should happen less often because there would be fewer writes to those drives.

During a rebuild, you're reading from your remaining drives, to write to a new drive. There should be no SMR penalty.
 
During a rebuild, you're reading from your remaining drives, to write to a new drive. There should be no SMR penalty.
The SMR penalty happens when writes aren't sequential and continuous, aren't aligned to the start of bands, or aren't band-sized. RAID rebuilds don't guarantee these things: the controller may decide to rebuild in any order, write housekeeping information, and might not have alignment and sizing that matches the drive.
 
Probably true with hardware RAID. I'm pretty sure ZFS will write continuously to the drive.
 
These are shingled storage. Essentially, they're write once. If you need to re-write anything on the drive, you've got to re-record the whole drive. What commercial file systems support such drives?

Not the whole drive, just a few tracks in the set. Since the drive needs to first read those tracks into memory and then erase and rewrite them, random writes are very slow, and the chances of data loss are higher during those periods since they take much longer than normal writes. I don't see how these drives can ever be consumer anything. Even for backup the chances of data loss would be high. They are already high using regular drives, let alone drives which take extra effort.
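To put rough numbers on that, here's a toy model of what a single band rewrite costs versus an in-place write on a conventional drive (all figures are assumptions for illustration, not published Seagate specs, and real firmware uses a persistent cache to defer this work):

```python
# Toy model of the read-modify-write penalty for a drive-managed SMR band.
# Band size, transfer rate, and write size are assumptions for illustration.
band_mb = 256              # assumed shingled band size
seq_rate_mb_s = 150        # rough sustained sequential transfer rate
random_write_kb = 4        # small random write issued by the host

# Conventional (PMR) drive: transfer time only, seek/rotation ignored.
pmr_ms = (random_write_kb / 1024) / seq_rate_mb_s * 1000

# SMR drive, worst case: read the whole band, patch it in memory,
# then rewrite the whole band sequentially.
smr_ms = 2 * band_mb / seq_rate_mb_s * 1000

print(f"PMR ~{pmr_ms:.2f} ms vs SMR band rewrite ~{smr_ms:.0f} ms")
```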
 
Is this true? From what I've read it will re-write the band, not the platter or whole drive. Unless the 8TB drive is different...

See http://www.anandtech.com/show/7290/seagate-to-ship-5tb-hdd-in-2014-using-shingled-magnetic-recording

They are developing new NCQ standards for these kinds of drives so they are more efficient. I have not seen any drives with them out yet, but everyone right now uses slightly different methods, and they want to make it a standard that works better than what they have now to prevent data loss. If that's the case, in the future we might start to see this in all drives, which fills me with dread. They can get twice the data onto the same area; it's similar to SSDs using TLC NAND.
 
In terms of using these Seagate Archive Drives in a software RAID (specifically SnapRAID), what was the consensus conclusion here? I heard:

1) These drives are good candidates to be "data drives" in a situation where media & other data is written infrequently and read/streamed frequently.

2) I wasn't clear if these drives are good candidates for the parity drives in a SnapRAID setup. Should "normal" or PMR drives be used for the parity drive?

Thanks
 
In terms of using these Seagate Archive Drives in a software RAID (specifically SnapRAID), what was the consensus conclusion here? I heard:

1) These drives are good candidates to be "data drives" in a situation where media & other data is written infrequently and read/streamed frequently.

2) I wasn't clear if these drives are good candidates for the parity drives in a SnapRAID setup. Should "normal" or PMR drives be used for the parity drive?

Thanks

Why would the requirements for the parity drive be different? It gets the same amount of writes as the data drives.
 