Seagate Skyhawk 10TB Review

HardOCP News

[H] News
There is a review of the Seagate Skyhawk 10TB hard drive posted today at Overclockers Club. I like how this review points out all the things you could use the extra storage space for...without mentioning porn. ;)

You'll now probably be asking... what's all this cost me? Amazingly, this 10 TB monster of a drive will set you back just barely over four cents per GB stored! That's right, for essentially $400, you can have 10 TB of reasonably fast and reliable storage for your DVR, surveillance system, NAS, or gaming rig. Even compared to its immediate predecessors, that's a step up in storage capacity per dollar.
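The cost-per-GB arithmetic checks out. A quick sketch, taking the ~$400 street price quoted above:

```python
# Sanity check of the quoted cost-per-gigabyte figure.
# The ~$400 price is the street price cited in the post above.
price_usd = 400.0
capacity_gb = 10_000  # 10 TB, using the drive maker's decimal units

cost_per_gb = price_usd / capacity_gb
print(f"${cost_per_gb:.3f} per GB")  # → $0.040 per GB
```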
 
Lol same here. Very tempted to stray from my SSD-ONLY mantra for some of these affordable 8+TB offerings popping up though, for local backup purposes.
 
Wow, that would be amazing but I would be afraid of losing data. Too many eggs in that basket, I would want 2 and do raid for protection with that.
 
Wow, that would be amazing but I would be afraid of losing data. Too many eggs in that basket, I would want 2 and do raid for protection with that.
I'm leaning towards at least 4. RAID 10

RAID 6 at the least. 5 is suicide at those sizes.
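The "suicide" claim comes from unrecoverable-read-error (URE) odds during a rebuild, when every surviving drive must be read end to end. A rough sketch, assuming the common 1-per-10^14-bits consumer spec-sheet URE rate and a hypothetical 4-drive array (neither figure is from the thread):

```python
# Rough odds of hitting at least one URE while rebuilding a RAID 5
# of 10 TB drives. Assumptions: 1-in-1e14-bits URE rate (a common
# consumer spec-sheet figure) and a hypothetical 4-drive array.
n_drives = 4
drive_bytes = 10e12   # 10 TB per drive
ure_per_bit = 1e-14

# During a RAID 5 rebuild, all n-1 surviving drives are read in full.
bits_read = (n_drives - 1) * drive_bytes * 8
p_fail = 1 - (1 - ure_per_bit) ** bits_read
print(f"chance the rebuild hits a URE: {p_fail:.0%}")
```

Under those assumptions the rebuild is more likely to hit a URE than not; the second parity drive in RAID 6 is what absorbs that error.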
 
I would want 2 and do raid for protection with that.

I would want 3 and do proper backups instead of raid.

With that said I'm not sure I would ever use a surveillance drive in a PC.
 
Wow, that would be amazing but I would be afraid of losing data. Too many eggs in that basket, I would want 2 and do raid for protection with that.

It's all perspective. 10TB is small depending on your array size; once you're at 30TB and higher, 10TB is not that large.
 
It was ~$3300 US for the drives (10X HGST 7200 RPM 6TB) + 16 port LSI SAS card + a few extras like additional RAM (for 2 servers) and SAS cables. The server with 20 hot swap bays already existed but had 2TB drives in it.
 
Looks like I should just throw the three 8TB HDDs I bought last week in the trash :p I wish these weren't so expensive, but since mine are all for archival purposes, these higher-RPM drives aren't for me... yet.
 
This week I created a 42TB array at work..

I have a 54TB Raid 5 at the office :eek:

Total of 10 WD red 5400 RPM Drives + a mirrored pair of 1TB drives for the OS, all installed in a Dell server with 12 drive slots.

Some would say I'm crazy to depend on a Raid 5 that large, because if a drive failed, the raid would likely fail the rebuild.
However, it doesn't matter. This is a backup server. All the other servers in the office are backed up to the local raid on this server, and then it is all duplicated to tape.
If I have a drive fail, it's actually faster to blank the raid and run a script to re-build all the data from the original servers than it is to wait for the raid to rebuild.

Plus, I trust the data integrity of a Raid 5 more than I'd trust several Raid 0's. (I have to stripe across at least 3 drives to keep up with the LTO 6 tape drive).
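A minimal sketch of why three spindles is the floor there, assuming LTO-6's 160 MB/s native rate and a ballpark ~60 MB/s sustained per 5400 RPM drive (the per-drive figure is my assumption, not from the post):

```python
import math

# Minimum stripe width needed to keep an LTO-6 tape drive streaming.
# 160 MB/s is the LTO-6 native transfer rate; 60 MB/s sustained per
# 5400 RPM drive is an assumed ballpark, not a measured figure.
tape_mb_s = 160
per_drive_mb_s = 60

drives_needed = math.ceil(tape_mb_s / per_drive_mb_s)
print(drives_needed)  # → 3, matching the "at least 3 drives" claim
```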

I'll likely have to upgrade to 8 TB drives for a 72TB raid 5, or maybe even 90TB using the 10TB drives if they keep adding more data :nailbiting:
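Those capacity figures line up with the usual RAID 5 formula: usable space is (n − 1) × drive size, since one drive's worth of capacity goes to parity. With the same 10-drive layout:

```python
# Usable capacity of an n-drive RAID 5: one drive's worth is parity.
def raid5_usable_tb(n_drives: int, drive_tb: int) -> int:
    return (n_drives - 1) * drive_tb

print(raid5_usable_tb(10, 8))   # → 72 (TB, with 8 TB drives)
print(raid5_usable_tb(10, 10))  # → 90 (TB, with 10 TB drives)
```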
 
Why not RAID 6? FFS, you lose one more drive per array. A lot for the average Joe who may not be running a huge storage server, but small beans for a business when dealing with rebuild times, man hours lost in case of URE during rebuild and ultimate peace of mind for not having to rely on more customized solutions (or people capable of running them competently).
 
I am using triple parity. And when I need a spare I will use a second array for the spare.
 
Why not RAID 6? FFS, you lose one more drive per array. A lot for the average Joe who may not be running a huge storage server, but small beans for a business when dealing with rebuild times, man hours lost in case of URE during rebuild and ultimate peace of mind for not having to rely on more customized solutions (or people capable of running them competently).


I use Raid 1, Raid 6 or Raid 10 on all the data/apps servers, because I can't afford the down time if a raid fails.

But that doesn't matter on my backup server. If only one drive failed, I can let it finish copying to tape, replace the drive, and create a new raid. It takes about 1-2 days to rerun the backups and rebuild the data on the raid. If I need to do a restore during that time, I have it on tape.

Raid 5 is still better than raid 0 or using JBOD.

Even if the backup server was completely down all day it wouldn't be that big of a deal, as I rarely do any restores.

Besides, I'd have to buy larger drives to use raid 6, and there is a bigger hit on write performance with Raid 6, which could slow down some of the backups, especially with the slower 5400 RPM Drives. That means a lot more $$

Of course if money wasn't an issue, and I had an extra $20K+ to spend, I'd have an external Raid with 2 dozen drives in raid 6 or better, but it's a small company and I'd rather have some of that extra money go to my next raise :p
 

Man, you must have a slow RAID card paired with those slower 5400 RPM drives!

My RAID 6 with eight WD Black 5TB 7200 RPM drives on an Adaptec 8805 writes at around 900-950 MB/s. I verified that by transferring some 28GB files from my 950 Pro over to the array. Read speeds are a little lower, about 850 to 900 MB/s on average.

It does take a long time (~2 days) to rebuild, and performance is degraded while the rebuild happens. So that much I can understand from a time savings viewpoint.
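For scale, a lower bound on that rebuild is one sequential pass over the replacement drive; the 150 MB/s sustained rate below is an assumed ballpark for a 7200 RPM drive, not a measured figure:

```python
# Floor on rebuild time: one full sequential pass over the replacement
# drive. Real controllers throttle rebuilds while serving I/O, so
# observed times (like the ~2 days above) sit well above this bound.
drive_bytes = 5e12    # 5 TB drive, as in the array described above
sustained_mb_s = 150  # assumed sustained rate for a 7200 RPM drive

hours = drive_bytes / (sustained_mb_s * 1e6) / 3600
print(f"{hours:.1f} h minimum")  # ≈ 9.3 h; throttling explains the rest
```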
 
All solid state here. They'll fail eventually too, but probably a bit less randomly. :D


I dunno about that -- I had a Samsung 950 Pro 512GB take a shit on me last week: random crash and reboot, and suddenly the OS wouldn't load. Turns out it had some bad sectors pop up and the automatic re-provisioning never worked. Had to send it off for warranty repair (couldn't format it, couldn't load anything on it).

Was a stupid fast drive, and love that it pops right into the motherboard. Did a surface scan and found the bad areas, even though Samsung Magician says Health == good. I sure as hell hope they replace it instead of sending it back with a note saying "no problem found".
 
All solid state here. They'll fail eventually too, but probably a bit less randomly. :D

It's insanely expensive to use only SSDs for an NVR running 30 to 40 1080p cameras recording 24/7, not to mention the wear that would be put on them.
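The storage burn alone makes the point. A back-of-the-envelope sketch, assuming ~4 Mbit/s per 1080p H.264 stream (my assumption, not a figure from the thread):

```python
# Daily storage burn for a 24/7 NVR. The 4 Mbit/s per-camera bitrate
# is an assumed figure for a 1080p H.264 stream.
cameras = 40
mbit_per_cam = 4

mb_per_s = cameras * mbit_per_cam / 8       # aggregate write rate, MB/s
gb_per_day = mb_per_s * 86_400 / 1_000      # seconds/day -> GB/day
print(f"{gb_per_day:.0f} GB/day")           # → 1728 GB/day
```

At ~1.7 TB a day, even a 10 TB drive fills in under a week, so SSD-only pricing adds up fast.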
 
Would love to upgrade to something like this. Currently running a mixed collection of 10 disks to get 20TB (usable).

Would love to consolidate down to just a couple disks... waiting for sequential spin-up on 10 disks is kinda lulz.
 