HGST's 10TB drive uses custom software to access shingled platters

erek

HGST first revealed its 10TB cold storage drive back in September. The 3.5" mechanical unit combines helium-filled internals with Shingled Magnetic Recording (SMR) to hit a new capacity milestone, and it's finally ready for prime time. Customers will start getting production drives in a couple of weeks. The Ultrastar Archive Ha10 won't be available through conventional channels, so don't expect it to hit online retailers. HGST's host-managed SMR implementation requires substantial customization on the software side.

http://techreport.com/news/28426/hgst-10tb-drive-uses-custom-software-to-access-shingled-platters
 
They are working on a standard for SMR drives, something like NCQ, so the software must be using that. It is supposed to be in the drive firmware itself, but here they talk about a device driver on the computer.

SMR crams more data onto the platters by overlapping the individual tracks like rows of shingles on a roof. This layering is typically managed by the drive, without host-level intervention. That approach is great for compatibility, but according to HGST, it can result in inconsistent performance and dramatic slowdowns over time. The firm claims its host-managed solution can smooth out those wrinkles with custom software.

So they do admit there is a substantial penalty for SMR as well as inconsistent performance. Which Seagate denies exists.

Compared to He8's 205MB/s sequential specs, the Ha10's 157MB/s read rate and 68MB/s write speed are plodding at best and crawling at worst.
 
Compared to He8's 205MB/s sequential specs, the Ha10's 157MB/s read rate and 68MB/s write speed are plodding at best and crawling at worst.

68MB/s write speeds are still decent, considering 10TB. Once-a-week/month/year backups are the targeted use case here, and I think it's perfect.
 
I'd rather have a slow array of 2X capacity than a fast array of capacity X given 1U of room for my backup/archive solution.
 
68MB/s write speeds are still decent, considering 10TB. Once-a-week/month/year backups are the targeted use case here, and I think it's perfect.
I'd sweat rebuild times, if nothing else. Of course, like any new tech, initial users will really need the density and also have deep pockets.
 
68MB/s write speeds are still decent, considering 10TB.

Actually, it is the opposite. Larger storage devices should ideally have higher speeds than smaller ones, so that the time to fully write them does not increase too much as capacity grows.

At that speed, it would take about 41 hours to write the entire HDD. And it is not clear whether that is the top speed or the average speed. It is probably the top speed, in which case it may take 30-50% longer than 41 hours to write the full capacity.
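For what it's worth, here is the back-of-the-envelope math in Python (assuming decimal units and that 68MB/s is sustained):

```python
# Rough fill-time math for a 10TB drive at 68MB/s (decimal units assumed).
capacity_bytes = 10e12   # 10 TB
write_speed = 68e6       # 68 MB/s, taken here as a sustained rate

hours = capacity_bytes / write_speed / 3600
print(f"Full write at 68 MB/s: {hours:.1f} hours")  # ~40.8 hours

# If 68 MB/s is the outer-track top speed and the average runs
# 30-50% slower, the full write stretches accordingly:
for penalty in (1.3, 1.5):
    print(f"At {penalty:.0%} of the nominal time: {hours * penalty:.1f} hours")
```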

It does not bode well for future capacity increases that there are now two manufacturers doing SMR. Since SMR is so undesirable for performance and complexity reasons, the HDD manufacturers must be having a very difficult time increasing capacity if they are willing to resort to SMR.
 
I think he meant more that it was a reasonable tradeoff to make. It's still fast enough, and you get 10TB of space.

If it were cheap in $/GB, I'd be fine with that tradeoff.
 
If it were cheap in $/GB, I'd be fine with that tradeoff.

Not me. I'll stick with conventional drives. Now if only there were a consumer 6TB drive for less than $200. The 6TB WD Green is still around $230, and Toshiba's largest consumer drive is only 5TB (though I have seen it in the $150-160 range).

Of course, it would be nice if the 8TB HGST He drive (PMR) would come down to a reasonable price, but I doubt it will ever be as cheap per TB as consumer drives.
 
Well, if they want any adoption, they will have to. Still, I doubt it will be used with ZFS.
 
They've open-sourced the driver for this thing too!
https://github.com/hgst/libzbc

That is great! I doubt the prices are affordable for home users just yet, but being able to program your own write sequence is fantastic. The days of the Apple II and Commodore 64 come to mind, where people were able to do some amazing things with the drives because they had direct control of the write sequences. Unlike the WD drives, where it is their own programmers doing this, here we have the community doing it, and I am sure people will find ways to do things much better.

I can also see how these 10TB drives could be made into 12-14TB write-once drives. You could erase the drive and reuse it, but not change random sectors. You could technically create bands of 1TB or so, where you have to rewrite that entire 1TB (or whatever the band size is) to change any random data in there, losing some storage in the process. But a lot of things are possible.
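To make the band idea concrete, here is a toy sketch (not the real libzbc API; the band size and the reset-only rewrite rule are my own assumptions for illustration):

```python
# Toy model of a host-managed SMR band: writes must be sequential
# (tracked by a write pointer), and random rewrites are only possible
# by resetting and refilling the whole band. Sizes are hypothetical.

class Band:
    def __init__(self, size_blocks):
        self.size = size_blocks
        self.write_pointer = 0          # next block that may be written
        self.data = {}

    def append(self, block_data):
        """Sequential write at the write pointer -- the only legal write."""
        if self.write_pointer >= self.size:
            raise IOError("band full; reset required before rewriting")
        self.data[self.write_pointer] = block_data
        self.write_pointer += 1

    def reset(self):
        """Erase the whole band so it can be refilled from the start."""
        self.data.clear()
        self.write_pointer = 0

# A "write-once" usage pattern: fill sequentially, never touch it again,
# and reclaim space only by resetting entire bands.
band = Band(size_blocks=4)
for chunk in (b"a", b"b", b"c"):
    band.append(chunk)
band.reset()   # the only way to make the band writable again
```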
 
HGST 8TB Drives – Helium Makes Them Fly

The amount of Helium in a drive is not published, but it is tracked as SMART attribute 22, with both the raw and normalized values starting at 100. During our period of observation, the value of these attributes did not change.

We filled Storage Pod 902 with 45 HGST 8TB drives. That created our very first 360TB Storage Pod. If we built out a rack of 10 Pods we would have 3.6 Petabytes of storage in one full height rack. That would be awesome, but it won’t happen yet – more on that later in this post.

For the moment let’s look at how the HGST 8TB drives in Storage Pod 902 performed versus our current data load test champ, the Toshiba 5TB drives in Storage Pod 909. As always, we tracked the Storage Pods until they were 80% full of data.

A Storage Pod with Toshiba 5TB drives took 26 days to reach 80% capacity and loaded data at an average rate of 5.46 TB per day. The Storage Pod with HGST 8TB drives took 40 days to reach 80% capacity and loaded data at a rate of 5.66 TB per day. Given both Pods are similarly configured and both drives are 7200 rpm drives, perhaps there is something to using helium after all. Regardless, the HGST 8TB drives are our current data load test champion as Storage Pod 902 loaded on average 5.66 Terabytes of data per day.

There are many things to like about the HGST 8TB drives; the low power requirements, high storage capacity, and 5-year warranty lead the list. Currently the lowest street price we've found is $547.99 on Amazon, with some sites asking as much as $795 per drive. Using street prices, let's compare the cost per GB of the HGST to the different drives we currently use most often:



Street Price of Hard Drives Used at Backblaze

Drive Type/Size    Street Price    Cost/GB
HGST 8TB           $547.99         $0.068
Seagate 6TB        $239.99         $0.040
Seagate 4TB        $152.00         $0.038
HGST 4TB           $162.72         $0.041


https://www.backblaze.com/blog/hgst-8tb-drives-helium-makes-them-fly/
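Their cost-per-GB column checks out, by the way; here's a quick way to reproduce it (assuming 1TB = 1000GB):

```python
# Reproducing Backblaze's cost/GB column (assumes 1 TB = 1000 GB).
drives = {
    "HGST 8TB":    (547.99, 8000),
    "Seagate 6TB": (239.99, 6000),
    "Seagate 4TB": (152.00, 4000),
    "HGST 4TB":    (162.72, 4000),
}
for name, (price, gb) in drives.items():
    print(f"{name:12s} ${price:7.2f}  ${price / gb:.3f}/GB")
# HGST 8TB comes out to ~$0.068/GB, matching the table above.
```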
 
I could not disagree more. I would not buy or recommend these drives to anyone for any purpose.

The speeds are 70MB/s only for a fresh drive. Once the drive becomes full, deleting old files and writing new data makes performance inconsistent, which no one seems to look at. The drive has to rewrite the entire band, not just one track, so it will have to read and write something like 8 adjacent tracks. This is why they need the special NCQ extensions, which are not finalized yet. I think it will also have logical lookup tables like SSDs do, so the drive can do garbage collection when not in use. Having control of the writes from the computer will help you optimize all this. I hope they still leave that in once the standards for SMR drives are finalized; that way, those on Linux and the like will still be able to use their own optimizations rather than whatever the default drive firmware does.

Overall, I don't see this ending up in any mainstream storage, as the speeds will be really inconsistent and users won't know why. In a RAID rack type environment with constant writes, all the drives will be busy all the time. I know NTFS is supposed to write files in sequence, but even with a partially full drive it seems to write data all over the place if the file size is unknown, rather than trying to write it sequentially. Multitasking will kill the drive, since changing multiple bands at once will slow it down to a crawl. Because of the hype, most people have not realized the pitfalls of this type of drive. The prices are not all that much lower, either; without a 50% lower price it just won't be worth the trouble. Even for home use the garbage collection penalty would be high, as the drives have to stay awake longer and wake up during idle times.

Another thing to watch is 4K sectors that are not properly aligned. For the strangest reason, even drives from the factory were not up to spec on this; the one I got had a starting sector of 63, which made the drive very slow. They can use MBR on drives larger than 2TB (with 4K sectors, up to 16TB), but the least they can do is verify that the partitions are properly aligned. Companies have been very tight-lipped about SMR; mostly they don't answer any questions or interact with users who have found problems. Reformatting the drive with properly aligned partitions made it write at 120MB/s instead of the under-50MB/s I was getting. So it works as just a backup drive, not for frequent use. You will have to manually align the partition to 4K sector boundaries; using the auto alignment did not partition the drive properly, even though Windows is supposed to be 4K-aware now. All this would not be so confusing if companies were forthcoming about the problems and pitfalls and how to do things correctly. They could at the very least ship properly working drives. I thought I had a bad drive when I first got it, seeing how my other drives were 3-4 times faster, and it took a long time to figure things out.
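To put a rough number on that band-rewrite penalty (a toy model; the 8-track band and reusing the 157MB/s spec as the media rate are my assumptions, not published figures):

```python
# Toy estimate of SMR write amplification: changing one track in a
# band forces the drive to read the preserved tracks and rewrite the
# whole band. Band size and media rate are assumptions for illustration.

tracks_per_band = 8
media_rate_mb_s = 157            # the Ha10's sequential read spec, reused here

def effective_write_speed(tracks_changed=1):
    """MB/s seen by the host when a random write hits one band."""
    # Read the (band - changed) tracks that must be preserved,
    # then rewrite all tracks in the band.
    tracks_moved = tracks_per_band + (tracks_per_band - tracks_changed)
    return media_rate_mb_s * tracks_changed / tracks_moved

print(f"{effective_write_speed():.1f} MB/s for a one-track update")
# ~10.5 MB/s -- which is why random rewrites slow to a crawl.
```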
 
I wonder if an actual market for commercial disk defrag software will come back into existence. It seems like they'd need to play Towers of Hanoi with the bands. Maybe the OS could do filesystem profiling to put frequently changed data into different bands to minimize band rewrites. But do any filesystems count the number of times a file has changed? I only see when files were last changed vs. created, but I think a saturating counter of times changed in the inode would be useful.
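A saturating change counter would be simple enough; here's a hypothetical sketch (no real filesystem keeps this today, as far as I know):

```python
# Sketch of a per-file saturating change counter, as proposed above:
# bump on every modification, cap at a maximum, and use the value to
# steer hot files away from cold SMR bands. Entirely hypothetical.

SATURATE_AT = 255   # fits in one byte of an inode

class InodeStats:
    def __init__(self):
        self.change_count = 0

    def on_write(self):
        # Saturating increment: never wraps, so "very hot" stays hot.
        if self.change_count < SATURATE_AT:
            self.change_count += 1

    def placement_hint(self):
        """Crude hot/cold split for band placement."""
        return "hot-band" if self.change_count > 16 else "cold-band"

inode = InodeStats()
for _ in range(20):
    inode.on_write()
print(inode.change_count, inode.placement_hint())   # 20 hot-band
```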

Somewhere on the net I read that early pre-production SMR drives wouldn't let you write to more than one drive per rack due to vibration. I wonder if this has really been solved in the commercially available drives, and/or if they have mounting requirements, like the drive needing to be attached to a frame with enough mass to buffer vibration through inertia. Maybe attach a cinder block to the drive for better performance?
 
Are we sure the 68MB/s write speed is the best-case speed? I surely hope it's the average or some similar measure. SMR sucks (in my opinion), but it should normally be fast in the best case, especially with all that cache. A bit like tape.
 
They are normal speeds when the drive is blank and properly partitioned; you can copy files to it just as fast as a normal drive. It is when you go for normal usage that the speeds drop drastically. I am not sure how to prevent that, i.e., how to keep Windows from fragmenting the files, as Windows itself will write a file all over the place, and once it gets that way there is no easy way to go back. This is where it will work well for companies who use their own programs and have their own control; they won't notice the hit, because they control the writes and how they are done. They will map overwritten sectors to logical sectors so as not to have to fix the banding problem right away. Doing a disk backup will be fast on a blank drive, but once it has data on it, it slows down a lot. Many backup programs make incremental backups, make a full backup once a month, and then delete the older backups, and this drive will be slow at that. But if you want to keep data forever, it is great.
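That remapping trick would look roughly like an SSD's flash translation layer. A minimal sketch, with the staging-area details being my guess at how drive-managed firmware might do it:

```python
# Minimal sketch of logical-to-physical remapping, SSD-FTL style:
# overwrites land in a spare staging area and only a lookup table is
# updated; the expensive band rewrite is deferred to idle-time GC.
# Layout details here are guesses, not HGST's actual firmware design.

class RemapLayer:
    def __init__(self):
        self.table = {}        # logical block -> index of staged copy
        self.staged = []

    def write(self, lba, data):
        # Fast path: append to the staging area, remap the LBA.
        self.staged.append(data)
        self.table[lba] = len(self.staged) - 1

    def read(self, lba, band_read):
        # A staged copy wins; otherwise fall through to the shingled band.
        if lba in self.table:
            return self.staged[self.table[lba]]
        return band_read(lba)

    def garbage_collect(self, rewrite_band):
        # Idle-time work: fold staged blocks back into their bands
        # (the slow read-modify-write), then empty the staging area.
        rewrite_band(self.table)
        self.table.clear()
        self.staged.clear()

rl = RemapLayer()
rl.write(42, b"new data")
print(rl.read(42, band_read=lambda lba: b"old data"))   # b'new data'
```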

Maybe the new standard for NCQ and logical sector addressing, like that used in SSDs but adapted for SMR drives, will make these work better. Since Microsoft is on the standards body, I am sure they will have some input on user habits.

We are missing the main criterion here: the PRICES have to be far lower for this to be worth it. Why pay almost as much when the usage is dumbed down to just cold storage and worthless for anything else?
 
Exactly. If we were talking about cheap drives, then I would simply keep two backups of everything and wipe the drives before making the new alternate backup; SMR would not be a problem then.
 