Seagate’s Second Gen Mach.2 Drives Are as Fast as SATA SSDs

SSDs are certainly the long-term future for almost all applications.

Medium term, however, hard drives will still be around for mass storage for some time. Maybe another decade or two.

Thankfully, the days of being forced to boot and run your OS off of a hard drive are mostly over, though.
The all-SSD future can't come fast enough. However, if the silicon shortage has shown us anything, it's that when push comes to shove there isn't enough manufacturing capacity to pump out all the devices everyone needs.
It's been well over a decade at this point and we still don't have consumer-class 4TB drives that don't cost more than some laptops, let alone 16+TB densities. It's not for lack of ability; it's obviously the insane cost of putting that many chips on a single NVMe or SAS drive. It's a thing in the high-end rack world, but the trickle-down is still a long way off.

I think your assessment of closer to two decades is more likely than the shorter one, as much as I hate the very idea of it. The only way that reverses is if all of these new fabs spin up and the channels get inundated with product, driving SSD prices into the ground. But I highly doubt that's going to happen.
 
Uh...no. Seek time doesn't change. It's not two actuators scrubbing the same platters; each actuator is dedicated to half the platters.

View attachment 528167

So, seek time is the same, but you get double the IOPS and throughput. At the expense, of course, of segmenting your storage in half and risking weird and novel failure modes. Would you trust data to a drive that appears half-crashed and half-functional?
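If it helps to see that logic in numbers, here's a toy model (the seek time and per-actuator throughput figures below are illustrative assumptions, not Seagate's actual specs):

Code:
# Toy model of a dual-actuator drive: each actuator owns half the platters
# and works independently. All figures are illustrative assumptions.
seek_ms = 8.0        # average seek + rotational latency per actuator (ms)
seq_mb_s = 250       # sequential throughput per actuator (MB/s)

single_iops = 1000 / seek_ms      # one actuator: ~125 random IOPS
dual_iops = 2 * single_iops       # two independent actuators: ~2x the IOPS
dual_seq = 2 * seq_mb_s           # ~2x the sequential throughput

print(f"Random IOPS: {single_iops:.0f} -> {dual_iops:.0f}")
print(f"Sequential:  {seq_mb_s} -> {dual_seq} MB/s")
print(f"Seek time:   {seek_ms} ms -> {seek_ms} ms (unchanged)")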

Ah, well, that's not as much fun, but it makes sense. I had it in my head that they were going to put the second actuator pivoting from the other corner of the disk for some reason. But top platters vs. bottom platters is a lot simpler.

From experience with dying hard drives, if it's broken in one place, the whole thing is going to break in only a matter of time. Head crashes produce debris that ruins the rest of the drive before long. I'd be surprised if the top platters could be dead and the bottom ones functional for long.
 
The all-SSD future can't come fast enough. However, if the silicon shortage has shown us anything, it's that when push comes to shove there isn't enough manufacturing capacity to pump out all the devices everyone needs.
The problem with SSDs is cost, but recently the prices have dropped by nearly half. I picked up a 1TB SSD from MicroCenter for only $50. But a 4TB mechanical drive is also $50. The silicon shortage isn't affecting SSD pricing.
I think your assessment of closer to two decades is more likely than the shorter one, as much as I hate the very idea of it. The only way that reverses is if all of these new fabs spin up and the channels get inundated with product, driving SSD prices into the ground. But I highly doubt that's going to happen.
Kinda already is. The fall of Bitcoin and the coming recession are certainly going to help drive down prices. You'd be surprised what a market correction can do.

 
As others have mentioned, Seagate DOES have an SSD line.

They are even part owners of Kioxia, which makes their flash memory for them. SandForce is also a wholly owned subsidiary of Seagate these days, and it was the premier SSD controller maker about a decade ago. They also have access to controllers through Kioxia.

This puts them in a class above most SSD brands, which just sell rebranded stuff.

The very top of the SSD market is easily in the hands of Samsung and, to a lesser extent, Intel's old SSD division, which is now owned by SK Hynix.

I'd consider Seagate's SSDs a second tier just below these two, but above most "gamer" badge-engineered SSDs. Others in this second-tier category include WD, Crucial, and Micron, and, to a lesser extent, even Dell.

Most people seem happy with their third-tier "gamer brand rebadge" drives, though.


SSDs are certainly the long-term future for almost all applications.

Medium term, however, hard drives will still be around for mass storage for some time. Maybe another decade or two.

Thankfully, the days of being forced to boot and run your OS off of a hard drive are mostly over, though.
Thanks, I could have phrased my point better. I am aware Seagate started making SSDs.

However, they got into the market far too late and they still have very few options. They need to start developing more drives and marketing the hell out of them: slash resources on traditional magnetic drives and repurpose them for SSDs. Magnetic storage WILL lose the race to SSDs eventually, for most consumer applications.

If possible, leverage their brand name to rebadge an Intel/Samsung product - people will buy it. They could have done that early on. Now Samsung/Intel have much less incentive to partner with them. WD was smart to get on board early.

Was not aware they bought SandForce - that's cool.
 
I could see myself using these dual-actuator nearline SAS drives; however, my HBA is old and outdated, so I'd have to upgrade to make use of them. But I'm thinking eight 4TB disks at these speeds would saturate 10-gigabit Ethernet and yawn.
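Quick sanity-check math (treating each drive as doing roughly 500 MB/s, which is the "as fast as a SATA SSD" ballpark from the headline; purely illustrative figures):

Code:
# Back-of-the-envelope: 8 dual-actuator drives vs. a 10GbE link.
# ~500 MB/s per drive is an assumption, not a measured spec.
drives = 8
mb_s_per_drive = 500

array_mb_s = drives * mb_s_per_drive   # ~4000 MB/s raw, ignoring RAID/parity overhead
ten_gbe_mb_s = 10_000 / 8              # 10 Gbit/s ~= 1250 MB/s, ignoring protocol overhead

print(f"Array (raw): {array_mb_s} MB/s")
print(f"10GbE link:  {ten_gbe_mb_s:.0f} MB/s")
print(f"Oversubscription: {array_mb_s / ten_gbe_mb_s:.1f}x")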
 
Thanks, I could have phrased my point better. I am aware Seagate started making SSDs.

However, they got into the market far too late and they still have very few options. They need to start developing more drives and marketing the hell out of them: slash resources on traditional magnetic drives and repurpose them for SSDs. Magnetic storage WILL lose the race to SSDs eventually, for most consumer applications.

If possible, leverage their brand name to rebadge an Intel/Samsung product - people will buy it. They could have done that early on. Now Samsung/Intel have much less incentive to partner with them. WD was smart to get on board early.

Was not aware they bought SandForce - that's cool.
Seagate isn't worried about making money from gamers. If every gamer in the world bought an SSD at the same time, all from Seagate, it would amount to a fraction of a percent compared to their real customers: enterprises.
 
I could see myself using these dual-actuator nearline SAS drives; however, my HBA is old and outdated, so I'd have to upgrade to make use of them. But I'm thinking eight 4TB disks at these speeds would saturate 10-gigabit Ethernet and yawn.
But things get more interesting when you start looking at these in RAID 5, 6, 50, and 60, where write speeds are limited to the speed of the single slowest drive in the array.
 
But things get more interesting when you start looking at these in RAID 5, 6, 50, and 60, where write speeds are limited to the speed of the single slowest drive in the array.
...and then less interesting when you realize that you'd actually be running a RAID 05, 06, 050, or 060. The further down that list I get, the more Lovecraftian these things sound.


Storage topologies are not supposed to have a Lament Configuration.
 
I could see myself using these dual-actuator nearline SAS drives; however, my HBA is old and outdated, so I'd have to upgrade to make use of them. But I'm thinking eight 4TB disks at these speeds would saturate 10-gigabit Ethernet and yawn.

Server-pull HBAs are dirt cheap on eBay, and I've never had a problem with the ones I've bought.

Back in the day I used to buy old IBM ServeRAID M1015s and crossflash them into some sort of LSI 92xx-8i. I have a few of those. They're actually fine, because unless you have drives (or a SAS expander) that can take advantage of 12Gb/s, newer ones won't make a difference anyway.

More recently I've been using a few LSI 9300-8i's: two in my backup server and one in my testbench machine.

My main server uses an LSI 9305-24i.

The 9300-8i in my testbench machine recently started randomly dropping drives after years of use. I was using it with an Intel expander, so I'm not sure if it was the HBA or the expander that went bad.

I hopped on eBay to see what was out there and at what price. Wound up getting a 9300-16i for only $95, which is great. No more expander, which is simpler, and simpler is usually better and more reliable.

That said, apparently unlike the 9305-16i, where everything is on one chip, the 9300-16i is just two 9300-8i's behind a PLX chip, and it runs pretty hot... (It actually has a 6-pin PCIe power connector on it for supplemental power!)

Maybe not "simpler" after all, but the cheapest 9305-16i's are about 4x the price, so I'll deal for now.

I decided to strap a little fan to it using zip ties just to keep temps in check.
 
But things get more interesting when you start looking at these in RAID 5, 6, 50, and 60, where write speeds are limited to the speed of the single slowest drive in the array.

That's why in 2022 you don't run hardware RAID anymore :p
 
But things get more interesting when you start looking at these in RAID 5, 6, 50, and 60, where write speeds are limited to the speed of the single slowest drive in the array.
Those actually can, to some extent, get away from that limitation because they stripe the parity, though these would help there too. One area I can think of that IS limited, and where they'd be nice, is RAID-TEC from NetApp. NetApp uses dedicated parity disks because it makes rebuilds much faster and less likely to cause further failures in the array. It does mean that single-disk speed limits writes, though (they overcome that with lots of caching and write batching). These drives would be great for that. I'll have to keep an eye on it and see if NetApp starts using them. We are set for storage for a while here, so we won't buy them, but it'll be interesting to watch.

I think too many people here just have never dealt with REALLY big data storage. All the "just use SSD" folks are thinking small. Yes, for 1TB an SSD isn't much more than an HDD: $50 gets you a cheap 1TB SSD, $40 gets you a 1TB HDD. Go up to the capacity of these drives and go enterprise, and the SSD is a lot more expensive. A Micron 9300 15TB SSD is going to run you $3k; a Seagate Exos 16TB SAS HDD is going to run you $375.

Maybe still worth it, you say, since the Micron drive is 10x the speed of the Seagate. But then you have a setup where you need not 15TB of storage but 500TB or a PB, and it needs to be reliable, so you have 3 drives out of every 18-20 for parity. You have 50, 100, or even more disks. Now the difference in disk cost could be $300k vs. $37k, never mind the cost of the controllers/processors needed to sustain the high speeds of the SSDs. You could easily end up with an SSD solution that is over a million dollars all said and done, or an HDD solution that is under $100k. Perhaps you decide that since you don't need the speed for your given application, you'd like that $900k for some other use.
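To put rough numbers on that (using the per-drive prices above; the drive counts and parity layout are ballpark assumptions, so the totals only show the scale of the gap):

Code:
# Rough cost comparison for ~1PB usable, using the per-drive prices quoted above.
# Assumptions: ~15TB usable per drive, 3 parity drives per 20-drive group.
import math

usable_tb = 1000          # ~1PB target
group_size = 20
parity_per_group = 3
data_tb_per_group = (group_size - parity_per_group) * 15   # ~255TB per group

groups = math.ceil(usable_tb / data_tb_per_group)
total_drives = groups * group_size

print(f"Drives needed: {total_drives}")
print(f"SSD build (@ $3,000/drive): ${total_drives * 3000:,}")
print(f"HDD build (@ $375/drive):   ${total_drives * 375:,}")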


Don't get me wrong, flash storage is GREAT. It is all I use in my desktop system, and it is all we buy for desktops at work. However, sometimes you have big data storage needs. It doesn't need to be lightning fast, it doesn't need high IOPS, but it does need to hold lots of shit. Magnetic drives are still king there, and not by a little bit. For even bigger data that's needed less often, good old tape is still on top.

That's why in 2022 you don't run hardware RAID anymore :p
Plenty of reasons to still run it, and it is still a thing. Dell/LSI just rolled out a new generation of hardware RAID controllers for their servers, because customers use them.
 
For small things, home or small business, ZFS is fine… but hardware RAID still has its place. My smallest NAS units are running 48TB of storage in RAID 6; my largest is 144TB in RAID 50. Each takes multiple 10G SFP+ connections direct from multiple host systems. I quite literally can't afford those numbers in solid-state drives.
Small business? ZFS/NetApp is everywhere, especially in the enterprise. I couldn't have been happier when I moved off of hardware RAID.
 
Server-pull HBAs are dirt cheap on eBay, and I've never had a problem with the ones I've bought.

Back in the day I used to buy old IBM ServeRAID M1015s and crossflash them into some sort of LSI 92xx-8i. I have a few of those. They're actually fine, because unless you have drives (or a SAS expander) that can take advantage of 12Gb/s, newer ones won't make a difference anyway.

More recently I've been using a few LSI 9300-8i's: two in my backup server and one in my testbench machine.

My main server uses an LSI 9305-24i.

The 9300-8i in my testbench machine recently started randomly dropping drives after years of use. I was using it with an Intel expander, so I'm not sure if it was the HBA or the expander that went bad.

I hopped on eBay to see what was out there and at what price. Wound up getting a 9300-16i for only $95, which is great. No more expander, which is simpler, and simpler is usually better and more reliable.

That said, apparently unlike the 9305-16i, where everything is on one chip, the 9300-16i is just two 9300-8i's behind a PLX chip, and it runs pretty hot... (It actually has a 6-pin PCIe power connector on it for supplemental power!)

Maybe not "simpler" after all, but the cheapest 9305-16i's are about 4x the price, so I'll deal for now.

I decided to strap a little fan to it using zip ties just to keep temps in check.

And if anyone is curious, this is what an LSI 9300-16i looks like with a Noctua NF-A9x14 HS-PWM zip-tied to it.

It's a little ghetto, but it almost reminds me of an old school video card.


It does block the neighboring slot, which is too bad, but that was only an x4 slot I was previously using to hold the Intel SAS expander, so it isn't strictly necessary.

The port I plugged it into seems to like a PWM duty cycle of about 45% when idle, which results in about 1200 rpm. This takes the controller from heat bordering on the pain threshold when touched (though it didn't result in a blister) to a more reasonable lukewarm temp, and it's inaudible unless you're shoving your ear inside the case.
 
Plenty of reasons to still run it, and it is still a thing. Dell/LSI just rolled out a new generation of hardware RAID controllers for their servers, because customers use them.

I guess I just got tired of troubleshooting proprietary RAID formats.

I prefer just being able to attach my drives to a machine using any interface available (onboard SATA, a SAS HBA, heck, even USB) in order to rescue data or otherwise deal with the drives.

That, and ZFS has great performance, great flexibility, and overwhelmingly superior protection against bit rot compared to hardware RAID.
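If it's not obvious why the checksumming matters, here's a toy sketch of the end-to-end checksum idea ZFS uses to catch bit rot (just the concept, not actual ZFS code):

Code:
# Toy illustration of end-to-end checksumming, the idea behind ZFS's bit rot
# detection. Not ZFS code -- just the concept: keep a checksum alongside the
# pointer to each block and verify it on every read.
import hashlib

def write_block(data: bytes):
    # ZFS stores the checksum in the parent block pointer, not with the data.
    return data, hashlib.sha256(data).hexdigest()

def read_block(data: bytes, stored_checksum: str) -> bytes:
    # A mismatch means silent corruption; ZFS would then repair the block
    # from a redundant copy (mirror or RAID-Z) instead of returning bad data.
    if hashlib.sha256(data).hexdigest() != stored_checksum:
        raise IOError("checksum mismatch: bit rot detected")
    return data

block, csum = write_block(b"important data")
read_block(block, csum)                    # passes verification
# read_block(b"importent data", csum)      # would raise: checksum mismatch

A typical hardware RAID controller mostly trusts whatever the disk returns, which is the gap being described here.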

The only advantage of hardware RAID I can think of is offloading the RAM and CPU load that software solutions need.
 
Those actually can, to some extent, get away from that limitation because they stripe the parity, though these would help there too. One area I can think of that IS limited, and where they'd be nice, is RAID-TEC from NetApp. NetApp uses dedicated parity disks because it makes rebuilds much faster and less likely to cause further failures in the array. It does mean that single-disk speed limits writes, though (they overcome that with lots of caching and write batching). These drives would be great for that. I'll have to keep an eye on it and see if NetApp starts using them. We are set for storage for a while here, so we won't buy them, but it'll be interesting to watch.

Shouldn't affect the rebuild time, as that should be limited by the write speed of the spare. And it shouldn't be any less likely to fail a rebuild, as you're pulling no less data from no fewer drives. (Okay, maybe one fewer, if the NetApp appliance doesn't have to hit the second parity drive.) The only real advantage is not having to restripe when expanding the array. (Though yes, restriping kinda sucks...)
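For a sense of scale, the floor on a rebuild really is just capacity divided by the spare's sustained write speed (the 16TB size and ~250 MB/s figure below are assumptions for illustration):

Code:
# Best-case rebuild time when the spare's write speed is the bottleneck.
# 16TB drive and ~250 MB/s sustained write are illustrative assumptions.
capacity_tb = 16
write_mb_s = 250

seconds = capacity_tb * 1_000_000 / write_mb_s   # 1 TB = 1,000,000 MB (decimal)
print(f"Best-case rebuild: {seconds / 3600:.1f} hours")   # ~17.8 hours

Anything that slows reads from the surviving drives only pushes that number up.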

I think too many people here just have never dealt with REALLY big data storage. All the "just use SSD" folks are thinking small. Yes, for 1TB an SSD isn't much more than an HDD: $50 gets you a cheap 1TB SSD, $40 gets you a 1TB HDD. Go up to the capacity of these drives and go enterprise, and the SSD is a lot more expensive. A Micron 9300 15TB SSD is going to run you $3k; a Seagate Exos 16TB SAS HDD is going to run you $375.

Maybe still worth it, you say, since the Micron drive is 10x the speed of the Seagate. But then you have a setup where you need not 15TB of storage but 500TB or a PB, and it needs to be reliable, so you have 3 drives out of every 18-20 for parity. You have 50, 100, or even more disks. Now the difference in disk cost could be $300k vs. $37k, never mind the cost of the controllers/processors needed to sustain the high speeds of the SSDs. You could easily end up with an SSD solution that is over a million dollars all said and done, or an HDD solution that is under $100k. Perhaps you decide that since you don't need the speed for your given application, you'd like that $900k for some other use.

If your throughput needs are low enough that spinning rust will do the job, you don't need to throw the better part of a million dollars at controllers to keep the SSDs fed. A middling-tier HBA or two in a bog-standard server will suffice. It'd cost pretty much the same to feed and house the drives.

Don't get me wrong, flash storage is GREAT. It is all I use in my desktop system, and it is all we buy for desktops at work. However, sometimes you have big data storage needs. It doesn't need to be lightning fast, it doesn't need high IOPS, but it does need to hold lots of shit. Magnetic drives are still king there, and not by a little bit. For even bigger data that's needed less often, good old tape is still on top.


Plenty of reasons to still run it, and it is still a thing. Dell/LSI just rolled out a new generation of hardware RAID controllers for their servers, because customers use them.

There is a window between "I need an HDD for my desktop" and "how many bits can I cram in a server rack" where hardware RAID controllers are still useful. But once you've grown beyond filling all the drive sleds in the one-box-does-everything server sitting in the back room, hardware RAID kinda falls by the wayside.


And if anyone is curious, this is what an LSI 9300-16i looks like with a Noctua NF-A9x14 HS-PWM zip-tied to it.

View attachment 528298

View attachment 528299

It's a little ghetto, but it almost reminds me of an old school video card.

View attachment 528302

It does block the neighboring slot, which is too bad, but that was only an x4 slot I was previously using to hold the Intel SAS expander, so it isn't strictly necessary.

The port I plugged it into seems to like a PWM duty cycle of about 45% when idle, which results in about 1200 rpm. This takes the controller from heat bordering on the pain threshold when touched (though it didn't result in a blister) to a more reasonable lukewarm temp, and it's inaudible unless you're shoving your ear inside the case.

Ooh. Zip ties. Aren't we fancy...

Don't ask about the 80mm case fan free-standing in front of the opened geriatric HP Microserver in the basement. It's not the only thing keeping its P222 RAID controller from throttling. Everything's fine.

I should do something about that, someday...
 
Ooh. Zip ties. Aren't we fancy...

Don't ask about the 80mm case fan free-standing in front of the opened geriatric HP Microserver in the basement. It's not the only thing keeping its P222 RAID controller from throttling. Everything's fine.

I should do something about that, someday...

Lol.

Did we have a "ghetto mod" thread for things like that somewhere? I can't remember.

Reminds me of ~1997, when I stole a couple of my mom's hair ties (they happened to be the right size) and used them and some matchsticks to attach the HSF from my old 486 to my 6MB Voodoo1, and wound up with insane overclocks :p
 