136TB of NAND flash in a 1U storage array

evilsofa

[H]F Junkie
Joined
Jan 1, 2007
Messages
10,078
Skyera has released its skyHawk FS line, which packs up to 136TB of NAND flash in a 1U storage array:

[Image: Skyera_skyHawkFS_left_web.png]


They come in capacities of 16, 32, 68 and 136TB. But don't reach for your credit card quite so fast; at $2.99 per GB, that's $47,840 for the 16TB model and $406,640 for the 136TB model.
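For anyone sanity-checking the list pricing, here's a quick back-of-the-envelope in Python (assuming decimal units, 1TB = 1000GB, which is how the quoted figures work out):

Code:
# Skyera skyHawk FS list pricing at $2.99 per GB (decimal units assumed).
PRICE_PER_GB = 2.99
for capacity_tb in (16, 32, 68, 136):
    price = capacity_tb * 1000 * PRICE_PER_GB
    print(f"{capacity_tb:>4} TB -> ${price:,.0f}")
# Output:
#   16 TB -> $47,840
#   32 TB -> $95,680
#   68 TB -> $203,320
#  136 TB -> $406,640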
 
Probably the Gubbament will be ordering these.. as they love overpaying for shit. NSA and others will snatch them up eagerly no doubt.
 
Why didn't I think of this? Buy SSDs for under 50 cents per GB and charge 6 times that in a fancy package.
 
Don't forget to add the yearly licensing and support fees on top of the purchase price.
 
I doubt it's that much money. I remember when we were buying our EMC SAN, the price was ludicrous, but once we started talking to the rep and the haggling began it quickly fell back down to earth.

I wonder how they fit 136TB into a 1U? I wish I could see the inside of this thing.
 
It would cost you around $88,400 to buy 136TB worth of 1TB Samsung Evo Pro 2.5" SSDs. Assuming you could find the necessary RAID controllers and other components to fit it all into one box for $11,600, you could build an equivalent appliance for 1/4 the cost. However, it probably wouldn't be a 1U form factor, and I don't know if there are RAID controllers that can handle 136 SSDs in the same motherboard/physical box.

You might be able to fit 136 of the mSATA Samsung Evo drives, but you would be playing with fire since these SSDs aren't the Pro version.
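For what it's worth, a rough sketch of that drives-only math (the ~$650 per 1TB consumer SSD is my assumption; chassis, controllers, power and RAID overhead are all ignored):

Code:
# Rough DIY-vs-appliance comparison. The per-drive price is an assumption;
# chassis, HBAs/RAID controllers, CPU, RAM and parity overhead are ignored.
DRIVE_PRICE = 650          # USD per 1TB 2.5" consumer SSD (assumed)
DRIVES = 136               # raw 1TB drives for 136TB, no spares/parity
APPLIANCE_PRICE = 406_640  # Skyera 136TB list price at $2.99/GB

drives_only = DRIVES * DRIVE_PRICE
print(f"Drives only: ${drives_only:,}")                               # $88,400
print(f"Appliance vs. drives: {APPLIANCE_PRICE / drives_only:.1f}x")  # ~4.6x

Add the $11,600 assumed above for everything else and you land right around the quoted 1/4-the-cost figure.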
 
It would cost you around $88,400 to buy 136TB worth of 1TB Samsung Evo Pro 2.5" SSDs. Assuming you could find the necessary RAID controllers and other components to fit it all into one box for $11,600, you could build an equivalent appliance for 1/4 the cost. However, it probably wouldn't be a 1U form factor, and I don't know if there are RAID controllers that can handle 136 SSDs in the same motherboard/physical box.

You might be able to fit 136 of the mSATA Samsung Evo drives, but you would be playing with fire since these SSDs aren't the Pro version.

You would trust TLC in your production SAN?
 
You can bet your ass they're using expanders in there. Seems that most RAID controller companies, the ones I'm aware of at least, are moving away from high port count controllers. Want to say the highest is 16 native ports now, maybe 24. We looked at a box from AIC that would support 64 or 92 (can't quite recall) 3.5" drives; however, due to all the expanders being used, the performance was abysmal for our needs. Know they're getting better, but IMHO expanders = bad.

Oh, and I love how they claim anything close to price parity with HDDs. :p
 
Another worthless company cropping up in the Virtual Everything, Cloud Computing, SAN-this-NAS-that, Something-as-a-Service era.
 
Anyone can string together a bunch of SSDs, some expander cards, some Linux distro, and use some chumpy ZFS solution. What makes or breaks a company in this space is uptime, reliability, expandability, and support when it takes a dump.

While yes, there are plenty of companies coming out with all-flash arrays, they will all be swallowed up by the players in that space that matter: EMC, HDS, NetApp.
 
While yes, there are plenty of companies coming out with all-flash arrays, they will all be swallowed up by the players in that space that matter: EMC, HDS, NetApp.
You forget that you can only pick two out of fast, reliable and cheap. And there's always demand for cheap.
 
Everyone in this thread is fucking retarded.

Why didn't I think of this? Buy SSDs for under 50 cents per GB and charge 6 times that in a fancy package.

You think you can get 136 TB into 1U with SSDs? Even the densest systems aren't stuffing more than about 10 2.5" drives into 1U. This is being done with DIMMs.

It would cost you around $88,400 to buy 136TB worth of 1TB Samsung Evo Pro 2.5" SSDs.

Yeah, just go ahead and toss some consumer drives into an array, it'll totally perform the same as DIMMs. Do you have any storage experience outside of the desktop space? Enterprise SSDs behave entirely differently when it comes to handling contention and higher queue depths.
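If anyone wants to see the queue-depth effect rather than argue about it, a rough sketch like the one below will do for illustration; it just fires concurrent 4K random reads at whatever file or device path you give it. The path is hypothetical, and for honest numbers you'd want direct I/O and a proper tool like fio, since the page cache will flatter these results.

Code:
# Crude queue-depth illustration: N concurrent 4K random reads, report IOPS.
# TARGET is whatever large file/device you point it at (assumed path below);
# reads go through the page cache, so treat the numbers as illustrative only.
import os, random, time
from concurrent.futures import ThreadPoolExecutor

TARGET = "/path/to/large/testfile"   # hypothetical, substitute your own
BLOCK = 4096
IOS_PER_WORKER = 2000

def worker(fd, size):
    for _ in range(IOS_PER_WORKER):
        offset = random.randrange(0, (size - BLOCK) // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)

fd = os.open(TARGET, os.O_RDONLY)
size = os.fstat(fd).st_size
for qd in (1, 4, 16, 32):
    start = time.time()
    with ThreadPoolExecutor(max_workers=qd) as pool:
        for _ in range(qd):
            pool.submit(worker, fd, size)
    elapsed = time.time() - start
    print(f"QD {qd:>2}: {qd * IOS_PER_WORKER / elapsed:,.0f} IOPS")
os.close(fd)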

You can bet your ass they're using expanders in there. Seems that most RAID controller companies, the ones I'm aware of at least, are moving away from high port count controllers. Want to say the highest is 16 native ports now, maybe 24. We looked at a box from AIC that would support 64 or 92 (can't quite recall) 3.5" drives; however, due to all the expanders being used, the performance was abysmal for our needs. Know they're getting better, but IMHO expanders = bad.

Oh, and I love how they claim anything close to price parity with HDDs. :p

There's not a single SAS interface inside the chassis, but I'm guessing you didn't do even a second of research before rendering your opinion. And yeah, expanders = bad... that's why they can be found in seven-figure enterprise arrays from NetApp, EMC, etc.

Anyone can string together a bunch of SSDs, some expander cards, some Linux distro, and use some chumpy ZFS solution. What makes or breaks a company in this space is uptime, reliability, expandability, and support when it takes a dump.

While yes, there are plenty of companies coming out with all-flash arrays, they will all be swallowed up by the players in that space that matter: EMC, HDS, NetApp.

I'm done... Jesus Christ, why even bother to innovate anymore since obviously EMC and NetApp have this one in the bag. Yep, no reason to even try anymore. This box doesn't have SSDs, doesn't use expanders, and doesn't use ZFS.

[H] really doesn't know what it's talking about in the enterprise space since the most you kids do is RAID-0 a pair of SSDs to reduce those map load times. Y'all need to stop.
 
Everyone in this thread is fucking retarded.

There's not a single SAS interface inside the chassis, but I'm guessing you didn't do even a second of research before rendering your opinion. And yeah, expanders=bad...that's why they can be found in 7 figure enterprise arrays from NetApp, EMC, etc.

Butt hurt much there, buddy? You work for one of these outfits perhaps?

Actually, I do know a thing or two about storage. Don't claim to be a flat-out expert though. Will say we use absolute shitloads of it, custom built, that you would jizz in your shorts for. Petabyte? *pfft* We're beyond that. Here's the thing about expanders: yes, they're great for making lots and lots of space, but they suck for performance. We care about performance. As in 4 gigaBYTES per second sustained write. Expanders aren't your friend in this department.

So, in both of our cases we're right and wrong at the same time. You're right that expanders have their use for making big-ass arrays... like the 90+ drive boxes I have sitting around. On the other hand, I'm right in that they're fucking awful for performance.
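The bandwidth math behind that is simple enough. A sketch with SAS-2 numbers (drive count and per-drive throughput are assumptions, adjust for your own gear):

Code:
# Back-of-the-envelope expander oversubscription, SAS-2 assumptions.
LANE_GBPS = 6.0       # SAS-2 line rate per lane
LANES = 4             # typical x4 wide uplink through the expander
ENCODING = 0.8        # 8b/10b encoding efficiency
DRIVES = 90           # drives behind the expander (assumed)
DRIVE_MBPS = 180      # sustained MB/s per drive (assumed)

uplink_mbps = LANE_GBPS * LANES * ENCODING * 1000 / 8   # Gb/s -> MB/s
aggregate_mbps = DRIVES * DRIVE_MBPS
print(f"Uplink:  {uplink_mbps:,.0f} MB/s")        # 2,400 MB/s
print(f"Drives:  {aggregate_mbps:,} MB/s")        # 16,200 MB/s
print(f"Oversubscription: {aggregate_mbps / uplink_mbps:.1f}x")  # ~6.8x

With everything funneling through one ~2.4GB/s uplink, a 4GB/s sustained-write target is dead on arrival no matter how many drives sit behind it.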

BTW, in my industry NetApp/EMC is pocket change compared to what we sell, and we do use NetApp in quite a few of our products for run-of-the-mill crap. You act like they're somehow so very, very special. They're not.

In the end, chill out. Jesus H. Christ.
 
RAM box... that's nuts

but now with 1.8" SSD drives in the likes of Dell 730* storage is growing and growing!
 
Everyone in this thread is fucking retarded.

<snipped for brevity>

[H] really doesn't know what it's talking about in the enterprise space since the most you kids do is RAID-0 a pair of SSDs to reduce those map load times. Y'all need to stop.

"Everyone" ??
So if I go through this thread and form a list of names it would be a list of
"fucking retarded" people???

Well heck ..that's why I posted here ... I was wondering where all my online friends went!!
I just wanted to be included.

And what is with this "map load times" stuff??

:rolleyes:

Seriously though ... it's cool...but I think it is retro tech, no ???
Didn't mainframe systems do something similar?
 
Everyone in this thread is fucking retarded.
I'm done... Jesus Christ, why even bother to innovate anymore since obviously EMC and NetApp have this one in the bag. Yep, no reason to even try anymore. This box doesn't have SSDs, doesn't use expanders, and doesn't use ZFS.

[H] really doesn't know what it's talking about in the enterprise space since the most you kids do is RAID-0 a pair of SSDs to reduce those map load times. Y'all need to stop.

*clap clap clap*

I didn't say not to innovate.. I said all these smaller companies will be swallowed up by the companies that matter. XtremIO is a perfect example of a standalone company that was bought by EMC. Yadayada is another.. it's what became EMC's VPLEX technology.

I stand by my original statement..

While yes, there are plenty of companies coming out with all-flash arrays, they will all be swallowed up by the players in that space that matter: EMC, HDS, NetApp.

As for cheap flash arrays? You get what you pay for. Unless you have a great backup solution or you run a hot/cold setup, you're just setting yourself up for a catastrophic failure. I mean hey.. who doesn't love a good exercise in resume writing right?

edit.. and I love how, when you mention ZFS and a chumpy solution in the same sentence, people go bananas.
 
but now with 1.8" SSD drives in the likes of Dell 730* storage is growing and growing!

You mean like this thing sitting at arm's reach just to my left? ;) (Bonus R630 thrown in for good measure)

[Image: R630-R730xd.jpg]



Only five of the 1.8" SSDs @ 200GB though. :(

[Image: 1.8-200GB-SSD.jpg]
 
That is only $3,000 per TB, which is pretty damn cheap. I think NetApp full SSD shelves run about $10,000 - $15,000 per TB.

My only fear would be having all that capacity in a single 1U device.
 
My only fear would be having all that capacity in a single 1U device.

Filling a 42U rack with these would get you nearly 6 PB, if you can imagine a business putting roughly $17 million into a single rack. That would be a lot of eggs in one basket.
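The arithmetic, for anyone checking (list price; as noted later in the thread, real buyers never pay list):

Code:
# Filling a 42U rack with the 1U 136TB units at list price.
UNITS = 42
CAPACITY_TB = 136
PRICE = 406_640   # USD list for the 136TB model at $2.99/GB

print(f"Capacity: {UNITS * CAPACITY_TB / 1000:.1f} PB")   # 5.7 PB
print(f"Price:    ${UNITS * PRICE / 1e6:.1f} million")    # ~$17.1 million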
 
*clap clap clap*

I didn't say not to innovate.. I said all these smaller companies will be swallowed up by the companies that matter. XtremIO is a perfect example of a standalone company that was bought by EMC. Yadayada is another.. it's what became EMC's VPLEX technology.

I stand by my original statement..

Well if they got swallowed up (meaning the founders became very rich), it's because they were doing something right, don't you think?
 
You mean like this thing sitting at arm's reach just to my left? ;) (Bonus R630 thrown in for good measure)

[Image: R630-R730xd.jpg]



Only five of the 1.8" SSDs @ 200GB though. :(

Jealous; when I saw them mentioned I was like deum! I want 2 for now!

But it seems Lite-On is the only one doing the 1.8" SSDs so far, and I never thought of Lite-On as an SSD maker...

(Got 2 x R620s, but with 8 x 1.2TB 10K drives each and dual 12-core Xeons in 'em; right now not doing much.)

That is only $3,000 per TB, which is pretty damn cheap. I think NetApp full SSD shelves run about $10,000 - $15,000 per TB.

My only fear would be having all that capacity in a single 1U device.



If you are looking at a product like this, you are looking, I would hope, at two, or you have a backup plan in place already.
 
That's insane, but even more insane is the price. Personally, if I ran a company, instead of using the traditional political reasons for buying expensive gear and having someone to blame if it goes down, I'd go all DIY and just make sure there's tons of redundancy built into the system. Could probably build two redundant DIY systems like this for much cheaper.

The thing with this unit, too, is I wonder how easy it is to change the drives/flash, or is it considered all one unit? So when you hit the write limitation of a portion of it, is the whole thing a dud? Typically enterprise gear has a 3-year warranty and is very proprietary, so after the 3 years are up you're on your own, and it would be well after the 3-year mark where you'd start to run into flash usage issues, I would think.
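On the wear-out question, the back-of-the-envelope is easy if you know (or guess) the rated endurance and your write rate; every number below is an assumption, purely to show the shape of the calculation:

Code:
# Rough flash-endurance lifetime estimate. All inputs are assumptions;
# plug in the array's actual rated endurance and your own write rate.
CAPACITY_TB = 136
DWPD = 0.3              # assumed drive-writes-per-day rating
WARRANTY_YEARS = 5      # period the DWPD rating is quoted over (assumed)
DAILY_WRITES_TB = 40    # assumed average host writes per day

tbw_budget = CAPACITY_TB * DWPD * 365 * WARRANTY_YEARS   # total TB of writes allowed
lifetime_years = tbw_budget / (DAILY_WRITES_TB * 365)
print(f"Endurance budget: {tbw_budget:,.0f} TB written")                    # ~74,460 TB
print(f"Lifetime at {DAILY_WRITES_TB} TB/day: {lifetime_years:.1f} years")  # ~5.1 years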
 
But it seems Lite-On is the only one doing the 1.8" SSDs so far, and I never thought of Lite-On as an SSD maker...

uh, the picture of the SSD clearly says SanDisk

not that I have any love for them either

That's insane, but even more insane is the price. Personally, if I ran a company, instead of using the traditional political reasons for buying expensive gear and having someone to blame if it goes down, I'd go all DIY and just make sure there's tons of redundancy built into the system. Could probably build two redundant DIY systems like this for much cheaper.

The thing with this unit, too, is I wonder how easy it is to change the drives/flash, or is it considered all one unit? So when you hit the write limitation of a portion of it, is the whole thing a dud? Typically enterprise gear has a 3-year warranty and is very proprietary, so after the 3 years are up you're on your own, and it would be well after the 3-year mark where you'd start to run into flash usage issues, I would think.

you don't DIY in companies because when the box blows up you have someone to sue
 
uh, the picture of the SSD clearly says SanDisk

not that I have any love for them either



you don't DIY in companies because when the box blows up you have someone to sue

Yeah, I understand that, but I find that concept ridiculous. Spend 10x more just so you can have someone to sue. Personally, I'd rather have something that works, and something that can easily be repaired even 10 years down the line. Enterprise stuff tends to be on a 3-5 year replacement cycle to stay within warranty, which is pretty insane and wasteful considering the cost. Then again, the company would just sue their own IT staff. It's sad that companies always have to play the blame game instead of accepting that sometimes things happen.
 
Well it's expensive because it's bleeding edge. If you need bleeding edge, you're never going to make it last 10 years, so that doesn't matter.

As for DIY, the problem is that you can't beat a team of engineers that have imagined and conceived this, made the software, designed and applied a battery of tests...
 
You would trust TLC in your production SAN?
Depending on the usage, I would. :)
There are lots of read heavy deployments out there. In many cases, data is literally written once, never deleted, and then read often.

...
Yeah, just go ahead and toss some consumer drives into an array, it'll totally perform the same as DIMMs. Do you have any storage experience outside of the desktop space? Enterprise SSDs behave entirely differently when it comes to handling contention and higher queue depths.
...

Well, let's get some benchmarks before assuming they have achieved practical performance not possible with consumer disks.
If Amazon and Google have taught us nothing else, it's that cheap commodity hardware is in! ;)

The SSD allocated to your "enterprise" AWS account is none other than the cheap commodity SSD you are criticizing.
 
Yeah, I understand that, but I find that concept ridiculous. Spend 10x more just so you can have someone to sue. Personally, I'd rather have something that works, and something that can easily be repaired even 10 years down the line. Enterprise stuff tends to be on a 3-5 year replacement cycle to stay within warranty, which is pretty insane and wasteful considering the cost. Then again, the company would just sue their own IT staff. It's sad that companies always have to play the blame game instead of accepting that sometimes things happen.

it's reality man, I had the same mentality, but that's long gone... it's nice having someone to blame

Well, let's get some benchmarks before assuming they have achieved practical performance not possible with consumer disks.
If Amazon and Google have taught us nothing else, it's that cheap commodity hardware is in! ;)

The SSD allocated to your "enterprise" AWS account is none other than the cheap commodity SSD you are criticizing.

pretty much, I have plenty of "consumer" SSDs in production, even OCZ drives that have been getting hammered for years... every now and then one will drop out, get reflashed, and reinserted
 
Well if they got swallowed up (meaning the founders became very rich), it's because they were doing something right, don't you think?

That would be part of it. Really, any SSD-based array that actually is a quality product will be swallowed up.

That's why I steer very clear of any of these shops that just pop up, and prefer to watch. I dunno about you, but to have one of these things go tits up and then the company vanishes? No thanks.

Say what you want about EMC, NetApp or HDS. They are not going away anytime soon.
 
Filling a 42U rack with these would get you nearly 6 PB, if you can imagine a business putting roughly $17 million into a single rack. That would be a lot of eggs in one basket.

If you have that kind of coin, you are not using one of these solutions. You are using a solution with a proven five nines of uptime and a company that has been around for a very long time.
 
That's insane, but even more insane is the price. Personally, if I ran a company, instead of using the traditional political reasons for buying expensive gear and having someone to blame if it goes down, I'd go all DIY and just make sure there's tons of redundancy built into the system. Could probably build two redundant DIY systems like this for much cheaper.

The thing with this unit, too, is I wonder how easy it is to change the drives/flash, or is it considered all one unit? So when you hit the write limitation of a portion of it, is the whole thing a dud? Typically enterprise gear has a 3-year warranty and is very proprietary, so after the 3 years are up you're on your own, and it would be well after the 3-year mark where you'd start to run into flash usage issues, I would think.

1. DIY systems never, ever scale once you cross certain amounts of storage and certain levels of required performance.

2. DIY systems normally have 1 or 2 people who know the ins and outs of those systems. When those folks leave the company, it's just a huge black box. Then what?

3. I've been buying all my storage lately with 5 years of maintenance up front.

DIY is fine for a startup, to a point. But you quickly paint yourself into a corner. If you buy a real solution and your "expert" quits, you can just quickly get a contractor to do it for you until you find someone who knows how to maintain it.
 
Yeah, I understand that, but I find that concept ridiculous. Spend 10x more just so you can have someone to sue. Personally, I'd rather have something that works, and something that can easily be repaired even 10 years down the line. Enterprise stuff tends to be on a 3-5 year replacement cycle to stay within warranty, which is pretty insane and wasteful considering the cost. Then again, the company would just sue their own IT staff. It's sad that companies always have to play the blame game instead of accepting that sometimes things happen.

You get what you pay for when you buy storage from a real storage vendor: 24x7x365 support, hardware delivered and repaired within 4 hours, and a large knowledge base. When you have a company that makes the money and requires an insane amount of uptime, spending it on the best infrastructure you can afford makes the most sense.

Anytime my storage has had a problem and it was because of the vendor, EMC has taken full responsibility. HDS has done exactly the same.
 
Also - don't forget that real customers (large enterprises) never pay list price. Discounts of 40-80% off list are typical. The "real" price for this beast paid by typical customers is much, much less than what is being discussed here.
 