Thoughts on high density, expandable enterprise class shared storage

fastgeek

Sorry if this comes across a little... out of it... I am damn near asleep at the keyboard and doing this before I forget. At work we're being tasked more and more often with groups needing tons of storage, but we're seriously limited by rack space. Budget? Not so limited. We can justify a lot; but it still has to be justified.

Ideally this conceptual system needs to be:
- Shared with Linux, Windows, and either ESXi or Hyper-V VMs
- Easily expandable by adding additional storage modules
- High density, as rack space is at a serious premium
- Relatively easy to service/support/train on
- Robust and "fast"
- To the best of my knowledge, our storage needs aren't IOPS-intensive

Shared connectivity would be via 10Gbit, most likely copper I'd presume.

We typically use Adaptec RAID controllers in most of our products, but we're open to LSI as well.

We just started looking at some interesting 1U boxes that support 12 3.5" drives, such as this SuperMicro and others from manufacturers that don't seem to be public just yet. These would be completely filled with 6TB or 8TB Hitachi He drives.

The idea, which I'm sure most of you have guessed, is to start out with one or two of these storage servers. As needs increase, it would be ideal if we can add more nodes in a modular fashion and have the new space "just work" without having to jump through hoops. Also in a perfect world there would be no limit to how often we could add to this.

I've seen Napp-it mentioned and just came across Hadoop; however I'm not sure the latter applies to us. One of the guys at work is playing with.... ummmm.... open something... yeah... that's not useful. I'll find out tomorrow.

Hope that's enough to start with. Sorry if I left some critical points out. I'm really not a hardcore storage guy; but am ready to learn. :)
 
It depends heavily on what you need; your description is all over the place.

If you need something like a Hadoop cluster, where you need LOTS of CPU with LOTS of local disk storage, then yes, those 1U machines are pretty much a perfect match.

If you need lots of centralized storage, then it isn't as ideal, as you only get 12 disks per 1U of rack space.

If you just need bandwidth from the disks, I would go with 2U servers and plug in as many low-profile RAID cards as you want to use. From each of these you can attach a 90-disk 4U storage unit. If you wanted, you could chain two storage units per port and still stay under the usual 250-disk-per-port limit of the cards; some models might have a higher 500-disk limit. But if you put in 5 cards, each with two SAS ports, you can easily connect 10 units to a single head node and have a completely full rack of just storage.

That would give you 7200TB in just the storage pods, plus the 12 disks in the head node if you wanted, all on a single system.

If you feel that's too much storage per CPU (or too little redundancy), you could go with the 72-disk model that has a motherboard in it; that would fit 11 per rack, at 6336TB.
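For anyone sanity-checking the numbers above, here's a quick back-of-the-envelope script. It assumes 8TB drives throughout and counts raw capacity only, before any RAID/parity overhead:

```python
# Rough raw-capacity math for the configurations described above.
# Assumes 8TB drives; change DRIVE_TB for the 6TB models.

DRIVE_TB = 8

def raw_capacity_tb(units, disks_per_unit, drive_tb=DRIVE_TB):
    """Raw (pre-RAID) capacity in TB across a set of storage units."""
    return units * disks_per_unit * drive_tb

# 10 x 90-disk 4U pods behind one head node
print(raw_capacity_tb(units=10, disks_per_unit=90))  # 7200

# 11 x 72-disk units, each with its own motherboard
print(raw_capacity_tb(units=11, disks_per_unit=72))  # 6336
```

Real usable space will land well below these numbers once you subtract parity, hot spares, and filesystem overhead.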
 
I apologize for not keeping on top of this; but work is sapping my poor brain like a gluttonous vampire. :p

If I was all over the place, that's for a couple of reasons. As mentioned, I was extremely tired writing the OP and, well, welcome to my world of extremely vague systems requirements! This is something we fight with on a regular basis. People come to us and say "We need this," but when we ask whether they need high IOPS, high transfer speeds, or anything else, they just look at us blankly. Drives me mad. As such, it's damn hard to focus on specifics, as we need to keep our options open as we home in on what these teams actually *need*.

One thing happening in this discussion is different definitions of "high density," and folks are missing a critical point that was clearly mentioned: rack space is at a serious premium. On top of that, these racks go in areas that are jam-packed, and floor space is extremely valuable. (Just trust me here.)

Going from a 2U 12x3.5" bay box to a 1U 12x3.5" bay box is a big deal. It's that whole space thing again. We currently have 4TB drives in said 2U and would most likely be using 8TB in the 1U. Add in another 1U box, perhaps without the MB portion, and we've quadrupled the available raw capacity in the same 2U of rack space.
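To put numbers on that "quadrupled" claim (raw capacity only, before any RAID overhead):

```python
# Raw capacity in TB, before parity/spares.
old_2u = 1 * 12 * 4   # one 2U box, 12 x 4TB drives
new_1u = 2 * 12 * 8   # two 1U boxes, 12 x 8TB drives each

print(old_2u, new_1u, new_1u // old_2u)  # 48 192 4
```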

I'm quite familiar with the 4U+ monster capacity storage boxes mentioned. They're rather slow. They're also huge, extremely heavy and would require a complete rack redesign. When we have to provide PBs of space, then yes, we'll have to consider them once more. We'd likely have them modified to eliminate the expanders though. Until then, they're pretty much off the table.

Anyway. As mentioned, I'm rather inexperienced with the whole NAS / shared storage concept. Throwing together lots of hardware? That's easy; do that all the time. :p My main curiosity: what do folks think the best option is for sharing a large amount of data with multiple OSes? Free is always good, provided the TOS allows commercial use. Paid is fine too, provided activation doesn't require Internet access, or can be backed up and restored offline. (Internet access is 101% verboten in our products.) I still need to research and understand what Hadoop is / how it works. Came across it as I was writing the first post, and that's about all I know. :p
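For what it's worth, if you end up on a ZFS-based box (napp-it is essentially a management layer over ZFS), multi-OS file sharing mostly comes down to flipping share properties on a dataset. A rough sketch, with the pool/dataset names made up for illustration:

```shell
# Hypothetical pool/dataset names -- substitute your own.
# Create a dataset for the shared data.
zfs create tank/projects

# NFS export for Linux clients (and ESXi NFS datastores).
zfs set sharenfs=on tank/projects

# SMB/CIFS share for Windows clients.
zfs set sharesmb=on tank/projects
```

Hyper-V is the odd one out: it can use SMB3 shares (Server 2012 and later) or block storage, in which case you'd carve out a zvol and export it over iSCSI instead of using a file share. Permissions mapping between NFS and SMB users is where most of the real work ends up.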

Please try to bear with me. I appreciate it.
 