Building a fully redundant RAID array

Wolf-R1

Has anyone ever tried to build a RAID array similar to what's sold on the enterprise market? I work for a small(ish) company and we're pricing out RAID arrays such as what Stonefly or Dell/EMC sells, and the thought occurred to me that it might be possible to build one out with, say, Supermicro hardware.

What we currently have, and what we're looking to replace it with, is roughly this:
2x 1U controllers (CPU, RAM, SSD for the OS, dual PSUs per server; I assume some sort of clustering software is involved)
1x storage cabinet: all hot-swap drives, dual PSUs, and I assume dual RAID controllers, since there's a SAS cable going from each 1U controller into the storage cabinet on what we have now.

Anyone ever tried this successfully?
 
Overall, it's not that hard an idea to pull off.

Set up redundant network paths to the storage as your primary. Whatever you settle on (FreeNAS, as an example), have it set up to replicate the data to another box with the exact same specs.

When it's all done you'll have multiple paths to your data and a warm standby if the primary goes down.
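As a rough illustration of the "replicate the data to another box" part, here's a minimal sketch of snapshot-based ZFS replication driven from Python. The dataset name (tank/data) and the standby hostname (standby-nas) are made up for the example, and FreeNAS would normally handle this through its built-in replication tasks rather than a hand-rolled script.

import subprocess
from datetime import datetime, timezone

DATASET = "tank/data"            # hypothetical dataset name
STANDBY = "root@standby-nas"     # hypothetical warm-standby host

def replicate():
    # 1. Take a point-in-time snapshot on the primary.
    snap = f"{DATASET}@repl-{datetime.now(timezone.utc):%Y%m%d%H%M%S}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # 2. Stream the snapshot to the standby over SSH. A real setup would send
    #    incrementals (zfs send -i <previous>) after the first full send.
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["ssh", STANDBY, "zfs", "receive", "-F", DATASET],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

if __name__ == "__main__":
    replicate()   # schedule with cron for a regularly refreshed warm standby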

This should never get you out of doing backups, though.
 
Most enterprise customers are buying those setups from Dell/EMC, Stonefly, HP, etc. because they need the support level those companies provide for bugs and down-system issues.

We've tried to save money by using our own built devices or something low cost like Synology or Qnap but in the end, they just weren't at the level that we needed, both from a software and a support standpoint. Synology was pretty good for us until I tried to update the firmware on an HA cluster and it tanked. I tried to call support but they are only 8-5 M-F so that was the final nail in that coffin. I did this on a Friday night, so our entire infrastructure would have been down had I not figured out how to get a single node working again. We ran without HA for months because every time we tried to reinstate the HA Cluster, it would bring down the active node. We finally gave in and went with the solution below.

We use an enterprise software product now that uses Windows servers with normal disk arrays and can provide storage via iSCSI or FC. It allows for multiple mirrors of the data volumes as well as replication, dedupe, and snapshots. Their support is top-notch and I have never had to wait for assistance. We have since done several installations elsewhere for other customers, both standard SAN-style implementations with iSCSI and hyperconverged installations with VMware.

Keep in mind that solutions where you build your own or use your own hardware are typically network-based storage, not direct-connect like your post says your current setup is. I don't have a lot of experience or knowledge with direct-connect solutions, but I'm unaware of anything self-built that would provide that type of connectivity. iSCSI and FC let you do things like shared storage for VMware, and they also allow redundant network paths to multiple storage end nodes for automatic failover. Some of these clustered setups even increase performance with round-robin or other path policies that spread data access over multiple connection points. The problem you run into, though, is that FC and iSCSI can be an expensive venture to get into, since you usually need switching if you want true redundancy. Direct connect is always an option, but at that point there's not really much benefit except cost.
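To make the round-robin path policy idea concrete, here's a toy sketch of how a multipath policy spreads I/O across live paths and skips a dead one. This is purely conceptual: the path names are invented, and real multipathing (VMware path selection policies, Linux dm-multipath, Windows MPIO) lives in the storage stack, not in application code.

from itertools import cycle

class MultipathDevice:
    """Toy round-robin path selector for a LUN reachable over several paths."""
    def __init__(self, paths):
        self.state = {p: "up" for p in paths}   # e.g. one path per fabric/switch
        self._rr = cycle(paths)

    def mark_down(self, path):
        self.state[path] = "down"               # e.g. a switch or HBA died

    def next_path(self):
        # Round-robin over the paths, skipping any that are down.
        for _ in range(len(self.state)):
            p = next(self._rr)
            if self.state[p] == "up":
                return p
        raise IOError("all paths down - storage unreachable")

dev = MultipathDevice(["fabric-A:ctrl-1", "fabric-B:ctrl-2"])
print([dev.next_path() for _ in range(4)])  # alternates across both fabrics
dev.mark_down("fabric-A:ctrl-1")
print([dev.next_path() for _ in range(4)])  # automatic failover: all I/O on B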
 
You will never be able to build something that competes with an EMC/Tier 1 array based on a cheap solution you've cobbled together. It's very attractive from a price standpoint to do it, but when your cobbled-together solution takes a dump, you get what you paid for. Furthermore, you are building a completely one-off, random, home-grown solution, a solution that only you know. When you quit (and you will), someone else will have to pick up that mess. That mess will be used as an example of why you should never, ever do something like that in a real production environment.

My stance is a little more hardcore than most, because I've been working in the financial sector for around 9 years now. I can't have downtime. If I do, I need to know that my support team will be there with me for however long it takes to fix it; 2 hours or 96, I don't care. EMC will do that for me. A home-grown, cobbled-together solution will only ever get you the bare minimum of support. Furthermore, a home-grown solution built on all sorts of different hardware and software will have vendors pointing fingers at each other, saying the issue isn't them. The only person who suffers is you. At least with an EMC solution, the one-throat-to-choke mentality really pays off.

To really drive home how important picking a storage vendor/solution is, go look up Tintri and see what happened to them and their customer base. To say it's sad is quite an understatement. Zero offense to any Tintri folks here; y'all had a face-melting product, and I was really rooting for you guys. Hopefully y'all come out of it.

Just think about how important your data is and what your service level agreements are. How many nines of uptime do you agree to provide? The more nines of agreed-on uptime, the more expensive, reliable, and redundant the stack of known-interoperable hardware and software you need to implement.
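For a rough sense of what the nines actually cost you, the quick arithmetic below converts an availability target into allowed downtime per year; the targets listed are just the usual examples.

# Allowed downtime per year for a given availability target ("nines").
MIN_PER_YEAR = 365.25 * 24 * 60

for nines, avail in [(2, 0.99), (3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime_min = MIN_PER_YEAR * (1 - avail)
    print(f"{nines} nines ({avail}): ~{downtime_min:,.1f} min/year "
          f"(~{downtime_min / 60:,.2f} h)")

# Roughly: two nines ~88 h/year, three ~8.8 h, four ~53 min, five ~5.3 min.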

Sorry for the ramble, but hope this helps, and best of luck.
 
I've had a home-built NAS box working for me for multiple years. I use FreeNAS as the OS and love it. As others have said, the speed is not what you would get from something off the shelf, but that depends on your use case. For a home media server, you'll be great. Something more I/O-demanding may exceed what you can get out of consumer-level components.

IMHO, the most important thing is to be aware of your use case.
 
You definitely want to look into a ZFS-based system like FreeNAS. You can throw a few drives on an old motherboard and use ZFS as the file system. ZFS is almost incorruptible; it's super resilient and you can run it on utter shit hardware. Just get yourself a motherboard, a half-decent proc, roughly 1 GB of RAM per TB of storage, and a few hard drives, and install FreeNAS.
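If it helps, here's a back-of-the-envelope take on that 1 GB of RAM per TB rule of thumb (it's a guideline, not a hard ZFS requirement); the drive count and RAIDZ level below are made-up example numbers.

# Rough RAM sizing for a hypothetical FreeNAS/ZFS box using the
# "~1 GB RAM per TB of storage" rule of thumb from the post above.
DRIVES = 6          # example: six drives
DRIVE_TB = 4        # example: 4 TB each
PARITY = 2          # RAIDZ2 gives up two drives' worth of space to parity

raw_tb = DRIVES * DRIVE_TB
usable_tb = (DRIVES - PARITY) * DRIVE_TB
ram_gb = max(8, usable_tb)   # 1 GB/TB, with the commonly cited 8 GB minimum

print(f"raw: {raw_tb} TB, usable (RAIDZ2): ~{usable_tb} TB, "
      f"suggested RAM: ~{ram_gb} GB")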

If you have a newer proc available, you can even do AES-NI-based hardware encryption on the fly, so if one of the drives fails you don't have to worry about someone getting your data. I've been using a ZFS software array for CIFS sharing throughout my house for years and it does the job beautifully.
 
"throw a few drives on an old motherboard" sounds a solid long term solution for any serious business IT.

FreeNAS is for hobbyists.
 