Hardware bind - need some suggestions for our SAN

brianmat

Weaksauce
Joined
Sep 1, 2011
Messages
114
We just had one of the RAID modules die in our MD3000i, and it's looking like $1,200-$1,500 to replace, but getting any sort of one-year warranty on the parts is looking difficult since the MD3000i is getting old at this point. The MD3000i was our first SAN, purchased two years ago to get us up and running in a virtual environment.

I see a few options that could possibly keep us near that price point:

  1. Buy the replacement controller and go with a 3 month warranty
  2. Replace the iSCSI modules with the direct connect modules and serve up via an NFS server
  3. Buy some replacement hardware and build out a ZFS server

I would like to go with a big ZFS server using OI and some quad NICs, since the MD1000 chassis are still $1k apiece empty and we're still tied to the more expensive Dell firmware-flashed drives.

The big question is: how many of you are trusting your production systems to a home-built solution? It looks like the guys at www.zfsbuild.com have had no issues with this setup, and it would definitely give us more flexibility AND let us step into some SSDs where the Dell hardware will not.
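From what I've read, stepping into SSDs on ZFS should just be a matter of attaching cache and log devices to an existing pool, something like this (pool and device names made up):

    # add an SSD as L2ARC read cache
    zpool add tank cache c4t0d0
    # add a mirrored SSD pair as a dedicated log (SLOG) for sync NFS writes
    zpool add tank log mirror c4t1d0 c4t2d0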

I can get the base hardware minus drives for about $2k, so we are talking about new hardware for just a bit more. We are only looking at 2-3 virtualization hosts in our environment, but they are 144GB machines (soon to be 288GB), so we run a lot of VMs.

The one thing I hate losing is the RAID controller redundancy, but I am assuming I could build that same level of redundancy into a server similar to the one zfsbuild put together. We'd like to go with more of the 3TB SATA drives for our file storage and less critical items. While not as fast as the SAS versions, they'd let us put more space and redundancy per dollar into the machine, and we'd still be able to re-purpose our current Dell 15k SAS drives over time.

Our OI/napp-it server has been working fine for our PHD backups so far (serving NFS to vSphere), even though we built it on a tight budget to get us to year end. I figure with good hardware we could see some nice performance.
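For reference, the NFS export side on OI is just a ZFS property; roughly this (dataset name is just an example):

    # create a dataset and export it over NFS for the ESXi hosts
    zfs create tank/vmstore
    zfs set sharenfs=on tank/vmstore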

We still have the one good controller, so we are not completely dead in the water, but now we are down to a single point of failure on the array.

Thoughts?
 
I like the idea of going with ZFS in your case; my worry, though, would be downtime in the event that something goes wrong.

So my first question would be: how critical is uptime? If it's not (meaning storage being offline for an hour is tolerable), I'd say go for it. If it is crucial, I'd go with a commercial ZFS implementation and pay for support.

But I have no problem with self-builds. You could mitigate downtime by having more than one head unit (or adding one later as budget allows) and going with SAS drives and dual-expander backplanes, which gives you two paths to each drive. (Or you could get a SAS switch.)

If one head unit dies, just force an import of the pool on the other head unit and you're back in business.
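The import itself is a one-liner on the surviving head (pool name is just an example):

    # see which pools are visible on the shared disks
    zpool import
    # force the import, since the dead head never cleanly exported it
    zpool import -f tank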

(Or go with a commercial solution that offers HA as an option.)

You could also go with SAS drives and dual expanders with one head unit and two HBAs, giving two paths to each drive. Use MPxIO, and if one HBA card or cable dies you still have the second path.
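On OI/Solaris, setting that up is roughly this (a reboot is needed after enabling):

    # enable MPxIO on supported HBAs, then reboot when prompted
    stmsboot -e
    # after the reboot, verify each LUN shows both paths
    mpathadm list lu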

That's my $.02.
 

My response probably didn't take the existing hardware you already have into account very well. I'm not knowledgeable about the MD3000 line, but I don't see any issues as long as you have backups and a contingency plan in place (if you replace the other controller, etc.). I just wouldn't drop TOO much $$ on it.
 
Thanks for the input. The MD3000i is an older Dell iSCSI device with dual RAID controllers, but they are not active/active from what I understand. Two years ago when we got it, the hardware fit into the budget and kept us out of the EMC space and pricing. A lot has changed in the last two years.

We would be able to keep the MD3000i running, but the ZFS server would really be the redundant setup going forward. The ability to use commodity hardware and perform incremental upgrades to RAM, RAID cards, and CPU would be cheaper than jumping from hardware platform to hardware platform. Right now the MD3220i (not even the latest) is about $5k on the used market, and it will not use our current MD1000 expanders, so those have to be included in the cost as well.

I'm thinking that a Supermicro case with redundant PSUs and expanders would work for what we are doing. We'd be mixing SAS and SATA in the same case, but not within the same pools. I still have a lot to think about with this. Downtime is a critical planning point, but since we are not required to migrate the data on day one, we can move VMs over the span of a few weeks and still avoid major downtime, since we have some large evening and weekend windows to utilize.
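Keeping the two drive types in separate pools would look roughly like this (device names made up):

    # 15k SAS mirrors for the VM datastore
    zpool create fastpool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
    # 3TB SATA raidz2 for file storage and less critical data
    zpool create bulkpool raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0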

We're not doing any HA with our data storage, offsite replication (just backup to tape), or any other high-end stuff, so we just need a reliable NFS store for VMware.

I'm putting together my current wish list and luckily some of the items I would need were already on our end of year planning list, so we're not in horrible shape all things considered.

It's going to be a long week while I work through all of my options.
 