Building a work SAN, need recommendations

Rison

Hey all,

A bit of a case study here; I hope I've included enough detail without boring you to death..

I'm tasked at work with expanding our shared storage for a bunch of servers we host locally. We mainly host in-building file/domain/mail servers on Hyper-V, mostly in an SBS 2011 and Server 2008 environment. New setups are Server 2012 Standard and Essentials.

Right now, most of these servers use local storage, as some of them are financial businesses that can't have their data on third-party servers. (boo)

That's fine, I guess. I'm not a huge fan of local storage for virtualization, but it's currently fitting the customers' needs. They need 2-3 server instances for various setup reasons and don't want to power 2-3 physical boxes.

We're building a new setup for ourselves, and we're looking to develop at least a NAS solution with live failover. It will host exported VHD files of other customers' servers (which we will just physically transfer with USB 3.0 drives on weekends) amongst a bunch of other stuff. We need a solution for 10TB+ with live expansion.

We currently have two main servers, which are Supermicro 5017C-MTRFs, each with a Xeon E3-1230v2 and 32GB of ECC Kingston memory. We currently have 500GB WD RE4s for hard drives and want to connect to the NAS via iSCSI. These two will host our 2012 Essentials, BES and various other servers.

Our first thought was to go with a Synology RS412xs and throw 10x WD RE4 1TB drives in it. We like the expansion on the chassis, and the ability to drop in a 10G fibre card later to build a low-latency backbone.
Unfortunately, the Synology only supports an rsync-type setup, so if the master fails we wouldn't have immediate failover, and there's a possibility that the data on the iSCSI targets could be old or possibly corrupt. Not ideal.

I like the idea of building a ZFS server, and have played with OpenIndiana and FreeNAS - however, for a live environment I'm not entirely sold on those OSes, plus I'm not aware if they have live failover or clustering. We're looking for some type of solution so that if one NAS fails, the other is still live, or so we can take one down live for maintenance.

We unfortunately don't have a $70k budget like the last super SAN that was built for a bank I used to work at; we're more in the $20-25k range.

Does anyone have any recommendations as to where we should be looking?

Thanks in advance.
 
EMC VNXe - you should be able to get a solid setup for $20k.
 
DRBD + Pacemaker might be an option for you with a 2-node setup. DRBD can do synchronous block-level replication, and Pacemaker can handle service failover and such. Write performance could end up being too poor for your needs, though.

RTS OS is a turnkey solution using the DRBD + Pacemaker stack. No experience with it.

This is all just the software side of it, of course.
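To make the DRBD side concrete: a two-node synchronous mirror boils down to one small resource file shared by both nodes. A minimal sketch in DRBD 8.x syntax - the hostnames, addresses and device paths are placeholders, not anything from this thread:

```
resource r0 {
  protocol C;              # synchronous: writes ack only after both nodes have them
  device    /dev/drbd0;    # replicated block device you'd export via iSCSI
  disk      /dev/sdb1;     # local backing disk on each node
  meta-disk internal;
  on nas1 {
    address 10.0.0.1:7789;
  }
  on nas2 {
    address 10.0.0.2:7789;
  }
}
```

Pacemaker then decides which node holds the DRBD primary role and where the iSCSI target service runs, so a node failure promotes the survivor automatically.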
 
Personally I'd avoid the VNXe. Never had a single smooth deployment and even had a customer rip it out and send it back to EMC they were so unhappy.
 
Take a look at JetStor.
http://www.acnc.com/

They are really easy to work with and have been around a long time. If you need someone there, I can give you a contact who will treat you well.
 
I have to admit, I'm thoroughly impressed with Nexenta. They had a booth at PEX and I talked with them for a bit... they really have a decent software SAN. Check out this site for additional info:

www.zfsbuild.com

Of course this is assuming you would go whitebox.

If not, I'm a stickler for EqualLogic, not only because I install it quite a bit, but because it's so friggen easy and comes with a shitload of features without the BS of licensing costs. My only complaint is the firmware upgrade process: they're not active/active controllers, so there is some downtime, and for those that say there isn't... stop spreading the FUD. And no, migrating all your data to another node in the cluster is not a valid option, because it's painful and should be unnecessary, assuming you have enough storage capacity. Other than that issue, I'm sold. If Dell would address it, I would score EqualLogic very high, 9/10 for the SMB market.
 
Nexenta gets my vote if you're building your own SAN. Version 4 is due out soon and their product works well. Just make sure you load up your system with as much RAM as you can afford and get some enterprise SSDs to use as a mirrored ZIL.
 
Hey guys, I really appreciate your comments - gives me lots to look into and research. If anyone else has any hardware or software recommendations for the SAN, feel free to chime in.

I'll take a look at a few of these listed and re-setup the testlab here. Now all I need is time. :)
Thanks
 
Nexenta has major licensing issues ((

The 3.xx branch is not very impressive.

For anything ZFS-based, FreeBSD is the way to go (Linux now has ZFS in the kernel too, not just in userland).

Nexenta gets my vote if you're building your own SAN. Version 4 is due out soon and their product works well. Just make sure you load up your system with as much RAM as you can afford and get some enterprise SSDs to use as a mirrored ZIL.
 
Nexenta has major licensing issues ((

The 3.xx branch is not very impressive.

For anything ZFS-based, FreeBSD is the way to go (Linux now has ZFS in the kernel too, not just in userland).

The BSD and Linux implementations of ZFS are, suffice it to say, limited... Linux more so than BSD, but still.

It's still the one thing the Solaris kernel does well, especially for a shared storage platform.
 
I was going to recommend EqualLogic and SyncRep, but it's out of your budget. I use them and they are awesome.

Either way, good luck. It is hard to do this on a small budget.
 
The EqualLogics are in that $20-25k range from Dell; the PS4100 is what I'm running at the office here, and it's working quite nicely.
 
The EqualLogics are in that $20-25k range from Dell; the PS4100 is what I'm running at the office here, and it's working quite nicely.

Good to know. We have PS6100s here, and they are well over $20k. Worth every penny though. Well supported and full featured.
 
Nexenta has major licensing issues ((

The 3.xx branch is not very impressive.

For anything ZFS-based, FreeBSD is the way to go (Linux now has ZFS in the kernel too, not just in userland).

Besides, if you have Unix/Linux systems using LDAP service from AD and you want proper uid/gid mappings for NFS service, the integration process is not straightforward, to say the least. I was not able to get idmap to work even with the Nexenta folks' help, so we eventually gave up on the idea of buying Nexenta for our environment (which is a mix of Windows and Linux systems). IMHO it is not mature enough for mixed NFS/CIFS environments with a Windows AD backend. I have not tried it with non-AD-based LDAP servers, though.
 
Besides, if you have Unix/Linux systems using LDAP service from AD and you want proper uid/gid mappings for NFS service, the integration process is not straightforward, to say the least. I was not able to get idmap to work even with the Nexenta folks' help, so we eventually gave up on the idea of buying Nexenta for our environment (which is a mix of Windows and Linux systems). IMHO it is not mature enough for mixed NFS/CIFS environments with a Windows AD backend. I have not tried it with non-AD-based LDAP servers, though.

This is not trivial, and it is independent of the directory service.

The problem:
The Solaris CIFS server acts like Windows. It uses Windows SIDs as credentials and
creates ephemeral (only valid in the current session) Unix UIDs and GIDs.

To overcome this you can use a fixed mapping like Winuser:paul = Unixuser:henry, or
you can add the Unix extensions to your AD server to deliver Unix UIDs from there.

Or you can use Samba, which uses Unix UIDs/GIDs for SMB,
but then you lose the extra Windows compatibility aspects of the Solaris CIFS server.

In general, though: NFS and Windows are not a good pair in a mixed Unix/Windows world.
You must expect problems from either the Unix or the Windows side. Avoid it whenever possible;
best is to use CIFS everywhere.

PS: if you are looking for a free and stable Solaris option, check out OmniOS
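For anyone trying the fixed-mapping route described above: on Solaris/illumos it is managed with the idmap command. A quick sketch - the domain and user names here are examples only, not real accounts:

```
# Create a fixed Windows-user -> Unix-user mapping (names are examples only)
idmap add winuser:paul@mycompany.local unixuser:henry

# Show the mapping rules currently configured
idmap list
```

Directory-based mapping (Unix attributes delivered from AD) is configured through idmap's service properties instead of per-user rules, which scales better once you have more than a handful of accounts.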
 
I have to admit, I'm thoroughly impressed with Nexenta. They had a booth at PEX and I talked with them for a bit... they really have a decent software SAN. Check out this site for additional info:

www.zfsbuild.com

Of course this is assuming you would go whitebox.

If not, I'm a stickler for EqualLogic, not only because I install it quite a bit, but because it's so friggen easy and comes with a shitload of features without the BS of licensing costs. My only complaint is the firmware upgrade process: they're not active/active controllers, so there is some downtime, and for those that say there isn't... stop spreading the FUD. And no, migrating all your data to another node in the cluster is not a valid option, because it's painful and should be unnecessary, assuming you have enough storage capacity. Other than that issue, I'm sold. If Dell would address it, I would score EqualLogic very high, 9/10 for the SMB market.

Not trying to spread FUD, but I recently did a firmware upgrade on a PS4100, and when the first controller rebooted, the other took over and there was no interruption in data connectivity. I know that the other controller sits in standby, but the entire process was pretty seamless. I know you deal with them more than I do on a daily basis, but I'm confused. I would recommend them as well. Personally I love my NetApp 2240-2s in my data center and my 2220s in my remote sites, but they are definitely a bit more complex to manage than the EqualLogics I have. You'd best be familiar with the CLI, because that's where you live most of the time on NetApp.
 
This is not trivial, and it is independent of the directory service.

The problem:
The Solaris CIFS server acts like Windows. It uses Windows SIDs as credentials and
creates ephemeral (only valid in the current session) Unix UIDs and GIDs.

To overcome this you can use a fixed mapping like Winuser:paul = Unixuser:henry, or
you can add the Unix extensions to your AD server to deliver Unix UIDs from there.

Or you can use Samba, which uses Unix UIDs/GIDs for SMB,
but then you lose the extra Windows compatibility aspects of the Solaris CIFS server.

In general, though: NFS and Windows are not a good pair in a mixed Unix/Windows world.
You must expect problems from either the Unix or the Windows side. Avoid it whenever possible;
best is to use CIFS everywhere.

PS: if you are looking for a free and stable Solaris option, check out OmniOS

Thanks Gea, I do appreciate your point of view.

Actually, the winuser=unixuser mapping option is what we looked into first, and the showstopper (at least for our environment) was the inability to use UIDs higher than 64k. And yes, we store them under Unix Attributes in AD, so I'd have to match them against the ones for the locally created users, since there is no way to redo/renumber them at this point (they are already used enterprise-wide on the Linux side).

The first problem (albeit a minor one, since I can still use the CLI for that) is that Nexenta does not support assigning an arbitrary UID/GID when you create a user in the UI.

The second problem (a major one): even if I change the uid/gid manually in the passwd and smbpasswd files upon user creation using the CLI, the fact that they are in the range above 64k totally screws up the UI - it just panics. So there is no way to manage them afterwards through the UI.

So it looks like Nexenta (at least version 3.x) has a hardcoded limit somewhere on the UI side that causes a panic if you try to use UIDs/GIDs outside the 64k range.
Nexenta staff suggested the idmap route, but for some reason that did not work well either.

Don't get me wrong, I am not bashing the product; I just don't feel it is 100% ready for mixed environments, even though it is sold as one that should work for both NFS and CIFS.
 
Thanks Gea, I do appreciate your point of view.

Actually, the winuser=unixuser mapping option is what we looked into first, and the showstopper (at least for our environment) was the inability to use UIDs higher than 64k. And yes, we store them under Unix Attributes in AD, so I'd have to match them against the ones for the locally created users, since there is no way to redo/renumber them at this point (they are already used enterprise-wide on the Linux side).

The first problem (albeit a minor one, since I can still use the CLI for that) is that Nexenta does not support assigning an arbitrary UID/GID when you create a user in the UI.

The second problem (a major one): even if I change the uid/gid manually in the passwd and smbpasswd files upon user creation using the CLI, the fact that they are in the range above 64k totally screws up the UI - it just panics. So there is no way to manage them afterwards through the UI.

So it looks like Nexenta (at least version 3.x) has a hardcoded limit somewhere on the UI side that causes a panic if you try to use UIDs/GIDs outside the 64k range.
Nexenta staff suggested the idmap route, but for some reason that did not work well either.

Don't get me wrong, I am not bashing the product; I just don't feel it is 100% ready for mixed environments, even though it is sold as one that should work for both NFS and CIFS.


I cannot comment on NexentaStor.
At least with napp-it + OmniOS/OI you can create users > 64k,
although I have not used that setup myself and cannot say if there are other problems with NFS.
 
I have to admit, i'm thoroughly impressed with Nexenta. They had a booth at PEX talked with them for a bit...they really have a decent software SAN. Check out this site for additional info:

www.zfsbuild.com

Of course this is assuming you would go whitebox.

If not, I'm a stickler for EqualLogic, not only because I install it quite a bit, but its so friggen easy and comes with a shitload of features without the BS of licensing costs. My only complaint is firmware upgrade process, it's not active/active controllers so there is some downtime, and for those that say there isn't...stop spreadin the FUD and no, the option of migrating all your data to another node in the cluster is not a valid option because it's painful and should be unnecessary, assuming you have enough storage capacity. Other than that issue, i'm sold. If Dell would address that issue, I would score it very high, 9/10 for the SMB market.


We measured our downtime at approximately 3 seconds when moving from firmware version 5 to version 6 on both our SANs.
 
Yeah.. that's great.. I've done about 100 firmware upgrades; some work great, others not so much.
 
Unfortunately... I can attest to what Vader said.

I've had a couple fail horribly, but then again, so have some EMCs, LeftHands, Compellents and EVAs.

So just being a "big name" doesn't do any good when it's Murphy time...
 
Another vote for JetStor ( http://www.acnc.com/ ). I have now been using them for going on three years (IIRC). I have a total of 3 units and one expansion shelf, and they are fantastic.

Benefits:
- low acquisition cost
- free tech support, no yearly maintenance fees
- you can insert your own disks, none of this vendor locked disk bullshit

Drawbacks:
- not VAAI capable

My suggestion is to buy extra disks when you buy the unit. I have purchased all of mine with 2 extra disks per unit.

Earlier this year I had one disk go bad. The replacement process was: email them the log files from the unit to request an RMA number. The RMA number was issued in minutes during US Central working hours. I shipped the busted disk to them, and they shipped me a new one the same day the old one arrived at their place.

Because I had the foresight to put a couple disks on the shelf I didn't feel that the RMA process was an issue for me.

I have done two firmware upgrades on the units I have, which to me means that they are constantly improving their FW and not just "sell and forget".

I wouldn't use an EqualLogic if you paid me to. My experience with the PS5000E I have has been overwhelmingly negative: the performance, the EQL support when I placed a ticket about the low performance, and the vendor-locked disks (though I hear that FW6+ got rid of the vendor lock; I haven't tried it myself yet). Then there was the little gem where upgrading my PS5000E with 16 1TB drives bought from Dell would have cost me a cool $40,000 (yes, 40k, not a typo). Screw vendor-locked disk requirements.

I couldn't be happier with the JetStor price, performance, and support. For as long as I am in IT I will not be buying any other SAN storage out there.
 
I've supported and worked with HP LeftHand, EMC CLARiiON/Celerra/VNX, and EqualLogic. I'm not really supposed to say this, but as far as firmware upgrades go, in my experience EMC hasn't let me down. The architectures of the three are completely different, though, and it's my belief that the active/active controllers in the EMC arrays make for a much better experience overall.

As for performance, EMC VNX brings it, but at a high price. EqualLogic has solid purpose-built performance, and the bolt-on nature of it makes it really easy - I call it storage Legos, lol. Of course, it all comes down to how it's installed, configured, etc. I've seen some pretty shitty installs of all these products lead to horrible performance and sometimes downtime.

On the flipside, I've seen software-based storage such as Nexenta provide sick performance, but again at the cost of high availability, etc.

In the end, you need to do your research and look at the overall user/business experience. There is a lot of info out there and a lot of FUD; try to find feedback from actual users rather than just the vendor marketing, and try to find it for your use case(s). If this will be for virtualization, look for end-user reports and third-party testing of that use case on specific arrays.

If you're admittedly out of your depth, then bring in a reputable partner that sells arrays from different vendors. I work for a Dell partner and am pushing hard to onboard NetApp, both to provide another option and because we need a fully supported array for Cisco's UCS architectures.
 
I'm not very happy with the EqualLogics I have onsite. Maybe I'm just spoiled by the NetApps I bought, but they seem to require more care and feeding than my NetApp does. I've had to do quite a few firmware upgrades on the EqualLogic controllers.
 
I'm not very happy with the EqualLogics I have onsite. Maybe I'm just spoiled by the NetApps I bought, but they seem to require more care and feeding than my NetApp does. I've had to do quite a few firmware upgrades on the EqualLogic controllers.

Dell releases firmware like it's their job (oh, it is their job) - yeah, way too often, I believe. I don't have experience with NetApp yet; can't wait, though, I've heard many great things.
 
Don't get me wrong, I like firmware updates. But it seems to me that Dell doesn't have a true OS like EMC/NetApp do. And NetApp ain't all rainbows and unicorns either. It kinda sucks that in an HA controller pair you have to dedicate 3 drives to each controller for the ONTAP installation. Which really blows if you have a small setup like a FAS2220: put in 1TB drives and you just wasted 3TB for a volume that only really needs to be 200GB for the second controller. Factor in a hot spare and two parity drives, and you only get about 4.5TB usable on 12 1TB drives.
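For anyone following the drive math above, here's one way the 4.5TB figure can come out. All the numbers are assumptions for illustration: it treats only the second controller's 3-drive root as pure waste (the first controller's root living in its data aggregate), and the ~0.75TB usable per nominal 1TB drive is a rough right-sizing guess chosen to match the quoted figure, not an official NetApp value.

```shell
# Rough FAS2220 usable-capacity arithmetic (all figures assumed, see above)
total=12          # nominal 1TB drives in the shelf
root=3            # second controller's dedicated root drives ("wasted" 3TB)
spare=1           # hot spare
parity=2          # RAID-DP parity drives
data=$((total - root - spare - parity))
awk -v d="$data" 'BEGIN { printf "%d data drives, ~%.1f TB usable\n", d, d * 0.75 }'
```

Your actual usable space depends on the real right-sized capacity of the drives and how the aggregates are laid out, so treat this as back-of-the-envelope only.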
 
We haven't seen any issues with our EqualLogic units. You DO need to know what you are buying and they are hella expensive. Our PS5000X will push 260MB/s with 6-8ms latency, Our PS6100 will push 480MB/s with 12-20ms latency.


At the OP's price point I'd recommend JetStor, or QNAP's TS-EC879U-RP or TS-EC1279U-RP.
 
I think the very first question you need to ask yourself is if you need a SAN or NAS or both. Next question is if you want to build your own or buy an appliance.

I manage over a hundred NetApp, EMC, IBM, Dell, HDS and Nimble arrays at work. My personal preference is NetApp, but even a properly configured entry-level FAS2220 is probably over your budget.

I use Solaris/OI based ZFS at home for my lab and it works well too, but I wouldn't use it for work.
 
I think the very first question you need to ask yourself is if you need a SAN or NAS or both. Next question is if you want to build your own or buy an appliance.

I manage over a hundred NetApp, EMC, IBM, Dell, HDS and Nimble arrays at work. My personal preference is NetApp, but even a properly configured entry-level FAS2220 is probably over your budget.

I use Solaris/OI based ZFS at home for my lab and it works well too, but I wouldn't use it for work.

Given that, OI is ok.
What is missing with ZFS? Support?
-> Nexenta, OmniOS or Oracle
 