Openfiler Opinions for Production Env

Looking for opinions on Openfiler in a production environment. We are a small county government, so we don't always have the budget to do the things we might like to do. We do, however, have money to buy a couple of entry-level SANs (EqualLogic), but being me, I'm always looking for a better way to do things, and a cheaper one too.

I've used Openfiler in test environments and it seems adequate, but I've never used it in production. We have three vSphere boxes running on local storage at one location, and we are looking to use HA and vMotion, so shared storage is needed.

I've thrown around the idea of buying a Dell R710, filling it with TB disks, and running Openfiler on it. That setup would save us about $7k compared to an EqualLogic solution. We want to have two SANs doing replication, and from what I've read, both EqualLogic and Openfiler can do this.

Basically, I'm looking for the following info:
1. Do most folks run two SANs for DR and HA?
2. Is Openfiler a viable solution for production?
3. We have the budget to buy two EqualLogic SANs if needed.
4. NFS as opposed to iSCSI, which I've used in the past? Pros/cons?
5. Does Openfiler do true iSCSI now?
6. Is anyone using Openfiler or OpenNAS in production?
 
Instead of OF, how about something like OpenIndiana (free) with napp-it (free) as the GUI? That would let you run ZFS, which can be a big win...
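To give you an idea of how little there is to it, here's a rough sketch of building a mirrored pool and sharing it over NFS by hand (the pool, dataset, and disk names are made up, and napp-it does all of this from the GUI anyway):

  # create a mirrored pool from two disks (Solaris-style device names; adjust to yours)
  zpool create tank mirror c0t1d0 c0t2d0

  # dataset for VM storage, exported over NFS to the vSphere hosts
  zfs create tank/vmstore
  zfs set sharenfs=on tank/vmstore

  # sanity check
  zpool status tank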
 
Don't. Go EqualLogic, or look at the EMC VNXe. If you roll your own, YOU are on the hook. I don't believe Openfiler is on the vSphere HCL. And I recommend NFS over iSCSI for vSphere.
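For what it's worth, wiring an NFS export up as a datastore is a one-liner per host on ESX/ESXi 4.x; something like this (the filer hostname, export path, and datastore name are placeholders):

  # add the NFS export as a datastore
  esxcfg-nas -a -o filer01 -s /export/vmstore vmstore01

  # list NAS datastores to verify
  esxcfg-nas -l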
 
Depends on his budget :) Out of curiosity, could you clarify your last recommendation?
 
Don't. Go EqualLogic, or look at the EMC VNXe. If you roll your own, YOU are on the hook. I don't believe Openfiler is on the vSphere HCL. And I recommend NFS over iSCSI for vSphere.


:( I was kind of expecting to hear that, but I wanted to test it out to see if my initial thoughts were wrong. What about dual SANs replicating for DR? Does everyone do this, or do most just buy one SAN and rely on backups for DR? I know each place has its own considerations, but in general?
 
I agree. Too much risk. Get something that's supported. I'm assuming this is important data, so why take the risk?
 
Don't. Go EqualLogic, or look at the EMC VNXe. If you roll your own, YOU are on the hook. I don't believe Openfiler is on the vSphere HCL. And I recommend NFS over iSCSI for vSphere.

This.

Openfiler 2.3 also uses an older iteration of IET, which doesn't play nice with vSphere. Eventually they'll release Openfiler 3.0, which uses SCST, but don't hold your breath.

As NetJunkie said, when you design things to run in production, anything that goes wrong is your fault and your responsibility alone to fix. The EMC VNXe would be an EXCELLENT and inexpensive SAN that would work wonderfully with VMware. Setup with the VNXe couldn't be easier, and I believe you can get one for under $10,000.

Running Openfiler for testing or lab purposes would be a good idea, however.
 
:( I was kind of expecting to hear that, but I wanted to test it out to see if my initial thoughts were wrong. What about dual SANs replicating for DR? Does everyone do this, or do most just buy one SAN and rely on backups for DR? I know each place has its own considerations, but in general?

A lot depends on your budget and SLAs. What kind of recovery window do you need? 24 hours? 5 minutes? How much money do you have to spend?

If your organization demands a DR solution and you don't have the funds to replicate your DC, then you may have to get inventive and consider things like Openfiler for DR only. However, I still wouldn't recommend it. Should things fail and the DR site not come up, every finger will be pointed straight at you.
 
Ah, sorry, I didn't realize you were referring specifically to the OF iSCSI implementation; I thought that was a general point. In general, I agree with the points being made. I'm just saying that if he elects to go the home-rolled route, I'd go with OS/ZFS rather than OF...
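ZFS also gets you the replication the OP wants almost for free via snapshots. Roughly like this (the pool, dataset, and DR hostname are examples):

  # seed the second box with a full snapshot stream
  zfs snapshot tank/vmstore@rep1
  zfs send tank/vmstore@rep1 | ssh drbox zfs receive tank/vmstore

  # afterwards, ship only the deltas between snapshots
  zfs snapshot tank/vmstore@rep2
  zfs send -i tank/vmstore@rep1 tank/vmstore@rep2 | ssh drbox zfs receive tank/vmstore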
 
I also don't get a good feeling about OF. It seems like it's been forever since anything has happened with that platform, which kinda makes you (well, me at least) wonder...
 
WTF. No GUI? If you're going to roll your own from the CLI, there are a lot of better solutions than a box based on rPath Linux!
 
I used to be full of praise for OF, and I still think it's a decent project, but it's not really clear who, if anyone, is still actively working on it. There were hints about a version 3 coming at some point this year, but overall the dev team (whoever that is) has been rather uncommunicative. That's to be expected if you run the Community Edition, but since you do have the option to pay (which you should if you run prod, just so you can call someone), I would be hesitant to use OF for anything other than scratch storage.

I would definitely NOT buy an EqualLogic, or anything from a traditional storage vendor, if you are on a budget. I find that the traditional storage vendors are essentially pricing themselves out of small deployments, and they provide no additional value for the 300%+ premium per TB they charge on most small deployments.

AceXsmurF recommended JetStor to me. I looked into it, compared the features and cost against my requirements, and decided there is no reason to pay EQL or EMC $2,500/TB raw when I can get the same (or arguably better) storage from JetStor at under $900/TB raw (edu discount applied), without having to pay thousands of dollars a year in tech support and maintenance fees, and without being locked into buying crazy-expensive disks from the vendor if I ever decide to upgrade.

I had a great conversation with JetStor's technical staff, test-drove the web GUI remotely, and ended up buying a couple of JetStor SAS 616iS units (16 TB raw each). They come with dual controllers, 4 GbE ports per controller (10 GbE is available if you want it), lifetime technical support via phone and web included in the purchase price, a 3-year warranty on controllers and 5 years on drives, and NBD replacement, and they allow foreign disks.

What can EQL or EMC offer me, at the capacity I'm using, that JetStor does not, to justify a solution that costs 2.5x more plus thousands per year in support? Nothing.

EDIT: The JetStor units are VMware Certified.
 
What can EQL or EMC offer me, at the capacity I'm using, that JetStor does not, to justify a solution that costs 2.5x more plus thousands per year in support? Nothing.

Better performance. More features. Full replication. Support for things like SRM. Heavy vSphere integration. Local support and fast parts replacement. Pretty sure they'll be around in 2 years.

Is all that worth the price? Depends. But things like the VNXe are putting the crunch on these tier 2 and 3 players.
 
Better performance. More features. Full replication. Support for things like SRM. Heavy vSphere integration. Local support and fast parts replacement. Pretty sure they'll be around in 2 years.

Is all that worth the price? Depends. But things like the VNXe are putting the crunch on these tier 2 and 3 players.

Performance is something that can be quantified; I'll be happy to run some tests once the units get here. I'm inclined to say the JetStor unit will outperform the EQL PS series hands down; my PS5000 croaks at 2,500 IOPS. Then again, my 30-day average production IOPS are <300, so I don't have a high performance requirement.

The local school district had a VAR come in and do the capacity/migration planning, and their IOPS were less than 200 measured over 30 days.
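If anyone wants to baseline their own numbers before sizing, sampling is easy enough. On a Linux box something like the following does it (esxtop's disk views give you the equivalent CMDS/s counters on the hosts themselves):

  # extended device stats every 60 seconds; r/s + w/s per device is your IOPS
  iostat -dxk 60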

More features only matter if you need the features; if all you need is SAN storage to connect a few servers, then most SAN features are pretty meaningless. In fact, the JetStor actually provides a feature the PS5000E does not: you can divide the disks into multiple arrays rather than having all the disks be part of the same array. This is, IMHO, true for any of the PS units; they put all the disks into one array, and LUNs are really just for show.

Support/replacement: the hardware is redundant, so if a controller or PSU fails on the JetStor, I have a new one overnight. I bought two extra SAS disks (without trays) per unit so I can replace drives right away should there be a problem. If the backplane fails, I'll be down for the day on that array. I don't have the stats, but I would assume backplane failures are statistically rare.

Around in 2 years: sure, but let's say they are out of business by then. I'll still come out way ahead buying a different non-traditional vendor's unit in 2 years and surplusing the ones I bought today, versus buying a traditional vendor's unit and paying year after year.

I am not saying there isn't a place for HP, EMC, etc. in the market (obviously there is), but for some deployments, especially smaller ones, it's really unconscionable to pay EMC prices when buying direct from a non-traditional vendor results in tens of thousands of dollars in TCO savings.

It all depends on what your requirements are. All I am saying is that if someone is seriously looking at OF for production due to budget constraints, then EMC et al. are not a viable alternative, but folks like JetStor are (as opposed to running off of Supermicro servers and some version of Linux to create NFS shares or an app to provide iSCSI targets).
 
I played with OF to interface with IBM SAN enclosures. It's nice, but for anything more advanced, such as expanding a RAID, you need to use the command line. What's nice, though, is that it uses MD RAID, which is fairly standard across Linux and is very reliable. I trust that more than proprietary stuff such as EqualLogic, because if something goes wrong I can most likely Google for help. Sure, you get support from Dell... for 3 years. After that you're on your own, and proprietary stuff is very hard to find support and parts for.
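To give an idea of what that CLI work looks like, growing an MD RAID 5 by one disk is roughly the following (device names are examples, and this assumes an ext filesystem sits directly on md0):

  # add the new disk as a spare, then reshape the array to use it
  mdadm --add /dev/md0 /dev/sde
  mdadm --grow /dev/md0 --raid-devices=5

  # watch the reshape finish, then grow the filesystem on top
  cat /proc/mdstat
  resize2fs /dev/md0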

For a company that has the budget to upgrade every 3 years, sure, an enterprise-class solution may be the best bet, but with a lower budget, I would personally just build a DIY SAN. Hell, for twice the price, and still cheaper than enterprise class, have two identical ones. I'm sure there's a way to load balance them for extra redundancy.
 
With something like a VNXe at $15K or so, you can lease it for a very reasonable amount. Over 3 years that's only $5K/yr. If you run your business on your infrastructure, that's nothing. I rarely deal with customers this size anymore, but if you break it down and show what's required for the business, usually people see the light.

Everyone says, "Oh, it's fine if we're down 8 hours or 16 hours waiting on a tier 2 vendor to get us parts," to which I say: okay, do it. Simulate that outage and call me. Usually they change their mind.

It's up to everyone to look at their requirements and make up their own minds, but building your own storage or going with a 3rd-tier vendor is rarely a smart move. I know people who roll their own massive storage systems, and for them it works... they have the staff to manage and maintain it. For most people that's not an option. Do you really want to be the guy who built that array when something goes wrong? People VERY quickly forget about the $5K you saved them when you have a data-loss outage, or one that takes a day to recover from...
 
Remember, just because you build a custom system does not mean you suddenly stop doing backups, having redundancy, and so on. An enterprise-class system can also fail. It's great if you're within the warranty period, since there's often a 4-hour support window or whatever, but good luck after that. Whether it's custom or not, it can still fail, and you can still get downtime if you don't have spare parts or redundancy.

With the money you save by building it yourself, buy spare parts, and obviously have a good backup solution in place, as you should anyway.
 
People VERY quickly forget about the $5K you saved them when you have a data-loss outage, or one that takes a day to recover from...

How very true. But consider this: you have the option of building your own using some ZFS device like OF or FreeNAS, or continuing to use the company's current data mechanism... USB hard drives plugged into the back of every server. What would you do then?
 
...or continuing to use the company's current data mechanism... USB hard drives plugged into the back of every server.

Hey, get the hell out of my server room! :p

If you have the budget, I'd stick with the main players. It sucks, I know, coming from a small company, but the headaches you can potentially avoid are worth it to the business in the end.

The dollars saved on these projects can often be such a small fraction of the business's overall expenses that it's just not worth it.
 
Before we continue, let's make sure we all understand that in the OP's case the cost of buying storage is a sunk cost. There is no option to recover the cost of computing from clients, nor is it a for-profit business expense.

JetStor has been around since 1996; that's 5 years before EqualLogic was even founded. Since they are privately held, we don't know what's happening on the books, but they survived the bubble(s), presumably acquisition offers, etc. What we do know is that their gear is VMware certified: JetStor spent the money to get the cert, and VMware looked their gear over and found it acceptable.

I think that, regardless of our opinions on traditional storage vendors, we can all agree it would be better to run production storage on a JetStor than on a Supermicro server, if for no other reason than the JetStor having dual active controllers. Yes, you could achieve redundancy with the Supermicro, but you would need two chassis, taking up twice the space and sucking twice the power. The cost of two Supermicros would probably still be a bit lower than the cost of a single JetStor, and I personally considered this when making my purchasing decision, but for me the JetStor came out ahead because all you have to do is plug it in and it just works.

What really did it for me, though, was that when purchasing traditional vendor gear, you lose all your bargaining power the moment you buy the unit. Upgrade the disks in your existing EqualLogic? Pay $2,500 per TB list price; even if you were to get a whopping 50% off, that's still $1,250 per TB. Upgrade the disks in a JetStor? Pay whatever you pay Newegg or whomever you buy SAS disks from, or go with SATA if you like.
 
They did just release version 2.99, but IIRC it has no GUI.

It's using the old GUI, which is fully functional. 2.99 was released so the peeps that loved 2.3, except for the fact that it couldn't run on newer hardware, could have their cake and eat it too. That also allowed more focus on 3.0.

I heard that 3.0 was going to use the LIO target. 2.99 has the LIO target, but it must be configured manually (which I've successfully done in our vSphere 4.1 environment, though not stress-tested yet). They may indeed have changed the plan to go with SCST instead of LIO, but I haven't heard that.
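For the curious, the manual setup is roughly the following in the targetcli shell; the backing device and IQNs are made up, and the exact tooling that ships with 2.99 may differ:

  targetcli
  /backstores/block create vmlun0 /dev/vg0/lun0
  /iscsi create iqn.2011-04.com.example:of299-vmlun0
  /iscsi/iqn.2011-04.com.example:of299-vmlun0/tpg1/luns create /backstores/block/vmlun0
  /iscsi/iqn.2011-04.com.example:of299-vmlun0/tpg1/portals create 192.168.1.10
  # plus one ACL per ESX initiator IQN under tpg1/acls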

3.0 will implement new features with a new GUI. Supposedly 3.0 is going for VMware storage validation.
 
We use Open-E and their DSS product. The cost is REALLY low, and in my opinion it is a far more mature product than Openfiler. They have good support, including 24/7 coverage, at reasonable prices. Highly recommended!

www.open-e.com
 
We use Open-E and their DSS product. The cost is REALLY low, and in my opinion it is a far more mature product than Openfiler. They have good support, including 24/7 coverage, at reasonable prices. Highly recommended!

www.open-e.com

How do these guys compare to Nexenta?

Just curious because we are currently looking at Nexenta and I had never heard of Open-E before.
 
Open-E has been around for a long time. I used it long ago, when it came on an IDE flash disk. Their new products look good, and I may try them out for a backup server I am going to build in the near future.

Whenever I get around to installing the trial, I will post my thoughts on their new product. The automatic NAS failover is what I am looking for.
 
1) Two nodes are OK for DR *or* HA, but to have both you need THREE nodes: two nodes doing a sync mirror between each other, providing HA, and a third node used as the destination for async replication over a T1+ connection (the DR node).

2) Openfiler is not production-level software. It's a kludge... Don't use Openfiler for anything except experiments. It's slow, support is expensive, and compatibility is close to nonexistent. There are numerous free solutions that are ages better than it.

3) Go EQL or LH. Both are fine. At least both come with REAL support.

4) NFS can be faster for some installations, but with proper MPIO, iSCSI is preferred (see the sketch below this list).

5) No, it does not. And with the free ZFS-based solutions floating around, it will probably never catch up to any reasonable level.

6) I know some cheap guys with a lot of free time who do. But their "production" is what other people call "test & development" :)
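To put something concrete behind "proper MPIO": on a vSphere 4.x host you bind two VMkernel ports to the software iSCSI adapter and set the path policy to round robin. A sketch (the vmk, vmhba, and device names are examples):

  # bind two vmkernel NICs to the software iSCSI adapter
  esxcli swiscsi nic add -n vmk1 -d vmhba33
  esxcli swiscsi nic add -n vmk2 -d vmhba33

  # round-robin across the paths for a given LUN
  esxcli nmp device setpolicy --device naa.6001405abcdef --psp VMW_PSP_RR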

Good luck!

Olga

 
Open-E is another thing I hope to find dead every morning when I wake up. Their product may LOOK good, but it's basically the same SCST you can find in way too many free solutions.

Olga

Open-E has been around for a long time. I used it long ago, when it came on an IDE flash disk. Their new products look good, and I may try them out for a backup server I am going to build in the near future.

Whenever I get around to installing the trial, I will post my thoughts on their new product. The automatic NAS failover is what I am looking for.
 
If you really want to save some money and roll your own storage, why don't you look at purchasing the LeftHand software from HP?

Otherwise, look at VNXe, EQL, or LH systems.

OF just sucks.

I can't speak directly to JetStor, but I'm just not a fan of tier 2 or tier 3 storage vendors. For some implementations and shops they may be great, but I don't think the $$ saved is worth it in the long run... for production environments.

NFS > iSCSI. That said, I would look at the VNXe before the EQL, because the VNXe supports both NFS and iSCSI, whereas the EQLs only do iSCSI.
 
We got it through a reseller; I think it might have been buystorage.com. Open-E has had issues, no doubt, but we did not have anywhere near the budget to get even an entry-level SAN from any of the big boys. Believe me, we got the quotes to prove it. For an on-the-cheap, roll-your-own-but-still-have-good-support solution, Open-E works for us. And I totally agree about Openfiler not being good for production; it's just not mature enough, IMO. That being said, I'm sure someone out there is running it in production. It's not impossible at all, just ill-advised.
 
Open-E has been around for a long time. I used it long ago, when it came on an IDE flash disk. Their new products look good, and I may try them out for a backup server I am going to build in the near future.

Whenever I get around to installing the trial, I will post my thoughts on their new product. The automatic NAS failover is what I am looking for.

Ours came on USB keys, to allow them to connect directly to MB headers. Since then, though, they have made it possible to install it on just about anything.
 
OK, I see... Ours was an OEM copy.

You can find free solutions for a variety of OSes, including Windows. I just plain don't understand why you would pay Open-E for repackaged, broken SCST code and a support level that's close to nonexistent. Why not stick with a free solution and pay a contractor when you have an issue and want to close the support case ASAP? Just interested in your opinion here :)

Olga

We got it through a reseller; I think it might have been buystorage.com. Open-E has had issues, no doubt, but we did not have anywhere near the budget to get even an entry-level SAN from any of the big boys. Believe me, we got the quotes to prove it. For an on-the-cheap, roll-your-own-but-still-have-good-support solution, Open-E works for us. And I totally agree about Openfiler not being good for production; it's just not mature enough, IMO. That being said, I'm sure someone out there is running it in production. It's not impossible at all, just ill-advised.
 
I also think you should go for a SAN from a trusted vendor.

What I do for most of my smaller clients that cannot afford a SAN is get two hosts and use Veeam Replication (~$500/socket) to create replicas on another server. That way, if my production server goes down, I can power on the VMs on the backup host and be back in business. This might be something you want to think about.
 
A VNXe3100 with 8x300GB 15k disks ends up being about $10,000. If you create a single RAID 5 group and leave one disk as a hot spare, that's 1.5TB usable. You can add 7 more drives to fill up the DPE and have two 1.5TB RAID groups. The array tops out at 96 drives, so you have a good amount of room for future growth.

I would take that over an Openfiler or Open-E whitebox any day.
 
A VNXe3100 with 8x300GB 15k disks ends up being about $10,000. If you create a single RAID 5 group and leave one disk as a hot spare, that's 1.5TB usable. You can add 7 more drives to fill up the DPE and have two 1.5TB RAID groups. The array tops out at 96 drives, so you have a good amount of room for future growth.

I know you mean well, but somehow I can't imagine that someone who's looking at OF for production will find $4,100 per raw TB an acceptable solution.

At the end of the day, though, it comes down to the value proposition. What additional value does the $4k/TB system provide that the $1k/TB system does not? Is that additional value, whatever it may be, worth the cost to the buyer? And how does the $4k/TB system compare to the $1k/TB system down the road when it comes to expansion?
 
I know you mean well, but somehow I can't imagine that someone who's looking at OF for production will find $4,100 per raw TB an acceptable solution.

At the end of the day, though, it comes down to the value proposition. What additional value does the $4k/TB system provide that the $1k/TB system does not? Is that additional value, whatever it may be, worth the cost to the buyer? And how does the $4k/TB system compare to the $1k/TB system down the road when it comes to expansion?

It's about future growth and building a solid storage foundation. The VNXe will come with better performance, support, availability, features, etc.

The VNXe supports 300GB and 600GB 15k SAS drives along with 2TB 7.2k NL-SAS and can go up to 96 drives. For a small shop, it has a lot of expansion capabilities.
 
The example above for $10k was using 15k SAS drives... a customer buying that setup isn't interested in TBs; they are interested in IOPS. Fill it up with 2TB 7.2k drives and your cost is substantially lower.

I think the issue people are getting hung up on here is TB/$, and when you are purchasing a SAN to run a prod environment for a small biz, space is usually not the issue.

I have more storage at home than most of my clients do for their businesses. I'm not trying to toot my own horn, but they are interested in scalable storage that delivers great performance and maintains it. Their VM running ADP doesn't need TBs of space; it needs 30GB... maybe. They want a solution where, when there is an issue, they can call Dell, EMC, IBM, etc., and it's going to get fixed ASAP.

I know you might say it's OK to go with a tier 2/3 vendor, but until your business has been down 8/16/24 business hours, you have no idea how it's going to affect you.

For example, take my dad's small business (4 employees): if he is down for 8 business hours, he will lose approximately $12k in revenue (not much to a lot of businesses, but for a small business owner that's a lot; adjust the figure to fit your client's revenue). And that's just the revenue. What about all the customers who are now pissed and will leave in the near future? What about the new customers who couldn't be served and went and found another provider?

Savings in capital expenditures now are not worth the costs you *WILL* incur later.
 