VSPEX Blue / EVO:RAIL / HCI Experiences?

Anyone out there have any experience with EMC's VSPEX Blue HCI appliances?

Looking to refresh DC hardware next Q2, and I'm not totally sold on the VSPEX Blue. I'm leery of needing to buy a whole new appliance just to expand storage capacity (the most likely expansion scenario for us). I'm also not able to find much information on real-world experiences and performance.

We currently have a standard host/SAN solution: three ESXi hosts and a VNX5300 that's nearly four years old, all running 1Gb iSCSI.

We need a refresh, and specifically much better performance on a few of the VMs (disk latency issues in the Windows VMs, not helped by tier 0 caching; we trialed PernixData and it did not help us). The main application runs Rocket Software UniVerse (giant flat files, multi-value DB), alongside the standard MS SQL and Exchange servers in the environment.

Just looking for any real world experiences as input.

Thanks
 
I know of two companies that have deployed VSPEX Blue and neither were happy with it.

If you're looking at a hyper-converged solution specifically, look at SimpliVity or Nutanix over VSPEX Blue or EVO:RAIL. Both are more mature and reliable, and have better integrated features like backups.

But before you jump onto the HCI bandwagon, be aware of how the environment will scale in the future. You'll either need to expand everything at once (compute, storage, VMware licensing, etc.) by buying more of the same nodes, or purchase independent compute or storage nodes (SimpliVity offers the former, Nutanix offers both), but then you're getting away from the main advantage of HCI, which is simplicity. HCI also comes with a price premium, because you're refreshing everything in your datacenter at once (compute, storage, backups, and so on).

Otherwise you can stick with a traditional server-and-SAN approach, which would work extremely well and lets you scale compute and storage independently. Cisco UCS, 10Gb networking, and an all-flash array (Pure, Tintri, etc.) would perform like crazy, provide great uptime, and be easy to manage and automate.
 
I think about five people globally have purchased an EVO:RAIL solution. HP almost immediately discontinued theirs due to horrendous sales and interest. It's not a very good solution.

I agree with the previous post 100%.
 
Sounds like your VNX5300 may just not be set up ideally. You should have separate LUNs for your SQL data, SQL logs, and possibly a Storage DRS pool for your Exchange, etc.

In that VNX, what kind of drives do you have (# of each speed and size)? How much raw/usable data is on it? How much warranty/maintenance is left on it?

Do you have any options for Fibre Channel to connect the VNX to ESXi?

How many VMs are on your three hosts? What version of VMware (Ent, Ent Plus, Cloud, etc.) are you running?


We have four VNX5300s, one VNX5500, one VNX5400 and one VNX5200. We have about 28 ESX hosts (IBM HS23, IBM Flex and Cisco UCS) connected to those VNXs, running about 300 server VMs (including several Exchange servers and several SQL servers) and about 650 VDI desktops, all connected to each other through Cisco Nexus 5596s with 4 to 8Gb connections. Most of the LUNs are set up with tiers (FAST VP tiering) or flash cache (FAST Cache). FAST VP LUNs have a mix of slow 2-3TB SATA drives, 600GB or 300GB 15k drives and 100GB flash. FAST Cache LUNs have a mix of fast drives and flash.

I ask about warranty because we found that EMC likes to crank up the cost after five years, and it's usually cheaper to buy a new array than to continue paying maintenance.
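For anyone trying to turn the raw-versus-usable question into numbers, here's a back-of-the-envelope sketch in Python. The RAID efficiencies are generic rules of thumb, not EMC-published figures, and a real VNX pool also loses capacity to vault drives, hot spares, and pool metadata:

    # Rough usable-capacity estimate from raw drive counts.
    # Efficiencies are rules of thumb for common RAID group layouts;
    # actual pools also reserve vault drives, hot spares, and metadata,
    # so treat the output as an optimistic upper bound.
    RAID_EFFICIENCY = {
        "raid10":    0.50,  # mirrored pairs
        "raid5_4+1": 0.80,  # 4 data + 1 parity per group
        "raid6_6+2": 0.75,  # 6 data + 2 parity per group
    }

    def usable_tb(drives: int, size_tb: float, raid: str) -> float:
        return drives * size_tb * RAID_EFFICIENCY[raid]

    # Example: 25 x 600GB 15k drives in RAID5 (4+1) groups
    print(f"{usable_tb(25, 0.6, 'raid5_4+1'):.1f} TB usable")  # ~12.0 TB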
 
Different IO profile types should be segmented into separate pools with their own LUNs on a VNX if you really want your performance to play nice. The reason is that certain types of IO can benefit heavily from FAST Cache while others cannot, e.g. Exchange database/logging or any other database and journaling/logging workloads should be segmented out.

OP, I agree with t_ski that your 5300 sounds like it's not set up ideally at all. Get someone's sales group to come size your hosts' IOPS and latency needs for whatever refresh you're doing; it'll help you make a better-informed decision. I've been installing VNXs for the past three years for customers, so if you have questions from that end, feel free to post more.
I've not played with VSPEX at all, so I'm no use there; not a big fan of it on paper.
 
How can we say the VNX isn't tuned properly when we don't know what he has for spindle counts, FAST Cache, FAST VP, Pool/RAID Group layouts, RAID levels, IOPS needs, read/write ratios, etc.? The VNX1 is pretty long in the tooth now and quite slow by today's standards. Plus, his is 4 years old and, as t_ski pointed out, EMC (Dell) loves to jack up maintenance costs past year 4. Even if it isn't laid out correctly, his hands are tied without adding a lot more disks to re-architect the Pool and RAID Group layouts and LUN-migrate things around.

OP, if you have the budget, go for the hardware refresh. There are so many incredible, simple, and powerful storage options out there today; don't cling to 4-year-old spinning rust if you can afford something better. Hyper-converged and all-flash arrays are great choices. However, I wouldn't give EMC (Dell) any of my money right now. :)
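To make the spindle-count and read/write-ratio point concrete, here's a minimal sizing sketch. The per-drive IOPS figures and RAID write penalties are classic rules of thumb, not measured values for any particular array, and FAST Cache/FAST VP change the picture considerably -- which is exactly why a proper sizing exercise is worth doing:

    # Back-of-the-envelope backend IOPS for a spinning-disk pool.
    # Parity RAID turns each frontend write into multiple disk IOs,
    # hence the write penalty multipliers.
    DRIVE_IOPS = {"15k": 180, "10k": 140, "7.2k": 80}
    WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

    def backend_iops(frontend_iops: float, read_ratio: float, raid: str) -> float:
        reads = frontend_iops * read_ratio
        writes = frontend_iops * (1 - read_ratio)
        return reads + writes * WRITE_PENALTY[raid]

    # Example: 5,000 frontend IOPS at a 70/30 read/write mix on RAID5
    need = backend_iops(5000, 0.70, "raid5")
    print(f"~{need:.0f} backend IOPS -> ~{need / DRIVE_IOPS['15k']:.0f} x 15k drives")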
 
I completely agree they should go for the hardware refresh. It just didn't come across well when I said to have someone's sales team come size out his exact needs :). It doesn't make sense to buy year-5 maintenance on a VNX1 when they will gouge you for as much as they can get out of you.

Check out a variety of solutions and get your needs sized properly so you don't get someone trying to oversell too much or undersell what's actually needed for your business. We have some Tintri and Nutanix guys on these forums who have some great contacts if you want to check out those products. The storage market is so diverse and changing right now; you really want to make sure you weigh all the options you can and don't get suckered into something that may be priced at 3x what you actually need.
 
How can we say the VNX isn't tuned properly when we don't know what he has for spindle counts, FAST Cache, FAST VP, Pool/RAID Group layouts, RAID levels, IOPS needs, read/write ratios, etc.?
My reason for saying so was the thought that we have more hosts, and possibly more VMs, being run off of ours without issue. That's why I was asking all of my clarifying questions: to see if it mattered.
 
We had experiences similar to others' with EVO:RAIL; it is not well executed. That said, that doesn't mean you couldn't find VSAN ready nodes and achieve HCI greatness that way. Most are probably opting for that route. We were immediately turned off by the whole "make it stupid simple" UI. It's annoying. Any remotely decent VMware admin will toss it to the side anyway, since it isn't just initially simple with power-user ability under the covers -- you have to dig in and do things you shouldn't have to anyway, and if you're going to do that... why buy EVO:RAIL? Just get VSAN ready nodes.

We also have Nutanix gear. Nutanix is the real rockstar of the HCI space. Their solution is very solid, and they put great, I mean nearly Apple-esque, polish on everything. Their dedication to HTML5 UIs and their support page are ridiculously awesome. While that stuff is cool, I'm not saying it's any reason to buy; that said, I would say the whole company is like that when it comes to polish, not just the UI. I'm not going to blow 'em -- not everything is amazing, especially their latest spats with VMware. I find all that BS distracting, and while they're definitely looking to do their own thing, the constant messaging about it is getting old. We're not moving away from VMware for a good while -- not that inroads aren't going to occur, but I think they'd like it to happen now, and it's irritating.

You can also look at other in-betweener-type technologies rather than taking an axe to it. You can win a lot of points, and perhaps gain some trust with management, if you save a lot of money and make it last another year or two. Look at PernixData, Atlantis, or SanDisk's FlashSoft. Those are all acceleration tiers on the host side that might make that current array last a while longer without blowing the budget. I know PernixData has their FVP Freedom edition. While that probably wouldn't work for "free" in a production space, you could certainly kick the tires on it. We were blown away by FVP's elegance in that it immediately made a huge difference with minimal changes. Use some memory -- not even a lot -- to accelerate a few important VMs and you'll get sub-1ms writes and great read improvements too.
 
I hear what you guys are all saying... and the main reason for considering VSPEX Blue is the "perceived" DEAL we'd be getting on it, to the point where it really just doesn't seem real. And of course, there is a ridiculously short (almost over now) time constraint on the deal.
I don't like being pressured, but then again I don't want to pass on something just because of that. Fun times for me at the moment.

We are a small IT shop, and all of this (servers/infrastructure) falls on me. But I've run ESXi for six years and manage it just fine. I appreciate simplicity because of my MANY responsibilities, but what I really want is the best performance for the money without adding complexity, and any simplicity that can be sprinkled in is much appreciated.

Regarding our current VNX5300, it has FAST Cache (100GB), then some 15k and 7.2k RPM disks. Mostly just a few large auto-tiered LUNs (one 5TB and 3 x 2TB), plus a single dedicated 300GB flash LUN for our main application VM (our ERP).

We're not installing until Q2 of '16, but I was hoping to pretty much have the decision made by now, and the pressure on this particular deal isn't helping. There are too many options in this space now; EMC alone has pitched me no fewer than four solutions through different resellers.

I don't think we have huge IO requirements, so we certainly don't need an all-flash solution. I like the idea of moving into the future with something new, and I also hate it, because who knows how long it will be around or how it will really perform in the long term. Not to mention the EMC/Dell piece of it and not knowing what's going to happen with any of their overlapping product lines.

Ugh, sorry for rambling, but my brain is beginning to melt.
 
...... Look at PernixData, Atlantis, or SanDisk's FlashSoft. Those are all acceleration tiers on the host side that might make that current array last a while longer without blowing the budget. ......

We actually did a POC with PernixData and it didn't make a difference. Sure, on graphs/stats it made a difference, but we couldn't "feel" it, so it wasn't worth $10k. They said we didn't have a terrible environment to begin with, so it just didn't make sense for us.
 
I have not used VSPEX, or any EMC storage other than our VNXs. The only major complaint I have about my VNXs is that the Control Stations keep having random issues. Other than that, they have been great as long as we don't over-saturate them.

If you like the VNX and are familiar with all the tools to manage it, another EMC array should feel similar. Sometimes familiarity is a bonus/selling point when you don't have the time to learn something completely new.
 
VSPEX Blue is just VMware VSAN+vSphere. It's a VMware product with an EMC badge. HP had the same solution with an HP badge until they discontinued it.

No physical array and not much to do with EMC from a technical perspective.
 
You'll either need to expand everything at once (compute, storage, VMware licensing, etc.) by buying more of the same nodes, or purchase independent compute or storage nodes (SimpliVity offers the former, Nutanix offers both), but then you're getting away from the main advantage of HCI, which is simplicity. HCI also comes with a price premium, because you're refreshing everything in your datacenter at once (compute, storage, backups, and so on).

I don't understand the simplicity comment. You add compute or storage nodes with our solution and the software takes care of everything in the background... nothing changes. Add in AHV and now you have a hypervisor fully integrated and managed within the same interface, with the removal of vSphere licensing costs. Typically AHV is very effective in environments like these, or it's used for use cases like Tier 2/3 or Test/Dev. Oh, btw, let's also talk about never having to forklift again. ;)

We also have Nutanix gear. Nutanix is the real rockstar of the HCI space. Their solution is very solid, and they put great, I mean nearly Apple-esque, polish on everything. Their dedication to HTML5 UIs and their support page are ridiculously awesome. While that stuff is cool, I'm not saying it's any reason to buy; that said, I would say the whole company is like that when it comes to polish, not just the UI. I'm not going to blow 'em -- not everything is amazing, especially their latest spats with VMware. I find all that BS distracting, and while they're definitely looking to do their own thing, the constant messaging about it is getting old. We're not moving away from VMware for a good while -- not that inroads aren't going to occur, but I think they'd like it to happen now, and it's irritating.

This is good feedback. Typically AHV is used for a specific use case; however, smaller environments with little IT staff find it a great solution as well. I can tell you the "social media" spat between Nutanix and VMware was really brought on by misinformation, miscommunication, etc. -- a people problem. As you can see, the person who had that issue at VMware is no longer there and is now touting how great the Oracle solution is. Internally, however, we have a number of VCDXs who are responsible for validating and certifying our solution with VMware and its features, i.e. VAAI/SRM SRA, etc. They have outstanding relationships with internal VMware engineering folks.

That's not to say there isn't some COOPETITION, right? I say let the marketing folks do their thing -- that's what they get paid for -- but also, let's lay out the facts. We know VMware isn't going anywhere and neither are we, so we have to play nice, supporting and validating the total solution for the customer.

AHV is making headway, believe me; however, it's only part one of our Application Mobility Strategy. There is much, much more to come. ;)
 
I don't understand the simplicity comment. You add compute or storage nodes with our solution and the software takes care of everything in the background... nothing changes. Add in AHV and now you have a hypervisor fully integrated and managed within the same interface, with the removal of vSphere licensing costs. Typically AHV is very effective in environments like these, or it's used for use cases like Tier 2/3 or Test/Dev. Oh, btw, let's also talk about never having to forklift again. ;)

Simplicity as in: before, you were managing storage and compute together, purchasing nodes that contained both, utilizing the same networking north and south, same SKUs, same licensing, etc. Using independent storage or compute nodes adds more complexity. Is it overwhelming, unmanageable complexity? Absolutely not; it's just moving away from the simplicity that pure HCI (storage and compute in each node) offers.

Not poo-pooing Nutanix at all; it's a phenomenal product.
 
It really is unbelievable how prevalent Nutanix plants are. That alone puts me off them somewhat, honestly. Can't have a conversation anywhere without one showing up offering biased feedback. Nutanix and marketing go hand in hand. It's hard to separate out fluff from facts.
 
Good luck on the IPO though, refreshing to see someone go it alone instead of getting swallowed up.
 
It really is unbelievable how prevalent Nutanix plants are. That alone puts me off them somewhat, honestly. Can't have a conversation anywhere without one showing up offering biased feedback. Nutanix and marketing go hand in hand. It's hard to separate out fluff from facts.

Plants? Funniest shit I've heard all day. I've been a Hardforum member for 13 years.

Statements were made that warranted a response, plain and simple. Did I even mention EVO:RAIL or bash it, given that solution is the subject of this thread? I don't typically post here anymore because of just that.
 
Plants is the wrong word. The end result is the same, though. Sorry, I know a lot of Nutanix employees and there is a scary cult aura and blindness with most of them. On the flip side, I also know many SimpliVity employees, and they are the polar opposite: completely unable to market and sell their product because of it. They could desperately use some BS-generating bloggers; instead they sanitize everything down to nothing. You take the good with the bad.

As for EVO:RAIL, VMware has done a solid enough job of screwing up that sure thing themselves. No outside help needed.
 
I have been using vSAN for a while now; granted, I have been using it since beta 1 in the early phases... it's definitely gotten a lot better than when I started with it!

One thing to know about vSAN: either you buy the Ready Nodes or you buy the correct hardware on the HCL, or you're gonna have a bad time!!

Granted, I am a VMware employee, and I have played with a lot of the other products out there; every one of them has its own advantages and disadvantages... the real goal is to make sure you do your homework and find the right solution that fits your needs for now and 3-5 years down the road!

Not a Sales guy!!!
 
I used to work for EMC in the datacenter space, and moved over to Nutanix in early 2015.

I had limited experience with VSPEX, but I can answer general questions in the HCI space and a lot more on the Nutanix side.
 
Well, just an update on where we landed: we decided on a four-host VSAN solution utilizing Dell hardware. We've long been a Dell shop, they've always done right by us, and we feel confident and comfortable with the local VAR we are utilizing for the project. They have great engineers, with the very best VSAN expertise at their disposal.

Although we were presented with a spectacular deal on VSPEX Blue, I just don't know where that product will end up, or how expensive it will be to expand should we need to.
 
(Sorry for resurrecting the topic.)
Just wondering: what was the main factor in buying four hosts?
Was it your compute requirements, or the minimum recommended number of nodes with VSAN?
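For context on the node-count question: with VSAN's RAID-1 mirroring, a cluster needs 2 x FTT + 1 hosts (two data replicas plus a witness component per failure to tolerate), so FTT=1 runs on three hosts, and a fourth is the commonly recommended minimum so rebuilds have somewhere to go during maintenance or after a host failure. A quick sketch of that arithmetic:

    # Minimum hosts for VSAN RAID-1 mirroring: 2 * FTT + 1
    # (two replicas plus a witness per failure to tolerate).
    def min_hosts(ftt: int) -> int:
        return 2 * ftt + 1

    for ftt in (1, 2):
        print(f"FTT={ftt}: {min_hosts(ftt)} hosts minimum, "
              f"{min_hosts(ftt) + 1} with rebuild headroom")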
 