EMC VNX / VNXe

You won't spend "countless" hours. All the bad units are being returned and new units put in place. We had probably the first one they saw and we lost a day. Now they know... plus, if you order one today you won't have this problem. It was a small number of units at launch.
 
I'm going to see if I can find out what happened. I've been to the assembly facilities... they test these things like crazy, so I'm amazed even a couple of bad units made it out. Makes me wonder if it's some sort of post-build defect or process issue.
 
In the case of the 3100 there was an issue where the controllers wouldn't take the SP1 release of Unisphere. Given that they were boxed up and ready to ship before the software was released, it wasn't caught until they were shipped. In my case the first unit I received was serial number 361 and the one I received the next day was 31. They actually overnighted me the replacement unit, so it showed up before the one that shipped via ground freight. Of course I didn't find this out until I had unracked it and racked the bad one. :) The original unit is back in the rack and appears to be fine, but I've been busy with other projects, so the new toy will probably sit unused in the rack for the next week or so before I start moving machines to it. Still need to do some performance testing to decide if I'm going all NFS or if I'm going to use iSCSI as well. I also need to decide if I just do one giant RAID 10 pool or two RAID 10 pools with the disks.
 
That will make me at the very least hold off on ordering one, and perhaps look elsewhere. I realize that these are entry level yadda yadda yadda, but I have had my fill of tech support and I'm not looking to buy a unit where I need to spend countless hours troubleshooting it right from the start.

As for the minor issues I'm facing, it doesn't sound like everyone else is facing them. Don't let my little rant above steer you away from this SAN. I think it's a great unit for the $$$ and EMC is taking excellent care of me at this point.
 
Still need to do some performance testing to decide if I'm going all NFS or if I'm going to use iSCSI as well. I also need to decide if I just do one giant RAID 10 pool or two RAID 10 pools with the disks.

For VMs I'm going to go all NFS. I checked with EMC before we even purchased the SAN and they highly recommend it. I'm going to mix iSCSI in there though. We have 6 total ports on our 3100: 4 bonded for NFS and 2 bonded for iSCSI as well.
 
Your statement about NFS makes me rethink my plan...
With NFS, does PowerPath/VE work? Or can I drop it off mine and save myself a few thousand?
 
Your statement about NFS makes me rethink my plan...
With NFS, does PowerPath/VE work? Or can I drop it off mine and save myself a few thousand?

Couldn't tell you. We didn't get PowerPath. Also, deduplication can happen on NFS and not iSCSI.
 
Your statement about NFS makes me rethink my plan...
With NFS, does PowerPath/VE work? Or can I drop it off mine and save myself a few thousand?

I may be wrong on this, but I think PowerPath requires Fibre Channel.

Edit: I think it works with iSCSI as well.
 
For VMs I'm going to go all NFS. I checked with EMC before we even purchased the SAN and they highly recommend it. I'm going to mix iSCSI in there though. We have 6 total ports on our 3100: 4 bonded for NFS and 2 bonded for iSCSI as well.

Yes, do NFS. There is NO performance reason to do iSCSI with VMware; that's been covered in a few test studies. Plus, you get far better integration when using NFS on most arrays, and especially with EMC arrays. Finally, word is there's a patch coming to boost the performance of NFS with VMware. It's currently in beta.

I haven't done an iSCSI/vSphere design in probably 2 years.
 
Well, I wanted to give a quick update today about yesterday's happenings with my VNXe 3100.

It would appear that I've barked far enough up the EMC chain of command, as I got one of the upper-level sales managers for my region. I have to admit this guy bent over backwards for me. We talked for about 20 minutes about the issues that I've run into, and about the support jackass that didn't listen either. He owned up to the support jackass being an idiot and asked me what I wanted to do. I told him honestly that I would feel comfortable having the unit replaced altogether and shipping him back the one that I currently have racked up. You could tell this dude didn't even flinch... he instantly said yes, we'll make it happen, and we'll have one to you by Tuesday at the latest.

Well, it didn't stop there. About an hour later the regular sales rep called me back and asked me if the upper-level sales manager had met all of my expectations. We again talked about what was going to happen, and she proceeded to tell me that the upper-level sales manager had cc'd something like 15 people within EMC about my little issue. Apparently he also mentioned that the armies of EMC are on standby for me to get this thing running, and he is anticipating it will be running by end of business Wednesday.

If that isn't customer service at its finest, I'm not sure what is.
 
Yes, do NFS. There is NO performance reason to do iSCSI with VMware; that's been covered in a few test studies. Plus, you get far better integration when using NFS on most arrays, and especially with EMC arrays. Finally, word is there's a patch coming to boost the performance of NFS with VMware. It's currently in beta.

I haven't done an iSCSI/vSphere design in probably 2 years.

I'm going to keep NFS for ESX, but I'm going to do iSCSI for RDMs. I doubt iSCSI will touch our ESX boxes unless something strange comes up. Our Backup Exec server is still physical, and I'm giving it an RDM to do backup-to-disk on the SAN. I just need to find a good way to replicate that data elsewhere, but that's another topic.
 
If that isn't customer service at its finest, I'm not sure what is.

Spent two hours on the phone with EMC today. Had to console in to the serial ports to reboot the SPs because the management console died and I was seeing horrendous performance. I figured out the performance issue: I hadn't realized one of the storage switches had died and been replaced without jumbo frames being turned on. Still very nervous about the box given that the management console has quit responding on me a couple of times.
 
Spent two hours on the phone with EMC today. Had to console in to the serial ports to reboot the SPs because the management console died and I was seeing horrendous performance. I figured out the performance issue: I hadn't realized one of the storage switches had died and been replaced without jumbo frames being turned on. Still very nervous about the box given that the management console has quit responding on me a couple of times.

That's usually Java on your system. Hate Java.
 
Sorry to derail, scamp, but can I enable jumbo frames without causing any issues on the switch? I need to do that.
 
Sorry to derail, scamp, but can I enable jumbo frames without causing any issues on the switch? I need to do that.
To get proper performance it needs to be enabled on the switch, on the SAN, and on the VMkernel ports. It shouldn't affect the switch, but you need to have the MTU at 9000 on any switch ports used for storage. On some switches it's a global setting, which should be fine if you have the storage switches segregated.
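
For the vSphere side, here's a minimal sketch of where the MTU has to be set, assuming ESXi 5.x with a standard vSwitch named vSwitch1 and a storage VMkernel port vmk1 (both names are placeholders for whatever your setup uses); older 4.x hosts use the esxcfg-* tools instead, and the switch-side syntax varies by vendor.

# Raise the MTU on the vSwitch that carries storage traffic
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Raise the MTU on the VMkernel interface used for NFS/iSCSI
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify the new MTU took effect
esxcli network ip interface list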
 
Depending on the switch you might have to reboot it after enabling jumbo frames.

Glad to hear you're getting everything worked out with your SAN and decided to go NFS, Mr. iSCSI :p
 
Well, I'd like to give everyone an update.

As mentioned in the previous posts, I barked up the EMC tree, and EMC has really taken care of the situation. They sent me an essentially identical unit and the same engineer back out today to take care of the setup and make sure everything is the way we want it.
 
Well, EMC was here yesterday for about 3 hours and I am now up and running. I have to admit, now that I have a unit that functions 100%, I'm super excited to get this into production. I have to say that EMC really turned around my opinion of this unit and has really bent over backwards to make sure my install went as well as it should have. Seems like the bugs they had in the initial models are gone now too. I would not be afraid to show this thing to anybody or recommend it to anybody!
 
You'll have to let me know how it goes. So, you doing all four NFS ports in one big team or splitting it into two sets of teams?
 
You'll have to let me know how it goes. So, you doing all four NFS ports in one big team or splitting it into two sets of teams?

I would if you were ever on gtalk long enough to talk. What I wound up having to do is bond the 4 ports on the Flex I/O module, which I'm going to dedicate to NFS, and I also bonded the 2 onboard ports for iSCSI traffic (still going to have some RDMs to do).
 
^^ Bump

How has the user experience been so far on the VNXe? Do you have any recommendations on features / functionality you would like to see added?

I was one of the original members of EMC's SMB division and have a direct line into the engineering group.

So far we have received very positive feedback. If it's something you think is off-topic, feel free to PM me.

VMware on NFS seems to be a real sweet spot on the platform. Enough so that CRN just awarded us top honors for SMB storage. We consider ourselves a startup within an enterprise.
 
Hey xTABx7... I actually have no complaints with the SAN. It has been great so far. We have NFS running for our primary datastores in VMware, and we also have some iSCSI out there for RDMs. I did some performance testing this last week and it performs about where we expected it to. I would also note that we aren't using jumbo frames yet and still got some good results.

At this point I still can't say enough good things about this SAN. It fits the bill pretty well. Now if we could get Data Domain and NetWorker to fit the bill a little bit better...
 
I really have to say EMC is the best positioned company in the business...

From Avamar to Data Domain, Documentum and VMware, it just fills all the holes found in other competitors... Way to go EMC!!

Also, for those not wanting to spend $2-3 million on a system, ask your sales rep to look at a VMAXe...
 
I appreciate the feedback, guys. In our space we are really trying to listen to REAL IT users and submit feedback based on what people would actually use.

The 3100, 3300, and 5300 have really taken off in the SMB space.

In regards to backups, I wouldn't be able to comment on your issues. Most DD customers I work with are pretty happy, and we have some pretty big announcements on the horizon in that area. I do agree that NetWorker is due for some improvements in the SMB space; Boost has been great so far, and we certainly seem to be received better than Backup Exec with the newest revision.

I do find it pretty difficult to differentiate a product in the backup software market, other than things like Veeam and Avamar.

Cloud-based backups and even new companies like Actifio are saturating that space.

I am looking forward to following your blog; it will be great to read about a user experience outside of work.
 
I'm not satisfied with the documentation out there for the VNXe. I'm about to deploy my first one in a couple weeks and have had a hard time finding important information about it. For example, it took digging on the forums to find that I can only create one link aggregate per Storage Processor. I'd really like to be able to create multiple LAGGs per SP.

Having worked with block level storage, it's more difficult to plan and build around the VNXe since we're dealing with, essentially, a pair of Data Movers rather than a pair of Storage Processors. This means no cross-SP active/active configurations. This is simply a limitation of the product so I just need to get used to it.
 
I'm working on deploying a VNXe3300 for a customer right now. Here are my thoughts:

1. Why am I limited to two interfaces on an iSCSI server? My goal was to create one iSCSI server with 4 interfaces and 4 IPs and then have my VMware hosts multipath using Round Robin. Since I'm limited to two ports, now I have to create two iSCSI servers and manually load balance my datastores across them (there's a rough host-side sketch after this list).

2. Being able to create more than one link agg per SP would be helpful.

3. Why can't I choose what disks go into a Storage Pool? In order to get the disks I want into my VMware, SQL iSCSI, and Exchange Storage Pools I have to create, delete, and re-create Storage Pools until I finally have them where I want. I don't like that it grabs disks at random.
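
For what it's worth, once the two iSCSI servers are carved out, the host-side Round Robin piece is quick. A rough sketch, assuming ESXi 5.x and that the VNXe LUNs show up as naa.* devices (the device ID below is just a placeholder for your own):

# List devices along with their current path selection policy
esxcli storage nmp device list

# Switch a given LUN to Round Robin
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# Optionally rotate paths more often than the default of every 1000 IOs
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1

That doesn't fix the two-interface limit, but it at least spreads I/O across both paths to each iSCSI server.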
 
Honest answer? You're doing things outside the target customer for a VNXe.

Yes, as someone who works with storage, I realize these complaints aren't going to be very common from the average customer.

However, it's frustrating not being able to get the most out of the hardware, such as being limited to 1 LAGG or 2 iSCSI server ports.
 
Note to self: if using iSCSI to connect the VNXe to a VMware host that has Broadcom NICs, be sure to remove all the unused Broadcom iSCSI IQNs from the host in the VNXe or it will attempt to configure every single one of them when you add storage. :)
 
Great thread, lots of info, thanks for posting.

Is anyone using a VNX or VNXe with MS Hyper-V? If so, how is performance, especially with virtualized MS SQL servers? Any information is great: how many disks, SAS/SATA, 10k/15k, guests, hosts, clustered, HA, etc.

Looking to upgrade a CX4 120.
 
I can't speak to Hyper-V, but it's working well with VMware and our virtual SQL Server.


We have:

VNXe 3100
2 onboard NICs bonded for iSCSI
4-port add-on card, all bonded for NFS
11x 600 GB SAS drives
7x 1 TB SATA drives

Currently all connected to 2 PowerConnect 5424s, but next year it will be connected to stacked 6224s.
 
We just got a VNXe3100. Right now I'm trying to figure out how to test my disk speed, and I'm unsure what I should be seeing in terms of performance. When I run a hard drive test on a virtual OS I'm seeing about 70 MB/s, which seems low to me. Currently it has the 4 x 1 Gb network ports, and my VM box has 4 dedicated network cards for storage. If anyone has any ideas, it would be greatly appreciated.
 
How are you testing, IOMeter? What does the pool consist of (number of spindles, disk type, etc.)? What RAID level, etc.?
 
You aren't going to get a whole lot more than 70 MB/s. Don't forget... with NFS/iSCSI you're going to go across a single link. If a single VM needs more than a single Gb link's worth of throughput, you have to go 10 Gb or FC.
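
If you want a quick sanity check of sequential throughput (rather than a full IOMeter run), here's a rough sketch from inside a Linux guest; run it from a directory on a virtual disk that lives on the VNXe datastore, and keep in mind writing zeros can look optimistic on storage that compresses or dedupes.

# oflag=direct bypasses the guest page cache so you measure the storage path, not RAM
# 4 GiB of 1 MiB sequential writes; dd prints the throughput when it finishes
dd if=/dev/zero of=./ddtest.bin bs=1M count=4096 oflag=direct
rm ./ddtest.bin

A single 1 Gb link is about 125 MB/s raw before protocol overhead, so 70-100 MB/s for a single stream is in the ballpark.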
 