EMC VNX / VNXe

It very well may have shipped... but they just started shipping a couple of days ago and there was a big backlog of orders. The VNXe line may not be as backed up; that I'm not sure about. But the VAR really has no idea if it's shipped. These aren't stocked items by anyone; they all come direct from EMC, and most of them will come out of the plant in Apex, NC. I've seen power cables and rack kits show up weeks before the rest of the gear hits.

So... it may be there today, but don't be surprised if it's two weeks or more. Cisco is the worst; I've had customers waiting months on some pieces of gear. Luckily, most of that is over and everything is shipping in a more reasonable time frame. If you order EMC gear at the end of December you'll sometimes have it in 2 days, as they get it off their dock to recognize the revenue. They build up a large supply in anticipation.

When we purchased our EMC gear at the end of December we had it all within a couple of weeks. This was two NS-480s, 1 Avamar with 4 nodes, and 2 Centeras. We purchased our Cisco gear at the same time and we finally received the last piece at the end of February. Our two Nexus 7ks took about a month to get while the two UCS bundles took about 2 months.
 
What's a ballpark price for a VNXe 3100 with dual controllers and 12 ~500GB NL-SAS drives? 15k, 20k, 25k? I'm not stuck on the 3100; all I really need is a unit that has dual controllers and about 8TB of raw storage. I need to replace my EQL PS5000E and the VNXe seems perfect. Looking at my logs, though, I do a lot more writes than reads, so maybe what I really need is a unit that can split its disks into multiple arrays with dedicated disks per LUN, rather than lumping all the disks into the same array and just using LUNs for show.
 
You can still mix and match SATA and SAS disks in the VNXe line. As for pricing, call a reseller and get the ball rolling. Honestly, I doubt you'll regret it.
 
I don't want to get the VAR all worked up over this if my boss is just going to say flat out NO. I need a ballpark number so I can take his temperature on spending that money before the end of the fiscal year. I also want to buy a Data Domain DD140 for backups so depending on how much the VNXe is I may only be able to get one or the other.

Just ballpark is fine, if anyone could come up with a number that would be great, PM will work if you don't want to "expose" yourself here. ;)
 
I'll get you a ballpark tonight or tomorrow. Just doing iSCSI or what?

Feel free to inform me if time permits; I need to find a way to get away from this NetcrApp we have. iSCSI, dual controller, 15x 300GB SAS is along the lines of a decent box. :)

I really am not happy with this NetApp. :(
 
What's it for? If it's just for VMware I'd go NFS.

The NetApp? We're running Citrix XenServer, which runs flawlessly with MPIO on our aging AX150i. The AX150i doesn't support jumbo frames though... But it's still a good little box; it's just not expandable.
 
A VNXe3100 w/ (12) 600GB 15K drives and 3 years of 7x24x4 support (I uplifted it to Premium) is around $18,500. That'll do CIFS, NFS, and iSCSI.
 
Is it just me, or are these VNXes super sexy looking as well?

Also thanks for the ballpark, need to send this one up as a feeler as well. :)
 
Yeah, it's just for VMware. I can't buy support in advance, just the way the rules are here, so I'd assume taking just one year of support would lower the price by a couple of grand. Thanks a bunch for the ballpark. The DD140 will probably run around 12k including a year of support, so I'd need at the very most 30k to buy the stuff I'd want. That's not a bad place to start for a non-budgeted capital expense. ;)
 
EMC only does 3 years on arrays. You get the choice of 8x5xNBD or 7x24x4. There's no way to do 1 year on these... it's included in the price at 3 years.
 
Easy enough then; if that's the only choice, there's no problem. It's only an issue if support is sold one year at a time. Thanks again, it really helps a lot to have a number to go on.
 
Awesome pics!

That's a really good price on the VNXe... I've got budget for a new NetApp soon (a 2040, probably), but I'm going to have to give the VNXe a look. That's much better pricing than what NetApp was giving us for a similar config.
 
Ha! We'll gladly sell you our four-month-old NetApp 2020. lol
 
Looks like I'm going to be installing one of these bad boys for a client next week. A VNX5300 connected to some Cisco UCS servers. Should be fun!
 
Good info here.

I work for EMC, so if you have any questions just shoot me a PM.
 
Almost there. Just got my 3100, but not the expansion chassis. I'll probably boot it up, get the network config in place, and run some tests, but I won't be moving it into production until the expansion chassis is here.
 
Pics or it didn't happen!
 
http://i.imgur.com/FPWoB.jpg

So in addition to having to wait for the other enclosure, the licensing process on Powerlink told me I had an invalid Product ID, and the licensing folks are out until Ireland shows up for work, so I can't really do anything to test performance with just a single enclosure. With the 12 600GB drives thrown into a RAID 10 it shows 3TB of usable capacity, which is pretty much what I expected given that the first 3 or 4 disks have Unisphere installed. With RAID 5 it's more like 4.2TB with 2 hot spares.

One interesting note: there's seemingly no option for RAID 6 with SAS disks. With NL-SAS it looks like it will always configure them in RAID 6 with a multiple of 6 drives, but it doesn't appear to give you the option through the GUI to use RAID 6 for SAS at all. I'll check out the command-line options once I get the box licensed, but I'm pretty sure that if you could do it from the command line they'd let you do it from the GUI. I'll likely run everything RAID 10 anyway, but I did find it interesting that almost every option defaults you to RAID 5 instead of 10 or 6, as I assumed those were pretty much all anyone used on shared storage these days.
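
For anyone curious how those usable numbers fall out, here's a rough sketch of the math. The overhead figure and helper names are my own guesses to illustrate it, not anything from EMC's docs:

Code:
# Rough usable-capacity math for a 12-disk VNXe 3100 shelf.
# Assumptions (mine, not EMC's published numbers): base-10 600 GB
# drives, and a Unisphere/vault reservation carved off the first
# disks that I'm guessing at ~600 GB total just to show the math.

DRIVE_GB = 600
SYSTEM_OVERHEAD_GB = 600  # guessed Unisphere/vault reservation

def usable_raid10(drives):
    # RAID 10 mirrors pairs, so half the spindles hold data.
    return (drives // 2) * DRIVE_GB - SYSTEM_OVERHEAD_GB

def usable_raid5(drives, hot_spares=2):
    # One drive's worth of parity per group, minus hot spares.
    data_drives = drives - hot_spares - 1
    return data_drives * DRIVE_GB - SYSTEM_OVERHEAD_GB

print(usable_raid10(12))  # 3000 GB -- close to the ~3 TB the GUI shows
print(usable_raid5(12))   # 4800 GB -- the GUI's 4.2 TB implies more overhead

The RAID 10 number lands right on what the GUI shows; the RAID 5 gap suggests the system reservation is bigger than my guess.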
 
There is a lot you can do from the CLI that you can't do from the GUI. The drive configurations available from the GUI are just the "standards", and there are other things you can do from the CLI. I rarely ever do RAID 6 on anything but SATA, so it hasn't been an issue.

RAID 10 and RAID 6 are far from standard on shared storage; 98% of what we deploy is on RAID 5. RAID 6 is used for 1TB and 2TB drives (4+2 or 12+2 usually), and RAID 10 is used for certain use cases (databases, etc.). But the majority of stuff out there is on RAID 5.
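
To put rough numbers on why the group size matters: parity overhead shrinks as the group gets wider. A quick back-of-the-envelope sketch, plain arithmetic and nothing vendor-specific:

Code:
# Space efficiency of common parity layouts: data / (data + parity).
layouts = {
    "RAID5 4+1":  (4, 1),
    "RAID6 4+2":  (4, 2),
    "RAID6 12+2": (12, 2),
}
for name, (data, parity) in layouts.items():
    print(f"{name}: {data / (data + parity):.0%} usable")
# RAID5 4+1: 80%, RAID6 4+2: 67%, RAID6 12+2: 86% -- wider groups
# claw back most of the second parity drive's cost.

Which is presumably why the big NL-SAS drives get the 12+2 treatment: the second parity drive is nearly free at that width.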
 
I noticed that when we talked to EMC. It was RAID 5/10 for SAS and RAID 5/6 for SATA. Apparently EMC has optimized the config this way; no idea why we don't have any more choices than that.
 
Awesome, I thought I'd bricked the 3100. I downloaded the service pack and started the install, thinking maybe that would get the license key pickup to work. Shortly after that I got a response from EMC licensing support that a few of the units shipped with a firmware version that wouldn't take the software/licensing upgrades, and they were issuing an RMA. The software update then completed, and the management interface was no longer pingable. I tried the connection utility, thinking maybe it had lost its config, but the device wasn't discovered. I ended up getting it back online by doing the USB flash drive config.
 
Well, mine is in. We just unboxed it like 10 minutes ago. More pictures to come!
 
Finished setting up a VNX5300 Block this week: 100x 300GB 15K drives and 11x 2TB 7.2K drives. The customer is booting 39 UCS blade servers from the SAN and storing some Hyper-V VMs on it.

It was nice to get my feet wet with the VNX. I'm sure I'll be deploying more.
 
I'm partway through setting mine up and getting it all cabled. I'm impressed with how easy this thing is, but it's been out of the box less than 48 hours and something is already wrong with both storage processors. EMC Support, here we go!
 
I was afraid of that. We had a problem with our first VNX, and from what we've heard there have been issues with a certain number of them. What my engineer and EMC thought was a bad data mover at first turned into a complete array replacement. Wondering if there was a problem with some on the line...
 
My EMC rep told me today that areas in the Midwest (IA, IL, and a few other locations) got the lucky bad batch of VNXe SANs. We had a bad SAN that was supposed to be shipped out to us that we never received at all. They are cross-checking a list of known bad serial numbers against the one I currently have, which I've started to set up.

My issues have been all over the board the last two days: it complained that the batteries were bad in both storage processors, the flex I/O modules reported they'd gone bad and then were fine again 10 minutes later, I couldn't LAG one port of the four-port flex I/O module, and it even said one of the storage processors had gone bad, only to report it fine 10 minutes later.

They have some research to do on their end, but I have a feeling we might wind up with a new setup again. Which is OK with me, because I don't have any production data on it right now.
 
That will make me, at the very least, hold off on ordering one, and perhaps look elsewhere. I realize these are entry level, yadda yadda yadda, but I've had my fill of tech support and I'm not looking to buy a unit where I need to spend countless hours troubleshooting it right from the start.
 