EMC VNX build, wondering if I'm getting a good deal.

cyr0n_k0r

Supreme [H]ardness
Joined
Mar 30, 2001
Messages
5,360
[Attached image: vnxbuild.gif]


Wondering how this looks to the EMC experts.

I told them our needs:
iSCSI block-level only (don't need CIFS or NFS)
Need about 10TB fast storage
Need about 10TB midline storage

Not sure why they have $14K in "services"; I'm going to ask them about that.
So far the current quote seems a little high to me. Any opinions?
 
With 23x600GB drives you'll top out at 8TB if you use 5+1 RAID 5 groups. The first 4x600GB drives will be the vault drives and you can use them, too, but only for static files that need next to no I/O.

Services are for the consultant to install, update, configure, and help you migrate to the VNX, I'm sure.
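For anyone checking the math, here's a quick sketch of where that ~8TB ceiling comes from. The ~536.8GB formatted capacity per 600GB drive is an approximation, and the vault/spare counts follow the post above:

```python
# Rough usable-capacity check for 23x 600GB drives in 5+1 RAID 5 groups.
# Assumptions: ~536.8 GB formatted capacity per 600 GB drive,
# 4 vault drives, 1 hot spare.
TOTAL_DRIVES = 23
VAULT = 4
HOT_SPARE = 1
FORMATTED_GB = 536.8

data_drives = TOTAL_DRIVES - VAULT - HOT_SPARE   # 18 drives left
groups = data_drives // 6                        # three 5+1 RAID 5 groups
usable_gb = groups * 5 * FORMATTED_GB            # 5 data drives per group
print(f"{usable_gb / 1000:.1f} TB usable")       # ~8.1 TB
```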
 
Looking back at the install for our 5500, I could've easily done it myself (I did most of it anyway). Services are where you always get killed. They probably won't support it if it's not installed by a certified tech, etc.

I thought the optimal RAID config was 4+1 RAID 5 for SAS and 6+2 RAID 6 for NL-SAS.

Curious about your IOPS needs?
 
With 23x600GB drives you'll top out at 8TB if you use 5+1 RAID 5 groups. The first 4x600GB drives will be the vault drives and you can use them, too, but only for static files that need next to no I/O.

Services are for the consultant to install, update, configure, and help you migrate to the VNX, I'm sure.

That's the first 5 drives, and you can put files on them, but it's not recommended. They are supposed to be dedicated to the OS and config.
 
That's the first 5 drives, and you can put files on them, but it's not recommended. They are supposed to be dedicated to the OS and config.
To save money, why not just make the first 5 drives cheaper 146GB or 300GB 15k SAS drives? If you can't put files on them anyway, what does it matter how big they are, right?

Our IOPS are not high. We need about 2-3TB of fast storage to be dedicated to our VM cluster and SQL/Exchange db's. The rest of the fast storage is for file storage. The midline storage is for backups and security camera footage. Lots of it.
 
You're only going to net around 2,000 IOPS on those SAS drives. I'm assuming you looked at what your DBs and VMs are currently using for peak IOPS?
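The ~2,000 figure above can be sanity-checked with a standard back-of-the-envelope formula. The per-spindle rating and write mix below are illustrative assumptions, not numbers from the quote:

```python
# Effective host IOPS from a set of spindles, accounting for the
# RAID write penalty. Rough rules of thumb (assumptions):
# ~180 IOPS per 15k SAS spindle, RAID 5 write penalty of 4.
def effective_iops(spindles, per_spindle, write_pct, penalty):
    raw = spindles * per_spindle
    # Each host write turns into `penalty` back-end I/Os.
    return raw / ((1 - write_pct) + write_pct * penalty)

# 18 data spindles of 15k SAS with an assumed 30% write mix:
print(round(effective_iops(18, 180, 0.30, 4)))  # ~1700, in the same ballpark
```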
 
To save money, why not just make the first 5 drives cheaper 146GB or 300GB 15k SAS drives? If you can't put files on them anyway, what does it matter how big they are, right?

That's what we were taught in class too. Speaking of which, I just booked my hotel and plane ticket for EMC training in Boston next week.
 
You're only going to net around 2,000 IOPS on those SAS drives. I'm assuming you looked at what your DBs and VMs are currently using for peak IOPS?
I have not. We aren't doing crazy mission critical stuff. It's for a school district.
If you could link to some how-tos on determining max I/O for stuff like that, I'd certainly look into it and report back.

That's what we were taught in class too. Speaking of which, I just booked my hotel and plane ticket for EMC training in Boston next week.
Then why wouldn't our rep recommend that? Why let us spend money on 600GB drives we wouldn't even be able to utilize?
 
That's the first 5 drives, and you can put files on them, but it's not recommended. They are supposed to be dedicated to the OS and config.

Nope. Clariion was 5 vault drives, VNX is 4.

You can create a RG with the vault drives and use them, but it is not recommended to put anything that needs any consistent I/O on them. ISO images, VM templates, etc.
 
Assuming these machines are currently running in a physical environment on a Windows server OS, you can use the Windows perf counters to determine your average and peak IOPS.

If these are new then I would suggest heading over to the application vendor to get as close to an estimate as you can.

You may also want to consider at least some EFD for cache, since I see you are getting the FAST suite. This would at least give you some breathing room if you have an application vendor who can't answer the IOPS questions. Believe me, I've been there before and it certainly won't be the last time.

I know it may seem that I'm a bit overly concerned about this, but I'm just trying to save you a headache down the road. Get your statistical data together, revisit, and get additional quotes to compare (NetApp, etc.).
 
http://seth.killey.me/?p=355
I'm following this to set up counters.

I will run them for 7 days starting tonight at midnight. It will be good information to have.
If my IOPS are not high at all, would it be more cost effective to just run 2 DAEs full of 2TB drives and spread the load across all those spindles?
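To see roughly how that trade-off works, here's a raw-IOPS comparison. The per-spindle figures are rule-of-thumb assumptions (~180 IOPS for a 15k SAS drive, ~80 for a 7.2k NL-SAS drive), and a 15-slot DAE is assumed:

```python
# Rule-of-thumb raw (read) IOPS per drive type -- assumptions,
# not measured figures: 15k SAS ~180 IOPS, 7.2k NL-SAS ~80 IOPS.
def raw_iops(spindles, per_spindle):
    return spindles * per_spindle

print(raw_iops(18, 180))  # 18x 15k SAS data spindles -> 3240
print(raw_iops(30, 80))   # two 15-slot DAEs of NL-SAS -> 2400
```

More NL-SAS spindles can close the gap on raw throughput, but latency per I/O stays much worse on 7.2k drives, so it isn't a straight swap.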
 
That's a good guide. As soon as you determine your IOPS (you already know your capacity), go back to your EMC account team or VAR and provide your stats. They should architect it around your statistics with some overhead for growth.
 
Our VAR used the VMware Capacity Planner to help figure out our IOPS. Ran it for a little over a week; gave us some great data to help size our two VNXs.
 
Yup, that's another good option. Make them work for it; you're the customer. However, some may charge for this type of service. It gives you more than just IOPS requirements.
 
Yup, that's another good option. Make them work for it; you're the customer.
I've already had to fire our initial account rep for our area.
K-12 education in Arizona.
It took him 3 weeks just to get me the above initial ballpark quote.
His manager called me and apologized and said what I experienced was not typical EMC customer service.
Should I believe him, or is this what I can expect from the manager (who has said he will personally handle my account now)?
 
That's unfortunate, but the reality is there are always sour grapes in the bunch. I don't know about your engagement with this manager, but certain questions should be asked. Sometimes you have to push people to do what they are supposed to. EMC is not the only vendor out there, and you have a choice here.
 
When you say you want 10TB, you need to make sure to tell them usable or raw. You aren't getting what you want here, most likely.

The vault drive pack is 8 drives but only 4 are actually used by the vault. You can absolutely use the vault drives and it doesn't have to be low I/O. The problem comes in should you lose something that causes the array to kill write cache..when that happens write cache gets dumped to the vault..but honestly, you have other things to worry about when that happens. Performance of the array will go through the floor when you lose write cache.

So that gives you 4 drives left from the vault pack, plus another 15 drives. The reason they spec'd 600GB drives for the vault is that you can use those 4 along with the other 15. If they were different sizes then you really couldn't. The problem here is the number. 15 + 4 = 19 - 1 (for hot spare) = 18. So either they are thinking (3) RAID Groups of 4+2 RAID6 or the person doing the config doesn't really know what they are doing. Usually for these we'll do 4+1 RAID5 by default. That's 15...with 3 left over that you aren't using. I suggest having them add two more 600GB drives. That'll be 20 total...four RAID Groups of 4+1. If you do that you'll get 8.4TB of actual usable space...but that's still less than 10.

As for the NL-SAS 2TB drives they have 7. Take one for hotspare and you have 6. I assume they are doing 4+2 RAID6 for drives that size (they should) and that gives you 7.15TB of usable space...less than 10. They need to add more.
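The usable-space figures above can be reproduced with a small sketch. The per-drive formatted capacities below are approximations back-solved to match the quoted numbers (~525GB per 600GB drive, ~1788GB per 2TB drive), not official EMC specs:

```python
# Usable space = groups x data drives per group x formatted capacity.
# Formatted capacities are assumptions consistent with the post:
# ~525 GB per 600 GB drive, ~1788 GB per 2 TB drive.
def usable_tb(groups, data_per_group, formatted_gb):
    return groups * data_per_group * formatted_gb / 1000

print(usable_tb(4, 4, 525))    # four 4+1 RAID 5 groups of 600 GB -> 8.4 TB
print(usable_tb(1, 4, 1788))   # one 4+2 RAID 6 group of 2 TB -> ~7.15 TB
```

Both come out short of the 10TB targets, which is the point: "10TB" in an RFP means nothing until raw vs. usable and the RAID layout are pinned down.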

That's why specifying raw versus usable is important and when you talk usable you need to talk about RAID types and all that. It's common we see RFPs for "Array must have 100TB of space" and that's it...or they might actually say 100TB usable..but not specify anything that helps us with RAID sizing.

The services is a standard EMC SKU, PS-PKG-IMLPU and PS-PKG-MRUD. PS-PKG-IMLPU is for Snapview/Snapsure..so it's setting up snapshots and maybe Replication Manager (would need to check my EMC PS list). PS-PKG-MRUD is the basic implementation of the VNX array. $14K for those is right. It's services. You aren't required to have it installed by EMC or a partner for support...but don't call EMC support and ask them to walk you through it. The VNX arrays are not considered to be "customer installable" like the VNXe arrays are. The difference is that the VNXe information, documentation, and setup wizards walk you through it. The VNX doesn't. I've installed them using the docs and it's not that bad but I also know the best practices and have setup previous EMC arrays and much of that carried over.

One more thing. I find it odd they quote EMC PS. Any good partner will use their own people and usually give you more services for less money. We do. My guess is that this partner isn't a big EMC reseller and doesn't have people on staff to do it. That would concern me.

If you're worried about price let me know and I'll do an estimate and see how it looks. They should be doing some pre-work to confirm IOPS sizing..but that depends on the customer and the deal. Some people want us to, some don't...and on a $58K revenue deal don't expect them to bend over backwards. But they can easily do a VMware Capacity Planner or have you run PerfMons against the primary servers and then use a tool we have to spit out IOPS sizing data. It's not very hard and doesn't take long. If they don't know how to do that find another partner. And if you're in the southeast you should have been talking to me. ;)
 
I've already had to fire our initial account rep for our area.
K-12 education in Arizona.
It took him 3 weeks just to get me the above initial ballpark quote.
His manager called me and apologized and said what I experienced was not typical EMC customer service.
Should I believe him, or is this what I can expect from the manager (who has said he will personally handle my account now)?

Who gave you the quote? The partner or EMC direct? EMC doesn't have control of its partners. Usually, though, if a partner is always like this, the EMC reps will shy away from them. 3 weeks for a quote like this is a long time. I won't say we do quotes overnight, but a 5300 like this would be 2, maybe 3 days tops... BUT that assumes a few things. The partner may have had an issue with EMC giving them registration on the deal so you get the best price. That happens, and it delays quotes.

Every manufacturer has issues like this. Sometimes you end up on the wrong side of bad luck or bad timing but 3 weeks is VERY far from typical.
 
The partner did not have anything to do with the 3 weeks. It was the actual EMC rep who took 3 weeks. Our VAR followed up with me and this EMC tool weekly.
The EMC rep had the balls to tell me he only had 1 engineer working on his team at the moment and that he was given explicit orders not to work on quotes for customers who wouldn't be closing (purchasing) before the end of their fiscal year (March 30th).
Basically, he was telling me that since we were not going to buy within 2 or 3 days of getting a quote, we got put at the bottom of the list.
I called my VAR and said either we get a new rep or we walk.

We don't run VMware, so there isn't a capacity planning tool inside Hyper-V that I am aware of. I have set up perf counters, though, that I'm going to run over the next 7 days so I have some hard data to give the manager for a new quote.

@NetJunkie, sorry, we are in the southwest (Arizona).
Too bad, it really sounds like you know what you are doing too.
 
I'll be honest: at the end of a quarter, if you aren't buying right then, you get shifted down in priority. It sucks, but it's the way it is. That's not just EMC. That's everybody. Reps are pushed to get things in by end of quarter.

On the flip side, the EMC engineer shouldn't even be involved in this deal. Again, that tells me this VAR doesn't do much EMC and doesn't have the skills in-house to do it. We routinely do $1M and $2M storage deals by ourselves. EMC is there if we need them, but it's not often that we do. On a VNX5300? No way. Even at end of quarter we'd have helped you, but that's just our model. We have our own people. Our own capabilities.

You learn a LOT when working on the VAR side. I'll be a really obnoxious customer if I ever move back to the other side.
 
My bad on VMware. I sometimes make the mistake of assuming everyone is using it.

As far as EMC or VARs go, I've found it's better to educate yourself so you can make sure you are asking the right questions and push them to come up with the proper solutions. While this might not be the case for Varrow, the fact of the matter is there are shitty sales reps and account teams both at VARs and bigger orgs. I've dealt with it first-hand many a time, and that sucks, because while the product they are selling/manufacturing may be awesome, it can be represented poorly, and that's a turn-off for the customer.

Of course, the flip side is also true, but I've learned it's a crapshoot sometimes. For example, our Cisco account team is awesome; our HP account team, well, they are useless.
 
We don't run VMware, so there isn't a capacity planning tool inside Hyper-V that I am aware of. I have set up perf counters, though, that I'm going to run over the next 7 days so I have some hard data to give the manager for a new quote.

You don't have to run VMware for the VAR to use the Capacity Planner tool. It just collects all the data over WMI from each VM or physical server and uses that to figure out the numbers. It didn't interact with our vCenter servers at all.
 
The vault drive pack is 8 drives but only 4 are actually used by the vault. You can absolutely use the vault drives and it doesn't have to be low I/O. The problem comes in should you lose something that causes the array to kill write cache..when that happens write cache gets dumped to the vault..but honestly, you have other things to worry about when that happens. Performance of the array will go through the floor when you lose write cache.

I avoid putting any decent I/O on the vault drives since that's where the write cache flushes to in the event of a power failure. Assuming the network, fabric, and servers will be running on battery backup when that happens, you don't want a server attempting to run I/O on the vault drives when those drives need to be 100% focused on committing the write cache so the array can power down without losing uncommitted writes.

Can someone put high I/O LUNs on the vault drives? Sure. But I'd just as soon not put anything there at all if I can avoid it. When a customer wants to use the space, I recommend putting VMware templates, ISOs, etc. there.
 
Like others have said... Templates, ISOs, Archive space for backups.

But I have read that it's a misconception that you cannot use the vault disks for heavy I/O. If that were the case, though, you would think they would allow vault drives into storage pools... but they don't.
 
Like others have said... Templates, ISOs, Archive space for backups.

But I have read that it's a misconception that you cannot use the vault disks for heavy I/O. If that were the case, though, you would think they would allow vault drives into storage pools... but they don't.

The drives already have LUNs and other things on them. There are several reasons why they can't be put in a storage pool.
 
Also, if you notice I didn't include vault drives in my usable space calculation. I never do. It's there if the customer wants to use it. Some are "old school" and never touch the vault...others use it like any other drive.
 
Our VAR put the vault space as a separate item when listing out the space calculations, and they had the same recommendations: use it for archive space, ISOs, templates.

I wonder why they expose the vault drives to the user... maybe just for these reasons?
 
The FLARE code does not require the full space of all the disks, so there is some space left. My best guess is that EMC thought, "well, let's give it to the customer since it's unused anyway." The choice is yours. We do not recommend it, but you can use it.

If a vault drive fails, a hot spare is not automatically invoked, and if you invoke one, it will only cover the user LUNs. Vault LUNs have their own level of protection.
 
The FLARE code does not require the full space of all the disks, so there is some space left. My best guess is that EMC thought, "well, let's give it to the customer since it's unused anyway." The choice is yours. We do not recommend it, but you can use it.

If a vault drive fails, a hot spare is not automatically invoked, and if you invoke one, it will only cover the user LUNs. Vault LUNs have their own level of protection.

Thanks for that info; I had no idea they weren't protected by a hot spare.
 
We use Dell EqualLogic SANs, which work great. EMC was way too pricey, and the Dells have served us well for the past two years.
 
$14K for EMC PS is about on par. However, that will include setup and moving 2 or 3 hosts to it.

If you've set up one of these arrays in the past, know Unisphere, and can migrate the data yourself (i.e. PPME or Open Migrator), then $14K is a ripoff.

@netjunkie: EMC always quotes PS regardless of whether they use a VAR or not. However, a VAR will mask it and stick their own people in there. A VNX is a customer-installable array. I know, because I'm the one who unboxed, racked, powered on, and cut up the storage on my 5700. It was no different than the 340s, 240s, and 480s I've done in the past.

Also, watch out for RAID 6. I've personally seen EMC internal speed documents stating that RAID 6 adds a heavier load on the SPs, almost cutting the number of usable spindles by a third due to the overhead of the double writes RAID 6 creates. You can see it in action by watching SP processor utilization get smashed on low loads, along with dumb amounts of cache flushing.
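The extra write overhead above can be put in rough numbers. A common rule of thumb (an assumption here, not from the EMC documents mentioned) is that each host write costs 4 back-end I/Os on RAID 5 and 6 on RAID 6:

```python
# Back-end I/O load for a given host workload and RAID write penalty.
# Assumed penalties: RAID 5 = 4 back-end I/Os per write, RAID 6 = 6.
# The 50% write mix is purely illustrative.
def backend_iops(host_iops, write_pct, penalty):
    reads = host_iops * (1 - write_pct)
    writes = host_iops * write_pct * penalty
    return reads + writes

print(backend_iops(1000, 0.5, 4))   # RAID 5: 2500 back-end IOPS
print(backend_iops(1000, 0.5, 6))   # RAID 6: 3500 back-end IOPS
```

At a write-heavy mix like this, the same host load costs the spindles (and SPs) about 40% more back-end work on RAID 6, which lines up with the "lost spindles" effect described above.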

Edit: I took over a CX4-240 18 months ago that had 180 450GB FC disks in 100% RAID 6 raid groups. The array was always 100% maxed out. I had some wiggle room, converted the entire array to RAID 5 pools, and processor load dropped like a brick.

best of luck.
 
My issue wasn't EMC quoting it; his proposal came from a partner, not EMC. EMC shouldn't even be involved in a 5300 of this size. YOU may install one, but it's not considered "customer installable" by EMC. That doesn't mean you can't, or that it won't be supported; it means you'll get a bunch of boxes, some CDs, and lots of luck, but don't call EMC to walk you through it.
 
@netjunkie, depends on how much you are buying. If you are buying under $250K a year, then you'll most likely get pushed towards a VAR.

In those boxes they do include big, giant pictures of how to plug stuff in. Once you get the head and DAE000 plugged in, the rest is a cakewalk: EXP -> PRI, rinse and repeat. Check Unisphere to make sure the DAE came online. Block level is cake; adding file via Celerra has its own challenges, but I'm no longer installing the Celerra and have moved towards Isilon. As for EMC/Powerlink walking you through it: yes, good luck. That's why it really pays to be super cool with the local field CEs. Take 'em out to lunch, buy 'em a beer after hours, and those guys will work all sorts of massive magic to take care of you. Those field CEs get run into the dirt; they are just like you and me, up at 2am fixing stuff, so they totally understand.
 
Everything but enterprise goes through a partner, and some enterprise still does. <$250K isn't even close to enterprise. We have customers well in the seven figures of EMC gear per year.

Again, I'm not saying you can't do it. I'm just saying it's not really intuitive how to initialize it, add the enablers, etc. If you've done it once it's VERY easy, but that first time, by yourself, can be a challenge.

If you've never bought EMC gear I doubt you'd have any idea how to even find a local field CE.
 
Thanks for that info; I had no idea they weren't protected by a hot spare.

We do give the customer the option to create user LUNs on the vaults, but we assume they follow best practice and do not use them anyway. The system LUNs on the vaults are protected on their own.

That's why it really pays to be super cool with the local field CEs. Take 'em out to lunch, buy 'em a beer after hours, and those guys will work all sorts of massive magic to take care of you. Those field CEs get run into the dirt; they are just like you and me, up at 2am fixing stuff, so they totally understand.

As a CE myself, I applaud this :p. But seriously, there's nothing more satisfying than helping a customer out with things that "technically" aren't my job and getting compliments and trust in return. It's a special kind of magic, customer service in IT :p.
 
Everything but enterprise goes through a partner, and some enterprise still does. <$250K isn't even close to enterprise. We have customers well in the seven figures of EMC gear per year.

Gotta be based on your location/city. In my current role, I'm forced to use a VAR. That's even after buying a VMAX, and I maybe have 2 more coming on the horizon. But in my previous role, I was direct. I see my EMC reps on a weekly basis, so I have a very good relationship with them.

Again, I'm not saying you can't do it. I'm just saying it's not really intuitive how to initialize it, add the enablers, etc. If you've done it once it's VERY easy, but that first time, by yourself, can be a challenge.

Good point. Unisphere does make it easier, but you are totally right. I've done more FLARE code upgrades than I can count, as far back as 19, and I'm running 30 on my NS stuff and 31 on my VNX stuff. Racked and stacked more trays than I can remember.

If you've never bought EMC gear I doubt you'd have any idea how to even find a local field CE.

Another good point. I've been doing it so long now that I forget about all those points these days.
 
I wouldn't call that 7,200rpm stuff "midline", especially with so few spindles. That's definitely bottom-tier storage from an IOPS perspective; static data only, IMO. File server LUNs, etc.
I've got shelves of SATA in my EMC and they're still doggedly slow, whether in pools, MetaLUNs, CIFS, or whatever. Just sayin', so you're not expecting much out of 'em.
In my world, midline is the 15K stuff (FC or SAS) and fast is solid state (Tier 0), but it's all relative, I guess. You definitely need to know your IOPS requirements and add some healthy headroom.
You don't want this thing tapped out on day one.
 
Again, I'm not saying you can't do it. I'm just saying it's not really intuitive how to initialize it, add the enablers, etc. If you've done it once it's VERY easy, but that first time, by yourself, can be a challenge.

If you've never bought EMC gear I doubt you'd have any idea how to even find a local field CE.

That's a good point. I have no idea whether things like that are included in the customer VNX Procedure Generator. My best guess is that they are. It's not difficult, but it's not that intuitive either when you've never done it. I'll have a quick look at that tomorrow. A VNX is supposed to be customer installable anyway.
 