MSP Virtualization Project

BigD1108

First, a little background. I work at a small MSP with about 5,000 endpoints, and heard today that our investors approved an up-to-$250,000 investment in a virtualization project - more specifically, a "hosted datacenter" initiative. We probably wouldn't move a whole lot of our existing client base to this model, but would use it to bring in new clients.

Now, being the individual who set up our virtual lab environment (2 ESXi hosts, Openfiler, etc.), and being the "go-to guy" on all things virtualization for our clients that have virtualized servers, I am almost sure that management will ask me to head up this project and manage the infrastructure once it is in place.

This will be a role change for me as I am currently a "network technician", but we're a pretty small team (6 people including the team lead) so I wear a lot of hats, doing everything from 3rd party software support to Windows domain administration, to Cisco network device configuration, to project management, all depending on the day.

Am I unreasonable to ask that:

1) Since it will be such a large change in my role (and far more specialized than my current role), I be given a raise to reflect my new responsibilities, and
2) That the company pay for me to get my VCP so that I am better able to effectively manage this new environment?

Secondly,

Let's talk hardware. The "project" isn't really at that point yet, but I like to think ahead. :) We are a Dell shop, so I had been looking at some Dell R710's with 128GB of RAM. I figured that, to start, it might be overkill, but planning for future expansion wouldn't hurt either.

For storage, I was thinking EMC iSCSI SAN(s). I saw a live demo of some of their products at the VMware Forum 2011 in Chicago, and was pretty impressed.

From a networking standpoint, we will most likely be looking at Cisco switches, and giving each client who hosts their server in our data center their own dedicated VLAN in order to isolate the traffic.
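For what it's worth, the per-client isolation described above might look something like this on a Cisco access switch (the VLAN IDs, client names, and interface are made up purely for illustration):

```
! Hypothetical: one dedicated VLAN per hosted client
vlan 101
 name CLIENT-ACME
vlan 102
 name CLIENT-GLOBEX
!
! Trunk to the ESXi host carries only the client VLANs
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 101,102
```

The port groups on the ESXi side would then tag each client's traffic with the matching VLAN ID.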

That being said, what do the experts out there think? I'm going to be continually updating this thread as the project progresses, so please feel free to check back often.

Thank you for taking the time to read this long-winded post.
 
Well I'll start on your first part:

1. I don't think it's unreasonable, but don't expect much.
2. I think this is perfectly reasonable and should be expected of them. You might want to try this one first and then use it as leverage to get number 1.


2nd:

As for hardware, we currently rock the R710s and couldn't be happier. We only have 3, but we are a small manufacturing shop and those boxes are hardly even being stressed. Since this is a hosted project, you might want to take a look at their blade centers as well. They will support the Cisco blade switches in the back that can connect up.

Look at the Nexus 1000V switches to control bandwidth and such. Also make sure you have enough bandwidth in and out of your colo, along with enough switch density in your racks.

Also consider setting up resource groups now, along with QoS policies for your customers, before it's too late. I hear through the grapevine that the company I used to work for is wishing they had thought about it to begin with. Hmm, makes me wonder if they should have listened to me in the first place.
 
You could also consider Cisco UCS w/Unified Fabric as well. Cisco is very competitive right now in the blade space, and on top of that you can implement a unified architecture, which makes things so much better to work with on the cabling front, with storage and IP communication w/virtual NICs over 10GbE...very nice indeed.

I'm working on implementing this in August timeframe..can't wait.
 
As someone who has done / is doing this kind of project here are some pointers.

1: If you want certs, and it is believed that this is going to bring in a whole bunch of customers, nag your boss for certs. It's more of a "set it up right once" thing than "do it using teh googlez."

2: If you are buying VMware / EMC, make sure you find a reseller you like. I worked with a few previously that were so bad I ended up becoming an EMC partner just so I didn't have to deal with them anymore. When you call EMC you will get an EMC rep, who is basically just a presales guy who can't actually sell you anything, and then you get a reseller. If you don't like your reseller, feel free to tell EMC and they will get you a new one.

3: For hardware, take your disk estimate and multiply by about 1.5. Also remember that if you are doing 1Gb iSCSI, 15k disks are going to perform similarly to 7.2k disks. If you go 10Gb iSCSI, then the difference matters. Also build redundancy into everything you can. Multiple SANs with snapshot replication and failover would be my recommendation. The VNXe 3500 or 5k series would be the best to look at there. For the internal network, a good traffic shaping plan is key. For the WAN side, start small, but make sure that you can upgrade your service at any time. For instance, when we moved into our rack we started with a 10/10 line. After adding some load that changed to 50/50, and now we have a 100/100 and are looking at a 2nd for load balancing. Also remember that you are going to need a minimum of 1 external IP per customer. We are estimating that by probably next year we will need our own /24, and those do not come cheap.

4: VLANs are absolutely the way to control customers. How you break out VLANs will be completely your call. The way we do it is to create a virtual switch for each customer that has its own VLAN ID. Those get tagged on the main switch, and then a virtual interface on our firewall. That ends up creating less configuration work; however, there is probably a better way to do it.
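To keep that one-VLAN-per-customer scheme manageable as customers come and go, the bookkeeping can be sketched in a few lines of Python (the customer names, base VLAN ID, and port-group naming convention here are hypothetical):

```python
def assign_vlans(customers, base_vlan=100):
    """Map each customer to a dedicated VLAN ID and a port group name,
    mirroring the one-VLAN-per-customer scheme described above."""
    plan = {}
    for offset, name in enumerate(sorted(customers)):
        vlan_id = base_vlan + offset
        if vlan_id > 4094:  # 802.1Q VLAN IDs top out at 4094
            raise ValueError("out of VLAN IDs")
        plan[name] = {"vlan": vlan_id, "portgroup": f"PG-{name.upper()}"}
    return plan

# Example: two hypothetical customers
print(assign_vlans(["acme", "globex"]))
```

Keeping the mapping in one place like this makes it harder to accidentally hand two customers the same VLAN as the environment grows.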
 
Thank you for all of the replies.

You could also consider Cisco UCS w/Unified Fabric as well. Cisco is very competitive right now in the blade space, and on top of that you can implement a unified architecture, which makes things so much better to work with on the cabling front, with storage and IP communication w/virtual NICs over 10GbE...very nice indeed.

I briefly looked at some information on this, but didn't have a whole lot of luck finding anything but marketing-speak about how Unified Fabric would help me "transform my business and dramatically increase business agility". Is there a good resource for technical information on this?

If you are buying VMware / EMC make sure you find a reseller you like.

Luckily, my CDW rep is a rock star, and I can work with VMware licensing and EMC reps through him. :)

For the WAN side start small, but make sure that you can upgrade your service at any time.

This is a good point. Right now, we have a 20/20 connection, but can be scaled to 100/100 (or anything in between) at any time we like.

Also remember that you are going to need a minimum of 1 external IP per customer. We are estimating that by probably next year we will need our own /24 and those do not come cheap.

Another good thing to know. I hadn't even considered this yet. We currently have 64 external addresses in our block.
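The two rules of thumb from the reply above (pad the disk estimate by about 1.5x, and size the external IP block to at least one address per customer) are easy to sanity-check with a few lines of Python. The overhead figure below is my own assumption (gateways, management, spares), not anything official:

```python
import math

def plan_capacity(usable_gb_estimate, growth_factor=1.5):
    """Pad the raw disk estimate for growth, per the ~1.5x rule of thumb."""
    return math.ceil(usable_gb_estimate * growth_factor)

def external_ip_prefix(customers, overhead=8):
    """Smallest CIDR prefix length covering one external IP per customer
    plus some overhead addresses."""
    needed = customers + overhead
    host_bits = max(needed - 1, 1).bit_length()  # round up to a power of two
    return 32 - host_bits

print(plan_capacity(1000))     # 1500 GB
print(external_ip_prefix(50))  # 26 -- a /26 is 64 addresses, the block size mentioned above
```

By the same math, somewhere around 240+ hosted customers is the point where a /24 becomes necessary.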
 
UCS info:

http://www.cisco.com/en/US/netsol/ns944/index.html

Unified Fabric info:

http://www.cisco.com/en/US/netsol/ns945/index.html

There are some users on these forums that are using UCS/Unified Fabric and Vblocks as well. If you need more info, I'm sure they would be happy to help; however, I would seriously consider a consultant that could assist with this type of setup. Our Cisco reseller is Ingram Micro: great pricing, not so great on the expertise. Before I started this process, I would have liked to get a consultant to come in, but alas, getting things like that at my current employer is a nightmare, so I had to do most of the research myself with some help from people I know here and there.
 
I briefly looked at some information on this, but didn't have a whole lot of luck finding anything but marketing-speak about how Unified Fabric would help me "transform my business and dramatically increase business agility". Is there a good resource for technical information on this?

With UCS, you have 2-8 10Gb cables coming from the back of your chassis to a pair of Fabric Interconnects (FIs) and 4 power cables. That's it. The FIs can manage as many chassis as you can plug into them. There is a 20-port 6120XP and a 40-port 6140XP. From the FIs you can connect into your corporate LAN for networking and (if needed) a fiber network so you can attach storage to the blades. You can even direct-connect a SAN to the FIs.

The beauty of the UCS is Service Templates. I can fill a chassis with 8 blade servers, create a single template on what they should have for vNICs, vHBAs, BIOS settings, firmware, boot order, etc. From the template I can deploy profiles to all 8 of my blades and they all get the same settings. Later on, if I decide to change a setting, I can apply the change to the template and that is pushed down to all the blades. If I add a new blade to the chassis, it will automatically update to the latest firmware when it boots.
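This is not the UCS API, but the template-to-profile relationship described above can be modeled in a few lines of Python to show why it saves work (all class names, settings, and blade names here are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class ServiceTemplate:
    """Toy model of a UCS service template: one set of shared settings
    that gets stamped onto every blade bound to it."""
    settings: dict
    profiles: list = field(default_factory=list)

    def deploy(self, blade):
        """Bind a blade: it receives a profile copied from the template."""
        profile = {"blade": blade, **self.settings}
        self.profiles.append(profile)
        return profile

    def update(self, key, value):
        """Change the template; the change propagates to every bound profile."""
        self.settings[key] = value
        for profile in self.profiles:
            profile[key] = value

tmpl = ServiceTemplate({"firmware": "1.4", "boot_order": ["san", "lan"]})
for blade in ("blade1", "blade2"):
    tmpl.deploy(blade)
tmpl.update("firmware", "1.4.1")  # both blades now show firmware 1.4.1
```

The point is the single source of truth: touch the template once, and every bound blade picks up the change.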

If I'm booting the blades from SAN and a blade dies, I can easily assign its profile to a spare blade and it boots up just as if it was the dead blade.

There are other selling points, but these are some of the big ones.
 
BTW..I forgot to mention, there is a Cisco UCS Emulator VM that you can setup to try it out. It's actually really neat. They provide 1 blade and 1 standalone server with their emulator to get familiar with the interface.

You can see first hand the Service Templates..etc that Child of Wonder mentioned in their post.
 
BTW..I forgot to mention, there is a Cisco UCS Emulator VM that you can setup to try it out. It's actually really neat. They provide 1 blade and 1 standalone server with their emulator to get familiar with the interface.

You can see first hand the Service Templates..etc that Child of Wonder mentioned in their post.

The current UCS emulator lets you do just about any config, not just 1 blade and 1 standalone server. I use it a lot for customer demos when they want to see UCS Manager. I'll model up 4 chassis and several different blade types to show them the single management interface for everything.
 
You could also consider Cisco UCS w/Unified Fabric as well. Cisco is very competitive right now in the blade space, and on top of that you can implement a unified architecture, which makes things so much better to work with on the cabling front, with storage and IP communication w/virtual NICs over 10GbE...very nice indeed.

I'm working on implementing this in August timeframe..can't wait.

UCS is more than just very competitive; they are beating just about everyone on price with the current "starter" bundles they offer. You can get the two interconnects, chassis, and 4 blades w/96GB for like $40K, then just expand from there. That's like getting two 10Gb switches, an entire chassis ready to go, and 4 good vSphere blades, all for $40K.

Be careful using CDW. They do very well on pricing but usually have bad professional services...and I expect you'll need some here. If you do the EMC VNXe, it is "customer installable," but I'd have someone ready to help if needed. UCS can be weird if you've never done it...really, any blade configuration is. You have to learn the configuration and management.

I design UCS/EMC/vSphere environments every day. Let me know if I can help.
 
One more thing...You say EMC iSCSI SAN but don't do iSCSI. ;) EMC has excellent tools for vSphere and NFS integration. I haven't done an iSCSI design in probably 18 months. Just no real benefit to doing it when NFS integration is so good. Great plugins for vCenter and you can offload snapshots, compression, etc to the array since it understands the filesystem.
 
See what I mean... you'll get most if not all of the answers you need. :)

What version of the UCS Emulator are you working with now Netjunkie? I'd like to get my hands on it..I was using 1.4 beta.
 
I'm using 1.4 as well. The older versions let you adjust the config too; you just had to do it via the text menu thing on the VM. With 1.4 you should see a frame on the left side of the UCSM login page that lets you adjust the config. They made it a lot easier.
 
Thanks again to all of you guys for taking the time to respond to my questions. Also, I may just take you up on your offer of help, NetJunkie, as the project progresses a little bit further.

I downloaded the UCS Platform Emulator and have looked around in the console a little bit. It looks quite daunting. At the same time, though, it looks very cool. That's why I love this industry though....just when you feel like you are on top of things knowledge-wise, you realize that you don't know shit and there's tons of new stuff to learn.
 
Little update for you guys...the planning stage of this little project is progressing along quite well.

Got some pricing from CDW for a starter bundle with 4 UCS blades (dual 6-core Xeons and 48GB of RAM) and 2 Fabric Interconnects, and had a couple of conference calls with some really cool people at EMC who provided a nice quote on a VNX 5300 with 8x300GB 15K drives and 6x1TB SATA drives to start.

I've been very impressed with EMC overall. They had an SMB consultant stop by our office to talk about the quote they gave us, and invited my boss and me to Chicago to visit the Microsoft Lab so we can play with the 5300 they have there first-hand. Very classy.

Also got some licensing out of the way (about 25-30K in VMware licensing and 18K in Microsoft licensing).

Going to speak to my CDW rep and a representative from Tripp Lite tomorrow on power management considerations for the project.

I also found out that my employer is going to be paying for training/certifications (looks like the VCP and the EMCISA at this point) which is pretty awesome as well.
 
Going to speak to my CDW rep and a representative from Tripp Lite tomorrow on power management considerations for the project.

Take a good hard look at APC and Tripp Lite side by side. We got Tripp Lite UPSes, battery packs, and PDUs this year, and while everything worked like we wanted it to, the SNMP Web Cards flake out all the time, and I've had to RMA 2 of the 4 PDUs already. I'm honestly not sure if it was worth the price of admission...
 
3: For hardware, take your disk estimate and multiply by about 1.5. Also remember that if you are doing 1Gb iSCSI, 15k disks are going to perform similarly to 7.2k disks. If you go 10Gb iSCSI, then the difference matters.
This is horrendously misleading and wrong. Please, do not EVER expect that a 15k disk will act like a 7.2k disk, no matter what your interconnects are, especially in a shared virtual environment.

No offense dude, but that's just wrong, and the absolute 100% most common mistake our customers make on a daily basis (and vendors).

Interconnect helps with payload; disk speed helps with IOPS. IOPS are king in a virtual environment, and all you should ever be concerned about. ESX I/O is essentially 100% random - IOPS, IOPS, IOPS. I can't count the number of times I've had to tell people this.
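To put rough numbers on the IOPS point, here's a back-of-the-envelope calculator. The per-spindle figures and RAID write penalties below are common rules of thumb, not vendor specs:

```python
# Ballpark random IOPS per spindle (assumed rule-of-thumb figures)
DISK_IOPS = {"15k": 175, "10k": 125, "7.2k": 75}

def raid_frontend_iops(disks, disk_type, read_pct, write_penalty):
    """Usable front-end IOPS from a RAID group, given the write penalty
    (roughly: RAID10 = 2, RAID5 = 4, RAID6 = 6)."""
    raw = disks * DISK_IOPS[disk_type]
    write_pct = 1.0 - read_pct
    return raw / (read_pct + write_pct * write_penalty)

# Same spindle count, same interconnect: the 15k group delivers far more
# random IOPS than the 7.2k group -- the link speed never enters into it.
print(round(raid_frontend_iops(8, "15k", 0.7, 2)))   # ~1077
print(round(raid_frontend_iops(8, "7.2k", 0.7, 2)))  # ~462
```

Notice that bandwidth appears nowhere in the formula, which is exactly the point being made above.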
 
Take a good hard look at APC and TrippLite side by side. We got TrippLite UPS, Battery Packs and PDUs this year and while everything worked like we wanted it too the SNMP Web Cards flake out all the time, and I've had to RMA 2 of the 4 PDUs already. I'm honestly not sure if it was worth the price of admission...

Yeah, we generally quote out APC units for our clients, so I'm gonna get pricing on solutions from both APC and TrippLite.
 
One more thing...You say EMC iSCSI SAN but don't do iSCSI. ;) EMC has excellent tools for vSphere and NFS integration. I haven't done an iSCSI design in probably 18 months. Just no real benefit to doing it when NFS integration is so good. Great plugins for vCenter and you can offload snapshots, compression, etc to the array since it understands the filesystem.

I'm still of the opinion that your protocol doesn't really matter... picking the right vendor for the right tool does, though. I'm a sucker for block storage though :)

If you're an NFS shop, buy from the vendors that have NFS as their bread and butter. If you're a fibre shop, buy from the vendors that started out with FC - don't try to shoehorn an NFS company into building fibre.

Likewise, if you're a NetApp shop, buy what they do best. If you're EMC, buy what they do best. Either pick a technology and let that lead you to a vendor, or pick a vendor and buy what their specialty is. Every vendor does everything now - finding what they do best is what matters most :)

In my experience, of course. But I also know too much :p
 
I'm still of the opinion that your protocol doesn't really matter... picking the right vendor for the right tool does, though. I'm a sucker for block storage though :)

If you're an NFS shop, buy from the vendors that have NFS as their bread and butter. If you're a fibre shop, buy from the vendors that started out with FC - don't try to shoehorn an NFS company into building fibre.

Likewise, if you're a NetApp shop, buy what they do best. If you're EMC, buy what they do best. Either pick a technology and let that lead you to a vendor, or pick a vendor and buy what their specialty is. Every vendor does everything now - finding what they do best is what matters most :)

In my experience, of course. But I also know too much :p

This is great advice. I am also a block storage fanboi. FC all the way. I guess I just dislike Ethernet networks for everything. I like Brocade too and rather dislike Cisco all the way around. I let the network nerds work on the Cisco switches, so if I can get the storage off of Cisco gear onto Brocade, I consider that a win personally ;).
 
It's not that simple though. Well, it is if your focus is just on the storage you plan to use. But it's almost always more complicated than that. DR, replication, backup, other applications, etc... Those usually dictate choices more than just the vSphere environment. That's where design decisions get fun and interesting. Honestly, these other things are why we almost always win against NetApp on a deal and being in their back yard we fight them all the time.
 
This is great advice. I am also a block storage fanboi. FC all the way. I guess I just dislike Ethernet networks for everything. I like Brocade too and rather dislike Cisco all the way around. I let the network nerds work on the Cisco switches, so if I can get the storage off of Cisco gear onto Brocade, I consider that a win personally ;).

I'll second Brocade vs. Cisco for storage.
 
It's not that simple though. Well, it is if your focus is just on the storage you plan to use. But it's almost always more complicated than that. DR, replication, backup, other applications, etc... Those usually dictate choices more than just the vSphere environment. That's where design decisions get fun and interesting. Honestly, these other things are why we almost always win against NetApp on a deal and being in their back yard we fight them all the time.

Yeah, but it's time to move off those platforms and on to vSphere! :p

You're definitely right, which is why I tell people the second part (If you're emc, do what they do best, if you're a netapp shop or hp shop, do what they do best) - if your existing infrastructure is one thing, don't try to make that vendor sell you something they're not good at - you'll just get frustrated. :) If it's not, and you're starting fresh? Pick a technology, and find your vendor, or pick a vendor, and use their technology. :)
 
Oh, I must have missed the announcement where you can virtualize AIX. ;) I'm with you on that...I'm pushing, I'm pushing.

Come on, we do SCO, what more do you want? :p

On that note, I actually had to repro a problem with sco... I had to go bathe afterwards. Made me feel dirty.
 