
Cisco UCS blade center, VMware, and CUCM

cyr0n_k0r

Supreme [H]ardness
Joined
Mar 30, 2001
Messages
5,360
We are looking into refreshing a lot of our server infrastructure in the next few months and a vendor has suggested the Cisco UCS platform.

Currently we are 100% a Hyper-V shop with no plans to change. We were originally looking at three HP DL380 G7s with 128 GB of RAM each to replace our six DL380 G6s with 32 GB of RAM. We would like to move to a Hyper-V cluster with these new servers.

At the same time we are looking at possibly upgrading our aging DL380 G5 CUCM servers to something newer. The vendor suggested moving to the UCS platform because we could run the CUCM VMs on the UCS alongside our Hyper-V cluster. We like this idea, but I have learned that CUCM is only supported on VMware.

I suppose we could run three 128 GB Hyper-V blades plus one dedicated 32/64 GB VMware blade for CUCM and one or two other upcoming VMs that I know will need ESXi, but I wanted to see if anyone has experience with Call Manager on the UCS platform running on VMware. I realize that running the publisher, subscriber, and Unity voicemail servers all on one blade is not ideal, but the VMware requirement limits our options. I can't justify the cost of two separate VMware blades to management while also ordering three dedicated Hyper-V blades; one blade I might be able to swing, though. Any thoughts?
We are also looking at finally migrating to a true SAN with this possible blade center upgrade; we are leaning towards the Dell EqualLogic arrays, just FYI. Although I'd like to keep this particular discussion focused more on the virtual UCS/CUCM side of things and less on our storage decisions.
 
It works. Customers are doing it. The nice thing is that if your VMware blade fails you can move the Service Profile to another blade and boot it right up, so maybe cut down on some Hyper-V stuff and re-use that blade until a replacement arrives.

You can do a lot with UCS as far as QoS and flow policies for realtime apps and voice/video. I'm all about the UCS. ;)
 
The nice thing is that if your VMware blade fails you can move the Service Profile to another blade and boot it right up, so maybe cut down on some Hyper-V stuff and re-use that blade until a replacement arrives.
can you explain this a little more?
If I have 1 ESXi blade and 3 Hyper-V blades how can I move the CUCM load from the ESXi to the Hyper-V blades?
 
I have done quite a few deployments using C-Series and B-Series (blade chassis) UCS with CUCM and related apps. To give you an idea, version 8.6(2) (the latest available) is supported on ESXi 4.1.
It will run fine on 5.0, but beware: as of the time I am writing this, they do not support 5.x yet.

To answer your question, you need to think about your hardware and blades differently with UCS.

When you say you have 1 ESXi blade and 3 Hyper-V blades, that means you have 1 ESXi service profile and 3 Hyper-V service profiles.

Service profiles literally make your hardware stateless with respect to the OS on it (in a boot-from-SAN environment). If you want, you can move the CUCM (ESXi) profile to another blade and swap them around all day long just for fun. Basically, if you buy identical blade hardware, it does not matter where you put things in UCS.
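If it helps, here's a rough Python sketch of the idea (purely conceptual, all names invented; this is not the UCS Manager API):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ServiceProfile:
    """All the identity that is normally burned into server hardware."""
    name: str
    uuid: str
    macs: List[str]     # vNIC MAC addresses
    wwpns: List[str]    # vHBA WWPNs (what SAN zoning keys off)

@dataclass
class Blade:
    """A physical slot; it has no usable identity until a profile is applied."""
    slot: int
    profile: Optional[ServiceProfile] = None

def associate(profile: ServiceProfile, blade: Blade) -> None:
    blade.profile = profile   # the blade now boots as this server

esxi = ServiceProfile("esxi-cucm", "uuid-0001",
                      ["00:25:b5:00:00:01"], ["20:00:00:25:b5:00:00:01"])
blade1, blade4 = Blade(1), Blade(4)

associate(esxi, blade1)      # the CUCM host lives in slot 1
blade1.profile = None        # slot 1 fails or is retired
associate(esxi, blade4)      # the same identity now boots from slot 4
print(blade4.profile.uuid)   # uuid-0001: zoning and MACs still match
```

The point is that the identity object, not the sheet metal, is what the SAN and the network care about.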
 
I have used the UCS platform and it is great. The stateless blade profiles are excellent. Put some thought into migrating to VMware and going NFS or iSCSI on a NetApp.
 
Interesting. Would this be similar to how VMware makes "pools" of resources in a cluster environment?
So Blade 1 is not necessarily independent from blades 2-4 as it relates to physical hardware? Is the UCS platform doing some kind of pooled hardware that you install ESXi or Hyper-V onto?
If that is the case, do you really need blades dedicated to a specific virtualization product? Would I be able to accomplish the following?
3 physical blades running 3 Hyper-V profiles and 2 ESXi profiles, all sharing the hardware of the 3 blades? And I assume those service profiles would look like a physical piece of hardware to Hyper-V? I.e., each profile would look like a separate physical "server" to the Hyper-V OS?
 

In short: no. What UCS does is abstract the configuration of the hardware into a profile that can then be assigned to a given blade. By moving the config profiles around, it becomes very simple to shuffle blades without needing to reconfigure them (i.e. in case of a failure). What you are imagining is essentially another layer of virtualization, which is basically unnecessary if you were to consolidate your virtualization platforms down to one.
 
Think of Service Profiles almost as a VMware template, but one that provides every identity to a server minus its serial number (UUID, HBAs, NICs, WWPN, WWN, the list goes on). So with SAN boot you can effectively have a server go down, re-assign its service profile to another blade, and it will assume its identity and be able to boot right back up. Pretty cool stuff. I love selling, deploying, and using the UCS blades.

For CUCM, even with the above, I would encourage you to look at doing two B200s for your environment so you can separate your CUCM SUB/PUB. Then buy however many Hyper-V hosts you want; the new M3 line they released will handle pretty much anything you want to throw at it. If you have any more questions I am sure Netjunkie or myself can help you out.
 
Wait, so you can't mix service profiles? I can't have a VMware service profile and a Hyper-V service profile operating at the same time on a single blade?
If that were the case, and my ESXi blade goes down, how would I move the profile to another blade if the other blades are all already dedicated to Hyper-V?
 
Why and how would you want both service profiles set up at the same time on a single blade? In UCS Manager you would have the profiles built and assign the profile to the blade as needed. If a blade goes down and you have a spare, either in that chassis or another, you assign the profile to that blade, or when you replace it, you assign the profile to the new blade.
 
Service profiles basically contain all the unique information about the server hardware: as stated above, things like WWN, UUID, and MAC addresses. You could even specify a BIOS version if you had a system that required a specific version for some reason. So you can't have two profiles assigned to the same blade at one time, but you can change which profile is assigned to a blade quickly, much more easily than, say, remoting into your ILOM card and changing settings there.

You really need a SAN to get the full benefit: if you are installing your OS/hypervisor on local storage, moving the profile doesn't move the OS. But if you boot from SAN it works great. Not sure if they've added iSCSI boot support yet; it was coming soon last I checked, so it may still be an FC-only option for now.
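To illustrate why (conceptual sketch only, invented names; real LUN masking lives on the array, not in Python):

```python
# Why boot-from-SAN matters: the array presents the boot LUN to a WWPN,
# and the WWPN travels with the service profile, not the physical blade.

class SanArray:
    def __init__(self):
        self.lun_masks = {}            # initiator WWPN -> boot LUN

    def present(self, wwpn, lun):
        self.lun_masks[wwpn] = lun

    def boot_lun_for(self, wwpn):
        return self.lun_masks.get(wwpn)

san = SanArray()
profile_wwpn = "20:00:00:25:b5:00:00:01"   # lives in the service profile
san.present(profile_wwpn, "esxi-boot-lun")

# Blade 1 dies; the profile (and its WWPN) is re-associated to blade 4.
# The array still presents the same boot LUN, so ESXi comes right back up:
print(san.boot_lun_for(profile_wwpn))      # esxi-boot-lun

# A locally installed OS, by contrast, stays on the dead blade's disks:
# moving the profile would move the identity but not the installation.
```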
 
I have been managing our UCS setup since we first installed them a year ago. I can say that I absolutely love the way service profiles function.
 
In our environment here at the office we have 6 UCS blades, 4 for servers and 2 for our View Desktops. These things are a beast and the service profiles make adding new blades silly easy.
 
Isn't one of the features that when a blade fails it will AUTO-move the VMs to a working one?
 
UCS doesn't do anything like that. It won't provision a blade for you automatically if one fails, though you could script that. But if the OP has a single vSphere host and that host fails, then nothing happens until the OP applies the Service Profile from the failed blade to a spare blade or re-purposes a Hyper-V blade.
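A rough outline of what such a script could look like (the health check and re-association here are stand-ins; a real version would talk to UCS Manager's XML API, which this sketch does not attempt to reproduce):

```python
# "You could script that": a toy failover loop. The dicts below are
# invented stand-ins for blade health and profile-to-slot association.

def failover(profiles, blades, spare_slots):
    """Reassign any profile whose blade is unhealthy to the first healthy spare."""
    moved = []
    for profile, slot in list(profiles.items()):
        if not blades.get(slot, False):              # blade reported down
            spare = next((s for s in spare_slots if blades.get(s)), None)
            if spare is not None:
                profiles[profile] = spare            # re-associate the profile
                spare_slots.remove(spare)
                moved.append((profile, slot, spare))
    return moved

# ESXi profile on slot 1; slot 1 fails; slot 4 is a healthy spare.
blades = {1: False, 2: True, 3: True, 4: True}       # slot -> healthy?
profiles = {"esxi-cucm": 1}
moved = failover(profiles, blades, [4])
print(moved)          # [('esxi-cucm', 1, 4)]
```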
 
Can anyone confirm iSCSI boot support on the UCS platform? That would be a deal breaker for us if this is not available.
 
Uh... no. Chambers has even said emphatically that UCS isn't going anywhere. Look at Cisco's data center strategy: UCS is key.

With Cisco recently certifying Call Manager for VMware, UCS is now a major part of most Cisco VoIP implementations. Cisco tossed it into the deal my company has for VoIP this summer. I'm looking forward to playing with the UCS, and will probably migrate my excellent IBM BladeCenter into a new DR site in the next year or two, depending on storage and blade costs.

The ability to run vCenter as a VM is really nice now; I'd run at least two blades with VMware just to keep HA running.
 

I've implemented a few UC installs running in VMware and UCS. Works slick.

UCS isn't going anywhere and I never would have expected it to. It's an awesome product, especially with the QoS it can perform rather than just setting maximums on virtual interfaces like Flex-10.
 
We are thinking of virtualizing our Cisco stuff at work as well. We are also Hyper-V, so we would need an ESXi server just for that.
Can you install ESXi on a Dell or whitebox server and install the CM on it, or do you have to buy a UCS system?
Edit: did some more reading and it looks possible, but no support.
Can you use the free version of ESXi, or do you have to use a vSphere license?
 

You can install CM in ESXi or even Workstation; vSphere is not needed. There are some extra steps required for generating the license key since it is not on physical hardware, but there is nothing that specifically checks for UCS hardware, at least not in 8.5/8.6 CM. Of course, the lack of support would be a big drawback if you were going to run it in production.
 
Cisco is such a joke. I have a client on day 28 of an MCS CUCM critical crash. Tier 1 & 2 support is a complete joke. HP tried 8 times to deliver (2) identical replacement drives and failed; eventually they sent me a whole new chassis. MORONS.

Enjoy paying for that SmartNet, because you're getting shit for accountability with it.
 

 
One of the great things about UCS (I'm a UCS instructor) is the VIC card, which is what most of the blades get sold with. Among other things, it allows you to "fake out" extra NICs. Each faked-out NIC (done through PCIe virtualization, so no special drivers or SR-IOV support required on the OS/hypervisor) can have different QoS/rate-limiting parameters. Great for VMware: separate NICs for the vMotion network, the virtual switch, the vmkernel, etc.
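Conceptually, the vNIC layout for a vSphere host might look like this (made-up numbers and names, just to show the shape of it):

```python
# One 10 GbE VIC uplink carved into several "NICs", each with its own
# rate limit and CoS marking. Values are illustrative only.

PHYSICAL_LINK_MBPS = 10_000

vnics = [
    {"name": "vmotion",  "rate_limit_mbps": 4_000, "cos": 4},
    {"name": "vmkernel", "rate_limit_mbps": 1_000, "cos": 5},
    {"name": "vm-data",  "rate_limit_mbps": 5_000, "cos": 0},
]

# To the hypervisor each entry appears as a separate physical NIC, but they
# all share (and are policed on) the same physical wire:
total = sum(v["rate_limit_mbps"] for v in vnics)
assert total <= PHYSICAL_LINK_MBPS, "uplink oversubscribed"
for v in vnics:
    print(f'{v["name"]}: {v["rate_limit_mbps"]} Mb/s, CoS {v["cos"]}')
```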
 