Doing some lab work today...

NetJunkie
Spending the day on some lab work. We finally got some Cisco UCS lab gear here in Charlotte (have some in our other lab). EMC loaned me a VNXe to work with and do some blog posts on. Installing vSphere to the UCS blades right now. Just racked the VNXe.

[Image: vnxe-small.jpg (the VNXe racked in the lab)]
 
Wish I could get my hands on some UCS gear. Hoping after VMworld this year my boss will be more open to it.
 
FCoE, FC, iSCSI?

3 guesses and you missed. ;) NFS. Doing local boot on UCS as the VNXe doesn't do FC or FCoE. I may play with boot from SAN w/ iSCSI on the UCS. The VNXe has 10Gb modules in it too. Upgrading the code on it right now...
 
Yep..the B series. I'll do a writeup on it here soon. Cisco has a completely different architecture for blades than anyone else. I think it's by far the best setup out there.

Looking forward to this post. I have always wondered what the appeal for UCS is?
 
3 guesses and you missed. ;) NFS. Doing local boot on UCS as the VNXe doesn't do FC or FCoE. I may play with boot from SAN w/ iSCSI on the UCS. The VNXe has 10Gb modules in it too. Upgrading the code on it right now...

Odd, I've got 2 cases with VNXe + FCoE using 10G and UCS. At least, I'm pretty sure they're the "e" models; they're definitely VNX.

Been trying to figure out where the communications issue is, as none of the 3 companies can quite pin it down yet (that's why I was curious, to see if you were having any problems). The fnic driver just doesn't seem happy with that array and the Nexus.

Don't forget the BBWC config so that the Celerra NFS doesn't perform like poo. :p
 
Looking forward to this post. I have always wondered what the appeal for UCS is?

Same, especially since I am looking at hardware refresh this fiscal year.
Also curious about the socket count at which they become competitive with conventional servers. Meaning, if I'm looking at buying four quad-socket 2U servers, is UCS a viable alternative, or does it only become economical at, say, 20 or 30 or 50 sockets?
 
Same, especially since I am looking at hardware refresh this fiscal year.
Also curious about the socket count at which they become competitive with conventional servers. Meaning, if I'm looking at buying four quad-socket 2U servers, is UCS a viable alternative, or does it only become economical at, say, 20 or 30 or 50 sockets?

Using the new bundles they are doing you can get four dual-socket blades, the two top-of-rack 10Gb interconnects, and the chassis for like $41K. Very well priced to start off.
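To get a feel for the break-even question above, here's a rough per-blade cost sketch using the ~$41K bundle figure (chassis + two fabric interconnects + four dual-socket blades). The incremental blade price is my own assumption for illustration, not a quoted figure; a UCS 5108 chassis holds 8 half-width blades.

```python
# Rough per-blade cost model for a UCS starter bundle. The bundle price
# comes from the thread; the extra-blade price is an illustrative guess.

BUNDLE_PRICE = 41_000        # chassis + 2 interconnects + 4 dual-socket blades
BUNDLE_BLADES = 4
EXTRA_BLADE_PRICE = 6_500    # assumed street price per additional blade
CHASSIS_SLOTS = 8            # a UCS 5108 chassis holds 8 half-width blades

def cost_per_blade(total_blades: int) -> float:
    """Average cost per blade once the fixed bundle cost is amortized."""
    if total_blades < BUNDLE_BLADES or total_blades > CHASSIS_SLOTS:
        raise ValueError("sketch covers 4-8 blades in one chassis")
    extra = total_blades - BUNDLE_BLADES
    return (BUNDLE_PRICE + extra * EXTRA_BLADE_PRICE) / total_blades

for n in (4, 6, 8):
    print(f"{n} blades: ${cost_per_blade(n):,.0f} each")
```

The fixed cost of the interconnects and chassis is what makes the first four blades expensive; each added blade drags the average down, which is the usual answer to "at what scale does it pay off."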
 
Odd, I've got 2 cases with VNXe + FCoE using 10G and UCS. At least, I'm pretty sure they're the "e" models; they're definitely VNX.

They'd be VNX. No FC/FCoE on the little VNXe. What problems are they having? We have a lot of UCS + VNX/CX4 + FCoE out there. Which I/O modules do they have on the blades?
 
They'd be VNX. No FC/FCoE on the little VNXe. What problems are they having? We have a lot of UCS + VNX/CX4 + FCoE out there. Which I/O modules do they have on the blades?

Doesn't matter. Palo and Menlo cards both act oddly using the fnic driver for native FCoE straight through (Nexus 5K each time, with VNX). It's just not stable connectivity-wise, but switch to enic + software iSCSI and it's great.
 
Looking forward to the blog posts. I have to say that I am very pleased with our UCS. Just recently added another blade to bring our ESXi host count to 5. Ran into a blade discovery issue that was resolved by a software update. Took me 30 minutes to get to the point of installing ESXi on the blade. We do boot from SAN, so I had to assign a service profile to the blade, create the LUN, register the HBAs, create the storage group, and add the LUN to it. I really like the way Cisco designed this system.
 
Two words: Unified Fabric. Can't beat it! I would take a few redundant 10GbE twinax runs over loads of Cat5e/Cat6 cables, fiber, etc. any day of the week. vNICs/vSAN ports...mmmmm.

Another two words: Service Profiles

Adding another blade is cake when you have a profile built so not only do you get the ease of blades, you get the ease of blade setup.

Another word: Intel, same Intel Architecture in these blades that you've grown to love. Plus the memory density is phenomenal.

Now..if I could only get management to sign off!...grrr...I could essentially go from 10 racks of mess, to 2 racks of nice and neat and be 99% virtualized and they don't see the benefit.

I'll say it again, Please God, help me find a new job!..lol..

Really looking forward to the Blog posts. Are you setting up an AMP cluster..etc?

While we wait, however, I would suggest this site...awesome information: http://bradhedlund.com/2011/03/08/cisco-ucs-networking-videos-in-hd-updated-improved/
 
Call me old fashioned...but I still prefer them separate.

Unified Fabric isn't gonna do any miracles if you're already maxing your links.

When it's on 40GbE...it's gonna be a different story.
 
Call me old fashioned...but I still prefer them separate.

Unified Fabric isn't gonna do any miracles if you're already maxing your links.

When it's on 40GbE...it's gonna be a different story.

Old fashioned. :)

Two 10Gb unified connections give me the equivalent of two 4Gb FC connections plus 12Gb of Ethernet. Even if you're running it flat out, that's very good and much more efficient.
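The math behind that claim, as a quick sketch: the 4Gb FC-equivalent share per converged link comes from the post above, while the cable-count comparison against separate 1GbE NICs is my own illustration.

```python
# Back-of-the-envelope math for two unified 10Gb links versus
# separate FC + Ethernet cabling. Illustrative assumptions only.

UNIFIED_LINKS = 2
UNIFIED_GBPS = 10      # per converged 10Gb link
FC_EQUIV_GBPS = 4      # FC-equivalent share per link, per the post above

total_unified = UNIFIED_LINKS * UNIFIED_GBPS      # 20Gb total
fc_share = UNIFIED_LINKS * FC_EQUIV_GBPS          # 8Gb of "FC"
ethernet_share = total_unified - fc_share         # 12Gb of Ethernet left over

# Matching that with separate cabling: 2x 4Gb FC HBAs plus 12x 1GbE NICs
separate_cables = UNIFIED_LINKS + ethernet_share

print(f"Unified: {UNIFIED_LINKS} cables carry "
      f"{fc_share}Gb FC + {ethernet_share}Gb Ethernet")
print(f"Separate: {separate_cables} cables for the same capacity")
```

Two twinax runs versus fourteen discrete cables is where the "unified fabric" pitch gets its cable-reduction numbers.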
 
Call me old fashioned...but I still prefer them separate.

Unified Fabric isn't gonna do any miracles if you're already maxing your links.

When it's on 40GbE...it's gonna be a different story.

Unified Fabric is here now and it's here to stay and will shortly become the norm. I'd get used to that fact my friend...lol.
 
Old fashioned. :)

Two 10Gb unified connections give me the equivalent of two 4Gb FC connections plus 12Gb of Ethernet. Even if you're running it flat out, that's very good and much more efficient.

that's what I'm saying.

I'm using my 8Gb fiber...and already 5Gb of my 10GbE link.


Unified Fabric is here now and it's here to stay and will shortly become the norm. I'd get used to that fact my friend...lol.

I'm not against it at all. I'm just saying it's not for everyone just yet. Once the bandwidth goes up...I'll be all over it. Believe me...it would be so easy to have only 2 cables per server!!
 
Finally got my damn license key from EMC for the VNXe to enable all the features... Was a hassle since I didn't actually buy it; it's a loaner unit.
 