VSAN for ROBO?

moto316

OK, a little background on our environment. We currently have a mix of eight PowerEdge R710/R810/R820 servers in a stretched cluster spanning two facilities on our campus, each with 256GB RAM and two dual-port 10Gb Broadcom NICs mated to eight Force10 S4810 10Gb switches: two for the LAN and two for the storage network at each datacenter. The S4810s are linked with 10Gb multimode fiber. Storage is a FAS3240 fabric MetroCluster.

We're currently running ~250 powered-on VMs in this cluster, a mix of server VMs and Win7 VDI desktops on View 5.3. Approximately 150 of the VMs are VDI machines. They're currently persistent dedicated pools for each user, but we're planning to migrate the majority of them to nonpersistent linked clones with View Persona Management for the profiles, redirecting that user data to a CIFS share on the NetApp.

There is discussion about moving 40-50 of our users, with the possibility of more later on (read: most likely), to new office space about a mile from our main campus with a 30Mb WAN link back to it. These users would be executives, accounting, and other task/knowledge workers. We're not looking to drop serious coin again on a new NetApp HA pair, since the majority of the VMs at this site will just be VDI, aside from a domain controller, a print server, and an app server or two. I'm also wary of going with a lower-end single-controller array like a FAS2500 because of scalability issues; like any other array, it requires a head swap once the controller becomes overloaded. I know there are new players out there that excel at VDI, like Nimble, but those cost a pretty penny too.

We'd like to start running Cisco in our shop, so I've been looking at the C240 SmartPlay bundles. For ~$45k we can get four C240 servers with 128GB RAM, dual E5 Xeons, a VIC 1225 adapter, and Nexus 2K fabric extenders. For this greenfield deployment I think we would just need to add a pair of Nexus 5Ks to uplink to, plus some access switches for the endpoints, and that would take care of the switching infrastructure. Then add some SSDs and disks to the C240s and get the VSAN cluster set up.

Given this scenario, would you guys go the same route? I know VSAN can run on just three hosts, but the SmartPlay bundle price is attractive, and I hear it's best practice to run VSAN on a minimum of four nodes. I'm also open to going Nutanix/SimpliVity, but I think the price point would be a bit higher than the build-your-own route I'm leaning toward.
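
For rough sizing, here's a back-of-napkin sketch of what the four-node cluster would buy us capacity-wise. The drive counts and sizes are placeholder assumptions, not actual quotes; the VSAN rules themselves (FTT+1 copies of each object, 2*FTT+1 hosts minimum) are per the 5.5 documentation:

    # Rough VSAN sizing sketch; drive counts/sizes below are assumptions.
    ftt = 1                   # host failures to tolerate
    hosts = 4                 # C240s in the SmartPlay bundle
    hdd_per_host = 6          # placeholder: six 900GB SAS drives per node
    hdd_size_gb = 900

    min_hosts = 2 * ftt + 1              # 2 replicas + 1 witness = 3 hosts
    raw_gb = hosts * hdd_per_host * hdd_size_gb
    usable_gb = raw_gb // (ftt + 1)      # each object is stored FTT+1 times

    print("min hosts for FTT=%d: %d" % (ftt, min_hosts))          # 3
    print("raw %d GB, usable ~%d GB before slack" % (raw_gb, usable_gb))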
 
Look at the Nexus 3Ks or 9Ks; they're cheaper. You can also buy Cisco boxes pre-built for VSAN, so price that out and see. Will you be managing these remotely with a central vCenter?
 
Most likely we'd deploy another vCenter instance on this new cluster. Wouldn't that be best for a cluster that's at the other end of a WAN link, Jason?
 
Also, management-wise, what would we be losing by not connecting the C-Series servers to a Fabric Interconnect with UCS Manager running on it?
 
The problem is that you can't easily run vCenter on top of VSAN. You get a chicken-and-egg situation: the VSAN cluster is normally configured through vCenter, but vCenter needs a datastore to live on. So you'd need storage for vCenter that's not VSAN.
 
For four servers all running vSphere? Not sure I'd bother. It's not like you care much about Service Profiles, and you'd be spending a good bit to get them. I wouldn't for this deployment.
 
On the vCenter point: sure, it's not easy, but it can be done. Bootstrapping vCenter onto VSAN is supported, and there's a guide on how to manage all that during maintenance, power-down operations, etc. I've done it a couple of times; it's not that bad.

Not as easy as having a local disk just to get vCenter up, you're right about that, but it's not really that bad.
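
For anyone searching later, the single-node bootstrap on 5.5 looks roughly like this from the ESXi Shell (the UUID and device names are placeholders; double-check everything against the published bootstrap guide before relying on it):

    # Relax the default policies so objects can be force-provisioned on one host
    esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
    esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

    # Tag a vmkernel port for VSAN traffic
    esxcli vsan network ipv4 add -i vmk0

    # Create the single-node cluster (supply any valid generated UUID)
    esxcli vsan cluster join -u <cluster-uuid>

    # Claim one SSD and one HDD to build the first disk group
    esxcli vsan storage add -s <naa.ssd-device> -d <naa.hdd-device>

Then you deploy vCenter to the resulting vsanDatastore and join the remaining hosts once it's up.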
 
Hey, as long as you're comfortable with that at 2am when an outage happens. ;)
 
What's the downside to buying a C240 and not going with the OEM SSD and disk options? An OEM Cisco 400GB performance SSD runs almost $4k, and I could get a 400GB Intel S3700 for around $800. Is it pretty much that if anything goes wrong hardware-wise, Cisco support will point the finger at the non-OEM drives?
 
They should have VSAN Ready Nodes pre-configured so you don't have to do all that. You can roll your own; I've thought about it myself. The problem is that this is going to run a production workload. Get production support and supported configurations.
 
Have you considered HP StoreVirtual VSA? At the capacity point you're looking at, it's a lot cheaper than VSAN. For example, a 3-node 4TB VSA license (including three years of support) is $3,000, while a VSAN license for a similar configuration is over $24,000 including three years of support. VSAN is licensed per socket, so assuming your hosts have two sockets (probably a reasonable assumption), that's 3 hosts x 2 sockets at $2,495 per socket, plus support at $524 per socket per year, which comes out to around $24,000.

There are a lot of nice features in VSA, and if you want to play around with it, there's a free, full-featured 60-day trial at http://www.hp.com/go/TryVSA. One note on pricing: the 4TB VSA doesn't include sub-LUN tiering. There's a 10TB license that includes our sub-LUN tiering software, Adaptive Optimization, at a list price of $3,500 per node.
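
To make the comparison explicit, here's the arithmetic behind those numbers (all figures are the list prices quoted above):

    # VSAN: licensed per socket, support billed per socket per year
    sockets = 2 * 3                                    # 2 sockets/host x 3 hosts
    vsan_total = 2495 * sockets + 524 * sockets * 3    # license + 3yr support
    print("VSAN 3-year total: $%d" % vsan_total)       # $24,402

    # StoreVirtual VSA: 3-node 4TB bundle including 3 years of support
    print("VSA 3-year total: $3,000")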

I'd be happy to answer any specific questions you have.
 