SMB virtualization project

Hey guys, looking for your input on a small project I am planning over the next 3 months. I started working for my old company again and while I was gone, everything went to heck. Since the equipment runs 24/7 and downtime has to be kept as close to zero as possible, I decided to replace the 5-year-old servers and corrupted SBS 2003 install I originally put in and switch to an ESXi environment.

The hardware plan so far consists of the following:
Primary ESXi host:
Supermicro 2U, dual quad-core Xeon, 32GB ECC, 4x 300GB SAS RAID 10, Intel Pro/1000 PT quad port
Secondary ESXi host/test box:
IBM x3650 2U, dual quad-core Xeon, 24GB, 4x 300GB SAS RAID 10, Intel Pro/1000 PT quad port
Storage server:
IBM x3550 1U, quad-core Xeon, 8GB, 2x 160GB RAID 1, Intel Pro/1000 PT quad port, Areca 1680
Norco 4220 4U, HP SAS expander + PICMG backplane, 600W redundant power supply, 2x 300GB 15k SAS RAID 0 (volatile test box storage), 6x 750GB SAS RAID 5 with 1 hot spare, 2x 2TB Hitachi RAID 1 (IT storage)

I have been advised to keep the VM/OS storage local to the servers and use my SAN for the larger storage pools (as reflected by the disk choices in the servers). The OS of choice so far is SBS 2011, as it fits our environment. However, due to projected growth, I am open to running straight Server 2008 R2 and splitting the roles between an AD/DNS box, an Exchange box, a file server, and Openfiler for the SAN. We have 35 users/50 devices now; over the next three years this will double and a second site will be added. I do need several virtual workstations (on the second server), and I figure it will make a good dev/test box.

The company has a limited budget and much of the hardware has come from eBay/CL/local businesses that have shut down. We have been incredibly lucky so far, grabbing the two IBM boxes for $1800 for the pair, and hopefully buying the Supermicro off a user here for half the new price. We just grabbed 12 300GB 15k SAS drives for $80 each, so I keep two spares on hand at all times for both the 300s and the 750s. I am hoping a 4Gb trunk between the servers and the iSCSI box will be enough.

I have used ESXi at home for years now and deployed it twice before, but I do greatly appreciate input and helpful advice from here. Thanks!
 
I'm not sure why someone would recommend you keep your VM storage local, especially if you are getting a storage server anyway. A SAN has many benefits over local storage (such as HA, vMotion, DRS, etc.), but for most of them you do admittedly need to pay for a license. Ultimately that's up to you, and if you do decide you want the benefits of external storage (SAN), it's possible and not too difficult to do.

My only REAL piece of advice is this: don't use Openfiler. It's fine for lab use, but not for production environments. Use something like StarWind, which is VMware certified and free (with some limitations). http://www.starwindsoftware.com/starwind-free
 
My opinion will always stand...

Local storage = a terrible idea. It kills me to hear of people using local storage for production environments. Tell me what you're going to do when that local storage dies and all of your VMs are dead.

Personally, if it's in the budget, make both of the ESXi hosts the same. With shared storage you could have one host fail, re-register the VMs from the failed box on the surviving host, and bring them up quickly. Downtime: 10-15 minutes.

I'm also not a fan of Openfiler for production. Not sure I'm a fan of FreeNAS either. I guess if you have to do what you have to do, then do it.

If you can hack it, also get rid of SBS. We did, and it's the best thing we ever did. It also made life way easier when we bought out a company and had to integrate their stuff.
 
I am always open to new SAN software; I will check out StarWind.

Calvin, that was my thought as well. However, a counterargument is: what happens if the SAN fails? It again is a single point of failure. If machine snapshots are taken regularly and backups are working, it may take more time to restore, but it still gets everything back online. I am personally torn because of the direct speed of local SAS vs. iSCSI, but I need to do more research. Better to ask questions now.
Sadly it is not in the budget ATM to make the machines identical. Working with what has been previously purchased can be annoying.
 
Question: Let's say I move everything off the local hosts and keep everything on the SAN. What would you use for storage on the hosts for ESXi itself?
 
I keep extra copies of all my Windows and Linux install images on the local storage, as well as a document with CD keys. I like to keep a few copies just in case.

Most SAN devices are RAID, so you normally have redundancy. I can't remember the brand, but my last company had huge racks of drives for the VM backbone, and we had controllers die and everything and still didn't lose a thing. Disks went periodically, but since it was RAID, it was fine.
 
SANs will have the right tools for managing the disk sets they host. In my (limited) experience it's harder to get the right tools on ESX (and even harder on ESXi) to manage local disks, especially if you want to remain supportable.

On the host you can use a pair of small disks in RAID 1; ESXi also supports booting from USB.

The most important thing whatever you do (OK, one of them!) is to document the setup and make your boss aware of what he/she is getting in terms of availability and recovery time. Note down the requirements you have designed to (24x7, for example) and the limitations of the kit. Make them sign off on the document/design if you can. Having a 24x7 requirement on a system with single points of failure does sound "wrong" to me at a glance.
 
+1000

Using Openfiler for production is just begging for trouble :)) There's a nice article in the VMware KB telling people to stay away from the Openfiler kludge and recommending something like NFS instead. Here we are:

http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=1026596

Olga

P.S. I hope the StarWind guys release something Linux or Solaris based one beautiful day, so we could get rid of the Windows licenses :)) Using Hyper-V to run StarWind is possible, but it's a very questionable way to go.

 
Use a proper SAN and you should be fine :)) If you're on the Windows side, both the StarWind referenced above and DataCore have HA solutions; both work just fine. If you're with Linux, you can play with any DRBD-based solution. It's not the real deal, but at least you could avoid downtime from a storage node role switch. You may also check out the LeftHand VSA stuff: with their 3-node design you're running redundant and keep full IOPS even with one storage cluster node dead, pretty much like RAID 5 vs. RAID 6. Other, entirely hardware-based options do exist, but they play in a very different price league :))

Olga

 
A SAN with NFS would honestly be the way to go. We just implemented NFS this year and have been pretty happy with it.

I hear your pain, man. I would like to think I was very fortunate for the budget I was allotted this year. It wasn't as much as I hoped, but it was enough to get me where I wanted to be functionality-wise.
 
Another vote for ditching the local storage. Take the resources you were going to invest in local storage and put them into the SAN box: faster/more drives, more redundancy, or whatever. Run ESXi from USB; buy a couple of extra flash drives and do the install on all of them, so if one dies, it's that much quicker to swap it out. If you are rolling the SAN on your own, instead of Openfiler I'd look at Solaris. I've used various versions (OpenSolaris, Nexenta, OpenIndiana) for several years, and the Solaris Express release has been the most rock solid. My main box (serving ESXi via NFS) has been up for 156 days now, not a single reboot since installing it.
 
What is the consensus on OpenIndiana? I am just looking into it now, but have seen it mentioned plenty of times recently. I just updated to 4.1 on my ESXi box here at home, ditched the Openfiler, and will be trying it out.

Thanks for the input guys!

The environment is flexible. Once a machine pulls its programs from the file server, it can run for 12+ hours. This gives us fairly large service windows, but like any environment, the better I can plan and document this, the easier it will be when there is a problem. This is just the start; I should have an identical second server purchased by year's end to replace the IBM.
Will one Intel Pro/1000 quad-port card per machine give me enough bandwidth for iSCSI?
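For what it's worth, here is the back-of-the-envelope math I am using to sanity-check that trunk, a minimal Python sketch where the overhead factor, per-drive throughput, and RAID efficiency numbers are my own rough assumptions rather than measurements:

Code:
# Rough iSCSI bandwidth estimate for a 4x 1GbE trunk; all factors are assumed, not measured.
GIGABIT_LINKS = 4          # one Intel Pro/1000 quad-port card per box
LINK_SPEED_GBPS = 1.0      # gigabits per second per link
PROTOCOL_OVERHEAD = 0.85   # assume ~15% lost to TCP/iSCSI framing

usable_mb_per_s = GIGABIT_LINKS * LINK_SPEED_GBPS * 1000 * PROTOCOL_OVERHEAD / 8

# Compare against what the spindles could push sequentially.
SAS_DRIVES = 10            # 15k SAS spindles behind the trunk
PER_DRIVE_MB_S = 150       # rough sequential figure per 15k drive
RAID_EFFICIENCY = 0.6      # rough penalty for parity RAID

array_mb_per_s = SAS_DRIVES * PER_DRIVE_MB_S * RAID_EFFICIENCY

print(f"Trunk ceiling: ~{usable_mb_per_s:.0f} MB/s across {GIGABIT_LINKS} links")
print(f"Array ceiling: ~{array_mb_per_s:.0f} MB/s from {SAS_DRIVES} spindles")
# Note: without multipathing, a single iSCSI session generally rides one link,
# so any one VM tops out near 1Gb even though the aggregate is higher.

The part that worries me is the note at the end: the aggregate number looks fine, but a single session is still limited to one gigabit link unless MPIO is set up properly.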
 
Quick question: how does everyone feel about SSDs for the ESXi install? A mirrored set of 32GB drives wouldn't be more than $200.
 
Kill the local storage, assuming you've got a nice RAIDed SAN setup. I'm going to do a bit of research on StarWind now; I didn't realize they had a 'free' version. Anyone use Virtual Iron?
 
lol, just read that Oracle acquired them in 2009. Didn't know that; I just remember Dell reps pushing it on me in 2008 because I didn't want to buy the VMware licenses...
 
Dear OP,

There are obviously many different ways of looking at the situation. This is just one of them, based on the general orientation of this particular thread. If you change your evaluation method, you can obviously go a different route.

1. The single MOST important factor in your entire scenario is the Norco 4220.

2. By opting for the Norco, you are basically declaring up front that you are not paying for enterprise storage gear such as the frequently mentioned NetApp/EMC/HP/IBM/Dell/etc.

3. However, in most generic VM platform deployments, RAM is the top consideration, followed by IOPS. Although many suggest different ways of addressing I/O demand, it is generally agreed that I/O is a main concern.

4. So that leads us back to your Norco. Since this is business (a production environment), since you are already saving money by not going with new enterprise gear (buying used equipment from eBay), and since a Norco DIY build is not a certified VMware HCL item, to minimize other issues I recommend a second Norco unit. There are many reasons for that, but it would be too long a post to elaborate.

5. The ESXi hosts are the least of your concerns because, as VMware itself says, ESXi is designed to be quickly rebuilt; they do not even list a backup procedure for the ESXi configuration files (read the recent thread in this forum, there is one possible approach). Since this is not a Fortune 500 setup, a reasonable host will do the job just fine. Though for support purposes in an emergency, your vendor will likely be more helpful if the server is on the VMware HCL.

5a. Thus any entry-level socket 1156/1366/Opteron ECC server or workstation with supported devices will do. The key is being able to fit a lot of RAM at minimal cost. Here socket 1366/Opteron has the advantage for a low-budget setup, because an entry-level 1366 board can go to 24GB, whereas 1156 (and soon 1155) maxes out at 16GB. The good thing is that Dell and HP are pumping massive numbers of such servers into the market.

5b. Long story short, ESXi host cost is the least of your concerns because you can get a replacement fast and at reasonable cost for your scenario.

6. Now for the Norco:
6a. The first question is whether you want to pay for commercial Solaris support, since otherwise you cannot use the Oracle version of ZFS.
6b. The second question is WHY ZFS? Since it looks like you want to save money in many places, you have to give back somewhere so the IT software industry can hire more people :) Seriously, though: when you give real thought to how many levels of your virtualization platform can have issues, a properly maintained ZFS-based storage platform can help address some of them quickly, provided you design it properly and understand the whole process, and in doing so save some money. Obviously other, non-ZFS storage solutions are also viable; it depends on your combination.

6c. Obviously many parties have a stake in this process; we all live on the goodwill of many, so I respect that as well. If you do everything on your own, you save money but have to self-debug on short notice when a problem hits. Pay for support and the vendor fixes issues for you. If you are not getting satisfaction and must explore further, you can probably look at other full-scale solutions as well (mostly in the Storage forum).

7. Summary: the Norco is the foundation of your scenario. Build a good foundation and you will have flexibility. Remember the second Norco unit.

Cheers
 
Quick question: how does everyone feel about SSDs for the ESXi install? A mirrored set of 32GB drives wouldn't be more than $200.

I feel it would be a waste of $400 (assuming $200 for each server). Once ESXi is loaded, it only rarely goes to disk for anything except writing logs; it runs in memory. Put that $400 back into RAM or more spindles, and buy 4 USB sticks for 20 bucks.
 
Light, that was an excellent post, thank you.

Since I have been incredibly busy, I have yet to update this thread. After doing my research (and continuing to do so), I ripped everything up. I would rather start with solid hardware from the get-go than deal with questionable, troublesome hardware and features unavailable due to a mismatched hardware config. What's the point of building a nearly-HA environment if a local disk or dissimilar resource pool can leave me dead in the water for hours?
The following has been spec'd and planned for. Yes, we are running a homebrew Solaris/ZFS solution, but I could only squeeze so much out of the company. We do have an AMAZING Solaris/Unix admin available who has convinced me this is my only budget-friendly option. In the future, it will be the first thing upgraded and moved to backup duties.

2 Dell R710s (3-year warranty refurbs)
Dual E5645s (6-core, 2.4GHz, 1333FSB, 12MB cache)
32GB DDR3 ECC
4 "industrial" USB drives

1 OpenIndiana + napp-it ZFS server, consisting of:
A Supermicro dual quad-core 2.5GHz Xeon box with 24GB RAM, 34 hot-swap bays, and quad redundant power supplies, holding:
10x 300GB 15k SAS (reused)
6x 750GB 7.2k SATA (reused)
2x 2TB 7.2k (reused)
(Disk layouts are being figured out now for raidz2.)
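For the raidz2 planning, here is the quick usable-capacity math I am working from, a minimal Python sketch that assumes one raidz2 vdev per drive type and ignores ZFS metadata and slop-space overhead:

Code:
# Rough usable-capacity estimate for raidz2 vdevs; ignores ZFS metadata/slop overhead.
def raidz2_usable(drive_count, drive_gb):
    # raidz2 gives up two drives' worth of space to parity per vdev
    if drive_count < 4:
        raise ValueError("raidz2 needs at least 4 drives")
    return (drive_count - 2) * drive_gb

pools = {
    "10x 300GB 15k SAS (VM datastore)": raidz2_usable(10, 300),
    "6x 750GB 7.2k SATA (bulk storage)": raidz2_usable(6, 750),
}
for name, gb in pools.items():
    print(f"{name}: ~{gb} GB usable, ~{gb / 1000:.1f} TB")
# The 2x 2TB pair would stay a plain mirror: ~2 TB usable.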

The existing IBM boxes are being repurposed as an Asterisk server and a management box. The quad-port PCIe GbE NICs, HP 1800-24G switch, 42U rack, Dell 1U KB/M/15" LCD, Dell IP KVM, and APC battery backups will be reused. An 8000VA APC has been purchased and will be hardwired in ASAP.

Now for the software. I am incredibly tempted to move away from SBS because of the expected growth to multiple locations and new needs. I am looking into moving our VPN to a hardware solution, as we only have a handful of remote users (read: <5), and would then move to a standard Server 2008 R2 + Exchange 2010 environment with <50 user CALs. This gives me tons of room to grow and removes load from the single SBS box. As long as RWW and VPN functionality can be set up in a reasonable amount of time (printer/file sharing should be no problem), I see no reason not to jump to this new level on the planned hardware. In our test environment we have 6 production boxes (providing everything from DNS/DHCP/AD to printer sharing, backup, Exchange, file serving, and monitoring).

This is, as always, a learning experience for me; however, I do not want to have to learn on the fly more than I need to :).
 
I sure love this plan here!!!

Couple of things to remember here...

We moved away from SBS/EBS and haven't looked back. Not long after we migrated away from it, we bought out another company in California, so we were already in a position to start an initial domain trust and begin taking over their domain and migrating their stuff into ours.

Someone can confirm, but when you move away from SBS/EBS you lose RWW. Honestly, we had one person out of the entire organization complain about it. We bought him a $500 netbook to take home and work from, and it shut him up pretty quick.

Now, before you get too much further, I think you need to put some thought into your firewall and switch gear. You have 2 ESXi hosts; maybe get two switches to create some level of redundancy.

Along with moving away from SBS/EBS, we also purchased a SonicWALL NSA 2400 with content filtering, IDS/IPS, and gateway AV. We migrated everyone over to the SonicWALL VPN client and it has worked great. I don't know what you have left in the budget, but you might be able to come up with something good at a decent price.

Just some extra things to think about. Maybe you already have some of this in place, who knows.
 
Thanks Calvin,
My budget for the year is essentially shot. Between the lab upgrades and FINALLY getting rid of the 21" CRTs, I am sitting here with fingers crossed at all times that nothing terrible will happen, haha.

Our existing firewall is an Untangle-based quad Xeon/12GB/Supermicro homebrew box that was put in before I left. We have 2 ASA 5505s for external VPN connections to the companies we contract to; one is being upgraded, leaving it spare. Maybe I can look into OpenVPN or using the Cisco.
Each of our internet connections includes 5 IP addresses (7 spare ATM).

Losing RWW really is not that big of a deal to me; I believe we have less than 5% who use it actively. The main people I need to keep happy are the president and VP.

I do have two 1800-24s for my storage network, and a plan in place for redundant connections for the boxes. I still hate the idea of my ZFS box being a single point of failure, but as long as snapshots and backups are taken regularly and we continue to grow like this, I see no issues in the next few years that cannot be handled with a reasonable amount of downtime and planned upgrades. I am working out what to do for our backup server and keeping spare parts for the ZFS server in stock. We currently have an old 2k3 white box with Acronis taking snapshots of each server and a set of externals taken offsite weekly. I am starting to think that having a second ZFS box (like the currently spare IBM x3650) would not be a bad idea for disk-to-disk backups. Outside of that, I need a cheap, fast tape system next year :).
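To keep myself honest about "snapshots taken regularly," something along these lines is what I have in mind for the ZFS box, a minimal Python sketch meant to be run from cron; the dataset name, snapshot prefix, and retention count are made-up placeholders, not our actual layout:

Code:
# Minimal ZFS snapshot-and-prune sketch for cron; names and retention are placeholders.
import subprocess
from datetime import datetime

DATASET = "tank/vmstore"   # hypothetical dataset name
PREFIX = "auto-"
KEEP = 14                  # keep the 14 most recent auto snapshots

# Take a timestamped snapshot, e.g. tank/vmstore@auto-20110615-0300
snap = f"{DATASET}@{PREFIX}{datetime.now():%Y%m%d-%H%M}"
subprocess.run(["zfs", "snapshot", snap], check=True)

# List this dataset's snapshots oldest-first and prune anything past the retention count.
listing = subprocess.run(
    ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-s", "creation", "-r", DATASET],
    check=True, capture_output=True, text=True,
).stdout.splitlines()

auto_snaps = [s for s in listing if s.startswith(DATASET + "@" + PREFIX)]
for old in auto_snaps[:-KEEP]:
    subprocess.run(["zfs", "destroy", old], check=True)

The actual replication to a second box would be zfs send/receive on top of this, but that is a separate piece.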
 
Sounds like you've got it all in line, man. Just VLAN it out and you'll really be in business. Personally, if you have the licensing available, use the 5505 for your VPN into the network.
 
I realized the only thing that saved our butts through this last upgrade was that I overspec'd the servers (two 2900s with dual dual-cores) and they lasted just over 5 years. We grew from 15 users to just under 50 and tripled the number of devices. Storage grew from 150GB to just shy of 1TB and has grown at least 150GB per year. Seeing as we are starting to do even more 3D modeling in house, purchasing a laser scanner, and converting to a paperless office in the future, this is going to start tripling our annual growth. Even the 4TB I have budgeted for right now is going to start to get skinny in the next 3 years. Just another reason I love ZFS + drive pools: I just add another set of (by then) cheap 2TB enterprise-class drives and we have another 10TB of storage, and the same goes for the disk-to-disk backup.
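Here is the rough arithmetic behind that, a quick Python sketch where the tripled growth rate and the starting numbers are ballpark assumptions, not measurements:

Code:
# Back-of-the-envelope storage projection; growth figures are rough assumptions.
current_tb = 1.0             # just shy of 1 TB today
historical_growth_tb = 0.15  # roughly 150 GB per year so far
growth_multiplier = 3        # assumed: 3D modeling, laser scanning, paperless push
budgeted_tb = 4.0

used = current_tb
for year in range(1, 6):
    used += historical_growth_tb * growth_multiplier
    print(f"Year {year}: ~{used:.2f} TB used ({used / budgeted_tb:.0%} of the 4 TB budget)")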

I am looking forward to getting a dedicated 100/100 fiber line to a colo and tossing a server in there for remote backups, but time and money are everything :).
 