How to get an idea of network bandwidth usage (migrating to offsite datacenter)

surrealillusion
Joined Dec 10, 2009
So the CFO was approached by a local datacenter (they're a major ISP here) about virtualizing and hosting all of our servers offsite in their facility. My biggest concern is network traffic: what size pipe such a move would require, and whether paying for the larger pipe is more cost effective than keeping all the hardware on-site. To give you an idea of what we'd be hosting: a SQL server, an Exchange server, a SharePoint server, two DCs, a network file server, and a few other smaller servers (which we would keep on-site due to the nature of the business).

What is the best way to determine how much network traffic smooth, seamless day-to-day operation would require?
 
I would keep everything onsite. The internet can go down at any time; can you afford to have your entire operation halted because of that? With everything on site, things like internal email still work and most work can continue. Maybe some departments that rely on the internet will be down, but the corporate infrastructure as a whole stays up.

Also, in house you have 100 Mbps or even 1 Gbps to all servers/PCs; if you host it in a datacenter you'll be limited by your internet connection.
 
I'd have to bet that onsite gear will have more downtime than a serious datacenter.

They have more redundancy than he would, probably including multiple backbone connections.
 
I'd be more worried about the path to get there: the internet. Unless this company is rich, the internet line is probably 10 Mbps or less; even the hospital I work at only has about 6 Mbps. Having it off site will require you to set up some kind of VPN tunnel. Everyone's mail client connecting to the mail server, various LDAP queries, other apps, etc.: that's going to be a LOT of bandwidth, bandwidth that normally wouldn't touch the internet if everything were in house. Stuff to consider.

Also, how will backups be done? When it's in house you normally have a tape library or the like, with a tech who regularly rotates tapes. Without physical access (unless the DC is actually near the company) this gets harder.

Just a few things to consider. It sounds good at first since you don't have to worry so much about things like power, UPS, generator, AC, and so on.
 
Currently we have a 30 Mbps pipe, and we're looking to bump that to 100 Mbps (or two 50 Mbps pipes on different routes for redundancy). We'll be touring the datacenter this month to look at the facilities and the services offered and find out what hosting will cost (though I have a gut feeling it will end up costing more to have everything offsite than to cycle in new hardware every 4-5 years).

The DC is downtown, about a 20-minute drive from our site, and if I'm not mistaken there is 24/7 card access to the building.
 
So, coming around full circle: I'm currently looking at different network monitoring tools.

I'm already running ntop (great for real time, but it crashes, so I can't get any historical info).

I'm also looking at testing CactiEZ and NST. Any other suggestions?
 
CoLoing your hardware is not really that bad, and you don't need nearly as big a pipe as you think.

Let's tackle these one at a time.

1: Exchange: This really doesn't need much bandwidth at all; just ask any hosted Exchange company. We house a 1,000+ user environment on a 10/10 line with no speed issues at all. I'd also recommend using Outlook Anywhere to connect to Exchange, so your Outlook traffic isn't going through the VPN tunnel and laptop users can work from home without VPN.

2: SQL Database: Depending on how it's being used, SQL traffic may not even traverse the VPN tunnel. If the database is just a back end for an application, then the connection to the application goes over the VPN, but the traffic between the application server and the SQL server stays in the CoLo.

3: SharePoint: How you use it determines whether it belongs in the cloud. If you use it like a group drive and it's constantly hit, it may make more sense to keep it local. If you only post things occasionally (company documents, memos, forms, etc.) and traffic to the box is minimal anyway, put it in the cloud.

4: File shares: These don't go to the cloud the way you might think. What my company has found is that if you run two file servers, one on site and one in the cloud, connect your users to the on-site one, and use rsync to keep the two in sync, you end up using minimal bandwidth: instead of transferring hundreds of MB as people open, close, and save documents, you transfer tens of MB updating just the changed blocks.
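To put rough numbers on that savings, here's a quick back-of-envelope sketch. Every figure in it (save counts, file sizes, changed-block fraction) is an assumption for illustration, not a measurement:

```python
# Back-of-envelope: whole-file copies vs. changed-block sync (rsync-style).
# All figures below are assumptions for illustration, not measurements.

def daily_transfer_mb(files_touched, avg_file_mb, changed_fraction=1.0):
    """MB moved per day; changed_fraction=1.0 models whole-file copies,
    a small fraction models rsync's delta transfer."""
    return files_touched * avg_file_mb * changed_fraction

whole_file = daily_transfer_mb(200, 5.0)         # 200 saves/day of 5 MB files
delta_sync = daily_transfer_mb(200, 5.0, 0.05)   # assume ~5% of blocks change per save

print(f"whole-file: {whole_file:.0f} MB/day, delta sync: {delta_sync:.0f} MB/day")
```

Plug in your own numbers from a day or two of file-server activity to see whether delta sync actually keeps the link quiet for your workload.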

As far as backups go, tape is out. The reality is that the servers you'll be backing up will have hundreds of GB of data, potentially TB if you're doing Exchange archiving or 14-day backup cycles. Personally I don't want to change 10 tapes, or spend the money on an autoloader, to get 1 TB of backups onto tape. Not to mention some backup software is starting to drop LTO support, and you can't back up your virtual machines directly to LTO; you'd have to back up the hypervisor, and if you're using VMware or Xen that requires you to purchase another physical server and another license (of, say, Veeam) just to run tape. Buying an 8 TB NAS/iSCSI device from Buffalo, QNAP, or Sans Digital makes your backup life a lot easier.
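If you want to sanity-check the "hundreds of GB, potentially TB" claim against your own retention policy, a quick estimate helps. This sketch assumes one full backup plus daily incrementals; the sizes and daily change rate are made up for the example:

```python
# Estimate total backup storage for a full + daily incrementals cycle.
# full_gb and daily_change_rate are assumptions; swap in your own figures.

def cycle_size_gb(full_gb, daily_change_rate, days):
    """One full backup plus (days - 1) daily incrementals."""
    return full_gb + full_gb * daily_change_rate * (days - 1)

total = cycle_size_gb(full_gb=600, daily_change_rate=0.03, days=14)
print(f"{total:.0f} GB for a 14-day cycle")  # 600 + 600*0.03*13 = 834 GB
```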

Depending on how many people you have at the company I would start off with a 20/20 in the CoLo, and make sure you can match at least the down speed and have around 10Mb up.
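One way to sanity-check a starting size like 20/20 is to sum rough per-user rates for the services that would actually cross the link. Every figure below is an assumed placeholder to swap for your own measurements (the Exchange number is loosely derived from the 10/10-for-1,000-users point above):

```python
# Sum assumed per-user rates (kbps) for services that would traverse the link.
# All values are illustrative placeholders, not measurements.
per_user_kbps = {
    "exchange": 10,    # roughly 10 Mb / 1,000 users, per the hosted-Exchange example
    "sql_app": 15,     # app front-end traffic; SQL chatter itself stays in the CoLo
    "sharepoint": 5,
    "file_sync": 5,    # rsync delta traffic amortized per user
}
users = 150  # assumed headcount

total_mbps = users * sum(per_user_kbps.values()) / 1000
print(f"~{total_mbps} Mbps average draw for {users} users")
```

Remember this is an average; size the pipe for your busy-hour peaks, which your monitoring graphs will show.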

As far as the office internet being down: with my setup the only things people can't reach are Exchange, SharePoint, and SQL. They can still get to their files, you have an in-house AD server, and you have the local application servers. Your Outlook clients are probably using an OST file, so all mail is cached locally; even though you aren't getting new messages, old ones are still available.

IMHO there are a lot of advantages both ways. My company found that because we are a hosting provider (email, SharePoint, web, storage, and soon VDI) it makes more sense to have a rack in the CoLo and just a few servers locally than to try running our own datacenter in house, with 3-phase power, adequate cooling, fire prevention, and paying a fiber provider for the proper bandwidth. What we pay per month for our rack, power allotment, and bandwidth is about $100 less than what we were paying for bandwidth alone having it in house.
 
I would set up MRTG or RRDtool to graph traffic inbound and outbound on the WAN interface(s) of your router/access device, plus each switch port the servers plug into. That would give you a good idea of how much bandwidth each server uses, peak and average, and how much bandwidth you are currently using.
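Under the hood, MRTG/RRDtool just poll the SNMP octet counters (ifInOctets/ifOutOctets) and turn the deltas into bit rates. If you ever want to sanity-check a graph, the arithmetic is simple; this sketch assumes you already have two polled counter values and handles a single wrap of the 32-bit counter:

```python
# Convert two SNMP ifInOctets samples into an average bit rate.
# Handles one wrap of the 32-bit counter (use 2**64 for ifHCInOctets).

def avg_bps(octets_then, octets_now, seconds, counter_max=2**32):
    delta = octets_now - octets_then
    if delta < 0:                 # counter wrapped between polls
        delta += counter_max
    return delta * 8 / seconds    # octets -> bits, per second

# e.g. two polls 300 s apart (MRTG's default interval)
rate = avg_bps(1_200_000_000, 1_275_000_000, 300)
print(f"{rate / 1e6:.1f} Mbps")  # 75e6 octets over 300 s -> 2.0 Mbps
```

On gigabit interfaces, prefer the 64-bit ifHCInOctets/ifHCOutOctets counters if your gear supports them, since the 32-bit ones can wrap more than once inside a 5-minute poll.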

Your router/access device and switch(es) will need to support SNMP. Most modern managed devices do.

If you don't have access to a Linux box to set this up on, I've heard PRTG will do the same thing (but in a Windows environment).
 