Virtualisation Build - Ideas wanted

Zxcs

[H]ard|Gawd
I'm building a server that will be co-located on a 1Gbps/10Gbps line and provide VPS/hosting/gameserver solutions as well as FTP servers/seedboxes (if there is redundant bandwidth).

Here's what I'm thinking of getting so far (going Opteron as it's cheaper than 4-way Xeons and not much worse):

CPUs: 4x AMD 16-core Opteron 6272 2.1GHz (115W)
Motherboard: Supermicro AMD Opteron 6000 H8QGI+-F
RAM: 16x DDR3 8GB 1333MHz registered ECC (128GB total)
Case: 1U Supermicro SC818TQ+1400 (1400W PSU included) - I'm very much open to suggestions for other cases

I have no idea about storage. I was looking online at 'NAS storage solutions' and :eek: they are pricey. Any/all recommendations about a storage build would be useful. I would prefer it if it was maximum 4U size.

What drives should I get - is it worth getting SAS drives? How about a RAID controller? SCSI or SATA? If I use FreeNAS, can I mount the drives to the VPS guests using ESX?

I'm thinking of installing SolusVM along with ESX so that users can manage their own servers - anyone have experience with this?
 
Just make sure that you're not hosting paid clients on ESXi unless you have a license from VMware. If this is destined for co-lo and you want to avoid licensing costs, you might look into XenServer, Proxmox, or a roll-your-own.

Storage concerns are different if you are going with a separate storage host. If you have a storage server it should absolutely use RAID, but whether you choose hardware RAID is up to you. For your local storage, it depends on whether you're going to go with something like a storage guest OS (such as OpenIndiana; probably unnecessary if you also have an external storage host) or just use local data storage functionality.
 
All depends on your workload. Your clients might require a lot of IOPS, so I would definitely go with SAS drives, 10k minimum.

It might even be beneficial to set up a few large 7.2k drives in a RAID 5 or 6 group as "archive" and instruct clients to use it as such, then a couple of RAID 5 groups of 15k disks as "performance" for higher-IOPS loads.
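Rough back-of-the-envelope for how two tiers like that compare; the per-disk IOPS figures and disk counts below are assumptions for illustration, not measurements:

# Usable capacity and rough IOPS for a RAID group. Per-disk IOPS are
# assumed figures: ~75 for a 7.2k SATA disk, ~175 for a 15k SAS disk.
def raid_group(disks, size_gb, disk_iops, level, write_penalty):
    parity = {"raid5": 1, "raid6": 2}[level]
    usable_gb = (disks - parity) * size_gb
    read_iops = disks * disk_iops
    write_iops = disks * disk_iops / write_penalty
    return usable_gb, read_iops, write_iops

# "archive" tier: 6x 2TB 7.2k disks in RAID 6 (write penalty ~6)
print(raid_group(6, 2000, 75, "raid6", 6))    # (8000, 450, 75.0)
# "performance" tier: 4x 300GB 15k disks in RAID 5 (write penalty ~4)
print(raid_group(4, 300, 175, "raid5", 4))    # (900, 700, 175.0)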

Were I you, I'd instead build two less powerful boxes and connect them both to a NAS or SAN. With only 1 server, if it goes down so do all the VMs.

Maybe a pair of 1U servers each with 2xCPUs and 64GB+ of RAM would be better.

And load up your servers with as much RAM as you can afford. You'll run out of RAM much faster than CPU.
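Quick numbers on the RAM point; the per-VM figure is an assumption, so swap in whatever your guests actually need:

# How many guests fit per host if RAM is the limit? Assumed: 64GB host,
# ~4GB reserved for the hypervisor, ~1.5GB average per game-server VM.
host_ram_gb = 64
hypervisor_reserve_gb = 4
avg_vm_ram_gb = 1.5

vms_by_ram = int((host_ram_gb - hypervisor_reserve_gb) / avg_vm_ram_gb)
print(vms_by_ram)  # 40 -- RAM caps you out long before the CPUs will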
 
I agree with Child of Wonder. Two boxes minimum... you should at least have HA.

I think we probably should talk about your budget on shared storage. SAS provides decent performance but isn't all that cheap; SATA provides capacity but it's slow. You have to figure out where you need to be on $/capacity and $/IOPS or MB/s. Sounds like you need both, as previously suggested.
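It's simple math once you have quotes in hand; the prices and IOPS numbers below are placeholders, not real quotes:

# Compare drive options on $/GB and $/IOPS. Prices and per-disk IOPS
# here are hypothetical placeholders; plug in real quotes.
options = {
    "2TB 7.2k SATA": {"price": 120.0, "gb": 2000, "iops": 75},
    "600GB 10k SAS": {"price": 250.0, "gb": 600, "iops": 140},
    "300GB 15k SAS": {"price": 220.0, "gb": 300, "iops": 175},
}

for name, o in options.items():
    print(f"{name}: ${o['price'] / o['gb']:.3f}/GB, "
          f"${o['price'] / o['iops']:.2f}/IOPS")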

I see it constantly: people spending all their effort on compute and not enough on storage and network. With massive multicore CPUs and high memory density these days, compute is the less important thing to focus on.

BTW, is backup a requirement?
 
If the box is just for game servers, ftp, and piracy, I don't see why you need so much CPU.
 
Thanks for the input, guys. After having a look at Dell Outlet and eBay I managed to find some refurbished servers that end up being much more cost-effective than a completely new build. However, I still have a problem with storage costs being high if I were to build a SAN.

For less than the cost of the build stated above I managed to get 4 of these:

2x Xeon E5472 3GHz
64GB ECC DDR3 RAM
6TB (3x 2TB) SATA 7.2k RPM storage

For your local storage, it depends on whether you're going to go with something like a storage guest OS (such as OpenIndiana; probably unnecessary if you also have an external storage host) or just use local data storage functionality.
If I run a storage guest OS don't I lose all drives if the host machine running it fails? Could I use Fault Tolerance to make sure that the storage guest OS is always running?
All depends on your workload. Your clients might require a lot of IOPS, so I would definitely go with SAS drives, 10k minimum.
Right now I don't expect a lot of IOPS or even constant disk usage, so I decided to go with SATA. When HDD prices fall I'll look into 10k SAS drives.
It might even be beneficial to set up a few large 7.2k drives in a RAID 5 or 6 group as "archive" and instruct clients to use it as such, then a couple of RAID 5 groups of 15k disks as "performance" for higher-IOPS loads.
Since I don't currently have a dedicated storage server (too expensive right now), would it be possible to do the following in order to keep all storage accessible during a host failure?
I have 4 servers each with 3 HDDs. My plan is to make 3x RAID5 arrays each with 1 HDD from each server.
E.g.
  • RAID5 #1 - HDD1-server1 + HDD1-server2 + HDD1-server3 + HDD1-server4
  • RAID5 #2 - HDD2-server1 + HDD2-server2 + HDD2-server3 + HDD2-server4 etc.
That way if one host fails all storage is still accessible. Would this work? Is there a better way to make sure that storage is accessible during a host failure?

You have to figure out where you need to be on $/capacity and $/IOPS or MB/s. Sounds like you need both, as previously suggested.

I see it constantly: people spending all their effort on compute and not enough on storage and network. With massive multicore CPUs and high memory density these days, compute is the less important thing to focus on.

BTW, is backup a requirement?
Right now I'm trying to get the most capacity per dollar, as I don't anticipate high IOPS. Backups would mainly be configs/small files from game servers, so they don't require a lot of storage.
As for the network, with 4 servers and a 1Gbps line (eventually aiming for 100% constant usage), would I need a specialised router/switch or would any layer 3 GbE one work?
If the box is just for game servers, ftp, and piracy, I don't see why you need so much CPU.
The aim is to pack as many game servers as possible per U of rack space. I figure with a load of CPU I'll be able to handle 20-50 VMs per host, given that most of them won't be eating up CPU cycles 24/7.
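Rough sanity check on that; the vCPU count and average utilisation per VM are assumptions, not measurements:

# Oversubscription check for one refurb host: 2x Xeon E5472 = 8 cores.
# Assumed: 1 vCPU per game-server VM, ~15% average CPU use off-peak.
physical_cores = 8
vms = 50
vcpus_per_vm = 1
avg_util = 0.15

ratio = vms * vcpus_per_vm / physical_cores
steady_load_cores = vms * vcpus_per_vm * avg_util
print(f"{ratio:.1f}:1 vCPU:core, ~{steady_load_cores:.1f} cores steady load")
# 6.2:1 and ~7.5 cores -- fine off-peak, but simultaneous peaks
# (map changes, full servers) will be the real limit.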
 
Being honest with you, I would say that if you can't afford to deploy a dedicated storage server with 4 VM hosts, you need to seriously re-evaluate your strategy. It seems like storage has taken an unfortunate back seat in this project, when it really deserves to be at the forefront of any multi-host VM deployment, because it will be your most difficult bottleneck to overcome without proper planning. It's one thing if this is just a personal deployment, but this is increasingly sounding like a commercial operation. If you expect people to pay for this, your aim can't be to "pack as many game servers as possible per U of rack space" or you will quickly oversell your resources, leading to potential collapse.

Anyway...

If you absolutely can't do a dedicated storage device, go with RAID 10 over RAID 5, but even that is going to be barely scraping by if you have "20-50" VMs per host.
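The write-penalty math behind that recommendation; the per-disk IOPS and the read/write mix below are assumed:

# Front-end IOPS the same 8 disks can deliver in RAID 5 vs RAID 10.
# Assumed: 75 IOPS per 7.2k disk, 70% reads / 30% writes.
def frontend_iops(disks, disk_iops, write_penalty, read_frac):
    raw = disks * disk_iops
    return raw / (read_frac + (1 - read_frac) * write_penalty)

print(round(frontend_iops(8, 75, 4, 0.7)))  # RAID 5:  ~316 IOPS
print(round(frontend_iops(8, 75, 2, 0.7)))  # RAID 10: ~462 IOPS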

You also neglected to talk at all about your networking setup or structure. If you are going to colo 4 machines and (possibly) a dedicated storage system, you should strongly consider providing your own managed switch, or designating one (or more) of your VMs as a network controller.
 
You absolutely need some type of SHARED storage to do any sort of HA, etc. You would be better off putting all those drives into one ESXi host and running FreeNAS or some sort of ZFS variant on a VT-d capable host, where you can pass through an HBA of some sort to the storage VM. You can do file and/or block.

Then you can lock down that VM to that specific host using DRS Affinity rules. This will solve the shared storage issue.

As for a switch, you can get a decent managed layer 3 switch, but it looks like your budget may be an issue. What about a firewall? It is quite possible to run pfSense or some other VM firewall product alongside something like an SMB managed HP or Cisco switch.
 
Thanks for the suggestions guys. You've convinced me to get a proper SAN.
How about something like this? My servers do not currently have Fibre Channel support, so what sort of hardware would I need to get? Would I also need an FC switch?

As for the GbE switch, I'll try and find a refurbished layer 3 managed one.
 
What kind of network connections are each individual host going to have? You aren't planning on hosting 20-50 game servers on a single gigabit NIC on each machine, are you? The number of connections/transactions that NIC will have to deal with simultaneously is crazy.
 
It's dual gigabit. I know someone who runs 600-slot Counter-Strike servers on a single GbE link, though. I'll add either an FC card or an extra NIC for FC storage/iSCSI. Any recommendations on a cheap <$400 layer 3 managed switch with 20 ports?
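For what it's worth, the bandwidth math; the per-slot rate is an assumption and varies a lot with tickrate, so measure your own servers:

# Will one gigabit NIC carry the game traffic? Assumed ~40 kbit/s of
# outbound traffic per occupied slot.
slots = 600
kbps_per_slot = 40

total_mbps = slots * kbps_per_slot / 1000
print(f"~{total_mbps:.0f} Mbit/s of ~1000 available")  # ~24 Mbit/s
# Raw bandwidth isn't the issue; packet rate and interrupt load per NIC
# usually bite first.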
 