Need Servers For Production

TType85
[H]ard|Gawd
Joined: Jul 8, 2001 · Messages: 1,549
We are looking at getting rid of the servers we currently lease and buying equipment to replace them. Our budget is pretty slim, but if we can drop the leased hardware we would save a ton of money per month.

Our current setup is way over the top for where our company is now; it was designed for the company to grow, and instead it has shrunk. Currently we have around 25 people to support and a few custom applications.

The current hardware is an HP blade chassis with 5 dual-Xeon E5540 blades (45GB of RAM each), backed by an Enhance-Tech Ultrastore SAN. Currently there are around 50 Windows 2008 R2 VMs running on ESX 4.1. I am going to turn down a lot of the servers we don't need and bring the non-production ones into the office.

The heaviest usage comes from a few of the VMs: one creates PDFs all day, and the others are SQL 2005/2008 servers. I am not sure myself that running SQL under a VM is a great idea.

Down to the questions:

In our production datacenter I am looking to put the new machines to cover the following (all Windows 2008 R2):
1 AD VM
4 IIS Servers (not heavy traffic)
4 Application Servers (one is heavily used)
4 "File Servers" although I could just map the applications to shares on our NAS.
3 SQL 2005 Database Servers
1 SQL 2008 Database Server
1 vCenter

Could I get away with 2 or 3 E3-1230 based servers (32GB RAM each) for this?
Would you trust a pieced-together system for production over a pre-built server?
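
Rough math I've done on the vCPU side (the per-VM vCPU counts below are just my guesses, not measured numbers):

Code:
# Back-of-envelope vCPU overcommit check for the proposed E3-1230 hosts.
# Per-VM vCPU counts are assumptions for illustration only.
vms = {
    "AD": 1 * 1,            # 1 VM x 1 vCPU
    "IIS": 4 * 1,           # light traffic, 1 vCPU each
    "App": 3 * 1 + 1 * 2,   # the heavy app server gets 2 vCPUs
    "File": 4 * 1,
    "SQL2005": 3 * 2,
    "SQL2008": 1 * 2,
    "vCenter": 1 * 2,
}
total_vcpus = sum(vms.values())   # 24 vCPUs at these guesses

cores_per_host = 4                # Xeon E3-1230: 4 cores / 8 threads
for hosts in (2, 3):
    ratio = total_vcpus / (hosts * cores_per_host)
    print(f"{hosts} hosts: {total_vcpus} vCPUs on "
          f"{hosts * cores_per_host} cores -> {ratio:.1f}:1 overcommit")

At those assumed counts you'd be around 3:1 overcommit on two hosts and 2:1 on three, which is usually workable for light workloads; the SQL boxes are the ones to watch.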

For each machine I was looking at:
1x ASUS RS100-E7/PI2 1U Server Barebone LGA 1155 Intel C204 DDR3 1600/1333/1066
4x Kingston 8GB 240-Pin DDR3 SDRAM DDR3 1333 ECC Unbuffered Server Memory Intel Model KVR13E9/8I
1x Intel Xeon E3-1230 Sandy Bridge 3.2GHz LGA 1155 80W Quad-Core Server Processor

I have HDDs lying around that I can use to load ESX.

Equivalent prebuilt servers appear to be:
HP ProLiant DL120 G7 Rack Server System Intel Xeon E3-1230 3.2GHz 4C/8T 4GB (1 x 4GB) No Hard Drive 658416-S01, but it appears to support only 16GB of RAM even though it has 4 slots.

Any suggestions?
 
To get things up and running so we can turn down the current production environment (which is currently across the country), we will run off a Synology 1812+ NAS. Once we can take down the current production we will move the Enhance-Tech SAN here, since we own it.
 
In regards to storage: have you run any metrics on the IOPS and overall data throughput needed? How much RAM is currently being used by the "active" servers?

In regards to servers: you can go with a pre-built server or one you build yourself; I would be fine with a server I built. The real question is whether the hardware you have selected is compatible, and whether you are willing to support it if the hardware dies. I personally would buy a name-brand server just to have the tech support and the hardware under warranty. If the budget is small now, it might be small in the future too. Are you going to have the budget to replace hardware if it goes bad? If not, spend the money on the warranty now. If you are unsure of some of the hardware needed, set a baseline of what you need, try it out, and revisit the setup afterwards to see if you need to scale out versus scale up.

In regards to SQL: How much throughput is needed? Are the databases mainly read, write, or read and write? Are any of the servers used for development? (My concern is that if a trigger or something is written incorrectly, the I/O from the SQL servers can bring down the NAS/SAN for the rest of the environment.) Can the current SQL servers be combined, or must the databases that reside there be separated? You might want to separate the SQL I/O from the rest of the network. (I am not sure on this; I will let the other seasoned vets say more.)
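
For a rough feel of the SQL I/O side, the usual back-of-envelope is per-spindle IOPS times disk count, discounted by the RAID write penalty. A minimal sketch, with made-up disk counts and an assumed 70/30 read/write split; measure your real workload (perfmon/esxtop) instead:

Code:
# Rough usable-IOPS estimate for a spindle-based array.
# Disk count, per-disk IOPS, and the 70/30 read/write mix are
# example assumptions, not measurements from this environment.
DISK_IOPS = 130          # one 10k SAS spindle, ballpark
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def usable_iops(disks, raid, read_frac=0.7):
    raw = disks * DISK_IOPS
    write_frac = 1.0 - read_frac
    # Effective IOPS once writes are amplified by the RAID penalty.
    return raw / (read_frac + write_frac * WRITE_PENALTY[raid])

for raid in ("RAID10", "RAID5"):
    print(raid, round(usable_iops(8, raid)), "IOPS from 8 disks")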

Is the vCenter server a Windows or Linux box?
 
Could I ask what your budget is for this project? If I know it, I'll spec out the "best" I can find, but with my stuff you'd have to build it yourself. My build list won't be hard to put together, just expect that.
 
One last thing: if you DIY, don't think the warranties are any worse. They are actually generally better, because you get the OEM warranties. You just have to have a little elbow grease and smarts to figure out what is wrong. Though server-grade equipment has never given me problems.
 
In regards to storage: have you run any metrics on the IOPS and overall data throughput needed? How much RAM is currently being used by the "active" servers?

It has been hard to break out the numbers for just the production VMs. Memory-wise, most of the servers are at 2GB and run fine; the DB and the harder-hit app server are at 8GB and 4GB. The heaviest I/O comes from the "File Servers", which don't need to be Windows VMs; they are literally a stock 2K8 R2 install with an iSCSI disk mapped and shared. A quick add-up is about 46GB of RAM.
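
A quick sanity check of that ~46GB against the proposed 32GB boxes (the per-role split below is my reading of the list above, so treat it as an assumption):

Code:
# RAM fit check using the figures above (most VMs 2GB, the heavy
# app server 4GB, the DB 8GB). The per-role split is my own
# interpretation of the earlier VM list, not measured data.
ram_gb = (
    [2]              # AD
    + [2] * 4        # IIS
    + [2] * 3 + [4]  # app servers, the heavy one at 4GB
    + [2] * 4        # "file servers"
    + [2] * 3        # SQL 2005
    + [8]            # SQL 2008, the hard-hit DB
    + [4]            # vCenter
)
need = sum(ram_gb)   # 46GB, matching the quick add-up
for hosts in (2, 3):
    print(f"{hosts} hosts: {hosts * 32}GB total, "
          f"{(hosts - 1) * 32}GB with one host down, need {need}GB")

So either two or three hosts hold everything when all are up, but only three leave enough headroom to restart the VMs if one box dies.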

In regards to servers: you can go with a pre-built server or one you build yourself; I would be fine with a server I built. The real question is whether the hardware you have selected is compatible, and whether you are willing to support it if the hardware dies. I personally would buy a name-brand server just to have the tech support and the hardware under warranty. If the budget is small now, it might be small in the future too. Are you going to have the budget to replace hardware if it goes bad? If not, spend the money on the warranty now. If you are unsure of some of the hardware needed, set a baseline of what you need, try it out, and revisit the setup afterwards to see if you need to scale out versus scale up.

This is where I am really unsure. I am the IT department here; I have to service the hardware and software. I have priced out a basic Dell R210 II server and it is 25-30% more after I add in 32GB of RAM myself (they want $1,100 for 32GB). The three-year warranty is nice on that. I will present both options to my boss.


In regards to SQL: How much throughput is needed? Are the databases mainly read, write, or read and write? Are any of the servers used for development? (My concern is that if a trigger or something is written incorrectly, the I/O from the SQL servers can bring down the NAS/SAN for the rest of the environment.) Can the current SQL servers be combined, or must the databases that reside there be separated? You might want to separate the SQL I/O from the rest of the network. (I am not sure on this; I will let the other seasoned vets say more.)

I was thinking of separating the SQL out, as I know it is pretty heavy I/O. There are some severe issues we run into that I think are related to the SAN and SQL (slowdowns, timeouts, etc.). I really would like to combine most of the databases onto one physical SQL 2008 server and have a VM that I could spin up if the physical one fails.

Is the vCenter server a Windows or Linux box?
Windows box

Thanks for your help :)
 
Could I ask what your budget is for this project? If I know it, I'll spec out the "best" I can find, but with my stuff you'd have to build it yourself. My build list won't be hard to put together, just expect that.

Trying to stay in the $6K range total for hardware.

Also, do you already have the software licenses, or were those leased as well?

Software licenses are all taken care of. We just re-upped our VMware licenses and have plenty of 2K8 licenses.

One last thing: if you DIY, don't think the warranties are any worse. They are actually generally better, because you get the OEM warranties. You just have to have a little elbow grease and smarts to figure out what is wrong. Though server-grade equipment has never given me problems.

I have been building my own systems for the past 20 years, so the elbow grease and smarts are the easy part :) The ASUS barebones I listed earlier has a 3-year warranty, as does the Dell server.
 
Just to point out

3 SQL 2005 Database Servers
1 SQL 2008 Database Server

That could be a lot of IOPS.
 
Looking at your choices of hardware, I'd say you're going to do pretty well as long as the CPU usage isn't super high. How are you going to connect to the SAN?
 
Looking at your choices of hardware, I'd say you're going to do pretty well as long as the CPU usage isn't super high. How are you going to connect to the SAN?

I am not 100% sure, but I think the way it is set up now is at least four 1Gb connections to the ESX server(s), with the drives split up between those.
 
You're going to need quad-port cards or something similar; it appears you can use them with the ASUS.
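
For scale, the ceiling on those links is easy to sanity-check; the ~70% usable figure below is just a rough allowance for TCP/iSCSI protocol overhead, not a measured number:

Code:
# Ballpark iSCSI throughput over N x 1GbE links with multipathing.
# The 70% efficiency figure is a rough assumption for TCP/iSCSI
# overhead; real numbers depend on the switch, NICs, and tuning.
LINK_MBPS = 1000 / 8 * 0.70   # ~87 MB/s usable per 1Gb link
for links in (1, 2, 4):
    print(f"{links} x 1GbE ~= {links * LINK_MBPS:.0f} MB/s aggregate")

Keep in mind a single iSCSI session usually rides one path at a time unless round-robin multipathing is set up, so plan around per-path throughput, not just the aggregate.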
 
Here is what I built this summer for our VM lab, and it has been running smooth as silk: 2 boxes with 128GB of RAM in each.

2x ASUS RS700-E7/RS4 1U Server Barebone Dual LGA 2011 Intel C602-A PCH DDR3 1600/1333/1066/800

Used this memory:
32 x ($64.99) Kingston 8GB 240-Pin DDR3 SDRAM ECC Registered DDR3 1333 Server Memory Model KVR1333D3D4R9S/8G

And these CPUs
4 x ($299.99) Intel Xeon E5-2609 Sandy Bridge-EP 2.4GHz 10MB L3 Cache LGA 2011 80W Quad-Core Server Processor BX80621E52609
 