Need to build my first home server. Looking for input.

Rock&Roll

I've gotten to the point where my media is stored across too many systems, as well as having a ton of personal data with redundancy provided only by external USB HDDs. It's gotten untenable and I need to beef things up a bit. I was hoping to run ESXi for 2-3 VMs now and potentially more later on.

VM1 - pfSense with 2 dedicated NICs.
VM2 - Windows Server - DC / shared storage / print server
VM3 - Windows 7/8 guest
Overall goals:

- Run all needed VMs while idling at under 175 watts.
- It will be within earshot of living space, so quiet operation is important.

I am trying to be budget conscious about this, as I know the commercial-grade hardware I'd like to use will get expensive quickly. While I would like to go all out and buy every component individually, the cost is prohibitive, and I likely would not get the value back out of a $3k+ investment. So, you can see the options I'm weighing below. I may be greatly overestimating how powerful a server I need.

Current bare-metal systems to consider:
C2D on an Asus P5K-E (P35) - pfSense firewall (65 W power usage 24/7)
Core i3 on an ASRock Z77 Pro4 - HTPC / media storage center (I'll find its power usage when I get home)
___________________________________________________________
Proposal 1:
Repurpose the C2D system as the HTPC (it would work fine).
Repurpose the Core i3 system by upgrading to a socket 1150 Xeon with VT-d support and adding more RAM (currently 8GB).

Pros
- Cheapest and fastest option

Cons
- 32GB max memory on the Z77 motherboard.
- Substantial limitations on storage expansion.

___________________________________________________________
Proposal 2:
Buy a used 12-core (dual six-core) Xeon workstation with VT-d support.
Decommission the C2D pfSense box.
Notes: I've been eyeing the HP Z800. There are lots of Dell T7500s out there that can be had for cheap, but the internals of Dell's machines look like a nightmare.

Pros
- Relatively good bang for the buck (~$1k or less)
- Big-OEM gear of this grade is usually highly reliable
- Better room for expansion, though still somewhat limited
- Can be found with quality RAID cards preinstalled
- Very powerful system

Cons
- Might break my goal for power consumption (trying to stay under 200 watts)
- These machines usually have proprietary hardware (especially PSUs) that will be hard to replace should something break 5 years down the road.
- Have to deal with shady vendors on eBay.

___________________________________________________________
Proposal 3:
Suck it up and build my own socket G34 Opteron server (supposedly great for VMs, and a lot cheaper than Xeon when buying new).

Pros
- It's all new
- I get to use standard hardware for easy replacement
- I get to design for maximum expansion capacity

Cons
- Still super expensive
 
A few notes for your first option:

An i5/i7 will generally also support VT-d (non-K models; most if not all of the i5s). Sometimes the equivalent Xeon is cheaper (some E3s are 4C/4T like the i5s). Check the motherboard manual to verify, but ASRock is pretty good about enabling VT-d on their consumer boards. If I read that right, you have a Z77 board and thus need an 1155 CPU rather than an 1150 one (or are you planning on getting a new board along with an 1150 chip?).
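
If you want to sanity-check a box before buying a CPU for it, here's a rough Linux-side probe you can run from a live USB. Just a sketch, not gospel: the vmx flag only proves VT-x, and a present ACPI DMAR table is merely the usual sign that the firmware exposes VT-d; the board manual is still the authority.

Code:
# Rough VT-x / VT-d probe for Linux. Run as root from a live USB.
from pathlib import Path

# VT-x shows up as the 'vmx' flag in /proc/cpuinfo.
flags = Path("/proc/cpuinfo").read_text()
print("VT-x (vmx flag):", "yes" if "vmx" in flags.split() else "no")

# A populated ACPI DMAR table is the usual sign VT-d is supported and enabled.
dmar = Path("/sys/firmware/acpi/tables/DMAR")
print("VT-d (ACPI DMAR table):", "present" if dmar.exists() else "not found")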

The 3 VMs you're starting out with don't sound terribly heavy. It could all probably squeeze into 8GB, although 16 would be more comfortable. In your situation I'd look for an i5/i7 or equivalent Xeon. Your Z77 board will be an issue slot-wise if you plan to pass through a fancy RAID controller, a dual-port NIC, AND a GPU to the 3 guests you mentioned. It's not entirely clear what your plans are in that regard.

Also: What else do you plan to virtualize in the near future, and what kind of storage are you looking at for the storage guest/machine?
 
Rule #1 about VMware: memory, memory, memory...
Rule #2 about VMs: memory, memory, memory
Rule #3: storage space
Rule #4: CPU

First off, and maybe a dumb question: what's the Windows 7/8 guest for? I've done desktop guests, but under ESXi there isn't a whole lot of point unless you're a Mac household or something and need Windows once in a while for something you can connect to via RDC. At work, I use them as virtual desktops for specific purposes when I can't use a terminal server. At home, I use one as a VPN client for work, to keep it segregated from my main network.

So... I would look at a minimum of 20GB of memory: 12 for the Windows Server OS, 2 for the pfSense box (which would be another issue), and 4GB for the client. That leaves about 2GB of overhead for VMware, which should be enough most of the time. Memory creep will kill you and really slow down your boxes. At home, I've got a C2Q Q8400 with 16GB of RAM running W2k8 Server R2 and an XP virtual machine for VPN (soon to be upgraded); it runs at about 20-25% CPU usage max and typically 60-70% memory usage. W2k8 will eat up all the memory you allocate to it.
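
Summed up, the budget looks like this (numbers straight from above; tweak to taste):

Code:
# Tally of the suggested memory budget, in GB.
guests = {
    "Windows Server (DC/storage/print)": 12,
    "pfSense": 2,
    "Windows 7/8 client": 4,
}
hypervisor_overhead = 2  # what's left over for ESXi itself
total = sum(guests.values()) + hypervisor_overhead
print(f"Guest allocations: {sum(guests.values())} GB")  # 18 GB
print(f"Host minimum:      {total} GB")                 # 20 GB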

Next, storage. You can run VMware off a USB key, and it actually works great. The key here is to find a RAID controller supported by the free version of ESXi, as it's a real pain to install a controller driver on the free version without access to a CLI. Honestly, I've never played with getting into the CLI in 5.x at all, as I haven't needed to. Load up on storage, preferably large and fast stuff, and go whole-hog RAID 6 or 10, which should hopefully head off major downtime while staying fairly efficient. I run 5 2TB drives in a RAID 6 on an Adaptec controller whose model I can't remember; I found it on eBay for about $200 with a BBU. Protect your storage: if it goes, you are fubared.
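
For a quick feel of what those layouts give you in usable space (simple math only; real arrays lose a bit more to formatting):

Code:
# Back-of-envelope usable capacity for the RAID levels mentioned above.
def usable_tb(n_drives, size_tb, level):
    if level == "raid6":
        return (n_drives - 2) * size_tb   # two drives' worth of parity
    if level == "raid10":
        assert n_drives % 2 == 0, "RAID 10 needs an even drive count"
        return (n_drives // 2) * size_tb  # half the drives are mirrors
    raise ValueError(level)

print("5 x 2TB RAID 6 :", usable_tb(5, 2, "raid6"), "TB")   # 6 TB, survives any 2 failures
print("6 x 2TB RAID 10:", usable_tb(6, 2, "raid10"), "TB")  # 6 TB, 1 failure per mirror pair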

CPU: Get something with 4 cores, preferably. You can do it on 2, but 4 would give you some room for growth, and it really doesn't need to be fast. The i3 might pull it off okay with 2 cores plus 2 HT threads, but I think you'd be happier finding a cheap Xeon or i5 with 4 real cores that you can slot in.

Finally, pfSense: you will probably be okay with only a dedicated 2nd NIC, but be very careful how you build it out; you do not want the external NIC bridged to an internal port.
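
One way to keep yourself honest when you build the vSwitches is to write the layout down and check it. A toy sketch of that idea (all the vSwitch, uplink, and port group names here are made up for illustration, not anything ESXi hands you):

Code:
# Toy isolation check for a virtualized pfSense layout. Names are hypothetical;
# the rule being enforced: WAN must never share a vSwitch/uplink with LAN.
vswitches = {
    "vSwitch0": {"uplink": "vmnic0", "portgroups": {"LAN", "Management"}},
    "vSwitch1": {"uplink": "vmnic1", "portgroups": {"WAN"}},
}

for name, sw in vswitches.items():
    if "WAN" in sw["portgroups"] and sw["portgroups"] != {"WAN"}:
        leaked = sw["portgroups"] - {"WAN"}
        raise SystemExit(f"{name}: WAN shares an uplink with {leaked}")
print("WAN is isolated on its own vSwitch and physical uplink.")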

As an update, using the Z77 motherboard:
i5 2320/2400: $140-150 on eBay
LSI MegaRAID 9260-8i: under $200 for working server pulls rebranded as Dell/IBM (cables will probably run you another $25-100 total depending on what you need; these are 4-lane mini-SAS connectors fanning out to 4 SAS or SATA drives)
32GB DDR3-1333: ~$300 new from Amazon, various brands; a bit on the overkill side, but honestly, you can't have too much memory
Intel dual-port gigabit NIC: $50 on eBay

That still leaves you well below $1000, with storage as the only big remaining item, plus, if you wanted, a new case, a quiet PSU, and hot-swap drive bays.
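
Adding it up (taking the top of each range quoted above, cables at the high end):

Code:
# Rough total for the parts list above, high end of each quoted range.
parts = {
    "i5 2320/2400": 150,
    "LSI MegaRAID 9260-8i": 200,
    "SAS cables": 100,
    "32 GB DDR3-1333": 300,
    "Intel dual-port NIC": 50,
}
print(f"Total: ${sum(parts.values())}")  # $800, well under the $1k mark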
 
@Syran, @fantabulous

Thanks for the input. I really hadn't given enough consideration to rolling with the Z77 board and a non-K i7. If anything, I was about to go for a dual hex-core Z800 with 24-48GB of memory. The biggest thing holding me back was a feeling that it would be overkill and generally a waste of electricity.

And you're right: the money would be better spent on more drives and a proper RAID controller. I had figured on going RAID 10 if I went RAID at all.

I know all about Windows Server being hungry for memory. I manage our server at work on 2k8 R2, and it never really puts much strain on the dual quad-core Xeons, but it does gobble up 32GB of RAM in a very short period of time. That's with no virtualization of any kind, just SQL and a few other services for about 15 domain computers.

The thing I really like is seeing our server use only about 150 watts during the work day, compared to our old dual single-core Xeon server that pulled 400 watts at dead idle.
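
That difference adds up fast on a box running 24/7 (assuming roughly $0.12/kWh; plug in your own rate):

Code:
# Annual electricity cost of a 24/7 box at a given idle draw.
RATE = 0.12  # $/kWh -- an assumption, use your local rate

def annual_cost(watts, rate=RATE):
    return watts / 1000 * 24 * 365 * rate

print(f"400 W box: ${annual_cost(400):.0f}/yr")        # ~$420
print(f"150 W box: ${annual_cost(150):.0f}/yr")        # ~$158
print(f"Savings:   ${annual_cost(400 - 150):.0f}/yr")  # ~$263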
 
If it were me, I would simply repurpose the boxes I have and spend the cash on a proper NAS/iSCSI storage unit, drives, and a decent dedicated switch to move the data around. For what you are doing, 32GB of RAM should be more than enough and a 4C Xeon should suffice. Decent shared storage, however, is a godsend if you want to stay flexible and keep your data away from the VMs and physical boxes. Two NICs for each physical box: one for the regular LAN, the other for iSCSI access. Mount targets via iSCSI and share them out over the regular LAN. Adding new hardware is easier since you don't have to worry about data migrations, and it lowers the burden placed on file servers and their hosts.

Just my 2 cents.
 
The only reason I went with an i5 over a Xeon was the integrated graphics; it didn't require a slot to be used for a graphics card.

Unless you are really running full-blown SQL or something, 8-16GB of memory for Win 2k8 should be fine. Typically, my work servers run 6GB for a DC/print server (a single VM on a Xeon host with 8GB of RAM, kept virtual for hardware independence), or 8-32GB of RAM for the ones at my main office, depending on role (my SQL and Exchange servers both run 32GB; pretty much everything else is 8GB or 16GB depending on workload). Each of my four hosts at work runs quad Nehalem i7-class Xeons with 96GB of RAM. I see 55-65% memory usage and 12-20% CPU on each host.
 
I would not use desktop chipsets for an ESXi server, as they lack core features and may not be supported by ESXi.

Only a server chipset with a Xeon gives you ECC, VT-d, IPMI, dual Intel NICs, and enough lanes for expansion.
Check http://www.supermicro.nl/products/motherboard/Xeon3000/ for Xeon server boards.
Socket 1150 limits you to 32GB of RAM; socket 2011 does not. Use a board with an integrated LSI SAS/SATA adapter if you want to virtualize a storage OS with storage pass-through.
 
I wouldn't go quite that overboard for a home server unless you are really building a major lab-type environment at home, which I don't think he is. For the same reason I personally wouldn't go with an iSCSI/NAS arrangement either: while nice, it adds levels of complexity that probably aren't needed for a rollout this size. If you need tons of space and lots of future expansion room, then start building out a proper NAS.
 
It's not a matter of money. An Intel Xeon and an i7 are priced quite similarly, as are a good desktop board and a smaller server board. The desktop board offers audio and several graphics slots, whereas the server board offers more PCIe slots, IPMI, ECC RAM, and integrated video for the same price.

So it's quite clear for me:
Gaming machine: desktop chipsets
Server: server chipsets, even for a home server
 
I'm a little surprised at people recommending 12GB of RAM for a Server 2012 box that's file/print/DC only. My 2012 R2 file/DCs run on 1.5GB each in my ESX environment with no issues. My SQL server runs Server 2012 R2 / SQL 2014 with about 3GB of RAM, serving three small databases (2GB, 12GB, and 15GB in size, with only a few thousand transactions a day).
I feel you could easily do your entire environment as listed with 16GB of RAM. If this isn't a lab where you plan on adding more VMs for learning, I'd just do it on desktop hardware and save the cash, probably reusing what you have.
You'd really need to think and plan a few years out for what you actually want to do with this, though. You mention a 7/8 guest, but no reason for it. A lot of times I've consulted for people and small businesses who have this pie-in-the-sky "we'll run all of these VMs" plan; then they spin them up, use them twice, and forget they're even running :D.
 
You could always consider moving that pfSense box to something smaller and more dedicated. I am in the process of setting up a system I put together from mini-box.com.

I used the PC Engines APU with 4GB listed here and got the red case along with the power adapter.
http://www.mini-box.com/ALIX-APU-1C4-AMD-G-Series-T40E?sc=8&category=754

I grabbed a 30GB mSATA SSD from Amazon for around $30 and a USB-to-serial adapter (this board has no VGA or standard video output) from Amazon for another $15. I hope to finish the setup and migration tonight.
 
I personally just went from running an ESX(i) server (3.0 up through 5.5 over the last 5+ years) to running a 2008 R2 server with Hyper-V, configured as follows:

Host system runs the following:
1. AD
2. DNS & DHCP
3. File Services
4. WDS
5. WSUS
as well as Exchange 2010 (don't tell me about running Exchange on a DC).
Under Hyper-V I run the following:
1. OpenVPN (256MB RAM)
2. Citrix XenApp server (3GB RAM)
3. Windows RDS server (3GB RAM)
4. Windows 7 workstation (2GB RAM)

Hardware:
Foxconn 88GMV socket AM3 board with a Phenom II 955 and 16GB of RAM (replaced my old socket 775 dual-core system with 8GB of RAM).
Dell PERC 5/i controller with BBWC and 8 hard drives, plus an additional 2 on the onboard SATA controller.


I average about 3GB of RAM free with everything running.

And as zerodamage stated, I would go with a standalone router.

My point is you don't have to spend big to get a nice home server running. It's all about the I/O and memory.
 
Thanks for everyone's consideration and time in responding.

A couple of hang-ups I'm still considering.

Putting pfSense on a pico-scale machine is something I have considered in the past. But once I started reading more about virtualization and about people running pfSense under ESXi, I thought it would be a great learning experience. Besides the need to consolidate and reinforce my storage, building a server has been as much about learning skills I can use at work or at home. So, in some ways, I may be eager to do things the hard way; hopefully I don't overwhelm myself.

I've reset my expectations for the build slightly. At the moment, between bouts of crying over where the price of DDR3 has gone over the last year, I'm mostly thinking about my RAID solution. I'm embarrassed to admit how ignorant I am about good RAID setups and all the variants of LSI chips. I've got a friend who would sell me his PERC H310 for dirt cheap; not sure if it's something I could use or not. I also see some RAID cards with BBUs and wonder if that's something I should be worried about.

I actually already bought a load of WD Red 3TB SATA drives before I thought about SAS. Not that it's too late to change my mind.
 
The nice thing about SAS is that it can run SATA drives. SAS has a slightly different connector (power and data are combined), but anything that drives SAS can drive SATA; you just need the proper connectors (most RAID cards have 4-drive mini-SAS connectors that can be split out to SAS or SATA, so you can still use the drives you purchased without issue). BBUs basically help you survive a sudden power event of some sort, letting data held in the controller's memory stay there until the hard drives can be accessed again, which reduces the chance of an odd array failure. I don't know about the H310 specifically, but the biggest thing to watch out for on any rebranded RAID controller, be it Dell, HP, IBM, etc., is to make sure it has direct hookups; some controllers are meant to drive the onboard ports of a branded motherboard and don't have the connections you need if you install them in a non-branded board.
 