Whitebox vs Server grade?

Red Squirrel · [H]F Junkie · Joined Nov 29, 2009 · Messages: 9,211
Once I've paid off my credit line I want to start looking into building myself a new server to replace my aging Core 2 Quad whitebox server, which is limited to 8GB of RAM.

Going the whitebox route would let me get a decent high-end processor such as a Core i7 and up to 32GB of RAM, which is quite a big jump from what I have.

Or if I go server grade I could put even more RAM in it, and possibly even dual processors, but it's probably going to cost double.

This is for home but still somewhat production. I don't care about support contracts and that sort of thing; I just want it to be reliable.

Is there a disadvantage to going the whitebox route in this case?

In fact I could even build two boxes for the price of one server-grade one, so I could do HA and whatnot, depending on which VM solution I go with (probably Proxmox, KVM, or maybe XenServer; I'm sticking with open stuff).


Also, a quick question on Supermicro if I go that route: do I HAVE to use their cases, or can I use a standard case? I recall hearing that their mobos are non-standard, but maybe I heard wrong. The case is what makes the Supermicro route so expensive.

Storage would go to my new SAN/NAS that I built a while back, which is Supermicro-based with a redundant PSU.
 
If you can stand the noise buy a Dell C1100 on eBay. Tough to beat the value of that for a home system.
 
That's an option too, I guess, though I prefer new for my production stuff. The problem is they jack the shipping up super high, and if something goes wrong, like it arrives DOA, I'm screwed since I probably won't be refunded the shipping. So it is a risk. It is tempting, though: the first ones I found have 72GB of RAM, and even though they're asking for $300-600 shipping on most of the units, it still ends up cheaper than even a whitebox. 1U is nice too.

Though, I just finished putting together a quick whitebox build on NCIX; it comes to $1,200, call it $2,000 since that's not counting taxes/shipping. I could almost build two for the price of one Supermicro server (the one I specced came to almost $4k). With two, I wouldn't really have to care much about in-box redundancy, as I'd rely on HA or on just spinning the VMs back up if the solution I go with has no true HA.

I wonder if some of those eBay sellers will ship two C1100s without doubling the shipping price... that would make it worth it as well.
 
What actual production workloads will you be running on this? It seems pretty unreliable for production, if due to nothing else than power outages, network outages, and the general issues one has to deal with in a residential setting.

Also, I'm not sure what you mean by "I don't really have to care much about any redundancy as I'd just rely on HA". HA does actually need two hosts, if you are talking about vSphere HA.
 
It's productivity/home stuff: file server, mail server and so on, as well as dev/test environments and other things I plan to add, possibly a Hyperboria node with some services. Basically, if it goes down nobody yells at me, but I still care enough that I don't want flaky hardware either. So I guess my question is: is server grade that much better than whitebox for reliability? E.g. random crashes, DOA parts, that kind of thing.

What I meant by redundancy is: if I can build two cheaper whiteboxes for the price of one high-end machine, then I don't mind sacrificing redundant PSUs and other in-box redundancies, because I'll have two hosts instead of one, so I can just use HA or simply set things up so I can spin up a VM on either host. I have big UPS capacity (good for 4-5 hours), so power is pretty much covered. I'll be adding more batteries, and in the future a generator transfer switch with a portable generator. That, or wind/solar.
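For what it's worth, the "spin up a VM on either host" plan can be done by hand even without true HA. A minimal sketch, assuming Proxmox with two clustered nodes (hypothetical names node1/node2 and VM ID 100, not from this thread) and the VM's disks on shared storage:

```shell
# Sketch only: manual failover between two clustered Proxmox hosts
# that share the same NFS/iSCSI storage for VM disks.

# Planned move: live-migrate VM 100 from node1 to node2 (run on node1)
qm migrate 100 node2 --online

# Unplanned outage: if node1 dies, claim the VM's config on node2 by
# moving it inside Proxmox's cluster filesystem, then start it there
mv /etc/pve/nodes/node1/qemu-server/100.conf \
   /etc/pve/nodes/node2/qemu-server/100.conf
qm start 100
```

The manual recovery step only works safely because the disks live on shared storage; with local disks you'd be restoring from backup instead.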
 
A dual-socket 2011 Supermicro board with one or two Xeon E5 v2 CPUs should get you going. They eat cheap registered ECC RAM and, using expensive(!) 32GB sticks, go up to 512GB of RAM (1TB with future 64GB sticks). Pick a board with dual 10Gbps LAN and maybe an onboard LSI controller and you should be future-proof for some time.

Something like the X9DRH-iTF (10Gbps) or X9DRH-7TF (10Gbps + LSI SAS).
 
Just get a cheap AMD 4, 6, or 8 core desktop CPU, 16GB-32GB RAM, a motherboard that supports 32GB+ of RAM with integrated video, 1 quad port or 2 dual port Intel NICs, an 80+ PSU, and cheap case per host and you're set.

I use MSI 760G-E51(FX) motherboards and they're fine and can be had for about $50.

Then get some sort of shared storage for your VMs to run on. All you need is a cheap CPU, motherboard with lots of SATA ports, and some disks. I run Windows 2012 R2 with a tiered Storage Pool of SSD and Velociraptor drives but just 7200RPM SATA will do in a pinch. There are lots of free ZFS solutions out there, too.
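As a sketch of the ZFS route mentioned above, assuming Linux/OpenZFS and placeholder disk names (not from this thread):

```shell
# Sketch: a mirrored ZFS pool for shared VM storage (hypothetical devices).
zpool create tank mirror /dev/sdb /dev/sdc

# A dataset for VM images, with lightweight compression enabled
zfs create -o compression=lz4 tank/vmstore

# Optional: add an SSD as a read cache, loosely analogous to tiering
zpool add tank cache /dev/sdd

# Check pool health
zpool status tank
```

A mirror gives redundancy without a hardware RAID card, which fits the "cheap CPU, board with lots of SATA ports" approach above.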
 
Server grade just means a system composed of parts designed/built/tested to work together to fill a need (high uptime, redundancy, ease of maintenance, web server/storage server/file server, blah blah blah); the OEM then certifies and warranties that system.

Building a whitebox is not the chore it used to be, and with decent components they can run 24x7x365 if you need them to. You are more likely to have issues with your storage box than with your compute nodes.

Supermicro boards spec'd for ATX or MicroATX should fit most cases with those specs as well (I have heard you may need to move a standoff or two, but that's it). Avoid their other sizes (proprietary, eATX, etc.) as those usually require their (SM) cases.

My only regret of scaling back to whiteboxes is my 32GB memory limit.

LGA2011 boards are something I wish had been available when I built my whiteboxes. I would have gone with those, maybe a Supermicro X9SRL-F to keep the initial cost down, then added/upgraded components (NIC/memory) as needed.

Unless you have a 10Gb switch, there's no need for an expensive 10Gb NIC (i.e. the X9DRH-iTF).
A couple of dual/quad-port Intel or Broadcom gigabit NICs will be fine.

You already have a storage box, so no controller is needed at this time.

You may even find a single whitebox will meet your needs, so get one and scale to two as needed (don't wait too long, though; supply can run dry).
 
A dual-socket 2011 Supermicro board with one or two Xeon E5 v2 CPUs should get you going. They eat cheap registered ECC RAM and, using expensive(!) 32GB sticks, go up to 512GB of RAM (1TB with future 64GB sticks). Pick a board with dual 10Gbps LAN and maybe an onboard LSI controller and you should be future-proof for some time.

Something like the X9DRH-iTF (10Gbps) or X9DRH-7TF (10Gbps + LSI SAS).

Let's think about this.

He's building a home server/lab. Does it really make sense to buy a motherboard that costs as much as a full server on eBay to "future-proof" anything? By the time the future arrives there will be other boards and other technology.

IMHO it makes little sense, in any environment, to oversize IT purchases to account for future growth. Plan for one year out at most; buy more when it's needed.

I don't see how, for the OP's stated use, a C1100 from eBay can be beaten on cost vs. benefit.
 
I don't see how, for the OP's stated use, a C1100 from eBay can be beaten on cost vs. benefit.

I am just wondering where the hell he is located for $300 shipping on the C1100s...

I bought two with free shipping.
 
I am just wondering where the hell he is located for $300 shipping on the C1100s...

I bought two with free shipping.

I plugged in shipping for a few C1100s just now and it came to just over $22 each.
 
Red lives in a frozen wasteland where everything has to be delivered by sled dogs, otherwise known as Canada. :D
Supermicro makes good stuff, we use it in production when it is not cost effective to do Dell.
Most of their mobos are standard, but some of them will only fit in a proprietary chassis.
Their site lists what standard each board uses.
 
I am just wondering where the hell he is located for $300 shipping on the C1100s...

I bought two with free shipping.

The shipping is a hard-coded value, i.e. whatever the seller chooses. See these for example:

http://www.ebay.ca/itm/DELL-CS24-TY...909240?pt=COMP_EN_Servers&hash=item1e834d7078
US $394.04 (approx. C $418.12)

http://www.ebay.ca/itm/DELL-POWERED...104604?pt=COMP_EN_Servers&hash=item1c39944cdc
US $394.04 (approx. C $418.12)

That's also not counting customs, which could be another couple hundred bucks. So I'd be paying over a grand for a used server; I'd rather pay a bit more and build something new. I suppose these are a good deal if you're lucky enough to buy them directly from the company outside of eBay and happen to live close.


Just get a cheap AMD 4, 6, or 8 core desktop CPU, 16GB-32GB RAM, a motherboard that supports 32GB+ of RAM with integrated video, 1 quad port or 2 dual port Intel NICs, an 80+ PSU, and cheap case per host and you're set.

I use MSI 760G-E51(FX) motherboards and they're fine and can be had for about $50.

Then get some sort of shared storage for your VMs to run on. All you need is a cheap CPU, motherboard with lots of SATA ports, and some disks. I run Windows 2012 R2 with a tiered Storage Pool of SSD and Velociraptor drives but just 7200RPM SATA will do in a pinch. There are lots of free ZFS solutions out there, too.


I always forget about AMD; they may not be ahead of Intel, but they still have some decent processors, and for the price I could build two nodes. I may actually look at going that route. I already have storage figured out, and it has a redundant PSU and so on, so now I just need to add VM nodes and I'm set. I'll probably be using NFS. What's a decent processor from AMD these days?
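A minimal sketch of what serving VM storage over NFS from a Linux-based storage box might look like (IPs and paths are placeholders, not from this thread):

```shell
# Sketch: export a directory from the storage box over NFS for VM hosts.
# no_root_squash is common for hypervisor storage so root-owned disk
# images work; restrict the export to the LAN subnet.
echo '/tank/vmstore 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# On each VM host, mount the share (storage box assumed at 192.168.1.10)
mkdir -p /mnt/vmstore
mount -t nfs 192.168.1.10:/tank/vmstore /mnt/vmstore
```

With both hosts mounting the same export, either one can run a given VM, which is what makes the two-cheap-hosts plan workable.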
 
Nothing wrong with whitebox if you spec things out properly.

I have two whitebox servers with i3s that run 24/7x365. Uptime is, quite frankly, as good as my environment allows. I was at 190 days until we had a very long power outage that the UPSes couldn't last through. It was no big deal, though: the servers shut down gracefully and fired right back up when the power came on. This is my dev environment and file server.
 
So I'd be paying over a grand for a used server; I'd rather pay a bit more and build something new. I suppose these are a good deal if you're lucky enough to buy them directly from the company outside of eBay and happen to live close.

There have to be Canadian companies who sell off-lease systems. Shouldn't be too tricky to find someone selling a used Dell or HP in Canada.
 
There have to be Canadian companies who sell off-lease systems. Shouldn't be too tricky to find someone selling a used Dell or HP in Canada.

Tech stuff is very hard to find in Canada; even the Canadian retailers are basically just buying from the US or China.

Going with TigerDirect/NCIX you do skip customs, though, which is nice; that saves a couple hundred bucks.

I think I will go ahead and go AMD. I'm not ready to do this any time soon anyway, so I'll see what happens between now and then. I just built an AMD FX based box on NCIX real quick, without checking the RAM/CPU compatibility lists, and it came to a bit over $1k. It's almost tempting to build two at that price.

Going to pay off my credit line first though. :p My basement / server room project is almost done to the point I want for this year.
 
I always forget about AMD; they may not be ahead of Intel, but they still have some decent processors, and for the price I could build two nodes. I may actually look at going that route. I already have storage figured out, and it has a redundant PSU and so on, so now I just need to add VM nodes and I'm set. I'll probably be using NFS. What's a decent processor from AMD these days?

An FX-6130 with a motherboard that has onboard video would work great. I'm just using Phenom II X6 1045Ts and can't complain. Sure, they get destroyed in benchmarks vs. almost any Intel CPU, but I never push much CPU horsepower on anything in my lab.
 
That's also not counting customs, which could be another couple hundred bucks. So I'd be paying over a grand for a used server; I'd rather pay a bit more and build something new. I suppose these are a good deal if you're lucky enough to buy them directly from the company outside of eBay and happen to live close.

Ouch. I live about an hour from Ontario on the US side, and those listings ship to me for free. Sounds like Canada has it rough.
 
Email the sellers...

I'm in Edmonton and was/am debating getting a C6100. I emailed a few sellers and shipping was around a hundred bucks; eBay quotes it wrong.
 
G34 is also a good option for a home box because the CPUs are dirt cheap and you get 16 RAM slots on the dual-processor boards. I've got two 6128s (8-core) with 80GB of RAM in mine. If you aren't going to be using 3TB drives, you can also get an older RAID card off eBay for about a hundred dollars.
 
Storage will be all iSCSI or NFS, so I don't have to worry about local storage, which makes it easier. I'll be using an SSD for the OS drive (for reliability, so I don't have to worry about hardware RAID) and that's it.

I think I'll stick with the AMD option, though; playing around on NCIX I have a subtotal of $1,077 for a system based on an AMD FX-8350 8-core processor with 64GB of RAM, a 3U rackmount case and a 2-port NIC. It's almost tempting to buy now, but I'll wait until I've paid off my credit first, and I might buy two. :D I have the worst luck building systems and always end up with DOA stuff, so if I order enough to build two, at least I'll have one up and running right away while I wait for replacement parts for the other.

If I'm getting by with 8GB of RAM now, I think 32GB will be more than enough, especially if I build two hosts. I have a crappy environment right now (VirtualBox in a VNC session), so it will also be nice to use a proper VM solution. I have to use the command line just to make VMs, because by default it wants to put them in the local home directory and doesn't let me select a LUN like any sane VM solution would.
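For the record, VirtualBox's CLI does let you put a VM's files on a mounted LUN instead of the home directory via `--basefolder`. A rough sketch (VM name, OS type, paths, and the bridged adapter are all placeholder assumptions):

```shell
# Sketch: create a headless VirtualBox VM with its files on a mounted LUN
# instead of the default ~/VirtualBox VMs location.
VBoxManage createvm --name devbox --ostype Debian_64 --register \
    --basefolder /mnt/lun0/vms
VBoxManage modifyvm devbox --memory 2048 --nic1 bridged --bridgeadapter1 eth0

# Create a 20GB disk on the LUN and attach it via a SATA controller
VBoxManage createhd --filename /mnt/lun0/vms/devbox/devbox.vdi --size 20480
VBoxManage storagectl devbox --name SATA --add sata
VBoxManage storageattach devbox --storagectl SATA --port 0 --device 0 \
    --type hdd --medium /mnt/lun0/vms/devbox/devbox.vdi

# Run it without a GUI
VBoxManage startvm devbox --type headless
```

That said, a hypervisor with storage pools built in (Proxmox/XenServer, as discussed) avoids this per-VM juggling entirely.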

I think I'll also do a fully virtualized environment once I upgrade. Right now I have a hybrid, where a lot of stuff runs locally while some stuff is virtualized on that same box. I also need to completely revamp how I manage things like development; stuff is a bit all over the place. Unix permissions are a royal pain to deal with between systems too, so I'll probably look into Kerberos or something centralized like that.
 
Hey Squirrel, that seems expensive for what you're getting, at least in my mind. There are a few things I'm not fond of in the AMD line, one of them being 3DNow!. If you ever get a real license with vMotion: in the next-gen series they pulled that function out of the cores, which created a fun little vMotion issue between different generations of hardware, because EVC now has a 3DNow! option and a non-3DNow! option. Say you get a new server with it removed and you didn't initially enable EVC without 3DNow!: you couldn't vMotion between hosts, and you'd have to power down the VMs to get it working correctly. The other issue I have is that they run HOT and consume a lot of power. Personally I like having a cheaper power bill with my 60W TDP hex-core HT Xeons than with 100+ W TDP cores.

My lab runs some server-grade hardware, and I think I got a pretty good deal on it all (Xeon):

$250 / mobo (dual socket, dual NIC!)
$80 / CPU (L5639, hex-core, hyperthreaded)
$100 / case (Rosewill 4U)
$80-100 / PSU (modular, fewer cables!)
$60-100 / 8GB stick ECC/REG (Kingston)

Even at the low end that's around $600-700 for a full system with 8GB of RAM, except most boards in this line can handle more than 64GB of RAM (some 128GB, some 256GB) and dual sockets, so you could have 2x6 cores + HT = 24 threads to play with (85% of the time memory is your constraint, not CPU).

The upside this way: the memory has a lifetime warranty, you can get 5 years on some of the boards, the PSU has its own warranty, and CPUs hardly ever fry. Running server-grade hardware also gets you features like VT-d with SR-IOV, IOMMU, C-states, and room for expansion, and sometimes you can work it out so it's all on the supported HCL matrix.

I run all my hosts at 48GB of RAM because RAM is essential for a good-sized lab and for production. 16GB won't cut it, and I'm running near 32GB now... LOL

The biggest downside is that with whitebox tech that isn't on the HCL, you take the risk that updates/major upgrades cost your board compatibility with things like NICs or I/O controllers; you'd have to customize install disks to get them working, which is extra work. With my lab I haven't had that issue at all; upgrades have been simple and problem-free, at a very comparable cost.
 
What is your budget?

Hard to beat some of the used deals out there.

A Dell C6100 4-node server with 96GB of memory total goes for $943 shipped. Not sure about Canadian shipping, though.

Perfect for an HA cluster: 72GHz of processing power and 96GB of memory, all in a small(er) footprint.

There are even threads around about taming the noise of the C6100.
 
I'd say $2k is my budget, but if I can come in at the lower end of that, even better. If I build an Intel-based whitebox or Supermicro it usually ends up closer to $3k; everything is more expensive here. $1k for a new, fully working system is pretty much a steal.

I plan to use XenServer or perhaps Proxmox; I'll experiment at first, so I don't think the HCL will be an issue. VMware is a nice product, but for home I prefer sticking to stuff that's more "open" and doesn't have weird license restrictions on number of CPUs, hosts, RAM, etc.

I'm not planning to build anything until probably next summer, though, so at this point I'm just deciding on the best route.
 
Hey Red, what are you talking about? There are no limits anymore on CPU/memory/hosts. If you want enhanced, paid-for functions like live migration, well, I don't think you get that even with Xen or Microsoft, because you need their management software, just as vSphere vCenter is the management layer for VMware.

vRAM licensing died a long time ago, and the 32GB limit was removed in 5.5, so I see no restrictions of the kind you're basing this on.

The two major things you don't get with the free edition are HW10 (there's a bug right now that can allow it, but it's not recommended with free) and the Web Client (the only thing that can manage HW10 machines).
 
Hey Red, what are you talking about? There are no limits anymore on CPU/memory/hosts. If you want enhanced, paid-for functions like live migration, well, I don't think you get that even with Xen or Microsoft, because you need their management software, just as vSphere vCenter is the management layer for VMware.
vRAM licensing died a long time ago, and the 32GB limit was removed in 5.5, so I see no restrictions of the kind you're basing this on.

The two major things you don't get with the free edition are HW10 (there's a bug right now that can allow it, but it's not recommended with free) and the Web Client (the only thing that can manage HW10 machines).

Clarifying:
XenServer 6.2 is free/open; the only limitation is that patching systems is a manual process without a paid subscription/license.
XenMotion (live migration and everything else) works.
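For anyone curious what that manual patching looks like, a sketch of the usual `xe` workflow on XenServer 6.2 (the patch filename and UUID are placeholders):

```shell
# Sketch: manually applying a XenServer 6.2 hotfix with the xe CLI.
# Upload the patch file; this prints the patch UUID on success.
xe patch-upload file-name=XS62E001.xsupdate

# Apply the uploaded patch to every host in the pool
# (substitute the UUID printed by the previous command)
xe patch-pool-apply uuid=<patch-uuid>

# Verify which patches are applied and whether hosts need a reboot
xe patch-list params=name-label,hosts,after-apply-guidance
```

It's a few commands per hotfix rather than one click, which is the trade-off the free license makes.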
 
Clarifying:
XenServer 6.2 is free/open; the only limitation is that patching systems is a manual process without a paid subscription/license.
XenMotion (live migration and everything else) works.

Wow, a lot has changed since I last used Xen. Appreciate the clarification!
 
If I had the space for it, I too would go with a C1100 or C6100 setup. All the sellers I've seen on eBay offer free shipping; not sure where you're seeing a $300 shipping charge.
 
Lots of RAM is what you want for VM use. 8 CPU cores will handle a lot of VMs, depending on the load.
 