Cheapest way to get into servers

travm

Looking for the cheapest way to get an enterprise-level server set-up at home.
Looking at a used Dell 1950, but I'm worried that it's too deep for a cheap rack... Are there better options? No real plans other than something to play with at home. Might run a small web server and test/learn some open source software.
 
That is going to be loud and use a ton of power. You're better off with an old office PC if you're just using it for learning.
 
That is going to be loud and use a ton of power. You're better off with an old office PC if you're just using it for learning.
By a ton of power, you mean like $50 a year?
Loud, I suppose it's a 95W setup in 1U, so I guess it's probably going to be loud.
Spending an extra $500 to save $20 a year is not helping. The alternative is an Atom or Celeron mini-ITX screwed to the wall. Currently I'm playing in VirtualBox, but I'd prefer to be on something more "enterprise" and not on my gaming PC.

I also want it in my basement, and I'd rather have a rack than a shitty old PC on the floor. If the consensus is I'm further ahead with an Atom and a 2.5" SSD, then that is option 2.
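
For what it's worth, here's the back-of-the-envelope math behind that number. It's only a sketch: the ~95W average wall draw and the $0.06/kWh rate are my assumptions (cheap hydro pricing), so plug in your own figures.

Code:
# Rough yearly power cost for an always-on box.
# Assumptions (not measured): ~95 W average wall draw, $0.06/kWh.
AVG_WATTS = 95                  # average draw of the server, in watts (guess)
RATE_PER_KWH = 0.06             # electricity price in $/kWh (varies by region)
HOURS_PER_YEAR = 24 * 365

kwh_per_year = AVG_WATTS * HOURS_PER_YEAR / 1000
cost_per_year = kwh_per_year * RATE_PER_KWH
print(f"{kwh_per_year:.0f} kWh/yr -> ${cost_per_year:.2f}/yr")
# ~832 kWh/yr -> ~$50/yr at $0.06/kWh; roughly double at $0.12/kWh.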
 
Looking for the cheapest way to get an enterprise-level server set-up at home.
What do you want to learn exactly?

If you want to learn how to apply firmware and watch how long they take to reboot, then get a physical server... otherwise you really don't need a 1U/2U server. A Raspberry Pi will work for your use case.

Can always go the VM route with VMware Workstation for Windows or virt-manager in Linux.
 
1U in the home is a no-go (at least a Dell 1950). Honestly, some OptiPlex with an i7 and 32GB of RAM will be faster. And if you want it in a rack, get a rack case for it or a rack shelf.

A Dell 1950 is a Core 2 CPU running DDR2. Have fun with 15-minute reboots and 60+ dB noise.
 
What do you want to learn exactly?

If you want to learn how to apply firmware and watch how long they take to reboot, then get a physical server... otherwise you really don't need a 1U/2U server. A Raspberry Pi will work for your use case.

Can always go the VM route with VMware Workstation for Windows or virt-manager in Linux.
Looking for the whole experience. I worked 10 years less than 10 ft from a server cabinet with something like this in it, so I'm not really scared of the noise, or the power consumption for that matter. I've been playing around in a VM on my desktop; just thinking something always on and designed to be a server would be a step forward. Also, an RPi will be considerably less powerful than the old Dell 1950 and, all told, not much cheaper.
I want something I can install some server software on: web server, iDempiere ERP, whatever else I come across. Planning to run a couple of VMs, maybe a personal cloud and a Plex server, etc.
 
Another option I've looked at is upgrading a gaming PC and putting its guts in a 2U case, but that is expensive. I can get the 1950 for less than $200 delivered, along with 2 CPUs, 32GB, and 2x 480GB 10k SAS drives. I planned to replace the drives with 500GB SATA SSDs.

Or I'll screw a quad-core Atom mini PC to the wall.
 
Looking for the whole experience. I worked 10 years less than 10 ft from a server cabinet with something like this in it, so I'm not really scared of the noise, or the power consumption for that matter. I've been playing around in a VM on my desktop; just thinking something always on and designed to be a server would be a step forward. Also, an RPi will be considerably less powerful than the old Dell 1950 and, all told, not much cheaper.
I want something I can install some server software on: web server, iDempiere ERP, whatever else I come across. Planning to run a couple of VMs, maybe a personal cloud and a Plex server, etc.

Any non-portable computer is designed to be always on; you're not going to get anything more with old server hardware than you can get with a decently built custom system. The reason you buy Dell/HP/etc. is for the support they're able to provide, something you don't need for a home lab. If you really want rack hardware, just buy a Supermicro barebones and populate it yourself.
 
Any non-portable computer is designed to be always on; you're not going to get anything more with old server hardware than you can get with a decently built custom system. The reason you buy Dell/HP/etc. is for the support they're able to provide, something you don't need for a home lab. If you really want rack hardware, just buy a Supermicro barebones and populate it yourself.
Price is dictating here. That was actually plan 1, before I started looking into the cost of old server hardware. The hardware features that are present to facilitate remote access are interesting. Fact is, the server I am looking at would cost me less than the case and PSU alone. I understand the noise concerns; I hadn't really considered that. Heat may also be a concern, 2 months a year.
Fact is, custom-built systems are expensive. I've been eyeing CPU/mobo combos on local marketplaces as well. A $50 mobo/CPU/RAM pull from someone would end this conversation.
 
Price is dictating here. That was actually plan 1, before I started looking into the cost of old server hardware. The hardware features that are present to facilitate remote access are interesting. Fact is, the server I am looking at would cost me less than the case and PSU alone. I understand the noise concerns; I hadn't really considered that. Heat may also be a concern, 2 months a year.
Fact is, custom-built systems are expensive. I've been eyeing CPU/mobo combos on local marketplaces as well. A $50 mobo/CPU/RAM pull from someone would end this conversation.

So you want Dell's DRAC; that's the most important thing here? What OS are you going to run: ESXi, Hyper-V, a Linux distro?
 
So you want Dell's DRAC; that's the most important thing here? What OS are you going to run: ESXi, Hyper-V, a Linux distro?
Ubuntu Server w/ KVM is the plan. Learning curve expected. Don't know what DRAC is; I'm guessing that's related to the remote hardware I was talking about. I will not have a spare monitor near this thing, so once set up, it will need to work for a long time, all the while I'm trying to break it with ignorance.
Price is the most important thing here, functionality second, but not far behind.

I'm currently leaning towards buying a cheap passive Celeron/Atom motherboard and a cheap 2U case... might be more fun, justifying the extra cost. That $50 APU combo is interesting me as well...
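
In case it helps picture the plan: once Ubuntu Server + KVM is up, most of the day-to-day would be virsh/virt-install, but here's a minimal sketch with the libvirt Python bindings just to show what talking to the hypervisor looks like. It assumes qemu-kvm, libvirt-daemon-system and python3-libvirt are installed and the user is in the libvirt group.

Code:
# Minimal libvirt sketch: connect to the local KVM hypervisor and list guests.
import libvirt

conn = libvirt.open("qemu:///system")       # default local system hypervisor URI
if conn is None:
    raise SystemExit("failed to connect to qemu:///system")

for dom in conn.listAllDomains():           # both running and defined-but-off guests
    state, _ = dom.state()
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(f"{dom.name():20s} {'running' if running else 'shut off'}")

conn.close()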
 
Old FX-based machines make great "servers."

What are you looking to get into?
I have an FX 8350e system that is still being used to game. Upgrading it is an option, but $$$$ is holding that back.
I want to make a test server for:
iDempiere ERP, web server, Plex server, personal cloud/shared workspace for some CAD I do on a small RAID 1 array. Thinking 2-3 VMs running inside Ubuntu Server, with the iDempiere ERP on its own VM because I expect to trash it regularly as I learn.
 
Looking for the whole experience. I worked 10 years less than 10 ft from a server cabinet with something like this in it, so I'm not really scared of the noise, or the power consumption for that matter. I've been playing around in a VM on my desktop; just thinking something always on and designed to be a server would be a step forward. Also, an RPi will be considerably less powerful than the old Dell 1950 and, all told, not much cheaper.
I want something I can install some server software on: web server, iDempiere ERP, whatever else I come across. Planning to run a couple of VMs, maybe a personal cloud and a Plex server, etc.

I said the same thing until I had 6x 80mm fans deciding my CPUs needed more cooling, and the fan whine was something I could hear 3 floors up and behind a closed door.

Careful.
 
The 1950 is an old paperweight, not worth it. Look for anything with a Xeon X series or higher, or even an E3 or E5 Xeon, and at least DDR3 RAM.
Example
https://www.ebay.ca/itm/HP-ProLiant-DL360e-Gen-8-G8-Server-2x-Intel-Xeon-E5-2430L-2-0-24GB-2x-300GB/114390348971?ssPageName=STRK:MEBIDX:IT&_trksid=p2057872.m2749.l2648
Decent enough price on that one....
Other comments have me wondering if I'd really be unhappy with the noise... No doubt this would perform better than that A6-6400K... Some further research shows the dual-core Celerons don't support virtualization, which leaves me back at a used server or old used crappy desktop gear.

What about this one?
https://www.ebay.ca/itm/IBM-X3650-M...rganic&brand=IBM&_trksid=p2047675.c100623.m-1
 
Ahh, you're Canadian. Yeah, prices are going to be bonkers for you, mate. Although that price there, if you insist on it being in a rack-mountable enclosure, isn't so bad. The shipping sucks, but not bad on the rest.
 
I think I would prefer a rack; that way I can better organize my shit. Currently I don't like the mess of cables and stuff basically sitting on the floor.
 
auntjemima, the person also accepted my offer of $200 on it!

The noise is not bad, but I also only have mine running pfSense and it is in my basement. Due to the chips being L series they use less power, so until you start to either fill the thing with drives or put serious load on it, the noise is not awful.

The other option is to look for a used Dell T-series PowerEdge, which are tower servers. I got a Dell T620 and put in 2 Noctua coolers instead of Dell's, and it runs almost silent.

travm, that IBM is not a bad price either. Eventually you can change out CPUs if you need more cores, add more RAM as you find it, and there's room for more drives.
 
Hrm, never tried the Make an Offer on eBay. Maybe I'll give that a shot.
 
Just discovered blade servers. So many options.
I don't think blade servers really fit nicely in a rack. There are probably racks specifically for them, but I would stick to a server you can easily upgrade later to meet future needs.

As an example, I have a socket 2011 v3/v4 board, so I have upgrade options later on if I need to move on from my Haswell chips.
 
auntjemima, the person also accepted my offer of $200 on it!

The noise is not bad, but I also only have mine running pfSense and it is in my basement. Due to the chips being L series they use less power, so until you start to either fill the thing with drives or put serious load on it, the noise is not awful.

The other option is to look for a used Dell T-series PowerEdge, which are tower servers. I got a Dell T620 and put in 2 Noctua coolers instead of Dell's, and it runs almost silent.

travm, that IBM is not a bad price either. Eventually you can change out CPUs if you need more cores, add more RAM as you find it, and there's room for more drives.
From those two servers, you'd pick the HP because it has L series CPUs for less power consumption?
 
OP is late to the party.
There is a movement towards serverless for a reason.

If you must learn the minutiae of an underlying OS, I'd suggest getting the RHCE book and about anything more resource-capable than an X40 ThinkPad.
LPIC-1 is also an option, but RHEL is the standard.

GCP or AWS would be better, as you can tiptoe on the free tier running through workshops and get exposed to a lot of how n-tier web has moved on to different expressions of infrastructure. A bunch of us riddled the STH guy when he did his bare metal vs. AWS vid on YouTube. He sounds like he doesn't know much beyond one of the polo shirt guys that physically swap out failed components for licensed and contractually warrantied gear.

Vanilla Kubernetes is a good idea because it's the hotness: https://github.com/Praqma/LearnKubernetes/blob/master/kamran/Kubernetes-The-Hard-Way-on-BareMetal.md

Docker is useful because, as a common unit of packaging for software, it's very handy for us on the Ops side to understand.
It's one of the drivers of WSL.
A high-level overview of Microsoft's DevOps initiatives can be enlightening here.
They spent billions on Docker Enterprise, GitHub, the Canonical partnership, etc. for real reasons worth understanding.

The constants in computing are networking, virtualization, and being able to script.
Platforms evolve, or are replaced.
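
To make the Docker point concrete, here's a rough sketch using the Docker SDK for Python (pip install docker). The nginx image and the 8080 port mapping are just arbitrary examples, not anything specific to a particular setup:

Code:
# Quick Docker sketch using the Python SDK: pull a small web server image,
# run it detached with a port mapping, then tear it down.
import docker

client = docker.from_env()                  # talks to the local Docker daemon

container = client.containers.run(
    "nginx:alpine",                         # arbitrary example image
    detach=True,
    ports={"80/tcp": 8080},                 # host port 8080 -> container port 80
    name="lab-nginx",
)
print(container.short_id, container.status)

# ...hit http://localhost:8080, poke around...

container.stop()
container.remove()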
 
OP is late to the party.
There is a movement towards serverless for a reason.

If you must learn the minutiae of an underlying OS, I'd suggest getting the RHCE book and about anything more resource-capable than an X40 ThinkPad.
LPIC-1 is also an option, but RHEL is the standard.

GCP or AWS would be better, as you can tiptoe on the free tier running through workshops and get exposed to a lot of how n-tier web has moved on to different expressions of infrastructure. A bunch of us riddled the STH guy when he did his bare metal vs. AWS vid on YouTube. He sounds like he doesn't know

Vanilla Kubernetes is a good idea because it's the hotness: https://github.com/Praqma/LearnKubernetes/blob/master/kamran/Kubernetes-The-Hard-Way-on-BareMetal.md

The constants in computing are networking, virtualization, and being able to script.
Platforms evolve, or are replaced.
Not in my lifetime.
 
By a ton of power, you mean like $50 a year?
Loud, I suppose it's a 95W setup in 1U, so I guess it's probably going to be loud.
Spending an extra $500 to save $20 a year is not helping. The alternative is an Atom or Celeron mini-ITX screwed to the wall. Currently I'm playing in VirtualBox, but I'd prefer to be on something more "enterprise" and not on my gaming PC.

I also want it in my basement, and I'd rather have a rack than a shitty old PC on the floor. If the consensus is I'm further ahead with an Atom and a 2.5" SSD, then that is option 2.
Right on! There are so many people out there that spend $500 to save that $20/yr in power... and then sell off the server later when they get cash-strapped--and this for some reason is still the promoted thinking in homelabs. Whatever--I'm with you. And instead of the 1950, check out the 2950--it gets absolutely no love and is a great server. The HP twin of it, the DL380 G5, is also solid (I have both). For a 1U from HP, it's the DL360.
 
1U in the home is a no-go (at least a Dell 1950). Honestly, some OptiPlex with an i7 and 32GB of RAM will be faster. And if you want it in a rack, get a rack case for it or a rack shelf.

A Dell 1950 is a Core 2 CPU running DDR2. Have fun with 15-minute reboots and 60+ dB noise.
I have an R410 and R710 as well as LOTS of PCs--the 2950 and DL380 are no different than any of the other servers in terms of boot time or noise. In fact, I was expecting a huge difference in noise between the 2950 and R710 (as well as power usage), and it's only noticeable if you're listening for it. Power usage is like a 50W difference under max load and like 30W any other time. Again, it's great that people don't like these systems, because that makes them super cheap--got my first 2950 for $20 almost 5 years ago, the second fully loaded for $65 including a half rack and a monitor, and my HP for $65 including extra drives and a Windows Server license installed. Today, you can typically find these being thrown away or given away. I always opt for working hardware to be given away, as it's worth more to someone out there as a working machine vs. the scrap value.
 
Any non-portable computer is designed to be always on; you're not going to get anything more with old server hardware than you can get with a decently built custom system. The reason you buy Dell/HP/etc. is for the support they're able to provide, something you don't need for a home lab. If you really want rack hardware, just buy a Supermicro barebones and populate it yourself.
I've not found this to be true. Server hardware is ridiculously robust, and even consumer OSes will run without issues many times longer than they would on consumer hardware. There are a lot of concepts you learn with almost any generation of server hardware that you'll never learn with regular computer hardware--I still haven't learned all I can from mine.
 
You will remember the sound of datacenter hardware fired up outside of a datacenter.
Departmental boxes are basically desktops with validated hardware in them that often run in a different performance envelope than 24/7 gear.
What’s always been the case is that minding hardware isn’t much of a job.
What should become learning points is that server workloads are different than desktop workloads.
https://info.crunchydata.com/blog/postgresql-13-benchmark-memory-speed-vs.-tps

Minding the environment, ingress/egress connectivity, or application health is just the skin on top of the mug.
 
You will remember the sound of datacenter hardware fired up outside of a datacenter.
Yeah, and it's awesome!! Once you've heard that come from a computer, everything else sounds so p...pathetic.
 
Yeah, and it's awesome!! Once you've heard that come from a computer, everything else sounds so p...pathetic.
Your point about server gear longevity is important here, because E3 and E5 chipsets run forever as either desktop workstations or department servers.

Something to think about, since off-license gear will get dumped cheap.

We even got G6 HP rack gear in for experimental use because we don't care about support and it's internal use only. I've populated old Compellent shelves with SSDs for nearline cache use in test... also to carve out LUNs for faster container experiments or key-value store use. We actually comprehensively ran serverless build pipelines on that gear to see what resource contention we'd actually hit compared to monolithic pipelines.
 
somebrains, fact: on-prem / hybrid and HCI is still massive, and there has been a large migration back to on-prem / hosted datacenters again as many have gotten a bad taste around cloud services. I work in a company that has most of the top companies in our province as clients, and many of them still spend millions on on-prem physical hardware, often out of need, because the cloud cannot provide what they require without literally breaking the bank or allowing the flexibility they currently need for their environments. Serverless does have its place, but you are going to see on-prem / private-cloud-style / HCI going on for years to come.
 
Your point about server gear longevity is important here, because E3 and E5 chipsets run forever as either desktop workstations or department servers.

Something to think about, since off-license gear will get dumped cheap.

We even got G6 HP rack gear in for experimental use because we don't care about support and it's internal use only. I've populated old Compellent shelves with SSDs for nearline cache use in test... also to carve out LUNs for faster container experiments or key-value store use. We actually comprehensively ran serverless build pipelines on that gear to see what resource contention we'd actually hit compared to monolithic pipelines.
That's awesome to hear about those G6s still running. They're kind of an oddball to begin with, since the G7s were essentially similar performance, and because of that G6s are starting to get treated like the G5s. It's amazing how much life older SAS DAS units have too, especially for use as you've done. Dirt-cheap proof-of-concept stuff and still huge amounts of computing for a starter homelab.
 
somebrains, fact: on-prem / hybrid and HCI is still massive, and there has been a large migration back to on-prem / hosted datacenters again as many have gotten a bad taste around cloud services. I work in a company that has most of the top companies in our province as clients, and many of them still spend millions on on-prem physical hardware, often out of need, because the cloud cannot provide what they require without literally breaking the bank or allowing the flexibility they currently need for their environments. Serverless does have its place, but you are going to see on-prem / private-cloud-style / HCI going on for years to come.
The biggest reason I see mentioned for keeping on-prem is just the sheer cost of the cloud, which can be as high as 10x on-prem. With certain things it makes sense and even cents, but for a lot of things, it is still better to have it on site.
 
Cloud costs are directly proportional to technical debt.
If I have a million invocations a month with short-run functions, then I don't need to deploy code onto multiple servers or instances in and around application perimeters, especially for triggered events or essential CRUD functions linking applications together. I can use a key-value store rather than an ever-expanding SQL DB fed by streamed data post-transform, and use endpoint-triggered logic rather than monolithic middleware. I can set static assets as an origin, with server-side code rendered client-side, and essentially eliminate my web presentation tier servers. If my logging doesn't need to be as granular as a Palo Alto solution, I can trigger actions off a runbook made from known API calls rather than have humans sit 24/7 watching alarms and then have to rely on them to act in the best manner in a given situation. Orchestration means I can look at time-series scaling of negotiated resources, 8am-8pm known-workload scaling, and pay for my enterprise licenses during my actual workloads. Negotiating known resource use often reduces my bills by 50%, because I'm not relying on a high-level "scale on demand as I need it" without knowing the demands on my applications. Truly web-facing unknowns like mobile app traffic mean scaling everything from near nothing up to steady state to get metrics for known-state negotiation.

There's a lot to move away from in a 2008-or-earlier idea of how applications should run.

Even for something as simple as building and deploying code, I can avail myself of tools a la carte as I need them, billed by the minute. I'm not locked in because the company bought a package from one vendor. I can test for needs today, and on the next project snapshot the pipeline and build out a new one as needed. Whatever is committed, I have a streamlined and purpose-built chain, rather than one enterprise solution plus add-on packs of plugins.

You start with one server; some day you have rows that turn into floors. Then you need vastly more for a couple of months, maybe. You generate the hard data to path yourself forward to committing to a deployment, and you move on. You control the flow of your workloads, so if something is under 1% utilization over x time period, snapshot it and put it to sleep so it doesn't bloat your bill.

Serverless doesn't mean no servers, but scale beyond the latency of lighting up a row's worth of compute + storage + applications + time to get it into a processing state. It means you can't afford the 20+ minute latency anymore; that much compute needs to be online. It means you need to know how to exceed limits in the most cost-effective manner possible. If I deliver 100 times more work in a given time period at 50% less cost and with no extra people, then don't fixate on how much the bill is. I just made the company a ton more $.

But yeah, I started with n+1 castoff 286s back in 1996 on a 128k ISDN line and a Slackware Linux distro. So 3 servers. One day I found myself using machine learning tools I didn't fully understand to trace 3.5 million transactions per hour costing $5,800 per hour. It's all Linux, some code, some bandwidth, storage, multiple days. Nothing really changes.
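
To make the "short-run functions plus a key-value store instead of middleware" idea concrete, here's roughly what one of those triggered functions looks like: an AWS Lambda handler writing to DynamoDB. It's only a sketch; the "orders" table name and the event fields are made up for illustration.

Code:
# Sketch of a short-run serverless function: an AWS Lambda handler that takes
# an API Gateway proxy event and writes one record to a DynamoDB key-value table.
# The "orders" table and its schema are hypothetical.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")            # hypothetical key-value table

def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    table.put_item(Item={
        "order_id": body["order_id"],       # partition key (assumed schema)
        "status": body.get("status", "new"),
    })
    return {"statusCode": 200, "body": json.dumps({"ok": True})}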
 
I've not found this to be true. Server hardware is ridiculously robust, and even consumer OSes will run without issues many times longer than they would on consumer hardware. There are a lot of concepts you learn with almost any generation of server hardware that you'll never learn with regular computer hardware--I still haven't learned all I can from mine.
Right on! There are so many people out there that spend $500 to save that $20/yr in power... and then sell off the server later when they get cash-strapped--and this for some reason is still the promoted thinking in homelabs. Whatever--I'm with you. And instead of the 1950, check out the 2950--it gets absolutely no love and is a great server. The HP twin of it, the DL380 G5, is also solid (I have both). For a 1U from HP, it's the DL360.

If you're buying old G5s, G6s and G7s, the hardware is already 10-15 years old; even if it is ridiculously robust, at that age it doesn't make it any more reliable. Plus you can't run the latest OSes: you're stuck at ESXi 5.5, except with the G7 you can go to 6.0. Windows Server support is the same, you're stuck at 2012 R2.

I've had my i5-3570K-based mITX system running for 8 years now as a NAS. Sure, I could have bought a used HP DL380 G3 at the time, but it would be downright ancient at this point and worthless for running any modern OS. I have stacks of G5s and G6s sitting on a back shelf in the warehouse, and there's no way I'd ever boot one up unless I needed to restore some old archived system for a business need. Don't waste your time with old hardware; you'll just be upgrading it every couple of years because it ages out so quickly.
 