New server build, looking for recommendations.

Deimos

[H]ard|Gawd
Joined
Aug 10, 2004
Messages
1,166
I'm thinking about replacing our aging server at work. It is still fit for purpose but is very long in the tooth and built with off-the-shelf PC parts;
Xeon X5670
Asus P6X58D
48GB ram
743GB of storage (a couple of SATA ssds and a pair of WD Raptors).
Running Windows Server 2012 R2, with one VM running the same OS as a domain controller. We work with a lot of small files, and our client database is around 34GB.

I want something modern that won't break the bank, mainly for the newer storage technologies such as M.2, and maybe a couple of SSDs mirrored for less frequently accessed data. I would also be keen to have ECC RAM, but is it actually necessary for a basic file server/domain controller?

Looking for recommendations about where to go from here. I don't need a massive core count but the old server is feeling a bit sluggish so I'm open to suggestions.
 
I am thinking a bunch of new drives and a bit more RAM would make it feel a lot faster. 1TB SSDs are getting pretty affordable.
 
DDR4 is sooo expensive now.

AFAIK you can use modern NVMe drives on old mobos like that, as long as you don't boot from them and run a newish OS. If you don't need more CPU grunt, maybe you could just grab a 1TB PCIe SSD, use your old storage for backup, and call it a day.


EDIT: Otherwise, maybe a Threadripper build, since it's cheap, has a lot of PCIe lanes and supports ECC? Xeon Scalable or Epyc seems like overkill for you unless you need a ton of SSDs or future expansion potential, and Intel's smaller Xeons just aren't very feature-rich.

You could grab some of those new x16 M.2 adapters for SSDs:

https://www.anandtech.com/show/12987/msi-four-way-m2-pcie-card-it-looks-like-a-gpu
 
I should probably set up some performance counters to see just how utilized this server actually is. I just checked Task Manager and it's currently sitting at around 6.2GB of RAM in use, so more RAM is not going to make a lick of difference.
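For the record, Windows ships a command-line front end to those counters (typeperf), so a baseline is a one-liner. A minimal sketch that builds the command; the counter paths are the stock PerfMon ones, but the interval and duration are just my guess at a reasonable working-day sample:

```python
# Sketch: build a typeperf command line to log a CPU/RAM/disk baseline.
# Counter paths are standard Performance Monitor counters; the 15 s
# interval and 8 h duration are assumptions, tweak to taste.
counters = [
    r"\Processor(_Total)\% Processor Time",
    r"\Memory\Available MBytes",
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
    r"\PhysicalDisk(_Total)\Disk Transfers/sec",  # rough total IOPS
]

# Every 15 s for 8 hours = 1920 samples, written to CSV for Excel/PerfMon.
cmd = ["typeperf", *counters, "-si", "15", "-sc", "1920", "-f", "CSV", "-o", "baseline.csv"]
print(" ".join(f'"{c}"' if " " in c else c for c in cmd))
```

Run the printed line in an elevated prompt and let it sit for a day; the disk queue length and transfers/sec will tell you whether storage is actually the bottleneck.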

I have an Intel 400GB 750 series PCIe SSD just lying around; I never put it in the server due to having no redundancy. I don't think it will be an issue though, as we run backups twice a day. I might shove it in there and set up teaming on the dual NICs to see how much difference it makes.

My main concern is the age of this thing. It's officially 8 years old as of June (apart from the PSU, which is much newer), and I'm wondering about the likelihood of failure at that age.
 
By Server 2012 R2 running a VM, do you mean Hyper-V?

Sounds like you're not hitting any of the resource caps. Any other reason for the upgrade, or do you just want to reduce the risk of failure?
For that small amount of storage you could easily go all-SSD, replacing those Raptor drives. DDR4 and a newer processor line isn't really going to give you that much more speed, especially if you're not resource-capped.

Next steps (if you aren't already there) would be to get all your data into a redundant RAID (5 or 6), followed by a reliable backup (which you said you have).
Perhaps an enterprise-level server would be next for reliability and added redundancy.
It wouldn't be newer, but you can pick up the Dell R710/R720 lines pretty cheaply to get a second DC as a secondary authentication target.
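RAID 5's redundancy is just XOR parity across the stripe, which is why losing any one drive is survivable. A toy Python sketch of rebuilding a dead drive's block (illustration only; it ignores the parity rotation and everything else a real array does):

```python
from functools import reduce

def parity(blocks):
    """XOR equal-sized blocks together byte-by-byte to form the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# Three "drives" worth of data plus one parity block.
d0, d1, d2 = b"small", b"files", b"12345"
p = parity([d0, d1, d2])

# Drive 1 dies: XOR the survivors with the parity block to rebuild it.
rebuilt = parity([d0, d2, p])
assert rebuilt == d1
```

RAID 6 is the same idea with a second, independently computed parity block, which is what lets it ride out two simultaneous failures.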

How's your network load, and what kind and size of user base is accessing the server?
 
Network load is quite low. I think data access would benefit from a high-IOPS drive rather than raw speed, as we work with a lot of small files. The Intel 750 SSD I have has some decent IOPS, but I would be giving up RAID 1 redundancy. I don't need a second DC, as I have one at home and a 100MB/s VPN tunnel connecting my home to the office. We also run DFS, and connecting to an off-site server isn't even noticeable over the VPN; it's just like accessing the files locally, our routers kick ass. Average latency over the tunnel is 3ms, which is nuts.

It's probably just the upgrade itch...
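The small-file theory is easy to sanity-check before spending anything. A crude Python sketch that times a batch of small random-order reads; it creates its own temp files (so run it pointed at the actual data volume to be meaningful), and OS caching makes the numbers optimistic:

```python
import os
import random
import tempfile
import time

def small_file_read_test(dirpath, n_files=200, size=4096):
    """Create n small files in dirpath, then time reading them back
    in random order. Returns rough reads/sec (cache-warm, so optimistic)."""
    paths = []
    for i in range(n_files):
        p = os.path.join(dirpath, f"f{i}.bin")
        with open(p, "wb") as f:
            f.write(os.urandom(size))
        paths.append(p)
    random.shuffle(paths)  # random order defeats sequential readahead a bit
    start = time.perf_counter()
    for p in paths:
        with open(p, "rb") as f:
            f.read()
    return n_files / (time.perf_counter() - start)

with tempfile.TemporaryDirectory() as d:
    print(f"~{small_file_read_test(d):.0f} small reads/sec")
```

If the Raptors fall off a cliff here while the 750 doesn't, that's the IOPS gap showing up.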
 
You can passively mirror stuff to your old drives via software if you use the 750.

Unlike with RAID 1, you might lose a bit of data if the 750 suddenly fails. But losing a few seconds or minutes of writes, given how remote the chance of failure is, isn't a big risk anyway.
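For what it's worth, that software mirror only needs to be a few lines on a scheduled task. A rough Python sketch of a one-way sync (paths and the pruning policy are assumptions; on Windows, robocopy /MIR does the same job natively):

```python
import filecmp
import os
import shutil

def mirror(src, dst):
    """One-way sync: copy new/changed files from src to dst, prune extras.
    Comparison is shallow (size/mtime), which is fine for a periodic mirror."""
    os.makedirs(dst, exist_ok=True)
    cmp = filecmp.dircmp(src, dst)
    for name in cmp.left_only + cmp.diff_files:   # new or changed on src
        s, d = os.path.join(src, name), os.path.join(dst, name)
        if os.path.isdir(s):
            shutil.copytree(s, d, dirs_exist_ok=True)
        else:
            shutil.copy2(s, d)
    for name in cmp.right_only:                   # deleted on src: prune mirror
        d = os.path.join(dst, name)
        shutil.rmtree(d) if os.path.isdir(d) else os.remove(d)
    for name in cmp.common_dirs:                  # recurse into shared subdirs
        mirror(os.path.join(src, name), os.path.join(dst, name))
```

Run it every few minutes via Task Scheduler and the exposure window shrinks to whatever interval you pick.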
 
Honestly, if you have a Micro Center around, the 16C/32T Threadripper chip is a beast at $630. You would be looking at maybe 30% more IPC per core vs Nehalem. DDR4 is expensive though. You would gain native x4 NVMe drives. I would personally look for enterprise-level drives, as they tend to have better wear leveling and capacitors for power-failure events.

It would be interesting to see where your bottleneck is actually happening though.
 
If it's a server, I'd look into the Dell and HP servers that are on eBay. These platforms are built for serving and are built extremely well (reliable). I got some cheap from the late 2000s, and even today they're still pretty potent. (y)
 
I'm about to replace my home server R710 with an HP DL380, if you're looking for a lot of power in a legit server without breaking the bank. I'm tempted to just go Threadripper and a 4U case though, to be able to do anything. Getting 100GB of DDR4 will suck, though. 64GB gets pretty expensive, and I can probably make do with that, but we'd be talking about wrecking the budget, for me at least.
 