Server build for work

pillagenburn

So I got some budget approved at work to replace some old Intel hardware running a 7200 RPM SAS drive. This old hardware is running a key database server. I got here not too long ago and said pretty much from day one that this thing needed to go.

I figured I'd just go to HP or Dell and grab a modest dual Epyc server with a bunch of RAM and be done... then I saw what my budget was.

$5,000... maybe a little more... and they already don't like that number. Basically bordering on poverty tier. Likely because this is mid-year and wasn't planned for in the budget, so yeah...

The other problem is that this server is getting hammered now and will be getting hammered vastly more going forward.

So they floated the idea of me actually building the server... which I'm not 100% sure I'm cool with, as I can't warranty the thing, so I might get a local system builder to assemble it from a bucket of parts for a modest fee.

Specs:

Supermicro H12DSi: $750
Dual Epyc 7282 (32c/64t total): $1,300
256GB ECC DDR4-3200: $1,300
Supermicro case w/ redundant power supplies: $300
Dual 240GB SSDs, RAID-1 mirrored, for the ESXi boot
...and lots of NVMe drives.

So my main question is... in order to get this in budget I need a server-grade M.2 bifurcation card - any suggestions on that? Also, could FireCuda 530s be used here?
 
Honestly, any bifurcation card would work fine in this setting provided you don't need anything weird like hardware RAID. Just be careful to note how many of the PCIe slots can actually be split at the same time. Sometimes you will run into issues trying to run more than one slot in x4/x4/x4/x4 mode.

I would say just build it yourself; a local builder isn't going to be able to help much if a problem does arise, and it's better for the company to just replace the needed part. Neither option will come close to vendor support, but if you get it up and running properly you should be pretty set.
 

Thanks, and no, I don't need hardware RAID... but the other issue is that I'm going to have to ship this server out to another state and I won't have physical access to it... so yeah...

Also, any idea if a FireCuda 530 would be suitable for a database server?
 
For shipping, put it on a pallet and send it freight. It will only be a few bucks more and the chances of breaking/losing it are reduced.

Are you trying to use the 530s as the RAID-1 for boot, or as drives for the database? If for the database, how many drives are you looking for and how do you plan on configuring them?
 

Thanks for the input, I appreciate it.

The RAID-1 will be a pair of inexpensive Samsung 870 EVOs running as the ESXi boot and probably housing backups.

The 530s would be independent database drives, and I did not plan on putting them in RAID. I was going to separate out tempdb and the other databases onto individual drives and just maintain very good backups, as budget is tight.
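
For reference, the tempdb move itself is just a couple of ALTER DATABASE statements once the drives are in. A rough sketch of the kind of thing I'd script (assuming SQL Server and pyodbc; the server name, drive letter, and paths below are made up, and the new paths only take effect after a service restart):

Code:
# Rough sketch: point tempdb's data and log files at a dedicated NVMe drive.
# Assumes SQL Server reachable via ODBC; the server name, drive letter, and
# logical file names below are hypothetical -- check sys.master_files first.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dbserver01;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # ALTER DATABASE shouldn't run inside a transaction
)
cur = conn.cursor()

# Move the default tempdb files; add more MODIFY FILE calls if tempdb has
# extra data files. The new paths take effect on the next SQL Server restart.
cur.execute("ALTER DATABASE tempdb MODIFY FILE "
            "(NAME = tempdev, FILENAME = 'T:\\tempdb\\tempdb.mdf');")
cur.execute("ALTER DATABASE tempdb MODIFY FILE "
            "(NAME = templog, FILENAME = 'T:\\tempdb\\templog.ldf');")
conn.close()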
 
Should be fine as long as you take appropriate steps backing up the needed data. Maybe consider two larger SSDs as a boot/active backup drive in addition to regular external backups.

Consumer NVMe drives should work very well from a performance standpoint. I also like the nicer Inland NVMe drives for cheap, high-capacity drives with decent endurance.

Do you have a ballpark for how much space you are looking for?
 
I was thinking at least 1TB per database with at least 5 databases, so 5TB or more. I'm thinking the Asus Hyper M.2 card with FireCuda 530s might be the way to go...
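
Rough capacity/endurance math for that card (just a sketch; the per-drive capacity, rated TBW, and daily write figures are placeholders to swap for the real spec-sheet and workload numbers):

Code:
# Quick sanity check on capacity and write endurance for a 4-drive M.2 card.
# All inputs are placeholders -- plug in the real spec-sheet TBW and your
# measured daily write volume before trusting the output.
DRIVES = 4
CAPACITY_TB_PER_DRIVE = 2.0      # hypothetical drive size
RATED_TBW_PER_DRIVE = 2550       # hypothetical: check the vendor's TBW rating
DAILY_WRITES_TB_PER_DRIVE = 0.5  # hypothetical: tempdb churn, index rebuilds, etc.

total_capacity = DRIVES * CAPACITY_TB_PER_DRIVE
years_of_rated_life = RATED_TBW_PER_DRIVE / (DAILY_WRITES_TB_PER_DRIVE * 365)

print(f"Usable capacity (no RAID): {total_capacity:.1f} TB")
print(f"Rated write life per drive: ~{years_of_rated_life:.1f} years")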

Also, I need to track down a good, inexpensive tower case with redundant power supplies.
 
If your company wants a service contract behind it, you could give Thinkmate a holler too; they're a Supermicro/Asus/Gigabyte reseller. I've not used them personally before though. Intel also has a million small integrators around the country, but I've had some poor experiences with their server boards in the past, so honestly it'd be my last choice.
 
I have also had underwhelming or poor experiences with Intel server boards. Supermicro has personally been very consistent and decent.

You should be fine with that card and those NVMe drives. I think it would be reasonable to match the capacity of the NVMe drives with RAID 0 SATA drives as a local backup. Servers can often be abused with the expectation that they will catch any faults, and NVMe drives can eventually fail in this workload.

You're going to have more luck with redundant PSUs and a rackmount case. Last I looked there weren't too many tower options, and I ended up mounting a dual-PSU enclosure into a standard tower case, which I cannot recommend for this project.
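
The local backup side can be as dumb as a scheduled copy of the backup files over to the SATA volume. Minimal sketch, with made-up paths:

Code:
# Minimal nightly copy of database backup files to a local SATA backup volume.
# Paths are hypothetical -- point them at the real backup and SATA mounts,
# and schedule this with Task Scheduler or cron.
import shutil
from pathlib import Path

SOURCE = Path(r"D:\sql_backups")      # where the DB backups land
DEST = Path(r"E:\local_backup_copy")  # the RAID 0 SATA volume

DEST.mkdir(parents=True, exist_ok=True)
for bak in SOURCE.glob("*.bak"):
    target = DEST / bak.name
    # Skip files already copied with the same size to keep the job quick.
    if target.exists() and target.stat().st_size == bak.stat().st_size:
        continue
    shutil.copy2(bak, target)
    print(f"copied {bak.name}")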
 
So other than Supermicro, there are Chenbro, In-Win, and Intel tower cases with redundant power supplies, but the first two seem to be lacking in availability. Intel has availability, but their towers have front panel plugs that are specific to Intel boards.

I would also like a U.2 NVMe backplane but I think that's asking waaaaaaaaaaay too much.
 
Ask your bosses how long they can afford to be down.
If they cannot answer that question, they should not be running a business.
Because $5k for a "critical database" server is a joke...

Sure, you can build it yourself, and the second it goes down they will be blaming you for it...
 

Truth.

Yes, you can build your own server-grade machine. Yes, most pre-built servers are just using off-the-shelf parts. But you don't want to be on the hook when you run into some edge-case incompatibility or have to chase down an OEM for warranty support while your server is down.
 
Bingo!
When Supermicro wants you to send in the entire mobo via RMA and several weeks later you still don't have it back...

Same with any part.

So if you do choose to build your own server, keep enough spare parts on site as well... now building your own is not so cheap.
 
With that budget I'd opt for something like RDS on AWS. An 8-core, 32 GB instance is about $400/month when used for 8 hours a day.

Forget hardware and software provisioning, support, and maintenance. Just live your life.
 
Have to use Azure unfortunately.

And this is an analytics DB. It will laugh at 32GB, unfortunately.
 
Azure boasts being cheaper than AWS with their Azure SQL Database, so it would be worth looking into.

32GB was an example. What you save upfront by building and supporting some welfare servers, you'll pay for (probably more) in the long term.
 
I'm going to have to disagree with many of the comments above. Once you get a server up and running it should be pretty easy to keep it going. And nothing will compare in value for the performance you can get.

The only thing I would take care noting is that repair costs would most likely be coming out of the company's pocket (if a motherboard dies, the RMA doesn't matter; they will shell out a few hundred for a new board to have it running ASAP). Additionally, the capabilities of the backups must be known by the users of the server, or improved, so the company can expect a similar degree of data integrity compared to a local enterprise OEM server.

Allow for some excess in build time. You could potentially deal with some weird compatibility issues (PCIe bifurcation has caused me trouble in similar scenarios). If they are happy running their current hardware until this build is ready and tested, it should go well.
 

I just priced out Azure. They would have a stroke if I asked them for this kind of monthly commitment.
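
Even just scaling the RDS ballpark quoted above shows why (rough sketch; it assumes cloud pricing scales roughly linearly with hours and instance size, which is optimistic):

Code:
# Back-of-the-envelope using the numbers from this thread:
# ~$400/month buys an 8-core/32GB instance for 8 hours a day,
# while this workload wants far more RAM and basically runs around the clock.
QUOTED_MONTHLY = 400        # from the RDS ballpark above
QUOTED_HOURS_PER_DAY = 8
HARDWARE_BUDGET = 5000      # the one-time budget for this build

# Hypothetical: assume cost scales linearly just to go 24/7 at the SAME size.
always_on_monthly = QUOTED_MONTHLY * (24 / QUOTED_HOURS_PER_DAY)
months_of_budget = HARDWARE_BUDGET / always_on_monthly

print(f"24/7 at the quoted instance size: ~${always_on_monthly:,.0f}/month")
print(f"The $5k hardware budget covers ~{months_of_budget:.1f} months of that,")
print("before even scaling the instance up to match 256GB of RAM.")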

I would like Epyc with PCIe 4.0 capability, but that doesn't seem like it's going to happen. I might have to settle for previous-gen Epyc with PCIe 3.0 instead.

Maybe PCIe 3.0 vs 4.0 doesn't make that much of a difference for database performance? I was looking at the Samsung PM9A3 but might have to go with the previous-gen equivalent.
 
So it sounds like you don't really have any actual idea what the performance requirements are for this system and are just picking at random? I'm guessing they don't have a software/DB admin for this system to help with the assessment?
 
He is looking at a pretty drastic improvement in performance with the proposed system.

The difference between x4 PCIe 3.0 and PCIe 4.0 will make very little difference for the drives being considered.

Seems more budget-constrained than anything, which is fine, just probably a bit different than what many of you are expecting when speccing out a server for a company.
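
To put rough numbers on the link speed (a sketch of the theoretical ceilings; real drives sit below these, and random database I/O is mostly latency-bound anyway):

Code:
# Theoretical usable bandwidth of an x4 link, using per-lane throughput after
# 128b/130b encoding: PCIe 3.0 ~0.985 GB/s/lane, PCIe 4.0 ~1.969 GB/s/lane.
LANES = 4
GEN3_PER_LANE_GBPS = 0.985
GEN4_PER_LANE_GBPS = 1.969

gen3_x4 = LANES * GEN3_PER_LANE_GBPS
gen4_x4 = LANES * GEN4_PER_LANE_GBPS

print(f"x4 PCIe 3.0 ceiling: ~{gen3_x4:.1f} GB/s")
print(f"x4 PCIe 4.0 ceiling: ~{gen4_x4:.1f} GB/s")
# Sequential scans and backups can notice the gap; small random reads and
# writes mostly won't, because they never get near either ceiling.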
 
Perhaps, but depending on the load, going wide may not help that much if they're thread-limited, for example; we don't know how this application runs.
 
True, it may be worth confirming the current bottleneck on the old system, as well as ensuring the load can actually use the threads on the Epycs, since a modern Ryzen would be leaps cheaper and faster for any single-threaded loads.

Although I believe the proposed system is a pretty balanced upgrade for nearly any workload, as it has loads of cache and cores, a focus on high-speed storage (at the cost of redundancy), and reasonable single-core performance compared to older Xeons.
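
A quick way to sanity-check that on the old box, as a minimal sketch (assumes Python and psutil can run on it during a busy period):

Code:
# Sample per-core CPU usage while the database is busy. One or two pegged
# cores with the rest idle suggests a thread-limited workload that wide Epycs
# won't fix; all cores busy suggests the extra cores will actually get used.
import psutil

SAMPLES = 12          # ~1 minute of samples
INTERVAL_SECONDS = 5

for _ in range(SAMPLES):
    per_core = psutil.cpu_percent(interval=INTERVAL_SECONDS, percpu=True)
    hot = sum(1 for pct in per_core if pct > 80)
    print(f"cores >80% busy: {hot}/{len(per_core)}  (per-core: {per_core})")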
 

I wouldn't say I'm picking at random. There are many things that need to be done, and no systems exist on which to do them:

- The issues I pointed out above (running databases on a 7200 RPM SAS drive)
- There is basically no test/sandbox environment for anything (one sort of exists for certain parts of dev)
- There have been a number of instances where contractors needed VMs to do actual dev work - I could hear crickets in the meeting.
- And on and on and on...

So we're talking code compiles, test/QA, remote access, etc.

I convinced them to get a Dell R6525 with dual 7282s, 256GB RAM, PM9A3 SSDs, and a RAID-1 SSD pair for ESXi and local backup... with this Retbleed news maybe they need to spring for 7313s, but I doubt it's going to matter.
 
Oh, and no, no DBA... Oh wait... I guess that's me! Who knew? I took this job thinking it would be more relaxed...

I was wrong...
 
Always good times lol, hope the new box kicks ass. I've seen a few times in the past where clients put in new hardware and didn't get any performance increase; that is always kind of awkward. One time a client switched from a few DL380 G5 servers to an Intel Modular Server with a few blades and their performance actually went down (this was hosting a Dynamics CRM setup).
 