"budget" home cluster

airthimble (n00b) | Joined: Jun 29, 2012 | Messages: 61
I'll be finishing my Ph.D. within the next year, and afterwards I won't have access to a valuable cluster resource (10,000+ cores, etc.). I'd like to explore the option of putting together a small home cluster (I'm thinking something around 4-5 nodes, 16 cores per node). I'm looking for some guidance from those of you with experience with old server hardware.

1. I'm planning that each node will have 2 E5-2670 chips, unless there is something else with better value out there?

2. Would I be better off buying the mobo, PSU, CPUs, and RAM off eBay and stuffing them in a rackmount case, or is it way more economical to find complete servers?

3. I'm not very familiar with server hardware, and I notice that there are a lot of proprietary parts. How difficult is it to gather up parts that are compatible with each other? For example, does Supermicro change the size/pinout of their PSUs once every ten years or every six months? If I pick up a Supermicro mobo and PSU that work together, can I purchase a generic rackmount case and stuff them in, or am I limited to housing them in a specific Supermicro chassis? I guess the general question here is: are parts so proprietary and so varied within each brand that it's a huge pain in the ass to find everything individually?

4. If it's reasonable to start hunting for parts, I'd like some recommendations on motherboards to keep a lookout for on eBay. Something simple should be fine; I'd like the option to add InfiniBand cards to each node in the future, and support for two E5-2670s, but otherwise the focus is on keeping costs down.

Thanks!
 
Some used E5-2670s are in a pretty nice spot price/performance-wise, so that is not a bad choice.

It probably would be cheaper to buy whole servers off eBay, but as someone who has bought a lot of servers off eBay to run at home, I suggest you consider things like:
- Where will you set these up? Prebuilt servers are pretty much meant to run in a data center with a hot/cold aisle setup and fans blasting, which brings me to the next consideration.
- How much noise are you willing to tolerate? Some of these prebuilt servers are pretty loud, and if you don't do the proper research you can easily end up with one you can hear through doors/walls on the next floor. Everyone's tolerance for this is different, and it even changes over time. I have personally become less tolerant of it over the last 15 or so years of messing with used enterprise-grade equipment at home. These things are not built with people nearby in mind.

How easy it is to toss a part in a generic chassis depends on the part. Supermicro has some power supplies that are meant to work in certain chassis; they plug into a backplane-type setup, so you can't just move them to any case. They have some motherboards in proprietary form factors that will only fit in certain cases, and then they have normal ATX-style motherboards. You will have to look into each part you are interested in.

Depending on how far out the build is, you can start looking now to get an idea of what things cost and what you want. But I would not buy until you are closer to actually doing the build, unless you find a really good deal. Prices may continue to fall if you are half a year out.
 
Thanks for the advice, very helpful! I was hoping it would be a little easier to transplant things so I could get bigger heatsinks and keep the fan speeds and noise at a tolerable level.

I was initially thinking I would put them in my home office, but with complete servers I don't think I could tolerate the noise in the same room.

It may very well be that I have to rent some Amazon instances or set up a dual-CPU workstation to tide me over until I have enough saved for a house. :(
 
What about Nested Virtualization? Do you really need that much horsepower for a home cluster?
 
Cloud virtualization is probably your best and least expensive option.

I'll have to look into that.

What about Nested Virtualization? Do you really need that much horsepower for a home cluster?

Unfortunately yes, the problems I work on boil down to large scale optimization. The smaller problems I've been working on lately can take several days to run on 200 cores.
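
To give a rough idea of the shape of the work (this is a toy sketch, not my actual code): each MPI rank evaluates its own slice of candidate solutions and rank 0 collects the best one. It assumes mpi4py and NumPy are installed, and evaluate() here is just a stand-in for the expensive objective function.

Code:
# toy sketch of how the work fans out across ranks (not my real code)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def evaluate(x):
    # placeholder objective; the real one is where the CPU-days go
    return float(np.sum(x ** 2))

# each rank searches its own chunk of candidate solutions
rng = np.random.default_rng(seed=rank)
candidates = rng.standard_normal((1000, 50))
local_best = min(((evaluate(x), x) for x in candidates), key=lambda t: t[0])

# rank 0 gathers every rank's best candidate and reports the overall winner
all_best = comm.gather(local_best, root=0)
if rank == 0:
    best_score, _ = min(all_best, key=lambda t: t[0])
    print(f"best objective across {size} ranks: {best_score:.4f}")

Something like "mpirun -n 16 python sweep.py" (sweep.py being whatever you name the script) spreads it across one node's cores, and with a hostfile it spreads across every node, which is why core count matters more to me than clock speed.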
 
Unfortunately yes, the problems I work on boil down to large scale optimization. The smaller problems I've been working on lately can take several days to run on 200 cores.
What is your field of research, and what kind of optimizations are you doing? As a tech nerd, I get curious. :)
 
I have a single 2U server (HP DL380P G8) in the basement, and you can hear it all throughout the house if the fans pick up. It's usually pretty quiet, but then again that definition tends to vary.

1U servers are screamers. I had a few that I had to get rid of because they were so damn whiny. 2U+ seems to be your best bet if it's going to live in your livable space.

I second the cloud suggestion, though. You should at least look at what the cost might be compared to buying / shipping / storing / powering your own gear.
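
The break-even math is easy to sketch, something along these lines (every number below is a made-up placeholder; plug in your own electric rate, hardware quotes, and current cloud pricing):

Code:
# back-of-envelope home cluster vs. cloud comparison; all values are placeholders
POWER_W         = 4 * 350     # rough draw of 4 dual-socket nodes under load (guess)
KWH_RATE        = 0.12        # $/kWh, made-up local rate
HOURS_PER_MONTH = 200         # hours/month the jobs actually run
HW_COST         = 2500        # used hardware up front (guess)
CLOUD_RATE      = 0.05        # $/vCPU-hour placeholder, check real pricing
VCPUS           = 128         # cores you'd rent to match the cluster

home_monthly  = POWER_W / 1000 * HOURS_PER_MONTH * KWH_RATE
cloud_monthly = CLOUD_RATE * VCPUS * HOURS_PER_MONTH

print(f"home power: ~${home_monthly:.0f}/mo plus ${HW_COST} up front")
print(f"cloud:      ~${cloud_monthly:.0f}/mo, nothing up front")
months = HW_COST / max(cloud_monthly - home_monthly, 1e-9)
print(f"hardware pays for itself after ~{months:.1f} months at this utilization")

The crossover depends almost entirely on utilization: if the boxes sit idle most of the month, cloud wins; if they run flat out, owning the hardware usually does.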
 
Holy jeebus! I thought their S2600 + 2x 2670 + 128GB bundle for ~$460 was cheap already! :eek:

Who needs Kaby Lake/Skylake-E or Ryzen? Just get one of those clusters, add some storage, and you're done.
Can't imagine what 160c/320t of Sandy Bridge goodness would do to Handbrake :p

Make your power company smile?
 
I've been buying up NUCs. I know they don't have ECC or many cores, but they're quiet and self-contained.
 
I remember hitting up a few local tech meetups with folks showcasing Microsoft's "High Performance Computing" (HPC) and it was damn snazzy. I believe it was an add-on to Windows Server 2008 and later. The guy had a whole mess of random old machines, CL-bought machines, etc., all networked up and sharing the load.

It may be worth checking that out, or looking at GPU clustering, depending on what you plan on using it for.
Example: cuda cluster
https://devblogs.nvidia.com/parallelforall/how-build-gpu-accelerated-research-cluster/
http://www.nvidia.com/object/tesla_build_your_own.html
 
Thought I would check in and give an update as this thread is getting some recent replies.

Thanks for the suggestions everyone. I was looking at Amazon EC2 instances for a while, but it gets really expensive quickly. I also looked at some 1U servers and I'm pretty sure my wife would kill me with how loud they are. In the past I had used a Tesla K20 card for computations, they work great for specific tasks but aren't general-purpose enough for some of the stuff that I do.

I ended up putting together a small test machine that I've been using over the past couple of months; it has dual Xeon E5-2683 v3 CPUs and 8x8GB DDR4 RDIMMs. So far I'm pretty happy with it. The cores are kinda slow at 2.0 GHz (2.5 GHz turbo), but there are 56 logical cores, so it chews through work at a decent clip. I've got it housed in a 4U case, and noise-wise it's nice and quiet!

I noticed used 10Gb NICs have gotten dirt cheap, so I added a dual-port 10Gb card to my storage server and single-port 10Gb cards to the two machines I run experiments on. Each experiment machine connects directly to one port on the storage server, so I don't need a switch. Right now I'm thinking that I'll eventually add a couple more dual E5-2683 v3 machines in the future.
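
For anyone curious about the switchless setup: each direct link sits on its own small subnet. To sanity-check a link I use something like the quick Python sender/receiver below; iperf3 does the same job better, and the port number here is just a placeholder I picked for my setup.

Code:
# quick-and-dirty throughput check over one point-to-point 10Gb link
# run with no arguments on the receiver, and with the receiver's IP on the sender
import socket, sys, time

PORT  = 5201        # arbitrary test port
CHUNK = 1 << 20     # 1 MiB per send
TOTAL = 4 << 30     # push 4 GiB total

def receiver():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.time() - start
            print(f"received {received / 2**30:.2f} GiB from {addr[0]} "
                  f"at {received * 8 / secs / 1e9:.2f} Gbit/s")

def sender(host):
    buf = bytes(CHUNK)   # a buffer of zeros is enough for a link test
    sent, start = 0, time.time()
    with socket.create_connection((host, PORT)) as sock:
        while sent < TOTAL:
            sock.sendall(buf)
            sent += CHUNK
    secs = time.time() - start
    print(f"sent {sent / 2**30:.2f} GiB at {sent * 8 / secs / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    sender(sys.argv[1]) if len(sys.argv) > 1 else receiver()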
 