HPC cluster and Hyper-V

pillagenburn

With an HPC cluster running Hyper-V on the head node, would the load generated by Hyper-V distribute over the other nodes in the HPC cluster? On Server 2012, or even 2008.

Googled the crap out of it and couldn't find an answer specific to this question, but I'm thinking it won't.
 
I would also be very interested in knowing the answer to this
 
Add me to the list of people who are interested in the answer... subscribed.
 
So you are asking if you can cluster together servers running Hyper-V to create one large VM, or am I thinking wrong?
 
I'm asking if you can create an HPC cluster from, say, a blade server with 4 nodes installed, and then install Hyper-V on the head node.

Then, in the head node's PHYSICAL environment, spin up ONE VM in Hyper-V to run folding.

My question is: would the load generated by folding inside that ONE VM (which was spun up on the PHYSICAL instance of Hyper-V) distribute to all of the compute nodes in the HPC cluster?

Simply put, can I distribute the load generated by Hyper-V across multiple blade servers in this scenario? I.e., effectively creating a single server, with 4x blades functioning as one machine running Hyper-V?
 
Simply put, can I distribute the load generated by Hyper-V across multiple blade servers in this scenario? I.e., effectively creating a single server, with 4x blades functioning as one machine running Hyper-V?
What is your hardware/firmware, exactly?

Windows HPC needs your hardware/firmware to appear as a single system before you can do what you've described.
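
To make that concrete with a quick, hypothetical illustration (not from the thread): an OS instance, and therefore Hyper-V and any VM it hosts, can only schedule work onto the logical processors that the hardware presents to it as one system. The sketch below assumes a Windows build environment and simply prints that local processor count, which is the ceiling for a VM placed on the head node unless the blades appear as a single system.

Code:
/* cpu_count.c -- hypothetical illustration, not part of the thread.
 * Prints how many logical processors the local Windows instance sees.
 * A VM on the head node is limited to (a subset of) this count;
 * compute-node CPUs are invisible without single-system-image hardware.
 * Build (example): cl cpu_count.c
 */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    printf("Logical processors visible to this OS instance: %lu\n",
           (unsigned long)si.dwNumberOfProcessors);
    return 0;
}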
 
tear - meaning connected together via something like ScaleMP or some other kind of vSMP product? Or am I misunderstanding?
 
Yes, or it's gotta be single system image hardware from SGI, Cray or similar.
 
My HPC & grid computing experience is limited to AIX and Linux, and has thus far always used Infiniband, usually on the GX+ or GX++ side of things, enabling CPU time to be shared across all the systems in the HPC/grid. I've always wanted to try folding this way, but it's not supported in AIX or Linux on Power, and Lx86 on Power has never worked right with Folding; otherwise I probably could have set a few new records over the past few years. Anyhow, I'm rambling....

Like tear said, it's going to need to look like one system in order to do what you've described.

There are PCIe x8 Infiniband adapters out there, but I've never used them in a Hyper-V setup.

Does it have to be Hyper-V?
 
Well, there is a free version of Hyper-V, so that could be part of the equation. In the past, the topic died once the cost of the hardware/software needed was added up.
 
Why cluster if you're just folding? Create a VM for each blade and run folding on each one.
 
Why cluster if you're just folding? Create a VM for each blade and run folding on each one.

I don't think he's after clustering exactly, but rather High Performance Computing, sometimes referred to as Grid computing.

My understanding of what OP is wanting to do...

Oversimplification: the idea is that you can take a bunch of nodes, tie them together using a high-speed bus interface (like Infiniband), and farm a workload out to the whole grid.


Oversimplified example: instead of having 20 systems each running a bigadv workload, you have one system of 20 nodes running one bigadv workload, resulting in some very sweet TPF numbers and an even sweeter bonus for very quick workload completion.

Maybe I'm misunderstanding the purpose of the OP, but I think that's the idea. Swap F@H for whatever program you're participating in, mind you.
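
As a rough sketch of what "farming a workload out to the grid" means in practice: the application itself has to be written to split its work across the nodes, typically with something like MPI over the Infiniband fabric, which is also why a plain VM won't spread on its own. The example below is hypothetical (not actual folding code) and assumes an MPI toolchain (mpicc/mpiexec) is available on the cluster; each rank works on its own slice of a dummy workload and rank 0 combines the partial results.

Code:
/* grid_sum.c -- hypothetical MPI sketch, not actual folding code.
 * Each rank (one per node/core in the grid) computes a partial sum
 * over its slice of the range; MPI_Reduce combines the results on
 * rank 0 over the interconnect.
 * Build/run (example): mpicc grid_sum.c -o grid_sum
 *                      mpiexec -n 4 ./grid_sum
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    const long N = 100000000L;              /* total work items */
    int rank, size;
    long i, lo, hi;
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Split the range into roughly equal per-rank slices. */
    lo = rank * (N / size);
    hi = (rank == size - 1) ? N : lo + (N / size);

    for (i = lo; i < hi; i++)
        local += 1.0 / (1.0 + (double)i);   /* stand-in for real work */

    /* Combine the partial results on rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d  result=%f\n", size, total);

    MPI_Finalize();
    return 0;
}

The point is that the split is explicit in the application; nothing in the hypervisor farms work out for you.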
 
I don't think he's after clustering exactly, but rather High Performance Computing, sometimes referred to as Grid computing.

My understanding of what OP is wanting to do...

Oversimplification: the idea is that you can take a bunch of nodes, tie them together using a high-speed bus interface (like Infiniband), and farm a workload out to the whole grid.


Oversimplified example: instead of having 20 systems each running a bigadv workload, you have one system of 20 nodes running one bigadv workload, resulting in some very sweet TPF numbers and an even sweeter bonus for very quick workload completion.

Maybe I'm misunderstanding the purpose of the OP, but I think that's the idea. Swap F@H for whatever program you're participating in, mind you.

Exactly. And I want to know if Hyper-V is written to take advantage of this sort of setup.

Look up the C6105 on eBay and you can see why I'm investigating this. These used "cloud" racks have fallen way off in price.
 
I wish I had the free time to work on this myself but I'm already spread incredibly thin on time with my existing projects.

With the right node communication network (again, I really think Infiniband is ideal, QDR 20 minimum) and the right control interface, you could get this working without too much fanfare.

Personally I would love to see this happen.
 
When I get my main box up and running, I'll try to mock it up in a virtual environment and see what happens... I just don't have the resources on my 12-core, 16 GB ESXi box.
 
I have 5 physical boxes that are the same, but no method to interface them.
 
Hi guys,
Do you know about the ScaleMP free edition? Any experience with it? It supports Infiniband HCAs as the interconnect, but only one HCA per node.

cheers,
 
20 Gb fiber cards are running in the $45 range... maybe less on eBay? This is doable in a semi-cost-effective manner, I think... it's simply a matter of figuring out what
 
Unfortunately it's just memory sharing/aggregation that they offer in the free tier, no CPUs (they call it "System Expansion"): http://www.scalemp.com/products/product-comparison/

Though ScaleMP looks great on paper, I have heard it falls short on performance unless you code the app(s) around the cluster setup. Might as well use MPI for that: same performance at 1/100000000000000000th of the price.
 
Wouldn't know about vSMP performance.

Though indeed, it is often cheaper to port your app to something like MPI.
 