Distributed computing software

aye29
Weaksauce
Joined: Jun 20, 2006
Messages: 76
At school, I work in a lab that does a bit of computational fluid dynamics (CFD) work, which takes a long time on a single computer. There are a lot of other computers in the lab that are just used for general everyday tasks. Is there any way to use the spare computing power of those computers to help with the CFD calculations? All the machines run Windows XP.
 
If the person who's in charge of that lab and those PCs okays it, then you can run distributed computing apps on all the idle computers. They will draw more power and throw off more heat, though.
 
NoEcho seems not to have grasped the meaning of the question, but I do. I don't actually know of any ready-made software for distributing general computing power across a group of Windows machines. However, I'd suggest you look at some form of cluster computing, like the Beowulf project for Linux or anything similar. Those systems let you run intensive math computations, like CFD in your case, across many machines.

The DC forum is more geared toward distributed computing using software provided by universities to help with medical research; some of these are Folding@Home, Rosetta@Home and other similar projects. You may have better luck asking this in the programming or operating systems (Linux sub-forum) sections. Or even in the multiprocessor systems forum, since a Beowulf cluster is technically a multiprocessor system (Hollywood studios often use big rendering farms built from A64 boxes running in parallel, hence my suggestion).

Hope my answer will guide you on your quest, and it will surely be an interesting one :)

 
What specific program are you using? Some CFD packages have the ability to run those types of computations in parallel built in; it would just be a matter of setup.
 
It depends on what software you are using. I use Fluent at work, which is specifically designed and sold with the intention of running it across a cluster. If you are using something that isn't designed for it, then the simple answer is no, you can't, not really.

There are a few things to consider. First, what size are your cases? I'm solving meshes of around 8 million cells in roughly 24 hours, but that's on a cluster I built specifically for this: 4 nodes in 2 boxes (Core 2 Duos) and plenty of RAM. If you are only doing around 500,000 cells, multiple processors will not help you much, because of how domain decomposition works. When you split the domain to run in parallel, each node does an iteration on its own subdomain and then swaps notes with its neighbours about the interface values before starting the next iteration. At some point the overhead of that exchange starts to outweigh the speed gained.

Also, if your processor is already waiting on something else, there's no point in splitting the solve. The easiest/best way to speed up a solve is to find the bottleneck and fix it. If your RAM is slow, or you are tapping it out and hitting the page file, fix that first. (Of course, if the case is too big for one box's RAM, then running in parallel is a good solution.)
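A quick back-of-envelope check for the "does it fit in RAM" question is often enough to spot the paging problem. The ~2 KB/cell figure below is purely an assumed placeholder; real per-cell memory varies widely with the solver, the models enabled, and precision, so check your own package's numbers.

```python
# Rough RAM-fit estimate for a mesh. bytes_per_cell is an assumption for
# illustration only; substitute a figure measured for your actual solver.

def case_fits_in_ram(cells, ram_gb, bytes_per_cell=2048):
    """Return (estimated GB needed, whether it fits in the given RAM)."""
    needed_gb = cells * bytes_per_cell / 1024**3
    return needed_gb, needed_gb <= ram_gb

needed, fits = case_fits_in_ram(8_000_000, ram_gb=16)
# 8M cells at ~2 KB/cell is around 15 GB, so a 16 GB box squeaks by,
# while an 8 GB box would be thrashing the page file.
```

If the estimate exceeds one box's RAM, that is the case where splitting across nodes helps regardless of iteration overhead, since each node only has to hold its own subdomain.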

So, tell us more. Software? Case details? etc....
 