Microwulf - take a byte out of folding

I read about this the other day, but besides the price-to-performance, what is so great about this cluster? Surely others have built similar setups before...??
 
It is a Beowulf cluster they built earlier this year. They were comparing it with machines built over 13 years ago, so making it seem like a massive machine was a bit overboard. Interesting build, but it can be done for much less than the original $2500 price tag.
 
That's what I figured, especially today when one can configure a dual Clovertown and OC it. I don't know how that would compare in Gflops, but the price would likely be lower.
 
Cheaper:

Four boards with four Q6600s and a cheap 1 GB of DDR2-667 RAM on each board, and this will beat the crap out of those X2s ;)
 
Agreed. However, it was a proof of concept, and they did a good job on it for the budget they had. I think it would be neat to do some Xeons on a mATX board. Cost per watt would be an important factor too. They had a $2500 budget and built it back in January; when they compared the original parts cost with prices from a few days ago, it had dropped by nearly half for the same parts.
 
I just linked that because it's pretty and reminded me of the Tournelle stack, although he used scotch tape.
 
Unfortunately, there is no way to fold on a Beowulf system at the moment.
 
The interesting part for me was the extra NIC per board. I hadn't thought of that as a way to increase network speeds (me=duh :p).

I'm assuming that Folding generates enough data to flood the PCI bus, so adding more NICs wouldn't help. What about the PCI-e slots? (Rough numbers at the end of this post.)

Too bad the client doesn't/can't/won't work with this kind of set-up. It would be interesting to play with.
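
A rough back-of-the-envelope on the PCI question, assuming the boards hang the second NIC off plain 32-bit/33 MHz PCI: that bus tops out around 133 MB/s in theory, shared by everything sitting on it, while a single gigabit NIC can push roughly 125 MB/s each way. So a second GbE NIC on the same legacy PCI bus mostly ends up fighting the first one for bandwidth. A PCI-e x1 slot, by contrast, gives about 250 MB/s per direction and is a point-to-point link rather than a shared bus, so a PCI-e NIC sidesteps that bottleneck.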
 
I believe Marty brought it up one time to the Stanford guys. They were mildly interested in his crapper farm, but nothing that I know of came from it. A scalable SMP program would be an interesting build though.
 
[H]ella|[H]ard;1031394674 said:
that blows cuz I would whip up a small cluster fast if u could

Well, it doesn't really matter, because there would be no real advantage to Folding on a cluster anyway. The F@H clients are designed to run on individual PCs. Just run the four PCs individually and you would get the same result.
 
The whole point is the possibility of running the SMP client on a cluster. Sure, the client currently sees only four cores. But, what if you had four matching single-core machines, or two matching dual-core set-ups?

You see, I have a dream. A dream where one day the Folding client will dynamically detect and utilise multiple cores, with the possibility of working on a cluster. Mmmnnn... Four quad-cores in a cluster all working on the same protein.:eek:

In reading the Cluster Monkey article from the linked page, I noticed he used a form of MPI, which the SMP client also uses. Too bad the network transport is a major limiting factor. Could the PCI-e bus support the bandwidth needed?
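
Just to give a feel for what that MPI traffic looks like, here's a bare-bones ping-pong latency test in C (my own sketch, nothing from the F@H client or the Cluster Monkey article): two ranks bounce a small message back and forth and time the round trip, which is exactly the kind of exchange a slow interconnect punishes.

Code:
/* minimal MPI ping-pong latency sketch, illustration only */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char buf[64] = {0};           /* small message, so latency dominates */
    int rank, i, iters = 1000;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {          /* rank 0 sends and waits for the echo */
            MPI_Send(buf, 64, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 64, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {   /* rank 1 just echoes it back */
            MPI_Recv(buf, 64, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, 64, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg round trip: %.1f us\n", (t1 - t0) / iters * 1e6);

    MPI_Finalize();
    return 0;
}

Build it with mpicc and launch with mpirun -np 2, one process per node; over plain gigabit Ethernet the average round trip usually comes out somewhere in the tens to low hundreds of microseconds, while InfiniBand is down around a few microseconds, and that gap is what makes or breaks an MPI client spread across a cluster.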

It's funny, the main thing that got me interested in Unhappy_Mages' Fold-Server was the Beowulf cluster that was linked here a couple of years ago. I miss you U_M.:(
 
Could the PCI-e bus support the bandwidth needed?

I miss you U_M.:(

I don't know exactly how "InfiniBand" traffic gets transmitted from one PC to another. I saw that a Tyan Tank had that technology built into the four-quad-core box they had made. Someone else in that type of environment would have the info you would need on that.

Where is the good mage? How is college going? You done yet?
 
I don't know the practicality of doing something like this, but that isn't the point to begin with. I'd guess most people wouldn't consider spending hundreds or thousands of dollars on distributed computing projects 'practical'.

I think here in a few years, something like AMD's Torrenza (and perhaps, to a lesser extent, Intel's Geneseo) would be great for tying multiple PCs together. Personally, I'm not an expert on this, but I don't see how a super-low-latency, close-to-the-silicon interface like that wouldn't improve things over going through Windows' or Linux's networking stack.

That still doesn't clear the big hurdle. The only high-speed interconnect that's really available to people like us is Gigabit Ethernet. 10GbE might be in a few years, but technology like InfiniBand probably never will be. Keeping the latency low between the nodes might prove to be nigh impossible.
 
The only high-speed interconnect that's really available to people like us is Gigabit Ethernet. 10GbE might be in a few years, but technology like InfiniBand probably never will be. Keeping the latency low between the nodes might prove to be nigh impossible.
Well, there's also proprietary infrastructure/architecture that utilizes a motherboard backplane, but the problem with that kind of technology is the inherent proprietary design, and of course the high cost that goes with it. Bandwidth shouldn't be an issue when such close coupling of the individual 'systems' is implemented. As others have stated, however, any type of 'cluster' implementation for F@H is not in the cards for the foreseeable future, despite its great production potential.
 