Distributed GPU-Enabled PC Network

sprtdfire

Hey, I work in the computer science department at the University of Pennsylvania. We're doing deep learning research, and we're looking to make use of idle hardware. We've developed software that could be installed on a distributed network of dedicated host machines that would run our analyses full-time. The catch is, you would have to plug in a dedicated machine for this. Our lab is able to compensate you for access to the machine, but our first question is whether or not you have an unused PC with a modern GPU (one that can run CUDA or OpenCL programs). Before we try to distribute our software, we want to see whether people like you actually have these machines lying around. What do you guys think? Do you have extra machines you'd be willing to plug in? Thanks!
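
If you're not sure whether a card you have lying around would qualify, a quick sanity check is to query the driver directly. Here's a minimal sketch (Python via ctypes against the CUDA driver API on Linux); note the compute-capability 3.0 cutoff is just an illustrative threshold I picked for the example, not our hard requirement:

Code:
import ctypes

# Load the CUDA driver library (installed with the NVIDIA driver on Linux).
cuda = ctypes.CDLL("libcuda.so")

# CUDA driver API attribute IDs for compute capability.
CC_MAJOR = 75  # CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR
CC_MINOR = 76  # CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MINOR

if cuda.cuInit(0) != 0:
    raise SystemExit("cuInit failed -- is the NVIDIA driver installed?")

count = ctypes.c_int()
cuda.cuDeviceGetCount(ctypes.byref(count))

for dev in range(count.value):
    name = ctypes.create_string_buffer(100)
    cuda.cuDeviceGetName(name, 100, dev)
    major, minor = ctypes.c_int(), ctypes.c_int()
    cuda.cuDeviceGetAttribute(ctypes.byref(major), CC_MAJOR, dev)
    cuda.cuDeviceGetAttribute(ctypes.byref(minor), CC_MINOR, dev)
    verdict = "looks modern enough" if (major.value, minor.value) >= (3, 0) else "probably too old"
    print(f"GPU {dev}: {name.value.decode()} (compute {major.value}.{minor.value}) -- {verdict}")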
 
I take it you've already looked at BOINC? Maybe not that particular client, but going in with a larger distribution platform like that puts you in reach of a lot of people who are interested in supporting various causes.
 
We have, but our software requires a bit more control than BOINC offers, and the machines don't necessarily meet our specifications.
 
Tinkering with CUDA now...

Am I going to open up a box for some unknown person to put software on and run, with only the assurance of good intentions and a smile? Yeaaaaaaah, no.
 
When you say you looked at BOINC and the machines don't necessarily meet your specifications, I get a bit confused. You do know that the systems run on BOINC are the same kinds of machines run at most other distributed computing projects, right? I mean, I have server-grade hardware running BOINC just as it ran for Folding at Home, and it has also run several other non-FAH/non-BOINC projects.

As far as GPUs go, how "modern" do you need? We have members with cards from all generations, but let's face it, a 750 Ti will perform far below a 980 Ti. Is a 650 Ti too old? You really need to give a much fuller description of what you are asking, because some users here may have access to enough hardware to run their own data center.

And yes, the history of the project, what the project is actually computing, and who is going to be running it are all HUGE factors in volunteers offering their hardware. You then mention compensation. What kind? How much? I need to know it is worth the expense, either by covering my costs or through the satisfaction of the research being done.
 
These are all great questions. I'm just trying to get a sense of whether people have spare hardware lying around for the purposes of the project. Any graphics card from the last 3-4 years will work. Before we get into the details of how much to compensate people and budget that from our grant, we need to confirm that people have the hardware lying around. My main purpose in posting was to see if people were open to it. modi123 posted a relevant concern, and we want to know whether others share the same concern, one that would prevent them from offering dedicated machines running our software in a virtual machine. If that's the overwhelming feeling, it will be difficult to build the dedicated network we need. But we are using this post to gauge interest and see whether we should pursue this route or another.
 
Well, I am pretty active in most DC projects and read a lot of their forums. I can tell you that some people will run anything with the promise of financial return. Others will only run it if they trust the source and agree with the "science". I think you will get a lot more interest and discussion if you are more forthcoming with the details of the who and the what. I don't think anyone is demanding the compensation details just yet; that is something you get to once people know what they are looking at delving into.

You can get an idea of the hardware I have at my disposal here: Computers belonging to Coleslaw
Just ignore the few that say 80 cores. Those were just VMs for testing.
You mention running your work in a virtual machine. That gives everyone an idea of the system requirements, as not all systems handle virtual machines well. However, if you are talking virtual machines, I am curious what kind, as GPU passthrough can be rather a pain with some of the VM software out there.
 

We are asking for dedicated machines because of the need to boot into Linux, which is best for passing the GPU into a virtual machine via KVM. You are correct: other VM software makes it very difficult to pass a GPU into the virtual machine. Windows 10 could make this possible with Hyper-V, but Microsoft has purposefully disabled it on any machine that is not running Server 2016.
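
For anyone curious whether their Linux box is even set up for this, a rough sanity check is to confirm the IOMMU is enabled and see which IOMMU group the GPU sits in, since KVM/VFIO has to pass through a whole group at a time. A minimal sketch that just reads the standard sysfs paths (Python, nothing project-specific):

Code:
import os

IOMMU_ROOT = "/sys/kernel/iommu_groups"

# No groups here means VT-d/AMD-Vi is off (BIOS or kernel cmdline), so
# VFIO passthrough is a non-starter on this host.
if not os.path.isdir(IOMMU_ROOT) or not os.listdir(IOMMU_ROOT):
    raise SystemExit("No IOMMU groups -- enable VT-d/AMD-Vi and intel_iommu=on / amd_iommu=on")

for group in sorted(os.listdir(IOMMU_ROOT), key=int):
    devices = os.listdir(os.path.join(IOMMU_ROOT, group, "devices"))
    for addr in devices:
        with open(f"/sys/bus/pci/devices/{addr}/class") as f:
            pci_class = f.read().strip()
        # PCI base class 0x03 == display controller, i.e. the GPU.
        if pci_class.startswith("0x03"):
            print(f"GPU {addr} is in IOMMU group {group} ({len(devices)} device(s) in group):")
            for d in devices:
                print("   ", d)

If the GPU shares its group with anything beyond its own audio function, everything in that group has to be handed to the VM together, which is where a lot of passthrough attempts stall.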
 
So now you are also getting into some other queasy territory for many; there are a lot more Windows users than Linux users out there. If you have a very easy Linux setup, like the one we have for our [H] appliance found here: [H] Ubuntu DC Appliance -- alternative approach to Linux crunching/folding

or easy setup guides like the ones that can be found here: All Inclusive DC Project list v.2

that would go a long way. I for one don't typically run Linux, so I have a very newb love/hate relationship with it. :)
 
Also, do you plan on having teams, a point system, stats export, etc., like many other DC projects?
 
sprtdfire, it sounds to me like you're reinventing the wheel needlessly. I believe BOINC with VirtualBox is what you should investigate further. There are plenty of BOINC projects that only run on GPUs and require VirtualBox to be installed on the host. If this meets your needs, you would not need to reinvent the points, stats, or other wheels; you'd just tie into BOINC.
 

That's pretty interesting. We weren't aware of the potential to pass the entire GPU through VirtualBox; we're going to look at that now. We're still interested in learning whether more people have extra machines lying around for this use case, but thanks for pointing us that way.
 
I doubt you will find much information or very good results trying to do GPU passthrough with VirtualBox unless there have been some major improvements since the release of version 5.
 

This was our understanding as well. We're going to look into it anyway, since plext0r recommended it. Our understanding was that BOINC would not work for our use case. We're doing deep learning on video streams from cities, so a distributed network is valuable because it reduces latency and the overall cost of moving the data to AWS.
 
IIRC, passthrough kinda works on VirtualBox, but not very well for crunching. I think it has to do with how CUDA/OpenCL is handled. However, I've not seen much discussion of it in quite a while in regard to any DC projects.
 
And there are always enthusiasts who would like a DC project that could utilize the Xeon Phis... hint hint... lol
 
... Besides an individual user setup, you should also have a team setup ... at least for the [H] team ...

As for hardware: I have nothing sitting idle due to power limitations, but if trust is built up, switching is always possible. Though I "only" have a dual-GPU setup, a 980 Ti/970, under CentOS 7.
 