I do not fold with GPUs at the moment, but I installed BOINC inside a Docker container and BOINC is configured to pause using a cc_config.xml file similar to the one Gilthanis posted. My quick-n-dirty script is here: https://pastebin.com/CqTH1hq3. I'm currently using William Lam's OVA since it...
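For anyone curious, a pause rule in cc_config.xml typically looks something like this minimal sketch — I'm using BOINC's documented `<exclusive_app>` option here, and the process name is just an example, not necessarily what Gilthanis used:

```xml
<cc_config>
  <options>
    <!-- Suspend all BOINC computation while this program is running.
         Process name below is an example placeholder. -->
    <exclusive_app>FahCore_a7</exclusive_app>
  </options>
</cc_config>
```

After editing the file, `boinccmd --read_cc_config` (or restarting the client) picks up the change.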
I can recommend this Fling since I deployed six copies of it on various ESXi nodes. He's going to release a new version, v1.0.1, that automatically applies the BigAdv flag if it detects you have 16+ vCPUs defined. The OVA asks a ton of questions when you deploy it and allows you to change your...
After stepping away from Folding and BOINC in 2016, I'm back in the mix. I hope to ramp up more nodes in the next couple of days. It's good to see so many of you are still here from back then! Fold HARD!
I have powered off G'Kar, my only quad-socket AMD G34 system, since it has been unstable for a while, even at stock clocks. I don't have time to mess with it; I've tried new power supplies and it still freezes or falls off the network. It stays running, but the Ethernet ports go...
I just found out my crunchers were not pulling down new WCG work due to an HTTP proxy issue; I've just fixed it. No wonder my POEM scores were going through the roof.
Doh! It had been so long since I added a new project I forgot to join the team. I just joined. Will my points move over or only newly-submitted work units?
Linden, I still run 4P E5-4650 rigs, but I'm not paying for power. I'd say there's nothing special anymore like there used to be with F@H BigAdv. I'm running BOINC, splitting time between WCG and POEM@Home. I recently decommissioned four of these systems and sent them to another site, so I'm mainly back...
I remember breaking the 1M PPD barrier with my quad E5-4650 rigs (thanks to tear). Now those rigs are basically useless for folding so they're crunching BOINC POEM@Home and WCG. Those rigs had about $15K in just the CPUs. :D
sprtdfire, it sounds to me like you're reinventing the wheel without needing to. I believe BOINC with VirtualBox is where you should investigate further. There are plenty of BOINC projects that run their work inside virtual machines and require VirtualBox to be installed on the host. If this meets your needs, you...
Too bad this is going on. I had switched my farm back to F@H a few months ago but now I'm back on BOINC since I was no longer getting WUs with the bigadv flag. What a freakin' mess.
I moved my farm back to BOINC (POEM & WCG) until the BA servers are put back online or we figure out some other way to take advantage of 32-core+ systems.
Shows how out of touch I am. Who left? And yes, I meant kasson. I don't follow foldingforum.org; I only log in when I have a serious problem like we've seen this week. :banghead:
Nathan_P, I still have 40 idle hosts. What the heck is going on? Are they silently removing support for the v6.34 client and forcing people to v7? I have no desire to switch to v7 at this time. I've seen no site admins or professors comment on the thread about all the servers being down.
I'll be moving the majority of my rigs back to team 1115 since they're providing power, cooling and other necessities to keep the rigs running (including many of the rigs themselves). They're also starting to make a small internal push to legitimize F@H on company resources! I hope to help...
I'm just thinking of the power infrastructure required to maintain a rack of these. You could fit 13 of these in a rack with a 1U or 2U switch at the top, but you'd need almost 40 kW per rack! Assuming the 3200W power supplies are worst case with no redundancy and it really just pulls 1400W x...
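The back-of-the-envelope rack math sketched out (server count and PSU rating are from the post; the 42U rack layout is my assumption):

```python
# Worst-case rack power for 3U GPU servers with 3200 W nameplate PSUs,
# assuming no PSU redundancy. 13 servers x 3U = 39U, which leaves room
# for a 1U-2U switch at the top of a standard 42U rack.

servers_per_rack = 13
watts_per_server = 3200  # nameplate PSU rating, worst case

rack_watts = servers_per_rack * watts_per_server
print(f"{rack_watts} W (~{rack_watts / 1000:.1f} kW) per rack")
# -> 41600 W (~41.6 kW) per rack
```

So strictly worst-case it's a bit over 40 kW; actual draw would be lower if the servers really pull closer to their typical load than the PSU rating.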
It was my first trip to Uwharrie; had planned to go last year but my daughter ended up in the hospital. I live outside Raleigh and work in RTP so the park is about 1 hour 45 minutes away. I definitely want to go back.
I've thought about updating the folding appliance to Ubuntu 16.04 LTS when it's released. I've worked with tear in the past to provide some updates to his awesome appliance so maybe we can come up with a nice 16.04-based appliance.
How long until F@H supports this beast of a GPU server? 8 P100 GPUs in a 3U server pulling 3200W.
NVIDIA Unveils the DGX-1 HPC Server: 8 Teslas, 3U, Q2 2016
I went to Uwharrie with my son and his friend this weekend and it rocked. Definitely would like my own trail rig. We drove a 2008 JKU, friend drove an FJ. We had a blast.
I think you're heading down the right trail, but I'm not sure a Xeon D is cost-effective compared to other low-end desktop CPUs. I just read an interesting article on Facebook's engineering blog about deploying the Xeon D to replace dual-CPU hosts and improve performance.
Facebook's new...