Linux or Windows for Multi-GPU Crunching or Folding?

Linden

Please give me your advice and experiences concerning multi-GPU distributed computing.

Very soon, perhaps as early as this coming weekend, I will be dismantling my '4P', 48-core, Linux BOINC crunchers. My intention is to keep donating to worthy distributed computing projects. (I've been contributing non-stop since 2000.) Whatever projects I pick up, I most likely will employ multi-GPU systems. My previous experience with multi-GPU DC was with Folding@Home, for which I ran several *multi-GPU boxes in Windows. I exclusively used Nvidia GPUs. Although I think AMD has some fine hardware, I will continue to only use Nvidia GPUs for distributed computing. I have much DC experience with Linux (Ubuntu) CPU crunching, but not with GPUs under Linux.

Here's where I could use your advice and knowledge. Yes, I know I could scour the Internet for tidbits of info here and there; but I'd like to get it straight from the [H]orde DC experts, all in one spot.

Folding@Home: Linux or Windows? Why? What's the current state of Nvidia's Linux drivers for late-model video cards, that is, 980-class and newer?

BOINC projects: Linux or Windows? Why?


* 9800 GX2s, GTX 295s, & GTS 430s. With those double-GPU 9800 GX2s and GTX 295s, I really learned how to optimize case airflow!
 
Linux usually earns more points, but for BOINC the drivers typically aren't supported as quickly or as easy to set up as on Windows. You will also find that Linux distros vary in whether drivers are available at all. Many users have also had trouble with GPU crunching in BOINC because the drivers take too long to initialize, so BOINC decides there are no usable GPUs when it starts. The fix is to add a line to your cc_config.xml file (which you have to create) that delays BOINC's startup. If you are a Linux guru, you can probably figure it out. I will say, though, that it is a lot more work than using Windows.
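
If it helps, here is a minimal sketch of that workaround, assuming the packaged Ubuntu client with its config in /etc/boinc-client (the path and service name vary by distro); <start_delay> is, if memory serves, the option in question, in seconds:

Code:
# create cc_config.xml with a 60-second startup delay so the Nvidia
# driver has time to initialize before BOINC probes for GPUs
sudo tee /etc/boinc-client/cc_config.xml >/dev/null <<'EOF'
<cc_config>
  <options>
    <start_delay>60</start_delay>
  </options>
</cc_config>
EOF
sudo service boinc-client restart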
 
In BOINC, what would you estimate the performance difference to be? Linux 10% more productive? Less? More?
 
F@H: I run 99.5% of my nV GPUs on CentOS. Ubuntu drove me nuts from 14.04/14.10 onwards, after 10 years of using it.

In general it yields higher points, but overclocking is difficult/impossible. The compensating control: get factory-OC hardware. My last two cards are an MSI 980 Ti Sea Hawk and an MSI 1070 Gaming X. For both I don't even need Coolbits/fan control; both run well with their cooling concept.
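
If you do want manual fan control under Linux, the usual route, as far as I know, is enabling Coolbits, for example:

Code:
# write an xorg.conf entry with Coolbits set on every GPU; 4 unlocks
# manual fan control in nvidia-settings, and adding 8 (i.e. 12) also
# exposes clock offsets on supported cards -- restart X afterwards
sudo nvidia-xconfig --enable-all-gpus --cool-bits=12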

The driver I use right now is 367.27 (the first to support the 1070); it's running stable overall. I think there was one driver crash last week, but that was the first one in years.
I have had the best experience downloading directly from nV and using the .run file. Others prefer alternative repos like edgers.

Driver installation/upgrade can be a bit painful at first, but there are good tutorials over in FF, and there is help here too. Once you have the routine, it normally goes smoothly.
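
Roughly, my routine looks like this (a sketch for CentOS 7; the URL just follows nV's usual download layout for the 367.27 build mentioned above, and nouveau handling may differ on your distro):

Code:
# grab the driver straight from nV
wget http://us.download.nvidia.com/XFree86/Linux-x86_64/367.27/NVIDIA-Linux-x86_64-367.27.run
# keep nouveau from grabbing the cards, then reboot once before installing
echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
# drop to a text console (stops X), then run the installer and follow the prompts
sudo systemctl isolate multi-user.target
sudo sh NVIDIA-Linux-x86_64-367.27.run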

Linux rules, CentOS too.

Side note: I struggled once to get GPUGrid running under CentOS, but there was a way around it, and the packages are maintained now. These days, though, I focus on 24x7 FAH.
 
Christian, Gilthanis, what would you say the performance difference is between Linux and Windows? Yes, I know Linux allows better performance, but by how much?
 
If memory serves me well, between 10% and 15%. Let me check whether my dual boot still works this weekend; then I can compare "more scientifically".

For me it's also the convenience of quickly remote-shelling into the box to fix settings, even from my mobiles. I know, Windows could do that too; but bas[H] is more [H]ard
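
Something like this works from any SSH client, for example (the hostname and log path are placeholders; the --send-* switches are from the v7 FAHClient, if I remember the names right):

Code:
# check work unit progress from anywhere
ssh folder@cruncher1 'tail -n 20 /var/lib/fahclient/log.txt'
# pause folding remotely, e.g. before a driver upgrade, then resume
ssh folder@cruncher1 'FAHClient --send-pause'
ssh folder@cruncher1 'FAHClient --send-unpause'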
 
For GPUs, 10-15% sounds about right. With CPUs, I have heard people claim up to 50%, depending on the science application, such as VINA.

And depending on the project/driver requirements, there are still some out there who continue to run XP, because it out-produced Vista and later due to how Microsoft handled drivers. However, XP has other limitations that I'm sure your rigs would not be happy with.
 
For F@H I use primarily 980 Ti cards. I've got six averaging around 500K PPD, at stock clocks on EVGA pre-OC'ed cards. I have high hopes for the 1080 Ti cards when they come out, but I'm rarely an early adopter, as many early products suffer from heat issues, coil whine, or driver bugs. The price on the 980 Ti dropped sharply when the 1080s were released, so it's a pretty good buy for F@H right now.

I run Windows on mine for ease of use, since it's what I support in my professional career, but for GPU work Linux is slightly more efficient. I can't speak to CPU Linux vs. Windows performance for any projects.

Just make sure, if you go GPU, that you have at least one full slot between each card, two if you can. I've found the sweet spot to be a two-card system, because I can't find many motherboards with 1-4-7 slot spacing for GPUs. There are a few new Gigabyte motherboards that do have that configuration available, but the SLI layout isn't necessarily supported if you actually wanted to game that way. If you're going with 1080 cards, it's a moot point anyway: Nvidia doesn't support 3-way SLI on the 10 series, and you need an HB bridge (a new, higher-bandwidth type of SLI bridge) to run them in SLI anyway. As independent cards, though, you could go 3-way on one of the newer socket 2011 boards with the right case layout, provided you get plenty of air moving through there or use the FE cards to keep the air moving out of the case. The FE cards are selling at a premium, though, and might not be worth it.
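
Once the cards are in, an easy way to verify the spacing is actually working is to watch per-card temperatures under load; on the Nvidia driver, something like this does it (field names per nvidia-smi's --help-query-gpu list):

Code:
# per-card temperature, power draw, and load, refreshed every 5 seconds
nvidia-smi --query-gpu=index,name,temperature.gpu,power.draw,utilization.gpu --format=csv -l 5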
 
Thanks guys, I'm soaking it all in. MGMCCALLEY, thanks for the value-added content. Although I wasn't yet actively soliciting hardware advice for new systems, you were right on time. I have been considering the following GPUs: 980, 980 Ti, 1060, and 1080. I have to consider all the variables: up-front cost, power consumption, availability, and estimated obsolescence ('future proofing,' which is impossible...). You are right concerning video card spacing for optimal airflow. Years ago, folding with dual 9800 GX2s and dual GTX 295s (each card had two GPUs) gave me valuable experience with designing computers for good airflow.
 
If you are thinking of Maxwell-based cards, I'd look at the 980 Ti over the 980; it has massively dropped in price and will do 500K with ease, while a stock 980 tops out at 380-400K. As for Pascal, a 1060 will do 400K for 120 W, which is not to be sniffed at; however, a 1080 will do 800K for 180 W, so it is much better in terms of PPD/W, though the entry price is steep. As for obsolescence, the Kepler cards date from 2012-2016 and are still going strong in F@H; they lose something in PPD/W but still work just fine.

Nvidia's drivers for late-model cards on Linux are fine. Unlike Christian, though, I've stuck with the Ubuntu-based Zorin 9 distro and the edgers PPA; my Linux skills are shite, and since Zorin 9 looks Windows-esque, it works for me. You should be able to GPU fold on the old [H] folding appliance, but I haven't tried installing the drivers yet.
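
For reference, the PPA route is only a few commands (assuming the xorg-edgers PPA and the nvidia-367 package name, both of which may have moved on by the time you read this):

Code:
sudo add-apt-repository ppa:xorg-edgers/ppa
sudo apt-get update
sudo apt-get install nvidia-367
sudo reboot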

If you are dismantling all four rigs in your sig, you won't need much in the way of new parts, just GPUs and mobos.
 
Nathan, thanks for the tip. You and MGM have both steered me towards 980 Tis as a better solution than the vanilla 980. You make perfect sense. I think, bang for buck, the 1080 is awesome; but the 'buck' part is still a bit rich. So yeah, it looks like the 980 Ti is the proverbial sweet spot.

I was unaware of this Zorin 9 distribution. I'll have to take a look. I have plenty of Linux Folding and BOINC experience, but only with CPU processing. I also need to see what Linux BOINC/F@H installation and configuration guides are out there. I have really enjoyed using Linux with CPU Folding. Unless there's a power outage, the 4P crunchers chug along like bulldozers.
 
As for spacing: I have one watercooled 980 Ti and an air-cooled 1070 one slot apart. No temp issues at all.
 
Side note to DC systems configurations: I've put up three of my 4P systems for sale in the DC FS/FT forum.
 