Back in the [H]orde!!

Sort of.....

I don't think that was it; the one I was thinking of had PPDPW and PPDPWP$. With the prices that tj mentioned, once you take the PPDPW and divide by the cost of the card, some economical choices become obvious.

As people look at folding and worry about the cost of performance (both electricity and purchase price), it reminded me of those two metrics. If our team does go back in this direction, we may want to resurrect the concept. We could take that site's yield numbers and divide by price. That would provide a number similar to PPDPWP$.
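
As a quick worked example (illustrative only: the wattage and PPD are the GTX 960 figures quoted later in this thread, and the price is made up):

Code:
PPDPW   = PPD / watts    ->  180,000 PPD / 82 W            ~ 2,200 PPD/W
PPDPWP$ = PPDPW / price  ->  2,200 / $200 (example price)  ~ 11 PPD/W/$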
 
With the newest nV driver (like the 355.11 beta) you can even see the power draw as a value; my 970 sits above 150W most of the time, with peaks over 160W. I love those 970 cards.

How do I find the reported wattage? If this is fairly accurate then I can avoid using my Kill A Watt.
 
Well, on my Kill A Watt a GTX 970 pulls 140W more when folding than at idle; a GTX 960 pulls 82W more.

Total system load for an 8-core 95W E5 v1 Xeon (idle), 8GB of quad-channel 1.35V RAM, a crappy Samsung 40GB HDD, 4 fans, and the 970 on an Asus Z9PE-D8WS was 232-242W from the wall (240V); adding the 960 took it to 324W.
 
How do I find the reported wattage? If this is fairly accurate then I can avoid using my Kill A Watt.

Under Linux you can use nvidia-smi

Without parameters you get this screen:
Code:
+------------------------------------------------------+
| NVIDIA-SMI 355.11     Driver Version: 355.11         |                    
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 970     Off  | 0000:01:00.0      On |                  N/A |
| 80%   75C    P2   159W / 151W |    475MiB /  4095MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 970     Off  | 0000:02:00.0     Off |                  N/A |
| 80%   31C    P8    19W / 151W |     15MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                            
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1471    G   /usr/bin/Xorg                                   84MiB |
|    0      3235    G   /usr/bin/gnome-shell                           210MiB |
|    0     15892    C   .../NVIDIA/Fermi/beta/Core_21.fah/FahCore_21   162MiB |
+-----------------------------------------------------------------------------+


With parameters like

Code:
nvidia-smi -q -x -i 0
You get an XML output with the same information, very handy to feed into a monitoring tool (like Zabbix in my case).

http://imageshack.us/a/img673/1420/qrijiV.jpg

With the -i option you can define which card you want to see.
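
For scripting, the query flags are also handy; something like this (a sketch, exact field support varies by driver version):

Code:
nvidia-smi --query-gpu=index,name,power.draw,temperature.gpu --format=csv -l 5

That prints one CSV line per card every 5 seconds, easy to feed into Zabbix or a shell script.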

Sorry, no idea for Windows. But I'm sure similar capabilities exist.
 
Which GTX 970 cooler style do you guys prefer when you're using multiple cards? I prefer the single fan "reference" model which pushes air out the rear of the chassis, since the dual and triple-fan coolers just push the air around in the chassis. The single fan blowers work great in server chassis for me. Something like this:
http://www.amazon.com/EVGA-Superclocked-256bit-Graphics-04G-P4-1972-KR/dp/B00NI45AUU/

Which model do you guys prefer?
 
Which GTX 970 cooler style do you guys prefer when you're using multiple cards? I prefer the single fan "reference" model which pushes air out the rear of the chassis, since the dual and triple-fan coolers just push the air around in the chassis. The single fan blowers work great in server chassis for me. Something like this:
http://www.amazon.com/EVGA-Superclocked-256bit-Graphics-04G-P4-1972-KR/dp/B00NI45AUU/

Which model do you guys prefer?
 
I prefer the single fan solution, like the 780 or the one you showed. That said, I have two Gigabyte 970s with dual fans (and also don't like them too much).
 
I haven't owned a blower-style card in ages; how noisy is something like a reference 970?
 
Under Linux you can use nvidia-smi [snip]

Since you brought up the subject of Linux, any recommendations on which distro/release version to use? I'm putting together a second rig. :D
 
The guys at Overclockers.com recommend using Zorin 9 for a GPU install. I followed their instructions and was up and running 20 minutes after burning the ISO. Perhaps we could try different installs and write our own guide?
 
The guys at Overclockers.com recommend using Zorin 9 for a GPU install. I followed their instructions and was up and running 20 minutes after burning the ISO. Perhaps we could try different installs and write our own guide?

Wait a second, does the folding GPU client work on Linux now? I thought Windoze was still required.
 
Core 17, 18, and 21 all work on Linux; the only downside is that you have to use the v7 client. PPD is about 10% more than you would get on the same hardware running Windows.
 
Which GTX 970 cooler style do you guys prefer when you're using multiple cards? I prefer the single fan "reference" model which pushes air out the rear of the chassis, since the dual and triple-fan coolers just push the air around in the chassis. The single fan blowers work great in server chassis for me. Something like this:
http://www.amazon.com/EVGA-Superclocked-256bit-Graphics-04G-P4-1972-KR/dp/B00NI45AUU/

Which model do you guys prefer?

Dual fan - the blowers make too much noise, especially when you have cards right next to each other
 
Core 17, 18, and 21 all work on Linux; the only downside is that you have to use the v7 client. PPD is about 10% more than you would get on the same hardware running Windows.

Thanks, Nathan_P. After a few manual tweaks to config.xml, I got one Linux GPU system up and running with v7.4.4 and the NVIDIA 355.11 driver. I had to manually download GPUs.txt since the client was not detecting the GTX Titan properly and wasn't trying to update GPUs.txt for some reason.
 
As for Linux distributions: after many years I ran away from Ubuntu with 14.04 and 14.10. It was nearly impossible for me to get a working installation. Others got it running, though.

I switched to CentOS 7 and am very happy with it: much less overhead and not so much crap installed by default (like Amazon lenses).

Ubuntu I still have running as 12.04 on my ESXi as a virtual machine for CPU work, and also when I rent an EC2 instance from time to time.
In all cases it's important to blacklist the default Nouveau kernel driver.
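
For reference, the usual blacklist approach looks like this (a sketch; the file name is my choice, and the initramfs rebuild command depends on the distro):

Code:
# /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0

# then rebuild the initramfs and reboot:
#   Ubuntu/Debian: sudo update-initramfs -u
#   CentOS/RHEL:   sudo dracut --force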
 
Unasked, but maybe helpful: sharing my xorg.conf to allow manipulation of the fans via Coolbits.

Code:
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig:  version 346.35  (buildmeister@swio-display-x86-rhel47-09)  Sat Jan 10 21:58:11 PST 2015


Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    Screen      1  "Screen1" RightOf "Screen0"
    Screen      2  "Screen2" LeftOf "Screen0"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "Files"
    FontPath        "/usr/share/fonts/default/Type1"
EndSection

Section "InputDevice"

    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/input/mice"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"

    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Monitor"
    Identifier     "Monitor1"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Monitor"
    Identifier     "Monitor2"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:1:0:0"
EndSection

Section "Device"
    Identifier     "Device1"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:2:0:0"
EndSection

Section "Device"
    Identifier     "Device2"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:3:0:0"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    # Coolbits "5" = bit 0 (legacy clock controls) + bit 2 (manual fan control)
    Option         "Coolbits" "5"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen1"
    Device         "Device1"
    Monitor        "Monitor1"
    DefaultDepth    24
    Option         "Coolbits" "5"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen2"
    Device         "Device2"
    Monitor        "Monitor2"
    DefaultDepth    24
    Option         "Coolbits" "5"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

And a little script: fanset.sh

Code:
# enable manual fan control on each GPU (needs the Coolbits option from xorg.conf above)
nvidia-settings -a [gpu:0]/GPUFanControlState=1
nvidia-settings -a [gpu:1]/GPUFanControlState=1
#nvidia-settings -a [gpu:2]/GPUFanControlState=1

# pin the fans at 80%
nvidia-settings -a [fan:0]/GPUTargetFanSpeed=80
nvidia-settings -a [fan:1]/GPUTargetFanSpeed=80
#nvidia-settings -a [fan:2]/GPUTargetFanSpeed=80

With a different Coolbits value you can also start playing with the frequencies, but I never dug deeper into overclocking, so no practical experience here.
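
If you want to try, it would look something like this (an untested sketch; attribute names are from the nvidia-settings documentation, and I'm assuming performance level 3 is the highest on these cards):

Code:
# needs bit 3 (value 8) in Coolbits, e.g. Option "Coolbits" "13" in xorg.conf
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[3]=50
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[3]=200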
 
Looks like Grandpa couldn't wait just one day to let me be the top producer for a day. Darn you, lol. But seriously, congrats on getting back into the game. Looks like I'll have to get another video card and see if I can push the envelope for a month or so until I can't afford it anymore, hehe.
 
You can use Additional Drivers to install up to 355.11; you just need to add the PPA.

https://sites.google.com/site/easylinuxtipsproject/12#TOC-Purely-manual-installation-of-the-new-driver
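
If you go the PPA route rather than the manual install in that link, it's something like this (a sketch; assuming the graphics-drivers PPA and its nvidia-355 package):

Code:
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-355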

Also, I am overclocking a GTX 680 and a GTX 580 using Coolbits and it works fine. http://www.overclock.net/t/1506137/ubuntu-f-h-installation-for-ubuntu-gpu-cpu

Also, on the 4Ps on Ubuntu 14.04 I make more PPD running 2 regular SMP WUs than 1 of the bigadv WUs. Running 2 CPU slots plus a 580 or the 680, I am getting between 300k and 350k PPD on the Intel 4650 rigs at stock clocks on the CPUs. :rolleyes:

I am going to move one of the 980 Classified cards over soon and do some testing with them, but I have no info on them under Linux at this time.

Brilong, I use the double fan cards, but all of my rigs are naked so the direction of the airflow does not matter.

MGMCCALLEY GO FOR IT :D
 
I'm having issues with two nVidia cards and I wonder if it's because of the mixture of GPU architectures. I have a GTX 580 and TITAN Black in the same system. This worked fine with BOINC work units, but Folding doesn't appear to like it very much. Can someone look at my config.xml and tell me what's wrong or if I need to separate the two GPUs?

Code:
<config>
  <!-- Client Control -->
  <fold-anon v='false'/>
  <max-packet-size v='big'/>
  <verbosity v='5'/>
 
  <!-- Folding Slot Configuration -->
  <gpu v='true'/>

  <!-- Slot Control -->
  <power v='full'/>

  <!-- User Information -->
  <team v='33'/>
  <user v='brilong'/>
  <passkey v='foo'/>

  <!-- Folding Slots -->
  <slot id='0' type='CPU'>
    <max-packet-size v='big'/>
    <client-type v='bigadv'/>
  </slot>
  <slot id='1' type='GPU'>
    <max-packet-size v='big'/>
    <client-type v='beta'/>
    <gpu-index v='0'/>
  </slot>
  <slot id='2' type='GPU'>
    <max-packet-size v='big'/>
    <client-type v='beta'/>
    <gpu-index v='1'/>
  </slot>
</config>
 
What trouble? What happens? Over in FF I would ask for the first 100 lines of the log file with the system details. That would help us here too. ;)

Then a
Code:
lspci
or
Code:
FAHClient --lspci
to show us the correct indices of the nV cards.
The GPU index and OpenCL index may need to get sorted.

What might help, and the fastest way: reinstall the client. The initial system scan mostly gets it right.

Another troublemaker can be the driver. In the recent past, newer drivers like newer cards, and older drivers like older cards. When I mixed a 780 and a 660 Ti some time back there was no driver that dealt with both sufficiently. Eventually I divorced the two cards and each got its matching driver.

(Hint: replace the 580 or send me FOC the Titan)
 
Should we start another thread, as the discussion is drifting away from the original posting?
 
Agreed; with the helping hand of a mod we can spin it off...
("Reported" this post and asked for a split.)
 
Brilong, your config looks correct, and I doubt it is a mix-and-match thing; I was running a 680 and a 770 on the same rig and it was working at one time. My guess would be a typical bad Linux driver install. Have you tried a purge and reinstall? Also, you do not need the 'big' max-packet-size in the GPU slots; I do not believe it does anything there.
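
On Ubuntu the purge would be something like this (a sketch; the package name depends on which driver you reinstall):

Code:
sudo apt-get purge 'nvidia-*'
sudo apt-get update
sudo apt-get install nvidia-355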
 
There have been reports over at FF about Fermi and later cards not playing well together; I think this is another one of those. The only true cure is to move one card to a different system and get the best driver for each.
 
The guys at Overclockers.com recommend using zorin 9 for a gpu install - I followed their instructions and was up and running 20 minutes after burning the iso, perhaps we could try different installs and write our own guide?

Link please?
 
Trash Talk! :D:D:D:D:D:D:D

Nah, just some friendly banter. I wish we all had a collection of video cards and 4P rigs to fold on. The cures for these conditions and diseases could not come soon enough. If I ever won the lottery, not that I play, one of the first things I'd do is invest more heavily in this project.

Actually, my plan was always two-fold (pun intended). I wanted to open up a computer gaming center for inner-city kids to help give them a more productive outlet than gangs, violence, and drug use. Then when the gaming rigs weren't used for gaming, they'd be chugging away on F@H. With the cheap electricity we have in Iowa, due in no small part to cheap coal and our huge investment in wind power, a place like that could actually be profitable if some funding from the state or local government helped fund the project. If I was more invested in politics I'd pitch the idea myself, but I have neither the expertise nor the time to invest. I do hope that someone some day has a similar dream and the means to make it happen.
 
I'm checking my points today, seeing what 2 970s can get me, and I'm like: who the F just passed me? Then I see it's you. At over 1 million PPD. Yea, ok, so he's got this one lol

Good job man, moving up hella quick now
 
I'm checking my points today, seeing what 2 970s can get me, and I'm like: who the F just passed me? Then I see it's you. At over 1 million PPD. Yea, ok, so he's got this one lol

Good job man, moving up hella quick now

Thanks Vaulter. Two 970s should get you moving up the ladder as well. I've currently got two 980 Tis running 24/7 and two 690s running about 2/3 time. Considering selling off the 690s while they're still good cards and getting another couple of those Tis. MSI makes a pretty good model.
 
OK guys, I picked up a 970 SSC on eBay for $250 and I cannot get it to fold on Ubuntu 14.04 running F@H 7.4.4 on the Intel 4Ps. When I try to add a GPU slot, F@H says I do not have a GPU, but it has no problem recognizing a GTX 580, 680, or 770. I have the NVIDIA 355.11 drivers installed. I have tried adding the GPUs.txt file to the /var/lib/fahclient folder (maybe I put it in the wrong place) :confused: there was not one there originally. I have edited the config file to gpu v='true' and rebooted after doing those steps, to no avail. I tried uninstalling FAH and reinstalling it and kicking the damn thing, none of which worked.

X server shows Ubuntu recognizes the card, and Coolbits works on it also. The card works quite well on the Windows rigs; it OCs pretty well and is currently running at 1560MHz with a .12mV bump. I also tried one of the 980 Classified cards on the rigs, with the same results.

So what am I missing here? I see some of you are running 970s on Linux, so I know it works.

HALP :(
 
I had been fighting with my GTX 960 system today and FINALLY got it working.

My situation: I installed Mint 17.2 and installed the 355 drivers as posted earlier. I installed fahclient and fahcontrol and, unsurprisingly, there was no GPUs.txt file. I got the file and put it in the /var/lib/fahclient folder, and it all looked good until I started folding. I got all sorts of error messages and download retries.
Lucky for me, Mint's Driver Manager had the 346.72 driver on its list of choices. I had Mint do all of the work and the driver was successfully (shockingly) switched over to 346.72. Next I removed fahclient and fahcontrol, then re-downloaded and installed them. I fired up fahcontrol, stopped the CPU, added the GPU, started the client, and the damn thing actually works. :eek:

That was my experience today, but I now have a factory overclocked 960 (1304 base) churning out a 9609 project at an est. 180k PPD, 24.6 sec TPF. :cool:

Hopefully there is some useful info in my post Grandpa.
 
@Grandpa_01

1) Does the GPUs.txt have around 79 kBytes? (The location is correct.)
2) Do you have enough CPU cores free? Each GPU needs one core/HT. What is your CPU slot setup?
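
If the CPU slots are grabbing every thread, capping them looks something like this in config.xml (a sketch using the v7 client's 'cpus' slot option; the count is just an example):

Code:
<slot id='0' type='CPU'>
  <cpus v='30'/>  <!-- leave threads free: one per GPU slot -->
</slot>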
 
I had/am having some issues, though I know what they are, and am sorting them out.

Grandpa, they might be similar to yours.

The second card I added to an already-running system went in the first PCIe slot. This meant that the computer indexed it lower than the other card, yet Folding, in its wisdom, indexed it in the next available location. So..... this means that the GPU index and the CUDA index do not match. This makes for all kinds of issues, from the one you describe to cards just not folding as fast as you know they should.

Sorting this out is a simple process. While I only know the process for Windows, there must be some way of doing this in Linux.

Open GPU-Z and look at the drop-down to select the card. The top card in the list is OpenCL and CUDA index 0; each card below it increments the index.

Next, open a command prompt and navigate to your FAH client program directory. Run the following command: "fahclient.exe -lspci". This causes the FAH client to spit out a list of how it indexes PCIe devices. Somewhere in the list it will give you the names of your cards. The first is GPU 0 and, as with the last step, they increment up.

In my case, GPU-Z listed my 960 first, and FAH listed the 570 first.

When I created my slot for my 960, I set the GPU index to 1 and the OpenCL/CUDA indexes to 0.
Once that was folding properly, I set up the 570 at GPU index 0 with OpenCL/CUDA indexes of 1.

There.... both folding nicely.
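
For reference, the equivalent slot setup in config.xml would be something like this (a sketch; the gpu-index/cuda-index/opencl-index option names are per the v7 client, values from my rig above):

Code:
<slot id='1' type='GPU'>  <!-- the 960 -->
  <gpu-index v='1'/>
  <cuda-index v='0'/>
  <opencl-index v='0'/>
</slot>
<slot id='2' type='GPU'>  <!-- the 570 -->
  <gpu-index v='0'/>
  <cuda-index v='1'/>
  <opencl-index v='1'/>
</slot>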

No idea if this helps for Linux, but when your Windows box is not playing nice because you added an extra card, this works; finding a guide to manually set the GPU/OpenCL/CUDA indexes is nearly impossible.

John
 
CV,

I think I tried that, though I might not have deleted all the files in the user directory. For whatever reason, I do not recall that a fresh install worked for me, but manually setting what FAH should have done works like a charm. Besides, once you know how to do it, this is faster, and you don't have to dump WUs doing a fresh install (for example, a CPU unit that is not affected by this).

I think it is always worthwhile to know how to manually configure something that should be automated. No matter how good a programmer is, when trying to automate a task, there is always the chance that some super specific situation will slip through. This is not the first time I have had this issue, and I don't think that a re-install is warranted when there is a real solution.
 
Thanks for the replies, guys; it looks like I have some things to try.

I had been fighting with my GTX 960 system today and FINALLY got it working. [snip] Lucky for me, Mint's Driver Manager had the 346.72 driver on its list of choices. [snip]

Hopefully there is some useful info in my post Grandpa.

Thanks TJ, I will try 346.72; hopefully I do not end up having to install a new version of Linux. :eek:

@Grandpa_01

1) Does the GPUs.txt have around 79 kBytes? (The location is correct.)
2) Do you have enough CPU cores free? Each GPU needs one core/HT. What is your CPU slot setup?

1) 79.2, and I checked: the 970 is listed in there.
2) Running 2 SMP slots @ 31; that leaves 2 cores, and the 680 works. I am pretty sure it is a bug in the Linux 7.4.4 version of FAH, or the 355.11 driver does not report it correctly. (I am doubting the driver because X server recognizes the card properly.) Also, it does not download the GPUs.txt file, at least not to the /var/lib/fahclient folder, but it must be downloading it somewhere, because FAH recognizes the 580, 680, and 770.

I had/am having some issues, though I know what they are, and am sorting them out.

Grandpa, they might be similar to yours. [snip]

John

I will try indexing and slot assignment, but I am only running one card, so it should see it correctly.
 
OK, I got the 970 running on one of the 4Ps. I reverted to the 352 driver, downloaded the GPUs.txt file again, and replaced the one I had before, and this time it worked. I do not know if it was changing the driver or the GPUs.txt file, but I am going to move one of the 980s over to one of the other 4Ps and try replacing the GPUs.txt file on it and see if that does the trick.
 
I'm running CentOS 7 with two 970s and 355.11; smooth as silk. But my nV driver setup has a "history", based on a CUDA SDK 7 installation with 346.46. Not sure if that factors in. But glad to hear you got it running.
 