News from PG re GPU benchmarking

Discussion in 'Distributed Computing' started by Nathan_P, Oct 21, 2012.

  1. Nathan_P

    Nathan_P [H]ard DCOTM x2

    Messages:
    3,077
    Joined:
    Mar 2, 2010
  2. Brainbug

    Brainbug [H]Lite

    Messages:
    87
    Joined:
    Feb 15, 2012
    Guess eVGA/NVidia have increased their support for FAH.

    Although in the long run, more competition between [H] and eVGA benefits the overall research production.
     
  3. Nathan_P

    Nathan_P [H]ard DCOTM x2

    Messages:
    3,077
    Joined:
    Mar 2, 2010
    It might also aid us; there are more than a few people outside the top 20 who fold on GPUs, and it may encourage a few who have dropped GPU folding to restart.
     
  4. sirmonkey1985

    sirmonkey1985 [H]ard|DCer of the Month - July 2010

    Messages:
    18,616
    Joined:
    Sep 13, 2008
    we've been calling for this shit for how long? EVGA cries about it and a week later it's in beta testing.. give me a friggin break.
     
  5. Nathan_P

    Nathan_P [H]ard DCOTM x2

    Messages:
    3,077
    Joined:
    Mar 2, 2010
    My guess is that this has been in internal testing for a while now, and to be fair to EVGA they have not been the only team asking for QRB to be applied to GPU projects. Perhaps Kendrak can give us a bit more info?
     
  6. Nathan_P

    Nathan_P [H]ard DCOTM x2

    Messages:
    3,077
    Joined:
    Mar 2, 2010
    I've made an enquiry about ATI. EVGA is not the only team that has asked for this to happen.
     
  7. -ZS-Carpenter

    -ZS-Carpenter n00bie

    Messages:
    3
    Joined:
    Oct 21, 2012
    I'm sure almost everyone who GPU folds has been asking for this since the SMP QRB was released. It has nothing to do with a single team "crying"; the entire donor base has been asking for it. It's been a long, long time coming and it will be very interesting to see how it plays out once the beta testing is done.
     
  8. Kendrak

    Kendrak [H]ard|DCer of the Year 2009

    Messages:
    22,871
    Joined:
    Aug 29, 2001
    This has been talked about off and on in the DAB for over a year.

    The part that made this truly possible is the advent of running the same WU on both SMP and GPU. So now the same TPF = the same PPD if it is on a CPU or GPU.

    Once I get an idea of the numbers I will share. However, I might allude to something that was said: GPU will most likely get a boost with this. Outside of that possibility, I know nothing at this moment.

    Maybe it is time for the return of Origami Master ala 2.0

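The "same TPF = same PPD" point can be sketched with the published F@H Quick Return Bonus formula. Whether the GPU beta applies it unchanged is an assumption on my part, and the k value and deadline below are purely illustrative, not real project parameters:

```python
import math

# Sketch of the published F@H Quick Return Bonus formula.  Assumed here
# to carry over unchanged to GPU WUs; k and the deadline are set
# per-project by PG and the values used below are hypothetical.
def qrb_credit(base_points, k, deadline_days, elapsed_days):
    """Final credit = base * sqrt(k * deadline / elapsed), floored at base."""
    bonus = math.sqrt(k * deadline_days / elapsed_days)
    return base_points * max(1.0, bonus)

def ppd(tpf_seconds, frames, credit):
    """Points per day from time-per-frame: same TPF gives the same PPD
    whether the WU ran on a CPU or a GPU."""
    wu_seconds = tpf_seconds * frames
    return credit * 86400 / wu_seconds
```

Finish faster than the deadline and the bonus scales with the square root of how much faster; miss it and you fall back to base credit.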
     
  9. Punchy

    Punchy [H]Lite

    Messages:
    100
    Joined:
    Sep 4, 2010
    I can't quite parse that sentence structure, but I hope you're not wasting any researcher's time on a question like that.

    http://foldingforum.org/viewtopic.php?f=16&t=19042&p=190618&hilit=+GPU+QRB#p190618

    July 2, 2011: We have been considering QRB for GPUs, which should finish the rebalancing we've had in mind to do. There are some issues to work out though, which is why we haven't made that change now.
     
  10. Grandpa_01

    Grandpa_01 [H]ard|DCer of the Year 2013

    Messages:
    1,157
    Joined:
    Jun 4, 2011
    Time to put on the tin hat... damn, it is too big; must have lost too much hair from the last conspiracy. No, I can not see a conspiracy here, just Stanford attempting to do what we all have been asking for: pay = work. If a GPU is faster at doing the same work, then it should be reflected in points; if it is not, then it is not. Let's see where the chips fall; if they are faster, then it is going to cost us less to fold. :D

    I can buy a lot of GPUs for the cost of a 4P.
     
  11. Punchy

    Punchy [H]Lite

    Messages:
    100
    Joined:
    Sep 4, 2010
    Amen!

    But what about power consumption? It may be tricky balancing the overall cost when you consider what power hogs some of those GPUs are.
     
  12. Kendrak

    Kendrak [H]ard|DCer of the Year 2009

    Messages:
    22,871
    Joined:
    Aug 29, 2001
    It all comes down to cost of ownership if you are going pro when folding.
     
  13. DooKey

    DooKey [H]ard DCOTM x4

    Messages:
    5,732
    Joined:
    Apr 25, 2001
    I think it will level out. The only way QRB works for GPUs is if you have the latest GPU and they significantly increase the size of the WUs. That will put old GPUs out of the game.
     
  14. DF454

    DF454 Gawd

    Messages:
    821
    Joined:
    Apr 25, 2001
    Does the PCIe slot configuration, i.e. x16, x8, x4, matter for folding 2 or 3 cards on the same MB?
    I melted down the fans on my 560GTX, and purchased another one to fold on while I RMA'ed the original.
    My MB only has one PCIe slot and I was thinking of swapping out the board to run both cards at the same time. But since I have not read a lot about folding multiple GPUs, I'm now trying to find the info I need.

    Thanks :)
     
  15. Grandpa_01

    Grandpa_01 [H]ard|DCer of the Year 2013

    Messages:
    1,157
    Joined:
    Jun 4, 2011
    He he, each of my 4Ps uses 1080 to 1190 watts; my GTX 580 uses 175 watts measured at the wall for folding. That means a lot of GPUs can be run, so it is no big concern for me anyway. It all depends on where it shakes out; I may be better off shutting down the 4Ps and running GPUs. We shall see. :)
     
  16. Kendrak

    Kendrak [H]ard|DCer of the Year 2009

    Messages:
    22,871
    Joined:
    Aug 29, 2001
    A 4x slot is plenty (if the back of the socket is opened to let the card hang out).

    On 1x there is a slowdown.

    There have been a number of people who have used extension cables and such to make it work.
     
  17. Kendrak

    Kendrak [H]ard|DCer of the Year 2009

    Messages:
    22,871
    Joined:
    Aug 29, 2001
    Calling Atlas Folder!
     
  18. DF454

    DF454 Gawd

    Messages:
    821
    Joined:
    Apr 25, 2001
    Thanks Kendrak, the slots are all x16 in length, but most board specs say x16 with one slot filled and 2 slots at x8 electrically with 2 slots filled, or one at x16 and the second at x8.
    I've seen a lot of folks run SLI on their gaming rigs with 2 cards in slots running at x8, so I figured it might not make a difference with folding.
    The boards that can run 2 PCIe slots at x16 at the same time are few and costly, and I'm too cheap to do that these days.
    I've spent too much over the years on this and have learned to stay away from the bleeding edge. :eek:

    Thanks :D
     
  19. -alias-

    -alias- Limp Gawd

    Messages:
    374
    Joined:
    Jul 15, 2012
    Once this is completed, my SR-2 computer will finally come into its own. I have to replace the PSU in the rig with a Corsair 1200W, cram in the 5 GTS450s I have lying around, and fold onward. I guess each card will then give up to 100-150K PPD, if this is to mean anything. The power consumption of such a unit will be 800-900W if I leave the CPUs folding as well, but at default speeds. Yes, I will welcome this new regime very much.:D

    It is about time that the GPU folders get paid for their efforts.;)
     
    Last edited: Oct 21, 2012
  20. theGryphon

    theGryphon [H]ard|Gawd

    Messages:
    1,262
    Joined:
    Nov 21, 2011
    I'm very happy to see this finally happening.
    Very curious to see the results of the benchmarks! :eek:
     
  21. knopflerbruce

    knopflerbruce Limp Gawd

    Messages:
    172
    Joined:
    Nov 8, 2007
    I wonder how much PPD a 670/680 will get if this is true. My slow 4P project is finally starting to happen, and I was hoping I'd be able to get an edge over most guys who are above me in the overall ranking :p Won't happen I suppose...
     
  22. Axdrenalin

    Axdrenalin [H]ard|DCer of the Month - Nov. 2009

    Messages:
    6,176
    Joined:
    Jan 28, 2004
    Wow Kendrak, do you still have that unit?! I'm still folding three GPUs on one of those three Bad Axe2 MBs I had. It would be nice to know this one is coming back to life! :cool:

    Ax
     
  23. Kendrak

    Kendrak [H]ard|DCer of the Year 2009

    Messages:
    22,871
    Joined:
    Aug 29, 2001
    No, I sold it off to capreppy and used the funds to build one of the first 2P 1366 rigs in the horde.

    You can ask Tobit how much fun he had helping me with that one.

    The 2.0 part would be a new version. I would need to know where the sweet spot on the GPU ppd/w curve is.
     
  24. [H]ugh_Freak

    [H]ugh_Freak [H]ard|Gawd

    Messages:
    1,359
    Joined:
    Apr 30, 2003
    Back when GPU folding first came out, I had a system that could change the lane width of the PCI-E slot manually. Testing those WUs, there was around a 10-12% difference between x16 and x1. That was on PCI-E rev. 1.0.

    Now with PCI-E rev. 2.0 and 3.0 boards, I'm not sure you'd see any noticeable difference outside a margin of error, unless the WUs are passing LOTS more info across the bus than they used to. Just because the WU is bigger doesn't mean it's moving more info across the bus. My guess would be it's less pronounced than the effects of RAM timings for SMP.
     
  25. rhavern

    rhavern [H]ard|DCer of the Month - Apr. 2013/Oct. 2014

    Messages:
    505
    Joined:
    Mar 26, 2011
    I am very excited about this. I was big into GPU folding prior to joining [H] and getting a 4P built. I still have 11 GPUs spinning, down from 15. If this bonus is worthwhile, it may be time to retire the G92 cards and replace them with more 460 or 560ti. Also, this may finally even out the points discrepancy between core_15 2.22 and 2.25. We can only hope.

    Exciting times!
     
  26. rhavern

    rhavern [H]ard|DCer of the Month - Apr. 2013/Oct. 2014

    Messages:
    505
    Joined:
    Mar 26, 2011
    As your PCIe question has been answered, I can help you with the multiple-GPU question. When GPUs were the king of the hill, I built 5 boxes as dedicated folders. Each box had at least two PCIe GPU slots, and some had three. I went the GPU route as I could add another GPU when funds and ebay availability allowed. Bear in mind that nvidia is currently the only real option for dedicated folding boxes, due to the low PPD/high CPU usage of AMD GPUs.

    Another thing to consider is that small WUs really hammer the CPU-GPU communication, and a slow CPU can cripple your PPD, as the CPU can't keep multiple GPUs fed. Case in point: I had 2x 9800GX2s and a GTS250 (that's 5 GPU clients) on an Asus P5N32-E SLI Plus with a single-core CPU (no folding client running). When a new small-WU project was released, my PPD dropped dramatically, maybe 40-60% as I vaguely recall. The CPU did not show high utilization in Task Manager; however, when I swapped the CPU for a quad-core, the PPD returned to previous levels. Currently, there aren't any WUs small enough to cause this problem, but it is something to consider. As a result, all my GPU boxes have at least a dual-core CPU and I haven't noted any PPD drop due to CPU bottlenecking.

    So, with all that in mind, and remembering that these boxes were built one to two years ago on a shoestring budget, here we go:
    • ASUS M3N-HT Deluxe, AMD64x2 4000+, 3x GTX460
    • MSI P7N Diamond, Q6600 @ stock, 1x GTX460 2x 8800GTS 512M
    • DFI LanParty RDX-200, AMD64x2 4600 S939, 2x GTX460
    • Asus P5N32-E SLI plus, QX6700 @ stock, 1x GTS250 1x 9800GX2 1x empty slot

    Power supplies range from 750-1000W and are Corsair or CoolerMaster. Don't skimp on your PSU; you are going to need a lot of amps @ 12V for long periods of time. Make sure you shift a shedload of air, and if they aren't in a hosting facility, clean the dust out of everything regularly. I'm thinking quarterly at least. Heat kills. I had one of those tiny northbridge fans die; that took out an entire system, CPU and 2x 9800GX2s. I now use CoreTemp and set the system to shut down on high temp. I don't know if it would have caught the northbridge failure, but it can't hurt. I use sendEmail from CoreTemp to email me if an overtemp event occurs.

    I use MSI Afterburner to OC the shaders and crank up the fans; leave memory speeds at stock, as faster just makes more heat for very little return, ~1%. I currently use nvidia v285 WHQL drivers, as they are proven and new enough to support the latest projects. I monitor the whole thing with one instance of HFM.net, set up to create a web page on every refresh, and use Apache as my web server. Lastly, I set up W7 to auto-login, as all Microsoft OSes after XP prohibit GPU access from a service.

    Any further questions, just ask, but probably in a new thread, I've hijacked this one enough ;-)

    Also: All [H] GPU folders, make sure you enter your passkey on your GPU clients if you haven't already!
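The overtemp-email idea above (CoreTemp triggering sendEmail) can be sketched generically. This is a stand-in illustration, not the actual CoreTemp hook; the SMTP host, addresses, and the 90 C threshold are all assumptions:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical threshold; pick whatever your cards can actually tolerate.
THRESHOLD_C = 90

def alert_if_hot(gpu_temps_c, smtp_host="localhost",
                 sender="folder@example.com", rcpt="me@example.com"):
    """Email a warning if any GPU temperature meets the threshold.
    Returns True if an alert was sent, False otherwise."""
    hot = [(i, t) for i, t in enumerate(gpu_temps_c) if t >= THRESHOLD_C]
    if not hot:
        return False
    msg = EmailMessage()
    msg["Subject"] = "GPU overtemp: " + ", ".join(
        f"GPU{i}={t}C" for i, t in hot)
    msg["From"], msg["To"] = sender, rcpt
    msg.set_content("Check the fans or shut the box down.")
    with smtplib.SMTP(smtp_host) as s:
        s.send_message(msg)
    return True
```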
     
    Last edited: Oct 23, 2012
  27. jebo_4jc

    jebo_4jc [H]ard|DCer of the Month - April 2011

    Messages:
    14,628
    Joined:
    Apr 8, 2005
    Interesting. Will be keeping my eye on things
     
  28. tear

    tear [H]ard|DCer of the Year 2011

    Messages:
    1,567
    Joined:
    Jul 25, 2011
    It seems that the first GPU QRB units (8057) have hit the streets.

    Whoever gets one, can you please make a copy of client's work directory (look in
    C:\Users\%USERNAME%\AppData\Roaming\FAHClient or similar) and shoot me a PM?

    Thanks much in advance :D
     
  29. jebo_4jc

    jebo_4jc [H]ard|DCer of the Month - April 2011

    Messages:
    14,628
    Joined:
    Apr 8, 2005
    Interesting. I assume a passkey must be entered. Any word on whether a flag should be set?
     
  30. -alias-

    -alias- Limp Gawd

    Messages:
    374
    Joined:
    Jul 15, 2012
    The first 2 results have hit the ground, and they look good:

    Configuration: FX-8120@4500 1.5V, 8GB, W7x64, GTX580@865/1000, +SMP 8
    Project number: 8057 (0-16-3)
    Work unit: p8057
    WU size: 56.3 KB
    WU result: ~345.24 KB
    Credit: 22341.73
    Frames: 100
    Core: OPENMMGPU
    Server IP: 171.67.108.144
    PPH (Points Per Hour): 9613.17
    PPD (Points per day): 230716
    Avg time per step: 0:01:23
    Bonus factor: 8.7649
    Client.cfg: bigpackets=big
    Completed: 19%
    FahSpy 2.0.1

    Configuration: i7-2600K@4500, 8GB, W7x64, GTX570@825/1000 1.075V
    Project number: 8057 (0-6-4)
    Work unit: p8057
    WU size: 56.3 KB
    WU result: ~345.24 KB
    Credit: 19599.77
    Frames: 100
    Core: OPENMMGPU
    Server IP: 171.67.108.144
    PPH (Points Per Hour): 6473.32
    PPD (Points per day): 155360
    Avg time per step: 0:01:49
    Bonus factor: 7.6892
    Client.cfg: bigpackets=big
    Completed: 26%
    FahSpy 2.0.1

    Link: http://foldingforum.org/viewtopic.php?f=66&t=22808&start=15#p227226
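As a sanity check on the two p8057 reports quoted above: PPD follows directly from credit and time-per-frame. The GTX570 line reproduces exactly; the GTX580 lands within about 1% of the reported figure, presumably because its listed TPF is rounded to the second:

```python
# PPD = credit * seconds_per_day / (TPF_seconds * frames); credit and TPF
# figures are taken from the two p8057 reports above (100 frames each).
def ppd_from_tpf(credit, tpf_seconds, frames=100):
    return credit * 86400 / (tpf_seconds * frames)

gtx570_ppd = ppd_from_tpf(19599.77, 109)  # TPF 0:01:49 -> matches 155360
gtx580_ppd = ppd_from_tpf(22341.73, 83)   # TPF 0:01:23 -> ~1% above 230716
```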
     
    Last edited: Oct 27, 2012
  31. DooKey

    DooKey [H]ard DCOTM x4

    Messages:
    5,732
    Joined:
    Apr 25, 2001
    If I could get my 3 GPUs to run it I'd go for it. I have 2 680s and a 650 Ti in one rig. If someone can help me get it going I could rock out some PPD.
     
  32. jebo_4jc

    jebo_4jc [H]ard|DCer of the Month - April 2011

    Messages:
    14,628
    Joined:
    Apr 8, 2005
    o_O
    Is there a missing decimal there?
     
  33. Grandpa_01

    Grandpa_01 [H]ard|DCer of the Year 2013

    Messages:
    1,157
    Joined:
    Jun 4, 2011
    Nope, nothing is missing there: 230,716 PPD.
     
  34. Skripka

    Skripka [H]ardForum Junkie

    Messages:
    13,281
    Joined:
    Feb 5, 2012
    Yikes.

    Yay for even more irrelevant SMP CPU folding.
     
  35. -alias-

    -alias- Limp Gawd

    Messages:
    374
    Joined:
    Jul 15, 2012
    Can we all fold this WU, or is it only for the beta testers?

     
  36. Kendrak

    Kendrak [H]ard|DCer of the Year 2009

    Messages:
    22,871
    Joined:
    Aug 29, 2001
  37. Skripka

    Skripka [H]ardForum Junkie

    Messages:
    13,281
    Joined:
    Feb 5, 2012
    For folks who spent big bucks on dedicated 4P servers... most certainly. We need to crank output, as EVGA just got a Christmas present.
     
  38. DooKey

    DooKey [H]ard DCOTM x4

    Messages:
    5,732
    Joined:
    Apr 25, 2001
    Did you expect differently? PG has done this crap forever.
     
  39. jojo69

    jojo69 [H]ardForum Junkie

    Messages:
    10,155
    Joined:
    Sep 13, 2009
    no kidding
     
  40. Skripka

    Skripka [H]ardForum Junkie

    Messages:
    13,281
    Joined:
    Feb 5, 2012
    I think he was selling off the GPUs for 4p last I knew...