News from PG re GPU benchmarking

Guess eVGA/NVIDIA have increased their support for FAH.

In the long run, though, more competition between [H] and eVGA benefits overall research production.
 
It might also aid us; there are more than a few people outside the top 20 that fold on GPUs, and it may encourage a few who have dropped GPU folding to restart.
 
We've been calling for this for how long? EVGA cries about it and a week later it's in beta testing... give me a friggin' break.
 
My guess is that this has been in internal testing for a while now, and to be fair to EVGA they have not been the only team asking for QRB to be applied to GPU projects. Perhaps Kendrak can give us a bit more info?
 
I'm sure almost everyone who GPU folds has been asking for this since the SMP QRB was released. It has nothing to do with a single team "crying"; the entire donor base has been asking for it. It's been a long, long time coming and it will be very interesting to see how it plays out once the beta testing is done.
 
This has been talked about off and on in the DAB for over a year.

The part that made this truly possible is the advent of running the same WU on both SMP and GPU. So now the same TPF = the same PPD, whether it is on a CPU or a GPU.

Once I get an idea of the numbers I will share. However, I might allude to something that was said: GPU will most likely get a boost with this. Outside of that possibility I know nothing at this moment.

Maybe it is time for the return of Origami Master ala 2.0

[image attachment: omaster3.jpg]
 
I've made an enquiry about ATI and EVGA are not the only team that have asked for this to happen
I can't quite parse that sentence structure, but I hope you're not wasting any researcher's time on a question like that.

http://foldingforum.org/viewtopic.php?f=16&t=19042&p=190618&hilit=+GPU+QRB#p190618

July 2, 2011: We have been considering QRB for GPUs, which should finish the rebalancing we've had in mind to do. There are some issues to work out though, which is why we haven't made that change now.
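
For anyone wanting a rough feel for what a QRB does to credit: the SMP bonus is usually described as base points scaled by max(1, sqrt(k * deadline / elapsed)), with k a per-project constant. Below is a minimal Python sketch, assuming (my guess only) that a GPU QRB would follow the same shape; the base points, k, and times are purely hypothetical:

Code:
import math

def qrb_credit(base_points, k, deadline_days, elapsed_days):
    # Commonly cited F@H bonus formula; k is a per-project constant.
    bonus = max(1.0, math.sqrt(k * deadline_days / elapsed_days))
    return base_points * bonus

# Hypothetical numbers purely for illustration: a WU with 2500 base points
# and k = 2, returned in 0.1 days against a 3-day deadline.
print(round(qrb_credit(base_points=2500, k=2, deadline_days=3, elapsed_days=0.1)))  # ~19365

The faster the WU comes back relative to its deadline, the bigger the multiplier, which is exactly why a QRB should favour fast GPUs.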
 
Time to put on the tin hat... damn, it is too big; I must have lost too much hair from the last conspiracy. No, I cannot see a conspiracy here, just Stanford attempting to do what we have all been asking for: pay = work. If a GPU is faster at doing the same work, then it should be reflected in points; if it is not, then it is not. Let's see where the chips fall; if they are faster, then it is going to cost us less to fold. :D

I can buy a lot of GPUs for the cost of a 4P.
 
Just Stanford attempting to do what we have all been asking for: pay = work. If a GPU is faster at doing the same work, then it should be reflected in points; if it is not, then it is not.
Amen!

I can buy a lot of GPUs for the cost of a 4P.
But what about power consumption? It may be tricky balancing the overall cost when you consider what power hogs some of those GPUs are.
 
It all comes down to cost of ownership if you are going pro when folding.
 
I think it will level out. The only way QRB works for GPU is if you have the latest GPU and they significantly increase the size of the WU. This will put old GPUs out of the game.
 
Does the PCIe slot configuration (i.e. x16, x8, x4) matter for folding 2 or 3 cards on the same MB?
I melted down the fans on my 560 GTX and purchased another one to fold while I RMA'ed the original.
My MB only has one PCIe slot and I was thinking of swapping out the board to run both cards at the same time. But since I have not read a lot about folding multiple GPUs, I'm now trying to find the info I need.

Thanks :)
 
Amen!


But what about power consumption? It may be tricky balancing the overall cost when you consider what power hogs some of those GPUs are.

He he, each of my 4Ps uses 1080 to 1190 watts, while my GTX 580 uses 175 watts measured at the wall while folding. That is a lot of GPUs that could be run, so power is no big concern for me anyway. It all depends on where it all shakes out; I may be better off shutting down the 4Ps and running GPUs. We shall see. :)
 
A 4x slot is plenty (if the back of the slot is opened up to let the card hang out).

On x1 there is a slowdown.

There have been a number of people who have done ext cables and such to make it work.
 
Thanks Kendrak. The slots are all x16 in length, but most board specs say x16 with one slot filled, and two slots at x8 electrically with two slots filled, or one at x16 and the second at x8.
I've seen a lot of folks run SLI on their gaming rigs with two cards running in slots at x8, so I figured it might not make a difference with folding.
The boards that can run two PCIe slots at x16 at the same time are few and costly, and I'm too cheap to do that these days.
I've spent too much over the years on this and have learned to stay away from the bleeding edge. :eek:

Thanks :D
 
Once this is completed, my SR-2 computer will finally come into its own. I have to replace the PSU in the rig with a Corsair 1200W, cram in the 5 GTS 450s I have lying around, and fold onward. I guess each card will then give up to 100 - 150K PPD, if this is to mean anything. The power consumption of such a unit will be 800 - 900 W if I leave the CPUs folding as well, but at default speeds. Yes, I will welcome this new regime very much. :D

It is about time that the GPU folders get paid for their efforts. ;)
 
I'm very happy to see this finally happening.
Very curious to see the results of the benchmarks! :eek:
 
I wonder how much PPD a 670/680 will get if this is true. My slow 4P project is finally starting to happen, and I was hoping I'd be able to get an edge over most guys who are above me in the overall ranking :p Won't happen I suppose...
 
Maybe it is time for the return of Origami Master ala 2.0

Wow Kendrak, do you still have that unit?!! I'm still folding three GPU's on one of those three Bad Axe2 MB's I had. Would be nice to know this one is coming back to life! :cool:

Ax
 
Wow Kendrak, do you still have that unit?!! I'm still folding three GPU's on one of those three Bad Axe2 MB's I had. Would be nice to know this one is coming back to life! :cool:

Ax

No, I sold it off to capreppy and used the funds to build one of the first 2P 1366 rigs in the horde.

You can ask Tobit how much fun he had helping me with that one.

The 2.0 part would be a new version. I would need to know where the sweet spot on the GPU PPD/W curve is.
 
Thanks Kendrak. The slots are all x16 in length, but most board specs say x16 with one slot filled, and two slots at x8 electrically with two slots filled, or one at x16 and the second at x8.
I've seen a lot of folks run SLI on their gaming rigs with two cards running in slots at x8, so I figured it might not make a difference with folding.
The boards that can run two PCIe slots at x16 at the same time are few and costly, and I'm too cheap to do that these days.
I've spent too much over the years on this and have learned to stay away from the bleeding edge. :eek:

Thanks :D

Back when GPU folding first came out, I had a system that could change the link width of the PCI-E slot manually. Testing those WUs, there was around a 10-12% difference between x16 and x1. That was on PCI-E rev. 1.0.

Now with PCI-E rev 2.0 and 3.0 boards, I'm not sure you'd see any noticeable difference outside a margin of error, unless the WUs are passing LOTS more info across the bus than they used to. Just because the WU is bigger doesn't mean it's moving more info across the bus. My guess would be that it's less pronounced than the effect of RAM timings for SMP.
 
I am very excited about this. I was big into GPU folding prior to joining [H] and getting a 4P built. I still have 11 GPUs spinning, down from 15. If this bonus is worthwhile, it may be time to retire the G92 cards and replace them with more 460s or 560 Tis. Also, this may finally even out the points discrepancy between core_15 2.22 and 2.25. We can only hope.

Exciting times!
 
Does the PCIe slot configuration (i.e. x16, x8, x4) matter for folding 2 or 3 cards on the same MB?
I melted down the fans on my 560 GTX and purchased another one to fold while I RMA'ed the original.
My MB only has one PCIe slot and I was thinking of swapping out the board to run both cards at the same time. But since I have not read a lot about folding multiple GPUs, I'm now trying to find the info I need.

Thanks :)

As your PCIe question has been answered, I can help you with the multiple GPU question. When GPUs were king of the hill, I built 5 boxes as dedicated folders. Each box had at least two PCIe GPU slots, and some had three. I went the GPU route as I could add another GPU when funds and eBay availability allowed. Bear in mind that nvidia is currently the only real option for dedicated folding boxes, due to the low PPD/high CPU usage of AMD GPUs. Another thing to consider is that small WUs really hammer the CPU-GPU communication, and a slow CPU can cripple your PPD, as the CPU can't keep multiple GPUs fed. Case in point: I had 2x 9800GX2s and a GTS250 (that's 5 GPU clients) on an Asus P5N32-E SLI Plus with a single-core CPU (no CPU folding client running). When a new small-WU project was released, my PPD dropped dramatically, maybe 40-60% as I vaguely recall. The CPU did not show high utilization in Task Manager; however, when I swapped the CPU for a quad-core, the PPD returned to previous levels. Currently, there aren't any WUs small enough to cause this problem, but it is something to consider. As a result, all my GPU boxes have at least a dual-core CPU and I haven't noted any PPD drop due to CPU bottlenecking.

So, with all that in mind and remembering that these boxes were built from one to two years ago on a shoe string budget, here we go:
  • ASUS M3N-HT Deluxe, AMD64x2 4000+, 3x GTX460
  • MSI P7N Diamond, Q6600 @ stock, 1x GTX460 2x 8800GTS 512M
  • DFI LanParty RDX-200, AMD64x2 4600 S939, 2x GTX460
  • Asus P5N32-E SLI plus, QX6700 @ stock, 1x GTS250 1x 9800GX2 1x empty slot

Power supplies range from 750-1000W and are Corsair or CoolerMaster. Don't skimp on your PSU, you are going to need a lot of amps @ 12v for long periods of time. Make sure you shift a shed load of air and if they aren't in a hosting facility, clean the dust out of everything regularly. I'm thinking quarterly at least. Heat kills. I had one of those tiny northbridge fans die. That took out an entire system, CPU and 2x 9800GX2s. I now use coretemp and set the system to shutdown on high temp. Don't know if it would have caught the northbridge failure, but it can't hurt. I use sendEmail from coretemp to email me if an overtemp event occurs.

I use MSI Afterburner to OC the shaders and crank up the fans; leave memory speeds at stock, since faster just makes more heat for very little return (~1%). I currently use NVIDIA v285 WHQL drivers, as they are proven and new enough to support the latest projects. I monitor the whole thing with one instance of HFM.net, set up to create a web page on every refresh, with Apache as my web server. Lastly, I set up W7 to auto-login, as all Microsoft OSes after XP prohibit GPU access from a service.
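
For anyone who would rather script the overtemp guard than use CoreTemp, something along these lines does the same job. This is only a rough sketch: the temperature limit, mail addresses, and SMTP host are placeholders, and whether nvidia-smi reports temperature depends on your card and driver.

Code:
import subprocess, smtplib, sys, time
from email.message import EmailMessage

TEMP_LIMIT_C = 90      # placeholder threshold; pick your own
POLL_SECONDS = 60

def gpu_temps():
    # Read each card's temperature via nvidia-smi (one integer per GPU).
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"],
        text=True)
    return [int(line) for line in out.splitlines() if line.strip()]

def alert(temps):
    # Send a warning mail; the host and addresses here are placeholders.
    msg = EmailMessage()
    msg["Subject"] = "Folding box overtemp: %s" % temps
    msg["From"] = "folder@example.com"
    msg["To"] = "me@example.com"
    msg.set_content("GPU temperatures %s C, limit %d C" % (temps, TEMP_LIMIT_C))
    with smtplib.SMTP("localhost") as server:
        server.send_message(msg)

while True:
    temps = gpu_temps()
    if any(t >= TEMP_LIMIT_C for t in temps):
        alert(temps)
        sys.exit(1)   # or trigger an OS shutdown here, like the CoreTemp setup above
    time.sleep(POLL_SECONDS)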

Any further questions, just ask, but probably in a new thread, I've hijacked this one enough ;-)

Also: All [H] GPU folders, make sure you enter your passkey on your GPU clients if you haven't already!
 
It seems that the first GPU QRB units (8057) have hit the streets.

Whoever gets one, can you please make a copy of the client's work directory (look in C:\Users\%USERNAME%\AppData\Roaming\FAHClient or similar) and shoot me a PM?

Thanks much in advance :D
 
Interesting. I assume a passkey must be entered. Any word on whether a flag should be set?
 
The first 2 results have hit the ground, and they look good:

Configuration: FX-8120@4500 1.5V, 8Gb, W7x64, GTX580@865/1000, +SMP 8
Project number: 8057 (0-16-3)
Work unit: p8057
WU size: 56.3 KB
WU result: ~345.24 KB
Credit: 22341.73
Frames: 100
Core: OPENMMGPU
Server IP: 171.67.108.144
PPH (Points Per Hour): 9613.17
PPD (Points per day): 230716
Avg time per step: 0:01:23
Bonus factor: 8.7649
Client.cfg: bigpackets=big
Completed: 19%
FahSpy 2.0.1

Configuration: i7-2600K@4500, 8Gb, W7x64, GTX570@825/1000 1.075V
Project number: 8057 (0-6-4)
Work unit: p8057
WU size: 56.3 KB
WU result: ~345.24 KB
Credit: 19599.77
Frames: 100
Core: OPENMMGPU
Server IP: 171.67.108.144
PPH (Points Per Hour): 6473.32
PPD (Points per day): 155360
Avg time per step: 0:01:49
Bonus factor: 7.6892
Client.cfg: bigpackets=big
Completed: 26%
FahSpy 2.0.1

Link: http://foldingforum.org/viewtopic.php?f=66&t=22808&start=15#p227226
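
As a rough sanity check on those numbers (the posted TPF is rounded to the second, so this lands within a percent or two of the reported 230,716 PPD), and folding in the ~175 W at-the-wall figure quoted earlier in the thread for a GTX 580:

Code:
def ppd_from_tpf(credit, tpf_seconds, frames=100):
    # Estimate points per day from per-WU credit and average time per frame.
    wu_seconds = tpf_seconds * frames
    return credit * 86400 / wu_seconds

ppd = ppd_from_tpf(credit=22341.73, tpf_seconds=83)  # GTX 580 result above, 1:23 TPF
print(round(ppd))        # ~232,600 PPD (reported: 230,716)
print(round(ppd / 175))  # ~1,330 PPD per watt at 175 W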
 
If I could get my 3 GPUs to run it I'd go for it. I have 2 680s and a 650 Ti in one rig. If someone can help me get it going I could rock out some PPD.
 
For folks who spent big bucks on dedicated 4P servers... most certainly. We need to crank up output, as EVGA just got a Christmas present.

Did you expect differently? PG has done this crap forever.
 