Interesting read

Yes, I read that, and it gives good insight into the future of F@H. Everyone should read it.
 
Vijay's point #5 is very encouraging. Finally, something will be forthcoming for ATI relatively soon. /crosses fingers
 

And hopefully in time for winter. Since it will enter beta testing fairly soon, I wonder how much longer it will take to reach open beta so all of us can try it out. I can't afford the extra power draw right now, but I'll make other sacrifices if the PPD/watt ratio is very good. Maybe we will get an ATI GPU3 client before the ATI Cayman and Cayman XT hit retail later this year. Of course, I'm only dreaming...
 
Interesting points being presented there, and VJ's remarks are encouraging as well. I don't expect we'll get to see the end results we're all hoping for as far as points consistency, or "consumer grade" explanations of what kind of progress is being made with our contributions, but if some of the things he remarked on come true before the end of the year, then I think we're going to see a resurgence of interest in the overall F@H project. While I don't own very many ATi/AMD cards myself, implementing a compatible client alone would be a huge stride forward for all of the existing people out there who want to process work on them.
 
Getting a good ATI client running might entice me to start folding again. Might.
 
It's also this team's only hope to pass EVGA once again and regain our rightful position at #1. :D

I have mostly ATI GPUs. I can't afford to shovel more $$$ into retooling to Nvidia cards.
 
Yeah, my 5870 has sat idle since it replaced my 8800GT early this spring. That cut my PPD in half. I'd love to see an ATI client to warm the office with during the winter months!
 

I still have three 4xxx and one 5xxx AMD/ATI card(s) waiting for this. It's not worth sucking up CPU cycles to get 4K PPD from my 5870 at this point, but it's a far quieter and cooler card than my GTX 470, which I refuse to have in my office/main rig for that reason.
 
I just switched from running multiple clients to SMP, and my points doubled on my Q8200 rig. It's good to help fold to cure cancer, but competition is what drives so many people to fold. There are some valid points here, and not addressing them is just gonna end up hurting everyone involved.



 
Thanks for the link.

You know, what we need to do is workshop a better points scheme, come up with solutions, and evangelise it. I can't believe there is no better solution than what is happening right now.
 

A lot of us gamers are running ATI hardware mainly because they were the best of the best for the 6 months when nVidia didn't have anything to offer enthusiasts. A lot more are running ATI hardware because we are sick of nVidia's TWIMTBP crap.
 
Just found this on FF; it's an interesting read. What's even better are some of Vijay's responses. :)

http://foldingforum.org/viewtopic.php?f=16&t=16375&start=0


Whoa, Vijay responding to something in the forum... gotta read this one for sure.


Interesting, but we'll see what happens.


Well damn, you pulled a 6701.

I hate those things, as my 980X gets lower PPD with them than my Bloomfield does when it pulls a bigadv.

In his answer he was talking about using the person's own system as the benchmark, then determining the PPD bonus off that instead of a test system.

That's basically what we have been preaching since day one of the SMP client release. It will work. He seems to think people can cheat it, but the way you stop cheaters is by making the client run the benchmark every time it's started, instead of a single time, when a person could, say, overclock their processor to 4.5 GHz long enough to run the benchmark, then restart and put it back at 4 GHz or 3.8 GHz and double their points. That's really what Vijay is worried about, but there are ways to stop that.

The GPU client, on the other hand, would be way too easy to cheat under that points system, so you would have to keep running it the way it currently is. Unless, say, you run a benchmark WU between every third or fourth WU, or make it random so the person can't catch it and change their shader clocks to inflate their base point value.
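Something like this, just as a rough sketch of the idea (none of this exists in the real client; get_work, run_wu, and run_benchmark are made-up placeholders):

import random

BENCH_EVERY = (3, 4)  # re-benchmark after every 3rd or 4th WU, at random

def client_loop(get_work, run_wu, run_benchmark):
    # Calibrate once at startup, then re-check at unpredictable
    # intervals so nobody can overclock just for the benchmark.
    baseline = run_benchmark()
    countdown = random.randint(*BENCH_EVERY)
    while True:
        run_wu(get_work(), baseline)
        countdown -= 1
        if countdown == 0:
            baseline = run_benchmark()  # surprise re-calibration
            countdown = random.randint(*BENCH_EVERY)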
 
1. Running a benchmark before each unit would take time and would waste a considerable amount of CPU resources just for the sake of keeping the points system in check. That would not be practical.

2. Running a benchmark at a high speed and then going to a slower one for general use would actually decrease the amount of points you get, since your F@H performance would be below par compared to the benchmark speed.
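A quick back-of-the-envelope example of point 2, with made-up numbers, assuming points scale with your actual pace relative to your own benchmark pace:

# Made-up numbers, assuming points scale with actual pace vs. benchmark pace.
bench_ghz = 4.5        # clock during the one-time benchmark
daily_ghz = 3.8        # clock during normal folding
base_points = 1000     # credit for matching your own benchmark pace
earned = base_points * (daily_ghz / bench_ghz)
print(round(earned))   # 844 -> gaming the benchmark costs you points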
 
Why run benchmarks? Do they not already have a performance fraction, updated every work unit?

They are sitting on a sea of real world benchmark data already.
 
The idea is to run a benchmark on each individual computer so the client can generate appropriate PPD figures itself, rather than comparing performance to Stanford's own benchmark machine. However, I think that concept has too many flaws in it to be successful.
 
Agreed. I just wonder why we need benchmark machines at all, when every unit is already being timed.

I don't mind minor variation in things. I do mind a 2684 being treated the same as a 2692 yet scoring less than 60% of the points. And I do mind that to get either, you have to deal with the even worse 670x, which scores 33% of the points.

That's a 3x variation on the same machine. Any new points system will have its own quirks, but you would have to work hard to make a worse system than what we have right now.
 
Honestly, the servers deciding the points/bonus is the best solution.

What doesn't make sense is why the servers aren't more reliable. I keep hearing about power outages being the issue, but what datacenter can't survive outages? If it can't run for 24+ hours off the grid, how the hell can anyone take it seriously and host their servers there?
 
Stanford runs every new unit on the benchmark machine to figure out how long it takes. Points values for each unit are then generated based on a target PPD level and the amount of time recorded. That's why the benchmark machine is needed.
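In other words, something along these lines (the numbers are made up; only the target-PPD-times-time idea comes from how the benchmarking is described):

target_ppd = 1920.0       # PPD the benchmark machine is meant to earn
bench_hours = 10.0        # measured completion time on the benchmark box
points = target_ppd * (bench_hours / 24.0)
print(points)             # 800.0 -> base point value assigned to the unit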
 
I read that, but bigadv units are all awarded the same PPD for the same time per frame, so there's not much benchmarking going on there. It's like they benched the 2684 and then just sent the 2685/6/92 out into the world.

That would also imply that somewhere, somebody on the same type of machine as they bench on would find 6701s just as good as 6060s. Would it not? :confused:
 
I believe they revised their benchmarking method between when they did the original A3 SMP units and when they benchmarked 6701 and 6702.
 
The main problem with the benchmark system is that PG uses machines that aren't what most people will use, under an OS that hardly anyone runs. It might be very consistent for PG, but when different machines crunch the units, we see discrepancies.

A better method would be to have a pool of 10 benchmark machines benchmark a given WU, then do some calculation to get a points value that keeps the variation in PPD under 5-10%. That way, it gets closer to the PPD expectations everyone has.
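Roughly like this, as a sketch (the median over the pool is just one way to do the "some calculation" part; points_from_pool and the times are invented for illustration):

import statistics

def points_from_pool(hours, target_ppd=1920.0):
    # Median completion time across the pool, so one odd machine
    # can't skew the unit's value.
    median_h = statistics.median(hours)
    points = target_ppd * (median_h / 24.0)
    # Relative spread across the pool; flag the WU if it's over ~5-10%.
    spread = statistics.pstdev(hours) / statistics.mean(hours)
    return points, spread

times = [9.6, 10.1, 9.9, 10.4, 10.0, 9.8, 10.2, 9.7, 10.3, 10.0]
pts, spread = points_from_pool(times)
print(round(pts), f"{spread:.1%}")  # e.g. 800 2.4%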
 
I also agree that the points awarded between different work units are absurdly out of proportion. So much so that even mentioning the word "benchmark" is laughable.

I had other comments, but decided they would do nothing but piss off PG. So I'll keep those to myself. :)
 


1. No, not before every WU, just when you start the client.
1a. No, it wouldn't waste resources, because you'd still get base-value points for the benchmark. I mean, hell, you have been doing this long enough, so I'm pretty sure you remember the days when 9 out of 10 WUs we were running on GPUs weren't even for science; they were just test WUs.

2. Yeah, I screwed that up; I meant to say lower, but you got the point.



True, but then Nvidia cards wouldn't get as many points, since all the benchmarks are based off an ATI card on a broken, two-year-old ATI client.
 
Those units were used to test the accuracy of calculations, not to test performance. Big difference. Running a benchmark just for performance testing would be a waste of resources as far as the project is concerned.
 
Is it just me, or is that 7im character dumber (with an emphasis on the "b") than a bag of hammers rusting in the rain used to beat a bag of crap?

I mean, what a brainwashed folding fanboy if I've ever seen one. Sorry about the off-topic mini-rant, but he can't hold an argument worth its weight in dog feces and is so out of touch with everyone around him...

What is his connection to the folding@home project outside of beta testing anyway?

Sheesh.
 

Tell us what you really think - don't sugar coat - we can handle it ;)

But yes, he seems a little bit Iraqi Information Minister: the only person I have seen who explains it as 6701 = normal and fantastic, 2684 = a better kind of normal, and 2685 = SUPER bonus normal...
 
This right here is making me drool thinking of the possibilities if we only had a decent ATI client.

Unfortunately, I feel Stanford is in the same pocket as EVGA!

I don't see an ATI client any time soon, but I hope I'm wrong!
 
If you think Stanford is being paid off by nVidia, you need to take off the tinfoil hat, because it must be microwaving your brain :p.

There already is an ATI client.

Gary means an ATI client that isn't a POS, you smartass. :)
 
And one can't lay all of the blame for the lack of an OpenCL client on Stanford. OpenCL in general isn't a very mature technology, and the specifications keep changing all the time.
 
From what I gathered from that thread (especially Pande's post), the ATI/AMD client (i.e., a better client) should be in beta within a few weeks.

vijaypande said:
ATI deprecated their GPU language, Brook, in favor of OpenCL. So, we were forced to completely rewrite our GPU core for ATI in openCL. This takes time, especially to write highly optimized code. We have been internally testing this and expect to start beta testing it shortly (days to weeks). It will require client changes that are now built into the v7 client.

He also said that he expects the v7 client by the end of the year or sooner.
 
Well, in June he said we'd probably see an open beta in a "couple months"... it's been more than a couple of months now. PG's time estimates are always skewed.
 
Let me guess: the next one will be "when it's done."
 
LOL, the phrase "soon" has its own meaning in the programming world.

It could mean tomorrow, or it could mean when hell freezes over.
 