-bigadv points adjustment

Kendrak
http://foldingforum.org/viewtopic.php?f=24&t=20788

Kasson and I have been talking back and forth on this topic in the DAB.
I have to say this is the best outcome I could have hoped for, given the different things that could have come about.

I will continue to work with PG to give them the information they need to make choices that will benefit us out here folding and be fair.

PG is trying to keep and maintain good relations with all of us. They are listening.
 
Good work, guys! We brought the data needed for these changes to be made. Thanks Patriot, R-Type, and everyone else who put their performance data in the datasheet. If we had gone off of 7im's and Punchy's work, we would have a 6903-class unit with less than 6901-class credit.
 
I did not fabricate any data in the beta forum thread - it was all measured. I think what you meant was "if we had gone off of 7im's and Punchy's opinions", but I never expressed any opinion about 8101 points, only that 6903/4 are overvalued. If they want to discourage GPU, uniprocessor and SMP folding, they certainly are doing it.
 
Punchy, why are you so dead set against the bigadv points? Stanford obviously wants these units done on server-class rigs and is enticing people with their PPD, and it is working. People are building them much more frequently now that Stanford has announced the new requirements. I am pretty sure Stanford knows the value of speed when it comes to the science and is assigning the point values accordingly.

To me it says the server-class rigs, and their ability to get work done in a timely manner, are pretty important to them.
 
They are increasing the gap between the "haves" and the "have-nots", without any true justification. I think in their rush to advance the science of distributed computing (which is their real contribution, not the medical research - see a post in this forum), they are forgetting their roots in using spare computing power of existing home computers.
Since all of my systems would make the deadlines of the first bigadv-16 project as is, I should be happy, because the change will allow me to get even more points than before compared to SMP, GPU and uniprocessor folders. To me, though, the points system has lost all fairness, and I'm not interested in "competing" when the system isn't fair.
It would be great to have a poll of a true cross-section of all folders to see where overall opinions lie. The most active, vocal folders are the most likely to buy larger and larger systems, but I think they are a tiny fraction of the overall user community.
 
They are increasing the gap between the "haves" and the "have-nots", without any true justification. <snip>

Cry more?

You are right, there is no reason why 2,500,000-atom units are valued more than 600-atom units.
 
I did not fabricate any data in the beta forum thread - it was all measured. I think what you meant was "if we had gone off of 7im's and Punchy's opinions", but I never expressed any opinion about 8101 points, only that 6903/4 are overvalued. If they want to discourage GPU, uniprocessor and SMP folding, they certainly are doing it.

GPU folding will not improve anytime soon, while SMP and uniprocessor are fine.
 
You are right, there is no reason why 2,500,000-atom units are valued more than 600-atom units.
Yes, that's exactly what I said (not, for the sarcasm-impaired).

There has never been anything but hand-waving justification of the QRB. If their project is all about science, they should be able to prove that the QRB represents the scientific value correctly.
 
Yes, that's exactly what I said (not, for the sarcasm-impaired).

There has never been anything but hand-waving justification of the QRB. If their project is all about science, they should be able to prove that the QRB represents the scientific value correctly.

They never would have bothered with a QRB without the need to get WUs in ASAP.

They cannot make new WUs without getting the current set finished. The faster that gets done, the faster the project can move forward.
 
Now I'm getting 160k on p8101 instead of 230k on a 6903, so I can cry more if I want.
 
Now I'm getting 160k on p8101 instead of 230k on a 6903, so I can cry more if I want.

You can only cry if you got the 8101 with bigadv flags set...
If you ran beta flags and pulled a beta unit, it's your own fault.
 
They are increasing the gap between the "haves" and the "have-nots"... <snip> ...the change will allow me to get even more points than before...
Careful...the [H]ard facts will come back to gnaw on you like mutant Culicidae...

4P 6174 current PPD:
6901 - 303,118.67 PPD
6903 - 488,575.32 PPD
6904 - 476,341.85 PPD
8101 - 140,396.43 PPD

4P 6174 revised PPD:
6901 - 363,750.87 PPD
6903 - 488,575.32 PPD
6904 - 476,341.85 PPD
8101 - 322,916.07 PPD



6900/1 are going away, so they're irrelevant.

6903/4 unchanged - for now.

8101 is roughly equal to 6901, at 68% of 6904 and 66% of 6903.


Right now my 4Ps get ~475K because most assigns are 6903/4. If 8101 is released with no further changes, I'll get 33% LESS folding them.

And what happens when 6903/4 are converted to bigadv-16 projects?

Vijay Pande said:
Then, after the new bigadv-16 projects stabilize, we will bring the bigadv-12 projects into line (points, deadlines) with the bigadv-16 projects and convert all projects to bigadv-16.

If there are no further changes to 8101, and 6903/4 are brought in line with it, my 4P PPD drops 33%. Period.
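
For anyone who wants to check those percentages, here's a quick sanity check in Python (every number is copied from the table above; the ~475K baseline is my current average, as stated):

p6903, p6904, p8101 = 488575.32, 476341.85, 322916.07
print(f"{p8101 / p6904:.1%}")       # 67.8% -> "68% of 6904"
print(f"{p8101 / p6903:.1%}")       # 66.1% -> "66% of 6903"
print(f"{1 - p8101 / 475000:.1%}")  # 32.0% -> roughly the third I'm quoting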

There has never been anything but hand-waving justification of the QRB. If their project is all about science, they should be able to prove that the QRB represents the scientific value correctly.
Honestly, I think you've been around long enough to know this...but just in case you actually don't understand QRB...

Before QRB, running four instances of SMP on a quad-core yielded better PPD than just one. Why? Because there is no way FAH can scale perfectly across multiple cores. For example, if four single-core clients returned a combined 4,000 PPD, one SMP client at 90% scaling would only net 3,600 PPD. Everyone who knew this ran as many clients as they could to maximize PPD, slowing the science to a crawl in the process. A linear QRB doesn't change this, because it would be no different than raising base points - the scaling losses would still encourage multiple clients. The only solution is an exponential QRB that grows faster than the PPD lost to core-scaling inefficiencies.

Ask yourself this simple question. Is it better for science if I run one client that finishes a 6903 each day, or run twelve clients, with each taking about 11 days to finish their 6903s?

Answer honestly and you'll understand...
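
To put numbers on that question, here's a minimal sketch, assuming the commonly quoted bonus formula final = base * max(1, sqrt(k * deadline / days_taken)); the base credit, k factor and deadline below are made-up stand-ins, not actual 6903 values:

import math

def final_points(base, k, deadline_days, days_taken):
    # Base credit times the quick-return multiplier (floored at 1x).
    return base * max(1.0, math.sqrt(k * deadline_days / days_taken))

base, k, deadline = 10000, 26.4, 12.0   # illustrative stand-ins only

one_fast = final_points(base, k, deadline, 1.0)                  # one unit/day
twelve_slow = 12 * final_points(base, k, deadline, 11.0) / 11.0  # combined PPD

print(f"{one_fast:,.0f} PPD vs {twelve_slow:,.0f} PPD")
# ~178,000 vs ~58,500: the single fast client wins ~3x even though raw
# throughput (units/day) is nearly identical, because points grow faster
# than linearly with return speed.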
 
Thank you, I have also linked this post to the DAB so others can see this wasn't something "insane".


Careful...the [H]ard facts will come back to gnaw on you like mutant Culicidae... <snip> ...Answer honestly and you'll understand...
 
If there are no further changes to 8101, and 6903/4 are brought in line with it, my 4P PPD drops 33%. Period.

+1

Kasson is very intelligent: he purposely set the PPD of 8101 very low in order to create a reaction of outrage. Then he said they had made a mistake and increased the PPD to satisfy folders.

But the reality is that we are still losing 33% of PPD for approximately the same WU. :rolleyes:
 
+1

Kasson is very intelligent: he purposely set the PPD of 8101 very low in order to create a reaction of outrage. Then he said they had made a mistake and increased the PPD to satisfy folders.

But the reality is that we are still losing 33% of PPD for approximately the same WU. :rolleyes:

I will mention "it is easier to add than take away".
And it is still in beta.
 
Kendrak, many thanks for working with the project on this.

Yeah, it kinda smells like manipulation, doesn't it? (vide: Biffa and Jeanjean)

But, conspiracy theories aside -- interesting bottom-line:

Given the new points economy, there's no points incentive (no pun intended) to run -bigbeta at this time (when compared to regular -bigadv).
 
As much as I still think the current points system is flawed as heck when it comes to bigadv, it's too far in to change it now. If it was going to be fixed, it should have been two years ago; fixing it now is just pointless.

So yes, I understand your argument, Punchy, but at this point fixing it will just cause more bitching and whining and threats of rage-quitting (the last points value change pretty much proved that). But it's been like that since the beginning with SMP. The bonus system opened a door that can never be shut.
 
You can only cry if you got the 8101 with bigadv flags set...
If you ran beta flags and pulled a beta unit, it's your own fault.
Thanks, I figured that 6903 was going away shortly, to be replaced with these new units. I will put -bigbeta back on when they are the preferred units, points- and stability-wise.
 
I see some good points in the points references above from Amaruk - but there is one inconsistency. If 6900/1 are to be phased out, why bother increasing their points? It may be a red herring, but it certainly gives the appearance that points will be increasing. And if 6900/1 were increased to a certain level, why weren't 6903/4 decreased to the same level?

I think Kendrak hit the nail on the head there: "it is easier to add than take away", particularly when it's done explicitly via a points change rather than implicitly by ending a certain project.

Oh, and the answer to Amaruk's question about which is better for the science is "it depends on how many clients of each type there are".
 
It doesn't matter. In the end, it comes down to sustainability. Stanford doesn't make money off of FAH. (Well, there's no knowing for sure; we know EVGA and NVIDIA are heavy pushers, but there is no concrete evidence, and that's not what I'm here to talk about.)

Anyway, FAH isn't a company, it's a project. They have to maintain some level of output to keep the project going. If they aren't getting almost anything done because people are taking 11x longer (going off our example here, just to keep it all in line) to chase more shiny points, then the results are going to take 11x longer. There was going to be a point where Stanford knew the project wouldn't sustain itself on its results anymore; why do you think they came up with the QRB and tight deadlines?

Now they can publish results 11x faster, and since the bonus is exponential, people will compete to return things even faster. Would we have 4P rigs if we were still going to be running 48 single clients? No, no one would have upgraded their hardware without a reason, and if we hadn't, the project never would have progressed.

Is it fair? I think it's as fair as it's going to be. But life isn't fair. Cancer and disease aren't fair. Science has no "fair" law that I know of. They had to adjust and adapt to keep the project alive, and it's still alive. Hate it or love it, folding is sticking around for a while because of the decisions they made to keep the project afloat.
 
The only reason I believe that 6900/1 should NOT have the same points value per day as 6903/4 is that they are not the same class of work unit. 6903/6904 take 2.3 times longer to complete and therefore bring much more risk of losing them. They are also bigger units, requiring more memory to process.

It would be like comparing the capacity of 24 half-ton pickup trucks to the capacity of an 18-wheeler. Sure, the 24 pickups can carry 48,000 lbs combined, the same as the 18-wheeler. However, the pickups are not going to be as efficient in fuel consumption (read: electricity for computer hardware), and what if you need to carry four 12,000-lb pieces of equipment, or one 48,000-lb piece? If the pickups could carry the load at, say, 2.5 MPH and the 18-wheeler can carry the same load at 60 MPH, the pickups would take 24x longer to get it to the destination. The same amount of work gets done either way, but what if the jobsite needed a few of those machines weeks ago and the pickups are still dragging their asses down the interstate, completely overloaded?
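
The analogy in numbers, as a throwaway sketch (the speeds are from the post; the trip distance is made up):

distance_miles = 120.0                  # made-up trip length
truck_hours = distance_miles / 60.0     # 2 hours for the 18-wheeler
pickup_hours = distance_miles / 2.5     # 48 hours for the pickups
print(pickup_hours / truck_hours)       # 24.0 - same payload, 24x the latency
# For FAH, that latency is what holds up the next generation of work units.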
 
Bigger WU -> larger contiguous sections -> less calculation overlap -> more points per section.
 
It doesn't matter. In the end, it comes down to sustainability. <snip>

Now they can publish results 11x faster, and since the bonus is exponential, people will compete to return things even faster. Would we have 4P rigs if we were still going to be running 48 single clients? No, no one would have upgraded their hardware without a reason, and if we hadn't, the project never would have progressed.

Is it fair? I think it's as fair as it's going to be. But life isn't fair. Cancer and disease aren't fair. Science has no "fair" law that I know of. They had to adjust and adapt to keep the project alive, and it's still alive.

+1

Nice summary.

<snip>
It would be like comparing the capacity of 24 half-ton pickup trucks to the capacity of an 18-wheeler... <snip>

Hmm... makes me wanna go buy a bigger truck. :p
 
Kendrak, many thanks for working with the project on this.

Yeah, it kinda smells like manipulation, doesn't it? (vide: Biffa and Jeanjean)

But, conspiracy theories aside -- interesting bottom-line:

Given the new points economy, there's no points incentive (no pun intended) to run -bigbeta at this time (when compared to regular -bigadv).


Now now, people will start calling me an old cynic :p

But I agree, it's a strange turn of events: in the past, running bigbeta gave you the option, albeit a risky one, to run massive work units for equally impressive points gains; now it's the other way around.

Something not often brought up is the inherent risk involved in running these massive WUs. When you can drop 3-4 WUs a day on SMP or GPU units, the risk is low if you have a problem with one. If you are running between 1 and 4 days per unit on a 6903/6904, you have a lot more to lose if there is a problem. So the bonus points are a bit like danger money :) (rough numbers in the sketch below)

Well, that's part of the equation; if you take that into account, along with the added value Stanford attaches to having such large models returned quickly, then I think the current points system is still valid and fair.
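
To make the "danger money" point concrete, a tiny sketch (every number here is invented for illustration; real failure rates are whatever your hardware gives you):

# If a crash costs you the unit in progress, the expected loss per unit
# scales with how long that unit runs. All figures below are made up.
crash_rate_per_hour = 0.001          # hypothetical failure rate
smp_unit_hours, bigadv_unit_hours = 6.0, 72.0
print(smp_unit_hours * crash_rate_per_hour)     # ~0.6% chance of losing an SMP WU
print(bigadv_unit_hours * crash_rate_per_hour)  # ~7.2% chance of losing a bigadv WU
# Twelve times the exposure per unit - hence the risk premium in the points.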
 
Now now, people will start calling me an old cynic :p <snip> ...when you can drop 3-4 WUs a day on SMP or GPU units, the risk is low if you have a problem with one. <snip>

More like 20+ SMP units per day with some of these 48-core machines... It is quite a difference.
 
How many atoms can a client process in one day?

GPU rig: a GTX 470 can process ~4.1 x 1,832 ≈ 7,501 atoms/day (P8032)

CPU rig: a quad Opteron (48 cores) can process ~1.017 x 2,533,797 ≈ 2,576,871 atoms/day (P6903)

CPU rig = 343.5x as many atoms/day as the GPU rig

GPU rig = 15,737 points/day

If points scaled 1:1 with atoms, the CPU rig should get 15,737 x 343.5 = 5,405,659.5 points/day

Under the current points system the CPU rig gets 495,749 points/day

Sounds like the CPU rig is losing out to me. :(

But of course that doesn't account for how valuable the work being done by each client is; after all, it's possible that one atom isn't worth as much scientifically as another atom. I mean, how on earth are we going to tell? ;)
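
The same comparison as a throwaway script (all figures copied from the post above; the per-day unit rates are the ones implied by the stated totals):

# Atoms-per-day comparison using the numbers quoted above.
gpu_atoms_day = 4.094 * 1832        # ~7,501 atoms/day: GTX 470 on P8032
cpu_atoms_day = 1.017 * 2533797     # ~2,576,871 atoms/day: quad Opteron on P6903
ratio = cpu_atoms_day / gpu_atoms_day   # ~343.5x
gpu_ppd = 15737
print(gpu_ppd * ratio)              # ~5.4 million PPD if credit tracked atoms 1:1
print(495749)                       # what the CPU rig actually earns today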
 
Biffa, good calculations, but you're missing the "sponsorship bonus" multiplier in NVidia^W GPUs favour.
 
How many atoms can a client process in one day? <snip> ...Sounds like the CPU rig is losing out to me. :(


This completely ignores the length of the simulation, which varies from a nanosecond in some of the CPU simulations to a microsecond in the GPU simulations.
 
Length of simulation and the way simulations are run are definitely factors in how valuable the data is.

However, all we have to go on is what the project defines as being valuable, and the unit of value is currently the points system.
 
An inescapable reality: soon they will be released from beta and you will not have a choice. :p

A fellow folder (we both fold for OCN) is doing the bigadv-16 units in 2.1 days with his highly overclocked 3930K, which is no surprise; these 6-core Sandy Bridge CPUs are awesome performers.
 
Really, you're going to come here and talk about using the core hack? GTFO with it, we frown on that here.
 
Really, you're going to come here and talk about using the core hack? GTFO with it, we frown on that here.

But... but why? Is he not doubling his performance by OCing his chip?

Just use your math...

16 threads (needed for bigadv-16 WUs) at 2.5 GHz = 8 threads at 5 GHz.

It's not like he is keeping his CPU at stock clocks and faking more threads.
 
We're not all against it, but there is a pretty big chunk that is. Frankly, it folds faster than my 2.0 GHz 16-core 2P build, so I don't see a problem with it.

People were fine using HT on quad cores to fake 8 cores for -bigadv. I don't feel using a core hack is really much different.
 
No, he is overclocking his CPU and adding more threads. Adding more threads is the bloody issue. It is highly frowned upon by the majority of the members here, and not condoned; hell, the last guy who came in bragging about it got flamed out.

But... but why? Is he not doubling his performance by OCing his chip? <snip>
 
We're not all against it, but there is a pretty big chunk that is. Frankly, it folds faster than my 2.0 GHz 16-core 2P build, so I don't see a problem with it.

Well, his TPF times are around 31 minutes.

No, he is overclocking his CPU and adding more threads. Adding more threads is the bloody issue.

Again, he is just compensating for his gain in performance with more threads, that's all.

Hell, the last guy who came in bragging about it got flamed out.

Well, sorry if I upset you. I will not bring this subject up again; I was just mentioning that some desktop CPUs are able to do these huge WUs... :p
 