New BIGADV point system

I agree the deadline should be shortened, but I also feel the point system was getting pretty far out of line. It does not take much to be a bigadv folder these days, and that is not what it was originally designed for. It was originally designed to reward those that took that extra step with their hardware. There are several here on the [H] that would qualify in that category, and they are still making good PPD. And no, I do not think those folding on GPUs should be getting a QRB. Yes, they return WUs quickly, but the WUs are relatively small and very limited in the science they can do; as was pointed out earlier in this thread, GPUs are already over-credited.

As I said above, VJ said the points were right for the science on the bigadv units, but when you start mixing human emotion and competitiveness (the H factor) into anything, you will start having problems, because everybody wants to be at the top of the pile and nobody wants to be on the bottom. Things are changing with the new openness at F@H / Stanford: when you start letting people inject their thoughts and emotions into the system, things are going to change. Prior to the change it was strictly dictated by the science, and who knows if this will be good or bad for the science. ;)
 
I'm under the same assumption. I'm going to finish the bigadv WU I'm on now, switch to regular SMP, and see what the stats are like.

After running a bigadv and a regular SMP WU in Linux, here are my numbers after the adjustment:

WU     TPF    Credit   PPD     k-factor
2686   23:02  71291    44500   26.4
6062   2:08   4436     29900   2.1
6958   2:15   5097     32600   3.33

It seems I'm fortunate I switched to Linux when I did, as the increase in performance made up for the loss in credit. Also, bigadv is still better than regular SMP, though not by much.
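For what it's worth, those numbers line up with the published F@H quick-return-bonus formula, credit = base * max(1, sqrt(k * deadline / days_taken)). A quick sanity check in Python (the 7164 base points and 6-day final deadline for project 2686 are taken from the points table quoted later in this thread; this is a consistency check, not PG's actual code):

```python
from math import sqrt

# Consistency check of the project 2686 numbers above. Base points (7164)
# and the 6-day final deadline come from the points table posted later in
# this thread; the formula is the published QRB rule.
def wu_credit(base, k, deadline_days, days_taken):
    """credit = base * max(1, sqrt(k * deadline / days_taken))"""
    return base * max(1.0, sqrt(k * deadline_days / days_taken))

tpf_seconds = 23 * 60 + 2               # TPF of 23:02
days_taken = 100 * tpf_seconds / 86400  # 100 frames per WU -> ~1.6 days

credit = wu_credit(base=7164, k=26.4, deadline_days=6, days_taken=days_taken)
ppd = credit / days_taken

print(round(credit))  # 71291, matching the credit reported above
print(round(ppd))     # 44570, vs the ~44500 PPD reported above
```

So the posted 71291 credit is exactly what the sqrt-shaped bonus predicts for a 7164-point base returned in about 1.6 days.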
 
I think the key point from kasson's post is this:
We do important science with all classes of work units, however, and we want the points system to reflect that.
The points system is supposed to encourage people to run the F@H clients. However, with the recent points structure, nobody runs classic uniproc for the points, folding on PS3 is not really worth the electricity, and people avoid running GPU clients on bigadv systems. That doesn't include people who have given up on F@H because their hardware can't compete with bigadv. When the points system discourages people from running certain clients, then it's broken.

I agree that PG needs to be much better about communication. For example, if they would say "we have a ton of classic WUs that are not being done because people are switching to SMP" and then work with the DAB and beta team on point adjustments, things would go much more smoothly and cause less angst. I don't like how they adjusted the points of non-beta WUs while they were being folded. How hard would it be to stop issuing 26xx and 69xx WUs and release the same work with different numbers and points? Then it would just be a batch of new low-scoring WUs (like 2684) instead of an explicit reduction in points.
 
I don't like how they adjusted the points of non-beta WUs while they were being folded. How hard would it be to stop issuing 26xx and 69xx WUs and release the same work with different numbers and points? Then it would just be a batch of new low-scoring WUs (like 2684) instead of an explicit reduction in points.

I don't think that would have been any better. I think people would have figured out x number of projects disappeared, the same number of new projects appeared, and they all have approximately the same % PPD decrease. Then they would probably be even more frustrated because they would feel like PG was trying to disguise the change.
 
I think the key point from kasson's post is this:

The points system is supposed to encourage people to run the F@H clients. However, with the recent points structure, nobody runs classic uniproc for the points, folding on PS3 is not really worth the electricity, and people avoid running GPU clients on bigadv systems. That doesn't include people who have given up on F@H because their hardware can't compete with bigadv. When the points system discourages people from running certain clients, then it's broken.

I agree that PG needs to be much better about communication. For example, if they would say "we have a ton of classic WUs that are not being done because people are switching to SMP" and then work with the DAB and beta team on point adjustments, things would go much more smoothly and cause less angst. I don't like how they adjusted the points of non-beta WUs while they were being folded. How hard would it be to stop issuing 26xx and 69xx WUs and release the same work with different numbers and points? Then it would just be a batch of new low-scoring WUs (like 2684) instead of an explicit reduction in points.
If they're losing people working on units they deem important because those people aren't getting enough points, then maybe they should adjust the points dynamically. They could easily shape people's folding behavior by emphasizing the work that they want to get done in a particular time frame. Say they're trying to finish a paper on the data being crunched on GPUs; then they should issue an "important work bonus" or something. People would start seeing a big boost and switch back to GPUs from bigadv (at least from lower-end bigadv rigs). The new bigbeta units are still big enough for the SR-2 and G34 rigs to be given a nice healthy premium, but it would be something to maybe make all those people running bigadv on i7s reconsider.

Granted, this would be a nuisance for people running big farms of dedicated rigs, aka the kind of people who make hardware buying decisions based on future points. However, for someone who is folding on something like an i7 with a decent gaming GPU, this would be a way to tailor which clients that person chooses to run. Plus, it might put a little excitement back into things. Just look at how much work and thought goes into what tear, sfield, patriot, musky, MIBW etc. are doing on bigadv. bigadv is the only thing getting major hype right now, and maybe if SMP/GPU/uniproc were less predictable, it would make the people not doing bigadv more interested and engaged.
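A toy sketch of that "important work bonus" idea, just to make the mechanism concrete. Everything here (the function name, the backlog numbers, the shape of the multiplier) is hypothetical, not anything PG has actually proposed:

```python
# Hypothetical "important work bonus": scale a client class's points by how
# badly the outstanding backlog would miss a target date. All names and
# numbers are made up for illustration.
def priority_multiplier(backlog_wus, returns_per_day, target_days):
    """Boost points when the backlog can't clear by the target date."""
    days_to_clear = backlog_wus / returns_per_day
    return max(1.0, days_to_clear / target_days)

# Say 90k GPU WUs are outstanding, 15k/day are coming back, and the data
# is wanted for a paper in 3 days: GPU WUs would temporarily pay double.
print(priority_multiplier(90_000, 15_000, 3.0))  # 2.0
```

Once the backlog drains, the multiplier falls back to 1.0 on its own, which is the self-correcting behavior the post is after.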
 
Well the solution to dynamic values is to diversify the farm to include some kind of balance between most of the client/hw types.
 
Well the solution to dynamic values is to diversify the farm to include some kind of balance between most of the client/hw types.

I run uni clients on lower-end hardware... worthless points-wise to me, but I do it anyway... it does help with WU count though... I know quite a few big guys who do this... Stanford needs to figure out how to repackage WUs into other client types...

GPUs won't get normalized and are severely overvalued for the work they do...
 
GPUs can theoretically do a lot of work, but it doesn't seem like they're getting much. In any case VP said they're looking at extending QRB to basically all WUs (other than... PS3?)
 
GPUs can theoretically do a lot of work, but it doesn't seem like they're getting much. In any case VP said they're looking at extending QRB to basically all WUs (other than... PS3?)

NVIDIA GPUs fail at large WUs...
 
NVIDIA GPUs fail at large WUs...
meh. Either way I'd say that the change to bigadv was needed, but it was needed a while ago. I personally would have preferred a simultaneous across the board normalization or w/e (not that it would really affect me that much). I'm running uni clients only right now because my lappy doesn't like running SMP or GPU, but unis work just fine (not too much heat or resource hogging)
 
meh. Either way I'd say that the change to bigadv was needed, but it was needed a while ago. I personally would have preferred a simultaneous across the board normalization or w/e (not that it would really affect me that much). I'm running uni clients only right now because my lappy doesn't like running SMP or GPU, but unis work just fine (not too much heat or resource hogging)

Change was needed... this was not it...

The gap between SMP and bigadv was a problem... however, GPU is already overvalued and this increased that... instead, uni and SMP should be bumped up...

Base points should reflect the computational requirements of the WU, also taking into account the overlap due to WU size...

QRB should be linear and applied to all WUs... though perhaps scaling the value based on WU size (upload requirements)...
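To make the sqrt-vs-linear distinction concrete, here is a toy comparison. The numbers are illustrative only; `qrb_sqrt` is the shape of the current published bonus, and `qrb_linear` is one possible reading of the linear proposal above:

```python
from math import sqrt

# Current QRB: bonus grows with the square root of return speed.
def qrb_sqrt(base, k, deadline_days, days_taken):
    return base * max(1.0, sqrt(k * deadline_days / days_taken))

# One possible "linear QRB": bonus grows in direct proportion to speed.
def qrb_linear(base, k, deadline_days, days_taken):
    return base * max(1.0, k * deadline_days / days_taken)

base, k, deadline = 1000, 2.0, 6.0
for days in (3.0, 1.5, 0.75):  # each step returns the WU twice as fast
    print(days, qrb_sqrt(base, k, deadline, days), qrb_linear(base, k, deadline, days))
# 3.0 days  -> 2000 (sqrt)  vs  4000  (linear)
# 1.5 days  -> ~2828        vs  8000
# 0.75 days -> 4000         vs  16000
```

Under the sqrt rule, halving the return time multiplies the bonus by about 1.41x; under a linear rule it doubles, so a linear QRB would reward fast hardware far more aggressively than today's curve.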
 
I think it needs to be looked at also from the perspective of monetary outlay. The cost of putting together a system that returns the BigAdv and BigBeta units in a timely manner (and worthy PPD) is no small matter. Building something with 12+ real cores can get expensive quickly. The points values assigned to the units are an incentive to get people to bring the hardware online to process these WUs.

I'm curious if they even realize (or care, for that matter) that people spend a lot of their hard-earned money building machines to meet these requirements. When you dangle the PPD carrot to influence people and then snatch it away, of course you're gonna piss a lot of people off. I'd be interested to see the ratio of true servers running these WUs vs. dedicated folders' personal systems (bigadv).

Eh, I'm rambling....
 
Not sure if this goes with this topic, but I thought it might be because of the change to the points system.

My Q9300 just picked up a 7200 WU. I've processed these before and they've always been labeled as SMP. For some reason HFM is calling this one a "standard".

 
NVIDIA GPUs fail at large WUs...
You're equating atom count with scientific value, and that's not really the right way to look at it. Proteins are often constructed of several different "pieces" that take on a specific conformation before folding into a single coherent, functional protein. Some proteins are dimers, trimers, and hexamers made up of 2, 3, and 6 repeating identical units, respectively. They also have primary, secondary, tertiary, and quaternary structures. I don't know how these simulations work, but it's quite possible that GPU WUs identify the secondary structure of only a single part of the protein, which may be only a few kDa (aka only a few thousand atoms). This may be much more important than its atom count suggests, since they may then be able to plug that data into a larger model encompassing the protein as a whole, which could be several hundred to several thousand kDa.

So the bottom line is that, depending on what they're actually looking at, you really can't say atom count equals science done, especially since a single misfold in a specific part of a specific protein could easily be the cause of certain diseases, and they may work out a mechanism or a model system using only a few thousand atoms. The other thing to think about is that a number of PG's published papers have been more on the programming side of things, i.e. "we wrote a model to use GPUs to run some well-known equations," and those WUs might not really be doing much other than figuring out how known quantities get computed in their model. That's still scientific progress, but it's not the sort of "disease curing" that laypeople associate with the folding project. In the end, I'd bet that the impact of F@H will be more from providing modeling systems to the broader community, whose job it is to actually do serious disease research, than from PG itself.
 
You're equating atom count with scientific value, and that's not really the right way to look at it. Proteins are often constructed of several different "pieces" that take on a specific conformation before folding into a single coherent, functional protein. Some proteins are dimers, trimers, and hexamers made up of 2, 3, and 6 repeating identical units, respectively. They also have primary, secondary, tertiary, and quaternary structures. I don't know how these simulations work, but it's quite possible that GPU WUs identify the secondary structure of only a single part of the protein, which may be only a few kDa (aka only a few thousand atoms). This may be much more important than its atom count suggests, since they may then be able to plug that data into a larger model encompassing the protein as a whole, which could be several hundred to several thousand kDa.

So the bottom line is that, depending on what they're actually looking at, you really can't say atom count equals science done, especially since a single misfold in a specific part of a specific protein could easily be the cause of certain diseases, and they may work out a mechanism or a model system using only a few thousand atoms. The other thing to think about is that a number of PG's published papers have been more on the programming side of things, i.e. "we wrote a model to use GPUs to run some well-known equations," and those WUs might not really be doing much other than figuring out how known quantities get computed in their model. That's still scientific progress, but it's not the sort of "disease curing" that laypeople associate with the folding project. In the end, I'd bet that the impact of F@H will be more from providing modeling systems to the broader community, whose job it is to actually do serious disease research, than from PG itself.

It is possible... however highly unlikely... as Stanford has often said that the GPU units are extremely simple units.

I do not doubt that they are important... as a single misfold would be disastrous... but up the atom count and you up the possibility of that happening...

And keep in mind... it's more like 4100 of said units in a bigadv WU... even just putting all of them together is more work than the GPU does, even if the GPU did none of the other work...
 
The analogy I have been given a couple times is that GPU folding is like a chainsaw, while CPU folding is like a scalpel. The GPU can simulate a lot really fast, but it cannot handle the complexity of some units like the CPU can. Both are essential parts of the project.
 
It is possible... however highly unlikely... as Stanford has often said that the GPU units are extremely simple units.

I do not doubt that they are important... as a single misfold would be disastrous... but up the atom count and you up the possibility of that happening...

And keep in mind... it's more like 4100 of said units in a bigadv WU... even just putting all of them together is more work than the GPU does, even if the GPU did none of the other work...
Right, but the point is that even if atom counts are extremely high, it doesn't mean that the atoms are being manipulated individually. When calculating the conformation of a short peptide chain, there are certain conformations that are lower in energy depending on the two adjacent residues. Going down the chain and examining every permutation to find the lowest-energy conformation is "simple," but probably the kind of thing that's perfect for a GPU, since you can parallelize it easily. You can then take a small peptide segment and plug it into a larger protein model in the form of, say, an alpha helix or pleated sheet; now the protein has a huge number of atoms, but you're only looking at the interaction of the sheets and helices. It's not as simple as saying there are 4100x the atoms in a bigadv unit, so a bigadv unit is 4100x more "work". It's highly dependent on what's being manipulated. I don't know what PG is putting in these units, so this is speculative, but I hope you see my point about not generalizing based on atom count.
 
Right, but the point is that even if atom counts are extremely high, it doesn't mean that the atoms are being manipulated individually. When calculating the conformation of a short peptide chain, there are certain conformations that are lower in energy depending on the two adjacent residues. Going down the chain and examining every permutation to find the lowest-energy conformation is "simple," but probably the kind of thing that's perfect for a GPU, since you can parallelize it easily. You can then take a small peptide segment and plug it into a larger protein model in the form of, say, an alpha helix or pleated sheet; now the protein has a huge number of atoms, but you're only looking at the interaction of the sheets and helices. It's not as simple as saying there are 4100x the atoms in a bigadv unit, so a bigadv unit is 4100x more "work". It's highly dependent on what's being manipulated. I don't know what PG is putting in these units, so this is speculative, but I hope you see my point about not generalizing based on atom count.

Right... so for 4100x more atoms they get 30x the PPD. Is that really so crazy?
And if it's only 500x more work... it requires more complex systems that cost more $$ and require more bandwidth... and in the end... it's undervalued... meh
 
Right... so for 4100x more atoms they get 30x the PPD. Is that really so crazy?
And if it's only 500x more work... it requires more complex systems that cost more $$ and require more bandwidth... and in the end... it's undervalued... meh
me said:
its not as simple as saying there are 4100x atoms in a bigadv unit, so a bigadv unit is 4100x more "work".

I'm assuming you're misreading this sentence. It's NOT necessarily that 4100x the atoms = 4100x the work. What I'm saying is that a protein might have 100 "arms". Say each GPU unit works on an arm that's 100 atoms, and a bigadv unit looks at how the arms fit together. The GPU unit and the bigadv unit are both looking at the interaction of 100 things, but the bigadv unit has 10,000 atoms and the GPU unit only has 100.
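The "arms" argument above can be sketched in a few lines. This is purely illustrative (real MD work depends on far more than pairwise counts), but it shows how atom count and simulated interactions can come apart:

```python
from math import comb

# Illustrative only: count naive pairwise interactions among the pieces
# actually being moved, one crude proxy for simulation work.
def pairwise_interactions(n_pieces):
    return comb(n_pieces, 2)

# A GPU WU simulating one 100-atom arm, atom by atom:
gpu_atoms = 100
gpu_work = pairwise_interactions(gpu_atoms)   # 4950 pairs

# A bigadv-style WU asking how 100 such arms pack together, treating
# each arm as a single rigid piece:
arms = 100
bigadv_atoms = arms * gpu_atoms               # 10,000 atoms on paper
bigadv_work = pairwise_interactions(arms)     # also 4950 pairs

print(bigadv_atoms // gpu_atoms)   # 100x the atoms...
print(bigadv_work // gpu_work)     # ...but 1x the naive pairwise work
```

Of course, if every atom in the big unit really were free to move, pairwise work would scale roughly with the square of the atom count; the point is just that the answer depends entirely on what is actually being simulated.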
 
The analogy I have been given a couple times is that GPU folding is like a chainsaw, while CPU folding is like a scalpel. The GPU can simulate a lot really fast, but it cannot handle the complexity of some units like the CPU can. Both are essential parts of the project.

If GPU folding is like a chainsaw, maybe I'll buy a GTX 580 and have a badass chainsaw. :rolleyes:

 
I'm assuming you're misreading this sentence. It's NOT necessarily that 4100x the atoms = 4100x the work. What I'm saying is that a protein might have 100 "arms". Say each GPU unit works on an arm that's 100 atoms, and a bigadv unit looks at how the arms fit together. The GPU unit and the bigadv unit are both looking at the interaction of 100 things, but the bigadv unit has 10,000 atoms and the GPU unit only has 100.

No, I read it... but by that argument you're saying it's not even 30x more work... 4100x more atoms but not even 30x the work... and that is basically what Stanford said by reducing the points...
 
No, I read it... but by that argument you're saying it's not even 30x more work... 4100x more atoms but not even 30x the work... and that is basically what Stanford said by reducing the points...
It might be 4100x the atoms but only 30x the "complexity". If certain pieces are "fixed" in their conformation, it may be possible to ignore them, reducing overall complexity but leaving the atom count high. Again, this is pure speculation. I guess what I'm saying is, unless we actually know what's going on under the hood, your assertion that GPUs are already severely overvalued is based on an assumption that is probably false. Going on the number of atoms alone completely ignores the projects they're working on, and how urgent Stanford deems them to be with regard to their next publication... and frankly, how close they are to their next publication is probably how they decide what's important. It was in my research group.

For the record, I don't think PG handled this well, and as a relatively small-time (compared to you guys!) bigadv folder with no GPU folding, this hits me pretty hard, so don't think I'm really defending PG's move. I think the inflation was getting a bit out of hand, if only because my SR-2 went from 150k PPD to 250k overnight on the new units. That's kind of ridiculous and unfair to those grinding away. They should have upped the new units no more than 20% over the 6900s from the start. If they hadn't blasted points up 66% overnight, my guess is that this never would have come to a head like it has. I just had a bit of an issue with your logical process is all :)
 
"Standard points" is what the project would receive for base points if standard SMP. "Old bigadv" is the old bigadv base points (50% bonus). "New bigadv" is the new bigadv base points (20% bonus).

Project  Standard points  Old bigadv  New bigadv  Preferred (d)  Final (d)  k-factor
2684     8529             12790       10235       4              6          26.4
2685     5970             8955        7164        4              6          26.4
2686     5970             8955        7164        4              6          26.4
2689     5970             8955        7164        4              6          26.4
2692     5970             8955        7164        4              6          26.4
6900     5970             8955        7164        4              6          26.4
6901     5970             8955        7164        4              6          26.4
6903     18923            28385       22708       7.2            12         38.05
6904     26284            39426       31541       10.2           17         37.31

I marked in RED which one is pissing me off. They just upgraded these units to a sane point total only a few short weeks ago, and now they nerf them again?

Seriously WTF?
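Incidentally, the table quoted above is internally consistent with the stated bonuses: every "New bigadv" value is the standard points plus 20%, rounded to the nearest point (the rounding convention is my guess; the "Old bigadv" column is likewise about standard plus 50%, though 2684's old value is a few points off that):

```python
# Check the "New bigadv" column from the table above against the stated
# 20% bonus over standard SMP points. Round-to-nearest is an assumption
# on my part; PG hasn't documented the exact rounding.
rows = {  # project: (standard points, new bigadv points)
    2684: (8529, 10235),
    2686: (5970, 7164),
    6903: (18923, 22708),
    6904: (26284, 31541),
}
for project, (standard, new_bigadv) in rows.items():
    assert round(standard * 1.20) == new_bigadv, project

print("new bigadv = standard + 20% for every row")
```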
 
I marked in RED which one is pissing me off. They just upgraded these units to a sane point total only a few short weeks ago, and now they nerf them again?

Seriously WTF?

ikr
I thought logic courses were mandatory these days... I guess not...
 
Here's something else to ponder: my Asus dually now earns ~61-72k PPD on a regular bigadv. It can't run 6903/4 due to the shitty upload speed on my net connection. It earns a steady 55k on regular SMP and puts a lower strain on my connection, so I may end up switching this machine over to regular SMP, and its slightly slower twin (online by the end of next week) may never see a -bigadv.

Now, both machines are twin hex-cores and by Stanford's suggested requirements are more than capable of running all projects. Kinda screwed up that they can earn nearly the same doing the easier WUs, isn't it?
 
Here's something else to ponder: my Asus dually now earns ~61-72k PPD on a regular bigadv. It can't run 6903/4 due to the shitty upload speed on my net connection. It earns a steady 55k on regular SMP and puts a lower strain on my connection, so I may end up switching this machine over to regular SMP, and its slightly slower twin (online by the end of next week) may never see a -bigadv.

Now, both machines are twin hex-cores and by Stanford's suggested requirements are more than capable of running all projects. Kinda screwed up that they can earn nearly the same doing the easier WUs, isn't it?

Isn't that low for a dual hex system? I can't afford such things, but I was under the impression that dual hex Xeon systems are typically 100k+ PPD. EDIT: Nvm, those are probably pre-point-reduction numbers.

But according to Stanford they are more than capable of running all work units -- and they are. ~10k PPD is still a rather large difference IMO (300k a month - more than I make total!), although yes, I would expect a dual hex system to have a large difference between smp and bigadv.
 
Isn't that low for a dual hex system? I can't afford such things, but I was under the impression that dual hex Xeon systems are typically 100k+ PPD. EDIT: Nvm, those are probably pre-point-reduction numbers.

But according to Stanford they are more than capable of running all work units -- and they are. ~10k PPD is still a rather large difference IMO (300k a month - more than I make total!), although yes, I would expect a dual hex system to have a large difference between smp and bigadv.

Before the points reduction it earned 75-92k PPD. Also bear in mind this is a stock server board, no overclocking allowed, so it's pretty much at the max of what it can do (the fastest chip it will support is an X5675).

Both were purchased exclusively for running -bigadv and were specced well above Stanford's suggested minimum of 8 real cores @ 2.4GHz. I'm still undecided about rig 2; I may have a play with Linux and see what that will bring to the party, and if not, then it will be standard SMP and possibly a GPU.
 
Before the points reduction it earned 75-92k PPD. Also bear in mind this is a stock server board, no overclocking allowed, so it's pretty much at the max of what it can do (the fastest chip it will support is an X5675).

Both were purchased exclusively for running -bigadv and were specced well above Stanford's suggested minimum of 8 real cores @ 2.4GHz. I'm still undecided about rig 2; I may have a play with Linux and see what that will bring to the party, and if not, then it will be standard SMP and possibly a GPU.

Linux makes them into whole new machines; you will score more on big-bigadv than you did on regular bigadv before the point drop.
 
Linux makes them into whole new machines; you will score more on big-bigadv than you did on regular bigadv before the point drop.

Yeah, I think I will give it a try on rig 2 when it is up and running. Rig 1 will stay on Windows for now, as I like to have a backup machine ready in case my main rig goes down. Not only that, but Win7 licenses are not cheap, even OEM ones. Wish Stanford would release a Windows 64-bit client/core.
 
 
Have you seen our chart? The damn thing looks like it fell off a cliff :mad:

That makes sense though. It took seven days for our EOC 24 hour average to reflect the new points system. It should level off now rather than continue to drop.
 
That makes sense though. It took seven days for our EOC 24 hour average to reflect the new points system. It should level off now rather than continue to drop.

Hope so; we were making good gains on EVGA, and that's not happening anymore :(
 
A few redcoats in our midst?

Suspicious, since it happened right around the time that the EVGA Bucks kicked in...

If I keep up with these conspiracy theories, I'll be shopping for a tinfoil hat, instead of a 2600K, soon.
 
On second thought, that chart displays actual daily production, not a daily average.
Our point production yesterday was half of what it was on 6/30. :(
 
A few redcoats in our midst?

Suspicious, since it happened right around the time that the EVGA Bucks kicked in...

I'm sure there are some ;)

However, I do not believe that is a large part of the drop off. The number one reason is obviously the -bigadv points reduction.
 
Dropper and Patriot together lost about 1 million PPD due to the nerf.

I'm sure it's costing our team 2-3 million PPD.
 
PG gave it to us from behind. Expect them to up the GPU points next. I wonder who that will help?
 
And 9/11 was fake. :rolleyes:

I think the conspiracy theory is ridiculous. Unfortunate and coincidental timing, yes. A conspiracy? No. A thread made by a -bigadv folder on the foldingforum brought the issue to PG's attention.

 
Just curious, but did the [H] have any representation at the DAB meeting where the points drop was discussed? I know there was very little involvement from [H] bigadv folders in the thread over at the FF. Stanford could only go on the information they received from the folding public. I was involved in the thread, but I really do not care about points, so I am not a very good rep for bigadv folders. To me, points just add a little fun and let me compare what I am doing to others. So no matter what the point value is, it does not matter, because it is all still relative; we are all receiving the same values, just different numbers.

Anyway, there is no conspiracy. There were just a few very vocal folders who thought the point system was unfair and biased towards bigadv folders. There was really no great amount of opposition from the bigadv folders, so if I were Stanford, I probably would have come to the same conclusion. ;)
 