So how long.....

Kendrak

[H]ard|DCer of the Year 2009
Joined
Aug 29, 2001
Messages
21,142
How much longer before Stanford does a points adjustment?

Some people are starting to get into the range of a billion total points.

When are they going to move the decimal point over a few places for everything?

I've been thinking about this off and on for the last few days.

Like for me....

I have 25 million points. Stanford does an adjustment and now I have 25k points and am making 70 ppd instead of 70k.
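The adjustment described here is just a uniform decimal shift. A minimal sketch, using the hypothetical numbers from the post (nothing Stanford has announced):

```python
# Hypothetical points rescaling: a decimal shift that divides every
# total and rate by the same factor (1000 here, matching the example).
SCALE = 1000

def rescale(points, scale=SCALE):
    """25,000,000 total points becomes 25,000; 70k ppd becomes 70."""
    return points / scale

print(rescale(25_000_000))  # 25000.0 total points after adjustment
print(rescale(70_000))      # 70.0 ppd instead of 70k
```

Because every total and every rate is divided by the same factor, all rankings and ratios between folders stay exactly the same.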

 

lassiterb

[H]ard|DCer of the Month - June 2009
Joined
Oct 10, 2005
Messages
811
That would put the classic uniprocessor client at around 0.3 ppd? :D
 

sirmonkey1985

[H]ard|DCer of the Month - July 2010
Joined
Sep 13, 2008
Messages
22,230
Doubt it will ever happen.. I think heads would roll and people would quit left and right if that happened..
 

Jathanis

[H]ard|DCer of the Month - Feb. 2013
Joined
Apr 22, 2008
Messages
985
Well, it isn't that bad compared to the points ATI cards put up on the Milkyway & Collatz BOINC projects. An ATI 4850 can do an easy 40K ppd - I did 10M BOINC points in Milkyway in just about 6 months. With one GPU. If they start messing around with more stuff, they'll just alienate more people, IMO.
 

amdgamer

Supreme [H]ardness
Joined
Oct 27, 2004
Messages
4,880
For some reason, people like big numbers. I would in fact be in favor of a points adjustment because I like my numbers to be manageable, but that won't happen for the reasons you guys stated.
 

Mr. Pedantic

[H]ard|Gawd
Joined
Sep 19, 2009
Messages
1,707
People like big numbers because it gives them a sense of progress - they feel like they're getting somewhere. I do just over 10k ppd right now; it sure sounds a heck of a lot better than 100ppd. I also suspect it's because people like saying "million", "billion" and the like. It's not like they have enough money to talk in millions and billions, so maybe it's a way of compensating...
 

capreppy

[H]ard|DCer of the Month - April 2009
Joined
Jul 4, 2007
Messages
3,410
Given the number of peeps who have recently joined due to recruiting efforts (not just our team, but many others), it is unlikely they would do a massive restructuring. People get used to a certain number of points and a change like that would eliminate the incentive for a lot of people.

People in general are points whores. I am one of them. I currently have hundreds of thousands of airline miles and hotel points. At one point in time, I actually had a million of each (not quite a Ryan Bingham). I like my points :D

 

C7J0yc3

[H]ard|Gawd
Joined
Dec 27, 2009
Messages
1,353
What they should do is a prestige system like CoD. Once you hit, say, 10 million points, you reset to 0, but you get a star next to your name, or your WUs earn 2x the points, or something. When you hit 10 million again you get a new color star and your WUs are worth 2.5x, and so on and so forth. This way you are able to race to a goal, and once you hit it, instead of just saying "Oh, isn't that nice," you get rewarded, and people's overall points stay within 8 digits.
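The prestige mechanic described above can be sketched in a few lines. The cap, the reset-to-zero behavior, and the 2x / 2.5x multiplier ladder are all the poster's examples, not anything Stanford implements:

```python
# Sketch of the CoD-style prestige idea: reset at a cap, earn a star,
# and boost the value of future WUs. All numbers are illustrative.
PRESTIGE_CAP = 10_000_000

def multiplier(stars):
    """1x with no stars, 2x after the first prestige, +0.5x per star after."""
    return 1.0 if stars == 0 else 2.0 + 0.5 * (stars - 1)

def award(points, wu_base, stars):
    """Credit a completed WU; on reaching the cap, reset to 0 and add a star."""
    points += int(wu_base * multiplier(stars))
    if points >= PRESTIGE_CAP:
        points, stars = 0, stars + 1
    return points, stars

# A folder about to cross the cap resets and gains a star...
print(award(9_999_000, 2_000, 0))  # (0, 1)
# ...and the same WU now pays out double.
print(award(0, 1_000, 1))          # (2000, 1)
```

The design keeps everyone's visible total under the cap while the star count carries the long-term bragging rights.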
 

APOLLO

[H]ard|DCer of the Month - March 2009
Joined
Sep 17, 2000
Messages
9,089
Unless it somehow interferes with Stanford's processing of the stats, I don't see why they'd alter the system in the foreseeable future. Each order of magnitude takes a lot more time to achieve. All Stanford has to do is ease up on their granting of high credit to new WUs, and that would work for many years to come. It is a far bigger leap to make from 1 billion to 1 trillion than it is from 1 million to 1 billion. So too will the jump from 1 trillion to 1 quadrillion be, assuming the credit system stays the way it is.

Of course there is also the option of another reboot of the project...that would definitely take care of it.
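The orders-of-magnitude point is plain arithmetic: at a constant output, each jump takes roughly 1000x as long as the previous one. A quick illustration with a hypothetical steady production rate:

```python
# At a fixed output, time to the next milestone scales with the points gap,
# so each order-of-magnitude jump takes ~1000x longer than the last.
def days_to(current, target, ppd):
    return (target - current) / ppd

PPD = 100_000  # hypothetical steady production

print(days_to(1_000_000, 1_000_000_000, PPD))          # 9990.0 days
print(days_to(1_000_000_000, 1_000_000_000_000, PPD))  # 9990000.0 days
```

Which is why, as long as per-WU credit doesn't keep inflating, the big milestones police themselves.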
 

Mr. Pedantic

[H]ard|Gawd
Joined
Sep 19, 2009
Messages
1,707
Unless it somehow interferes with Stanford's processing of the stats, I don't see why they'd alter the system in the foreseeable future. Each order of magnitude takes a lot more time to achieve. All Stanford has to do is ease up on their granting of high credit to new WUs, and that would work for many years to come. It is a far bigger leap to make from 1 billion to 1 trillion than it is from 1 million to 1 billion. So too will the jump from 1 trillion to 1 quadrillion be, assuming the credit system stays the way it is.

Of course there is also the option of another reboot of the project...that would definitely take care of it.
That wouldn't work, because you still have the uniprocessor client, and everything is relative to that because that defines the 'baseline'. When we get to a future where everything is multithreaded and we can discontinue the uniprocessor client (except maybe for MIDs...?) then SMP can become the new baseline and we can all do a frame shift downwards.

As for increases in orders of magnitude, that is partly true. However, since Stanford is continually giving out more and more points for what is increasingly consumer- rather than server-level hardware, it makes sense that the rate of points accumulation will increase. Couple that with the increase in members in the project and you have a huge increase in the rate of points accumulation over time. Granted, it won't be base-10 exponential, so it won't mean that going from a billion to a trillion points takes the same time as going from a trillion to a quadrillion, but it will mitigate the time somewhat.
 

APOLLO

[H]ard|DCer of the Month - March 2009
Joined
Sep 17, 2000
Messages
9,089
That wouldn't work, because you still have the uniprocessor client, and everything is relative to that because that defines the 'baseline'. When we get to a future where everything is multithreaded and we can discontinue the uniprocessor client (except maybe for MIDs...?) then SMP can become the new baseline and we can all do a frame shift downwards.
I'm not sure I'm following you in the first part of your post. What exactly wouldn't work? What do you mean by 'frame shift downwards?'

As for increases in orders of magnitude, that is partly true. However, since Stanford is continually giving out more and more points for what is increasingly consumer- rather than server-level hardware, it makes sense that the rate of points accumulation will increase. Couple that with the increase in members in the project and you have a huge increase in the rate of points accumulation over time. Granted, it won't be base-10 exponential, so it won't mean that going from a billion to a trillion points takes the same time as going from a trillion to a quadrillion, but it will mitigate the time somewhat.
Yes, assuming the increases Stanford has been granting to new projects (ex. nVidia) and new WUs (ex. -bigadv) continue in the future, the jump in orders of magnitude will not prove nearly as formidable a goal/milestone to achieve. The steady increase in participation and the smaller advancements in technology also diminish the difficulty of reaching higher orders of magnitude. In my post, I had assumed a leveling off in Stanford's recent points increases, with roughly similar contributions or slight increases from the total number of people involved for each team.

All in all, however, I do not see this as a major issue in the foreseeable time frame, but admittedly, I have considered it to be one up until recently. I just don't think Stanford will change anything that will result in a large scale modification of the points system, even though they have in the past.
 

Parja

[H]F Junkie
Joined
Oct 4, 2002
Messages
12,619
How much longer before Stanford does a points adjustment?

Some people are starting to get into the range of a billion total points.

When are they going to move the decimal point over a few places for everything?

I've been thinking about this off and on for the last few days.

Like for me....

I have 25 million points. Stanford does an adjustment and now I have 25k points and am making 70 ppd instead of 70k.


What would be the point of doing that? It's not like we're going to run out of numbers.
 

SmokeRngs

[H]ard|DCer of the Month - April 2008
Joined
Aug 9, 2001
Messages
17,470
There will not be a dramatic change in the point system any time soon if ever. As someone stated already, the only way you would see this is if the project was "finished" like the F@H1 project was. For those who don't know, we're actually on F@H2 right now. I was still running another project when F@H1 ended and F@H2 started but the points level was a hell of a lot lower at the beginning. Even starting out a bit after F@H2 was already going, I remember hitting 300PPD regularly and that was a pretty big accomplishment as it was well above the average for the team. I also remember when 100k points would easily guarantee you a spot in the top 100 in the team.

I'm actually surprised Stanford has kept F@H2 going this long. I expected it to have ended long before now and started a F@H3 project.

I know I'm dating myself a bit here, but I still miss the "dumps" in Genome@Home.

 

Mr. Pedantic

[H]ard|Gawd
Joined
Sep 19, 2009
Messages
1,707
I'm not sure I'm following you in the first part of your post. What exactly wouldn't work? What do you mean by 'frame shift downwards?'
The original concept of points advantages for running SMP and GPU was that the points were a reward for being able to run better hardware with greater setup and maintenance costs. This points bonus is relative to the uniprocessor client. Therefore, to keep the whole thing fair, i.e. to keep the same ratio between "normal" and "high performance" clients, either everything needs to be compressed (like dividing all points production by 10 or 100, as some have suggested) or something like the normal SMP client needs to become the baseline. So the SMP client would be about 150 ppd depending on CPU, about the same for GPUs (NVidia ones at least), 300+ for bigadv, etc. That's what I mean.
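A "frame shift" of this kind amounts to rescaling every client so the chosen baseline lands at a target ppd while the ratios between clients are preserved. A sketch with made-up ppd figures (not real F@H values):

```python
# Rebaseline sketch: scale every client's ppd so the SMP client lands
# at a chosen baseline; all client-to-client ratios are preserved.
old_ppd = {"uniprocessor": 300, "smp": 3000, "gpu": 3500, "bigadv": 30000}
NEW_SMP_BASELINE = 150

scale = NEW_SMP_BASELINE / old_ppd["smp"]  # 0.05 with these numbers
new_ppd = {client: round(p * scale) for client, p in old_ppd.items()}

print(new_ppd)  # {'uniprocessor': 15, 'smp': 150, 'gpu': 175, 'bigadv': 1500}
```

Note that bigadv still earns exactly 10x what SMP does before and after the shift; only the absolute magnitudes shrink.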
 

APOLLO

[H]ard|DCer of the Month - March 2009
Joined
Sep 17, 2000
Messages
9,089
The original concept of points advantages for running SMP and GPU was that the points were a reward for being able to run better hardware with greater setup and maintenance costs. This points bonus is relative to the uniprocessor client. Therefore, to keep the whole thing fair, i.e. to keep the same ratio between "normal" and "high performance" clients, either everything needs to be compressed (like dividing all points production by 10 or 100, as some have suggested) or something like the normal SMP client needs to become the baseline. So the SMP client would be about 150 ppd depending on CPU, about the same for GPUs (NVidia ones at least), 300+ for bigadv, etc. That's what I mean.
I follow what you're saying now, but that is completely different from what I meant in my original post. What I was referring to by rebooting the project was elaborated upon by SmokeRngs. We were both here since the G@H days. The current project is F@H 2.0, which superseded the initial F@H project that was later labeled by some as a 'beta.' If another reboot of the project ever happens (incredibly unlikely) then everything will be zeroed, including all team points accumulated thus far, as happened with the previous switch to F@H 2.0. There will be no more concern for increasingly unmanageably large numbers and increasingly large points+bonus for some WUs. Everything, including the value of WUs, would change in such a scenario.

I highly doubt that will occur because the world is different now, and different players are involved. If Stanford ever went over to a hypothetical 3.0 of the project, EVGA would instantly become #1 and all teams would be rated according to the PPD they're producing at the date the project migrated. I could just see the ruckus that would cause... :eek:

Again, I want to state that I do not believe there is actually a problem so long as the stats servers can deal with the larger numbers and stats pages can handle them with ease. It's just a perceptual change that some have to get accustomed to, akin to the increasing processor frequencies and storage space of media, etc. in the computer industry as time goes by.
 

sirmonkey1985

[H]ard|DCer of the Month - July 2010
Joined
Sep 13, 2008
Messages
22,230
The original concept of points advantages for running SMP and GPU was that the points were a reward for being able to run better hardware with greater setup and maintenance costs. This points bonus is relative to the uniprocessor client. Therefore, to keep the whole thing fair, i.e. to keep the same ratio between "normal" and "high performance" clients, either everything needs to be compressed (like dividing all points production by 10 or 100, as some have suggested) or something like the normal SMP client needs to become the baseline. So the SMP client would be about 150 ppd depending on CPU, about the same for GPUs (NVidia ones at least), 300+ for bigadv, etc. That's what I mean.


If that's the case, then why not just move F@H to BOINC.. since that's basically how the point system in BOINC works, and it's a shitload more stable to run than F@H.. there they just use a set limit on points for the WU, and what you get per WU is determined by how fast you complete it..
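The scheme described here, a fixed ceiling per WU with granted credit scaled by completion speed, might look like the following. This is an illustrative sketch only; real BOINC credit calculations are considerably more involved:

```python
# Sketch of a fixed-ceiling, speed-scaled credit scheme: a WU is worth
# at most wu_cap, and slower-than-benchmark returns earn proportionally less.
def granted_credit(wu_cap, benchmark_hours, actual_hours):
    """Grant up to wu_cap; credit scales with speed relative to a benchmark."""
    return min(wu_cap, wu_cap * benchmark_hours / actual_hours)

print(granted_credit(100.0, 10.0, 20.0))  # 50.0  (half benchmark speed)
print(granted_credit(100.0, 10.0, 5.0))   # 100.0 (fast returns hit the cap)
```

The cap keeps per-WU credit bounded no matter how fast the hardware gets, which is exactly the inflation-control property being discussed.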
 

Mr. Pedantic

[H]ard|Gawd
Joined
Sep 19, 2009
Messages
1,707
If that's the case, then why not just move F@H to BOINC.. since that's basically how the point system in BOINC works, and it's a shitload more stable to run than F@H.. there they just use a set limit on points for the WU, and what you get per WU is determined by how fast you complete it..
But that's not the point. With BOINC, your ppd is simply proportional to the speed of your system, whereas with FAH the points system is a lot more judicious: with MD there are far more possibilities in terms of dynamics, models, and approximations when using a few fast systems than when using lots of slow ones. So it's not just a case of Stanford rewarding people; it's partly a way to identify people with faster hardware and use their full potential, and partly an incentive to get people to upgrade, which gives Stanford more options.
 