Suggested Change to the PPD System

Moderators: Site Moderators, FAHC Science Team

Grandpa_01
Posts: 1122
Joined: Wed Mar 04, 2009 7:36 am
Hardware configuration: 3 - Supermicro H8QGi-F AMD MC 6174=144 cores 2.5Ghz, 96GB G.Skill DDR3 1333Mhz Ubuntu 10.10
2 - Asus P6X58D-E i7 980X 4.4Ghz 6GB DDR3 2000 A-Data 64GB SSD Ubuntu 10.10
1 - Asus Rampage Gene III i7 970 4.3Ghz DDR3 2000 2-500GB Seagate 7200.11 0-Raid Ubuntu 10.10
1 - Asus G73JH Laptop i7 740QM 1.86Ghz ATI 5870M

Re: PPD Bonus Scheme

Post by Grandpa_01 »

#1: Who knows the scientific value of a WU?
Answer: Stanford

#2: Who, then, should determine the value of a WU, and the value of the speed with which it is returned?
Answer: Stanford

The points debate is nothing but an emotional reaction to a given person's e-peen. Stanford knows the value of the science, and the value they place on it is the value. Now, some of us think we know how to run the project better than Stanford and voice our opinions on how we think it should be run; I am just as guilty as the next when it comes to that. Stanford has created a little bit of this problem by bowing to our demands, which was most likely a mistake on their part; it opened a door which may never be shut. But one thing is for sure: the points system works. Stanford's F@H project is growing, and the number of people participating grows on a daily basis. Some people come here and say XX% of their team is quitting F@H and moving to other projects, but if you check their stats, their active membership is actually going up, and in most cases so are their points. People have come and gone since the beginning of the project, and they always will. The important thing is that the project continues to grow, and guess what: the stats say it is.

Anyway, I firmly believe Stanford has assigned the proper value to the WUs. After all, what do they have to gain by artificially assigning a larger value to a given WU? Just because you or I do not believe it is the right value does not mean the assigned value is wrong; it just means we did not like it. Why is it that people have such a hard time believing Stanford knows what they are doing? :?:
2 - SM H8QGi-F AMD 6xxx=112 cores @ 3.2 & 3.9Ghz
5 - SM X9QRI-f+ Intel 4650 = 320 cores @ 3.15Ghz
2 - I7 980X 4.4Ghz 2-GTX680
1 - 2700k 4.4Ghz GTX680
Total = 464 cores folding
SASinUtah
Posts: 13
Joined: Thu Jan 19, 2012 3:08 am
Location: Middle of Nowhere, Utah

Re: PPD Bonus Scheme

Post by SASinUtah »

Please forgive a novice and possibly naive comment. The PPD, QRB, etc., are a fun way to measure work accomplished; I enjoy watching my points grow (just over 1 million for my team in only 4 years!!). I enjoy upgrading equipment to see how much of an effect it has on point production (four different video cards, one PC, three laptops, one laptop with a processor upgrade). I also enjoy watching certain high-PPD teams and individuals fly along, passing one team after another. Yet the teams I most respect are teams such as one I just passed, Foldtron. Only 106 PPD, and closing in on 15,000 work units processed; I have yet to process 2,100. Who, between us, has done more beneficial work for F@H: Foldtron, or I? My vote is for Foldtron.

May we keep our perspective as to what the goal really is: advancement in science, so people like verlyol won't have to watch friends like Benoit die so young with so much life potentially in front of him to live (viewtopic.php?f=15&t=20420). Let us please keep our egos in check and our dialog civil so that we may not in a moment of anger or displeasure cease to help others in our respective ways. All ideas are relevant, for in what one may perceive as an off-the-wall suggestion another may see the genesis of a breakthrough concept.

Fold on, my friends.
Last edited by SASinUtah on Fri Mar 16, 2012 12:09 pm, edited 2 times in total.
k1wi
Posts: 909
Joined: Tue Sep 22, 2009 10:48 pm

Re: PPD Bonus Scheme

Post by k1wi »

Grandpa - I agree 100% with you that Stanford knows best. I don't actually think that would change under this new proposal; after all, all they would be doing is normalising the value of each point based on the advancement of technology.

In fact, under my proposal there is no change to the relative difference in points: your 4P server will still earn 9x as many points as my i7, or whatever the current ratio is. There will still be an incentive to go big. What would change is that the nominal value would not increase exponentially. Hell, under my proposal I'm not suggesting any change to the QRB OR the premium associated with bigadv. I'm not arguing that differentials in PPD are reducing people's participation in the project, or that person X is quitting.

I have never been anywhere near the top 1000 and I am never going to be anywhere near the top 1000. I know that over the lifespan of the hardware I am currently running, it is going to peak and then decline relative to other computers, as mine gets older and theirs gets younger. Eventually I will purchase new hardware and I'll start heading up the charts again. In the long term, once you even out the peaks and troughs, I'll basically hit a stable value, or slowly increase as other folders come and go and I stay committed.

The reason for performing the normalisation I have suggested is that, if there is no change to the points system, people are eventually going to be measuring points in trillions, and at some point an adjustment will need to be made. Particularly because people look at Grandpa's PPD and say "1,300,000 PPD! That's a ton more than my 100,000!" Yet in reality, all we are talking about is 13:1. Normalised, that is the exact same ratio/proportion, but it's a hell of a lot less off-putting.

Therefore, all my proposal is doing is accounting for the improvement in computational power. It's saying: "we know that computational power increases as time goes on, but rather than let that inflate points exponentially until we're discussing points in terms of 1.28^34 instead of 1.43^35, let's account for that ever-increasing rate of CPU power and instead frame points in terms of the relative amount of effort put in at any given time." Because that is, at the end of the day, what it is about: rewarding how much effort someone puts in at a given point in time. It shouldn't matter when you contributed; what should matter is how much you contributed at the point in time that you contributed, and that if you stop contributing, or contribute less, you will fall down the rankings.
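To make the mechanics concrete, here's a rough sketch of the deflation I'm describing. Everything in it is hypothetical (the deflator value especially); Stanford would derive a real index from their own data:

```python
# Sketch of the proposed normalisation (hypothetical numbers throughout).
# Nominal PPD is divided by an index of average computational power,
# so relative standings are preserved while the nominal scale stays flat.

def normalise_ppd(nominal_ppd, power_index):
    """Deflate nominal PPD by a computational-power index (base period = 1.0)."""
    return nominal_ppd / power_index

# Two donors in a year when average hardware is, say, 2x the base period.
big_rig_2012 = normalise_ppd(1_300_000, 2.0)   # 650,000 normalised PPD
small_rig_2012 = normalise_ppd(100_000, 2.0)   #  50,000 normalised PPD

# The 13:1 ratio between the donors is untouched by the deflation.
print(big_rig_2012 / small_rig_2012)  # 13.0
```

Whatever index is chosen, dividing everyone's nominal PPD by the same deflator leaves every ratio between donors unchanged; only the nominal scale shrinks.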

At the end of the day, my argument is not about preserving my e-peen; it's about this: what matters is not the fact that computers are getting ever faster, but the relative performance at a given point in time, extrapolated out over the entire FAH project. Rather than reward the fact that computers are getting ever more powerful, let's reward the amount of effort people put into the project. We should be rewarding the fact that in 2012 Grandpa invested a lot of money and computational energy into his 4P system, and that in 2011 he invested a lot of time, effort and money into whatever platform he ran before that. He should get more reward for that than I do for investing proportionately less money and computational energy at the same given time.

After all, when we look at economic growth, we don't look at nominal growth, we look at real growth, and THAT is what my proposal is measuring. I have given ideas on how to normalise it, although I am sure that Stanford has the data to make a much more educated normalisation than I can. If we can't use "a $1000 computer" because there are different types of $1000 computers, then that is fine, but let's have a discussion as to how we can normalise it or otherwise account for the ever-increasing computational improvement. The median/mean example is a valid method of normalising for computational improvement, if measured in an appropriate manner. But perhaps there are better ideas. Unfortunately, discussion of what is an appropriate method of normalising PPD cannot be developed or explored if people think that ever-increasing points should result mostly from technological improvements.

Unfortunately, I am not particularly sure that people are going to understand my way of thinking. Instead, in 5 years' time we are going to end up with truly ridiculous PPD (not because of the QRB), people will measure their PPD (and overall points) in terms of how many trillion points they have, and stats sites' databases are going to be measured in terms of 1.33453645475367456756753456356^32 or 1.33^32 (basically wiping out everyone with less than 1^30 points).
k1wi
Posts: 909
Joined: Tue Sep 22, 2009 10:48 pm

Re: PPD Bonus Scheme

Post by k1wi »

SASinUtah - I appreciate your post!
Grandpa_01
Posts: 1122
Joined: Wed Mar 04, 2009 7:36 am

Re: PPD Bonus Scheme

Post by Grandpa_01 »

k1wi

My post was not aimed at you or MtM; you two are actually having an intelligent conversation about the points system. It was meant to make some people stop and think before they post their e-peen comments about the points system. There is a difference, and there have been plenty of them posted; they are all basically the same, over and over again. It is the (H) factor, which really has no place in science; it only serves to detract from the project. Stanford surely has better things to do than worry about the folding public being upset. Unfortunately, they do have to worry about it, because we are humans: we are donating our time and equipment to what we believe to be a worthy cause, we are also competitive by nature, and we want to feel valued. To some of us it is more important than it is to others, and for some, the only way to place a value on their worth is by points. I understand this; I have been guilty of exhibiting the (H) factor in the past and most likely will be guilty of it in the future. But the thing we all need to remember is that Stanford knows what they need, and uses the tools they have available to direct us.

I also understand Moore's law, and realise that as time goes on the value of the work done by my equipment will be devalued; I expect this. The unfortunate thing is how rapidly it happens in the technology field. I do not know how many times I have been at the leading edge in production only to have it devalued in a short amount of time. But I do not expect Stanford to wait for me to catch up; if they did, they would not make much progress in the science, so I have to catch up if I wish to stay as productive as possible. I am actually amazed by how many people claim to want to help science but in the same breath want to penalise those who take steps forward. Stanford says, "we will give you this carrot for this WU done in this amount of time." So I look at the big carrot and say that work must be really important, and I shoot for as close as I can come to that carrot. Well, some believe that carrot is too big, because they know better than Stanford what Stanford wants and needs and what the value of that WU is. I am sure you have seen the names I and others have been called and the comments made over time (the privileged few, points chasers, points whores, all they care about is points, etc.). Sometimes it gets a little discouraging; God forbid some of us actually care about the science.

Anyway, VJ asked before for anybody who had an idea for a better points system to post it, with formulas for a working system, and he would have a look at it and consider it. So go ahead and give him something to look at; who knows, Stanford might adopt it. :ewink: We just don't need a bunch of e-peen posts in a serious discussion about points systems. It tends to distract from the intelligent discussion.

And by the way SASinUtah - I also appreciate your post. :wink:
ChasR
Posts: 402
Joined: Sun Dec 02, 2007 5:36 am
Location: Atlanta, GA

Re: PPD Bonus Scheme

Post by ChasR »

Grandpa, I suppose I'm one of the e-peeners to whom you refer. One who thinks the QRB is unsustainable. One who thinks you shouldn't get a 2x bonus for beating the preferred deadline by 1 second. One who truly doubts that BA16 work rewards reflect the value of the science, and who thinks they are in fact so high that they encourage the production of less science for more PPD. The WUs were benched on a rig that doesn't qualify for the work.

If all the CPU projects were benched on one machine, say a 2P or 4P M-C server (the uniprocessor on 1 core, normal SMP on 8 cores, and BA16 on 16 cores), and the values reset (normalized) back to i5 SMP production, I'd probably be happy.

To take my e-peening out of the discussion: what is your production on regular SMP WUs on your 4P machine? Compare that to BA16 work. Are both values fair? Wouldn't most of the 4Pers howl in protest if their production were reduced to that of normal SMP? BA16 ought to be worth more than regular SMP work, but 4x is absurd. With the QRB there is always a powerful incentive to increase the number of cores, even without setting the value of BA16 work so high that it creates disinterest in all other forms of folding.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: PPD Bonus Scheme

Post by MtM »

ChasR, so what if there's an incentive to go big on cores? Aren't there enough people who will not be able to do so and would just have to accept donating what they can? What you're saying is that no charity should ever accept donations over 5 dollars, because that is the average most could afford, and anything more might discourage others from donating in the first place.

Also, again, someone who's going to inflate the numbers to sway opinions. Which donor has doubled his WU's worth by beating the deadline by one second? :roll:

You're claiming to know the scientific value better than PG? I know you were one of the people who used to run two instances of SMP on one system, and one who has debated against the QRB from the start. That holds no value unless you can prove PG is wrong and time isn't as important in a serial flow of work units.

You keep trying to say people double their points if they are just a tad quicker, but you never prove it. As per the comments in previous posts: a k-factor and deadlines that are set correctly for the spread in speed of the machines a project will be assigned to will not, or at least should not, allow machines to sit too close to the hockey stick.

If that's your problem, you should ask for better methods of predicting the spread in speed for any particular project, so they can set the k-factor and deadlines in a way which prevents the above from happening.

Or prove you know better and the QRB is wrong from the start.
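For reference, here's a quick sketch of the QRB curve as it is commonly described (final credit = base x max(1, sqrt(k x deadline / elapsed)), with the bonus paid only inside the preferred deadline). The base points, k-factor and deadline below are made-up numbers, not any real project's:

```python
import math

def qrb_points(base, k, deadline_days, elapsed_days):
    """Commonly cited QRB formula; the bonus applies only inside the deadline."""
    if elapsed_days >= deadline_days:
        return float(base)            # deadline missed: base credit only
    return base * max(1.0, math.sqrt(k * deadline_days / elapsed_days))

# Hypothetical WU: 1000 base points, k-factor 2, 4-day preferred deadline.
for days in (1.0, 2.0, 3.999, 4.0):
    print(days, round(qrb_points(1000, 2.0, 4.0, days)))
```

With these made-up numbers, returning just inside the deadline pays about 1.41x base, not 2x; the step at the deadline is sqrt(k), so its size depends entirely on how the k-factor and deadline are set for the project.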

@k1wi, I asked you a bunch of questions in the previous posts, and you never answered one of them. If you don't, or can't, your proposal already falls apart.
7im
Posts: 10179
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona
Contact:

Re: PPD Bonus Scheme

Post by 7im »

@ k1wi - I agree, conceptually, and as ChasR has stated, the need for a benchmark update is long, long overdue.
2nd, the current points system is based on a Celeron 500. Every benchmark change since that first machine has been tied to, or normalized back to, the PPD of that Celeron 500. Edit: And that's how they've maintained a direct link from scientific value to points, even through several benchmark changes.

@ Jesse - Yes, somewhat disruptive, but it is necessary from time to time. We've had several disruptions in the past, and there are more to come, but they are always necessary. Celeron 500, to P4 2.8 GHz, to i5; now it's well past due for a 16-core benchmark machine. PG does try to minimize the number of disruptions for obvious reasons, but k1wi has hit squarely upon why we need another one.

@MtM - Yes, only PG knows the actual numbers, but one does not need exact figures to recognize a problem. Common-sense value judgements may be flawed, but they are usually on the mark, especially when they come from people active in the project with lots of folding experience. Besides, as you said, PG knows best, so when they do move to a 16-core benchmark, whatever the PPD is set to then, it will be scientifically based. Are you arguing against PG updating the benchmark computer?

We can't shift the benchmark annually as suggested, because that is too disruptive. Even 18 months, to follow CPU ticks, is pushing it. But every 2-3 years is very doable (as has been done before). PG buys hardware a little ahead of the curve (as they've often done in the past), which lengthens how long the benchmark hardware stays current. It also helps stabilize the points system by disrupting it as infrequently as possible. But the class of hardware in the benchmark PC must match the current work units. Just as we had to move from the Celeron 500 to the P4 to allow benching with SSE and SSE2 performance, and from the P4 to the Core 2 benchmark to allow benching SMP work units, we must now move from a 4-core machine up to at least 16 cores to more accurately benchmark current hardware against current work units.

And that, as has been done in the past, temporarily delays the hockey stick problem, again. ;)
Last edited by 7im on Fri Mar 16, 2012 5:53 pm, edited 1 time in total.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Location: The Netherlands
Contact:

Re: PPD Bonus Scheme

Post by MtM »

Conceptually, the hockey stick problem lies in two parts here. One: the exponential increase in computing capability. Two: the QRB providing steep increases in the credit assigned to machines which are much faster than the other machines assigned the same projects.

The first part is something I'm saying should not be normalized. I would like to know my contributions are matched against scientific value, and you can't claim that the value of something I did one year ago, on a machine with, let's say, half the computational power, is the same as something I might do one year in the future on a machine with roughly the same purchase price, as has been suggested.

I don't want to normalize this just because some people don't want to be discouraged. I am actually encouraged to see others getting so many more points, as it shows the increase in computational/scientific throughput.

The second part I never claimed to be 100% correct, though I very much believe in the QRB's concept. There should be more benchmarking categories, and if not, projects should be more carefully matched against the spread in speed of the machines they will be assigned to (increased client/server logic). This way, you prevent machines from being on the very steep part of the curve.

That means you do not have to let go of the normalization against the original Celeron benchmark system, as that is the base against which the scaling of computational/scientific throughput is done. You can alter the way it is displayed by using the suggestion I made earlier, which would prevent PPD needing to be factorised in the listings in the near future, and would still allow me to read the listed PPD as actual computational/scientific input. You can extend that to more benchmarking machines using the same scheme.
k1wi
Posts: 909
Joined: Tue Sep 22, 2009 10:48 pm

Re: PPD Bonus Scheme

Post by k1wi »

MtM - Stop being so militant; your use of language such as "falls apart" is really not very conducive to finding a solution. You seem to be playing devil's advocate, nit-picking minute details because you don't think there is a fundamental issue that needs to be addressed: attacking the little picture because you do not agree with the big picture. Never mind the fact that you are nit-picking a detail that I have already admitted, a number of times, is hypothetical. When I started this thread I was hoping for constructive feedback, not negative feedback. I'm much more interested in how you would improve my solution than in nit-picking of small details on something that is really big-picture.

I gave you a whole page of answers to your questions (some of which you edited to the point where they resembled little of their original post). It's the middle of the night on a Friday here in New Zealand, and your post got buried amongst a number of other posts that I thought were more interesting/progressive and therefore prioritised answering. Please don't take my lack of an answer as the admission of a fundamental flaw in the design. I will answer your questions when I have the time.

k1wi

- 7im,

I appreciate your comments. The reason I am supportive of a more frequent adjustment policy is that I'd much rather have more, smaller adjustments; once a year, or once every 18 months, is a big adjustment. The main issue it faces is that it's a major change in conceptualisation that people like MtM don't seem to grasp. People look at it as "oh, my PPD has reduced by 3%; PG doesn't value my input", instead of looking at it as "computational power improved 3% (because the proportion of newest hardware increased, raising the average performance of computers at a continuous rate), therefore a point takes 3% more work to maintain the same level of difficulty." What really matters is that this new system preserves the relative productivity amongst users. My own personal opinion is that if adjusting for computational improvement became policy, on a known schedule, as the Federal Reserve has for setting interest rates (which are largely based on controlling inflation and promoting real growth), then people would know that points constantly reflect the change in computational power, while retaining the relative productivity of individual A vs. individual B. I guess I personally prefer regular 3% adjustments to infrequent 50% or 100% adjustments, because the latter are really large shocks to the points system.

But upon reflection, if we stay at a point on the hockey stick that is not at the steep end of the curve (say around square 5 or 6 of the chessboard, if you will), then these 'big adjustments' are not really big adjustments.


- MtM, not normalising for improvements in computational power rewards the improvements in computational efficiency, not the relative effort of donors. You seem to think that we should only reward improvements to technology, not the relative, proportionate effort of folders at any point in time. It says future folders will earn disproportionately more points SIMPLY because they will fold on more technologically advanced hardware. The analogy is looking at a person's increasing nominal wage and saying "they got a 5% raise - that's great", when a more accurate analysis would be "they got a 5% raise, but inflation increased by 5%, so actually their real wage did not increase."
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Location: The Netherlands
Contact:

Re: PPD Bonus Scheme

Post by MtM »

I'm not militant, but I find those 'nitpicking' details pretty important to the big picture. You're the one who's becoming militant here, and I would like you to stop that and start answering the questions if you're looking for constructive feedback. Not giving answers because you would rather keep to discussions that don't touch on solid facts, maybe also because those other posts fall in line with what you're trying to advocate, does not make your points more valid at all.

I also already gave a counter-proposal, which I supplied with reasoning and effects, which is more than you have done.

And that includes the reasoning behind not believing in rewarding the relative effort of donors without retaining a tie to actual scientific value.

Stop making it out like I don't understand your concept; it makes you look like you can't support your argument other than by attacking those who don't agree with you.

Also, I did not edit any of my posts to the point where they no longer reflect the intent they had when they were posted. Some posts were edited because you seemed to avoid answering them, some because you seemed to have a problem with my language or arguments. None was changed because I 'changed my mind'.

Edit:

I read back through the entire thread, and I found I had missed some sentences buried in replies which seemed to be aimed only at other people. Amongst those is the one about the dollar value not being a solid measurement. You asked for a different proposal; I think I had given one before: benchmark the computational increase (the effect it has on scientific value), normalize points using this speedup in reverse, but publish the speedup factor so that people like me who want to see the effective scientific value can look it up.

I also am kind of surprised it was so late during parts of this thread's discussions; I made posts around 3 at night yesterday, and I would have thought that would be midday for you (since I'm literally almost half way around the world).

Maybe we were both up late. Sorry that I missed some of your answers; I think you missed the point in some of my posts as well, and we didn't notice the actual feedback being given because of how it was presented. Let's bury the hatchet?
7im
Posts: 10179
Joined: Thu Nov 29, 2007 4:30 pm
Location: Arizona
Contact:

Re: PPD Bonus Scheme

Post by 7im »

Please stop talking about each other's form of response, stick to the content and topic, and bury the hatchet, or a mod will bury a hatchet in this thread as just another wasteful points complaint thread. Thanks.
Grandpa_01
Posts: 1122
Joined: Wed Mar 04, 2009 7:36 am

Re: PPD Bonus Scheme

Post by Grandpa_01 »

ChasR wrote:
Grandpa, I suppose I'm one of the e-peeners to whom you refer. One who thinks the QRB is unsustainable. One who thinks you shouldn't get a 2x bonus for beating the preferred deadline by 1 second. One who truly doubts that BA16 work rewards reflect the value of the science, and who thinks they are in fact so high that they encourage the production of less science for more PPD. The WUs were benched on a rig that doesn't qualify for the work.
Yes, you are one of those who have used descriptive names for people who have more productive equipment; that is just frustration and the (H) factor. I actually think you are a good person just trying to protect your position. And it is kind of ironic, because at one time you were one of them: I would look at your production with envy and say, one day I want to be able to do that. Thanks for the inspiration :D

You are stating that Stanford does not know the value of a given WU, or the value of it being returned in X amount of time. The value is what Stanford has determined it to be, not what you or I want it to be. So are you saying you know the value of the science better than they do? And where does the 1-second thing come from? Are you talking about the preferred and final deadlines? If you are, do you believe there should be no cut-off point? And if so, why not?
ChasR wrote:
If all the CPU projects were benched on one machine, say a 2P or 4P M-C server (the uniprocessor on 1 core, normal SMP on 8 cores, and BA16 on 16 cores), and the values reset (normalized) back to i5 SMP production, I'd probably be happy.
Just curious, but where would the incentive be for people to buy the machinery and run the bigadv WUs if it were normalised across the board? The purpose of the QRB was to encourage quick returns. Guess what: it works. Just look at all the 4P out there right now, and there are quite a few more being planned and built. Why would I or anyone else run a bigadv WU if we could make the same PPD off SMP with far less risk?
ChasR wrote:
To take my e-peening out of the discussion: what is your production on regular SMP WUs on your 4P machine? Compare that to BA16 work. Are both values fair? Wouldn't most of the 4Pers howl in protest if their production were reduced to that of normal SMP? BA16 ought to be worth more than regular SMP work, but 4x is absurd. With the QRB there is always a powerful incentive to increase the number of cores, even without setting the value of BA16 work so high that it creates disinterest in all other forms of folding.
It is not 4x: an SMP 6940 = 00:35 = 246,000 PPD; an SMP 7163 = 00:35 = 238,000; a 6904 = 16:59 = 620,000; a 6901 = 06:13 = 382,000. You do tend to exaggerate a little when it comes to the numbers.

I do believe there are some problems with the current benchmark system, but it is not the numbers assigned to different classes of WU; the problem lies within the classes of WUs. I believe all SMP should be normalised to an SMP standard, all bigadv to a bigadv standard, all GPU Fermi normalised, etc. Each WU within a class should receive relatively the same amount of PPD as the next WU in that class, and currently that is not the case. There is an allowance of ±15%, which is an actual spread of 30%, which is far too much.

Anyway, I am not here to argue about it; it will make very little difference. If Stanford decides to bow to public pressure again, then they do. I believe they know, and have set, the value to what it actually is to them. But it will not matter to me; I will continue to fold even if it is for 0 PPD. :wink:
2 - SM H8QGi-F AMD 6xxx=112 cores @ 3.2 & 3.9Ghz
5 - SM X9QRI-f+ Intel 4650 = 320 cores @ 3.15Ghz
2 - I7 980X 4.4Ghz 2-GTX680
1 - 2700k 4.4Ghz GTX680
Total = 464 cores folding
k1wi
Posts: 909
Joined: Tue Sep 22, 2009 10:48 pm

Re: PPD Bonus Scheme

Post by k1wi »

I would like to bury the hatchet. For what it is worth, I appreciate that you want the best outcomes for folding, and in many respects you represent the very people that my argument has to convince for it to go anywhere. My last post (prior to the one a couple of hours ago) was at 6pm on a Friday evening; I stopped posting because we had a family evening planned, not because I was burying my head in the sand. My most recent post was at 4:39am. I care very passionately, hence why I am prepared to put my standing in the community on the line in order to resolve the issue at hand, but 'real life' has to come first and foremost.

I had already dedicated quite a substantial amount of time to discussions with you, and your posts deserve a proper reply, so I prioritised the other posters because I had the time to answer them. I don't want to give you a 5-minute reply when your questions deserve a considered 20-minute one. I'd rather use those five minutes to push out a quick reply elsewhere and then get back to yours when I have time.

The reason why I started this thread is that the current points system does have the wheat/chessboard problem: it rewards users for the efforts of people at Intel, AMD and NVidia that result in ever-increasing computational power. If we do not account for the ever-increasing power of computers, we end up with ever-increasing absolute values, until the absolute values are massive, and I don't think that's efficient. We cannot prevent this exponential growth unless we address the fundamental underlying problem: that computing power is improving exponentially. This exponential growth is not an issue when we are talking PPD in the 10s or 100s or 1000s, because we are dealing with small numbers. Indeed, the shape of the curve at that point looks exactly the same as it does over a longer scale; after all, 2 is double 1 just as much as 2,000,000 is double 1,000,000. What is an issue is that absolute values quickly become massive, and that has a huge psychological impact. I am pretty sure people would have little issue with someone earning 16 points a day when they are only earning 1 point a day, but I do think they would have more of an issue with someone earning 160,000,000 PPD when they are only earning 16,000,000.
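To illustrate the chessboard effect, here is a quick sketch. The base of 1,000 PPD and the 18-month doubling period are purely illustrative assumptions (a Moore's-law-style growth rate), not project data:

```python
# Sketch of the wheat/chessboard effect on PPD, assuming (hypothetically)
# that attainable PPD doubles every 18 months as hardware improves.

def ppd_after(months, base_ppd=1000, doubling_months=18):
    """Projected PPD after `months` of exponential hardware growth."""
    return base_ppd * 2 ** (months / doubling_months)

# Each step is "only" a doubling, but the absolute values explode:
for years in (0, 3, 6, 9, 12):
    print(years, "years:", round(ppd_after(years * 12)))
```

The ratio between consecutive steps never changes; only the absolute numbers do, which is exactly the psychological problem described above.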

Unfortunately, we cannot address the fundamental underlying problem without first letting go of the idea that PPD should always increase as the power of computers increases. That requires a massive mind shift and is perhaps, in my mind, the biggest challenge to any revision of the points system. It is a hard thing to adopt, much like moving to a fiat currency, but what is important is making sure that the points system encourages competition, not that it increases exponentially each year.

That is why I purposely did not give a 'hard fact' equation to implement my view until I was pushed to do so: until we address the big picture, as to whether we should solve the problem by normalising PPD at all, the actual nitty-gritty just adds an extra layer of complexity that we aren't ready for. I don't want to have to defend my belief that there is a fundamental flaw in the points system and a need to somehow normalise PPD, while simultaneously hashing out the detailed formula for how we normalise. I also admit that I don't have the exact formula, and I don't pretend to have it; what I want is input from people so that we can push this concept into a finished product and say, "this is what we as a community have come up with."

If people don't believe in it, that is fine; I have no problem with that. In the end, the candidate ways of normalising the points system all do basically the same thing, whether by manipulating K factors or by applying scaling factors; but I believe those two options are less than optimal, because they either do not address the problem optimally or do not in fact apply any normalisation.

If it's by adjusting the K factor of projects, then the only way we can normalise points is by continuously making each new project worth less than the previous one, in line with either the growth in the power of computers or some other rate. My issue with that is that it results in different projects running concurrently with different point values. I would much rather apply any ongoing normalisation across ALL projects, and theoretically that means ALL projects, including those no longer running. The idea being: if for some reason we brought out an ancient project and folded it today, it would still get proportionately the same reward for the amount of effort it requires, relative to other projects. I hope that my big-picture justification for that has been properly conveyed.
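A minimal sketch of that idea, with hypothetical names and numbers: divide every project's raw credit by one global index (imagine it as 1.0 at some baseline date and growing with hardware power), so the ratio between any two projects is preserved no matter when they are folded:

```python
# Hypothetical sketch of one global deflator applied to ALL projects.
# "benchmark_index" stands in for a hardware-power index; the values
# below are illustrative, not real project credits.

def normalised_credit(base_credit, benchmark_index):
    """Raw credit divided by the global index at the time of folding."""
    return base_credit / benchmark_index

# An old project and a new project keep the same 2:1 ratio whenever folded:
old_project, new_project = 600.0, 1200.0
for index in (1.0, 2.0, 8.0):
    ratio = normalised_credit(new_project, index) / normalised_credit(old_project, index)
    print("index", index, "ratio", ratio)
```

Because the same index divides every project, relative rewards at any point in time are untouched; only the absolute scale is held flat.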

If we do it by factorising the points system, then we are in effect making it far less meaningful for the average person to understand. 1.22×10^23 compared with 6.32×10^21 is a complex comparison to grasp, and, in my mind, it needs to be simpler than this. Furthermore, the stats databases will still have to store the absolute values, which will lead to exponentially growing databases.

I would like to stress that I don't think the QRB is at fault in this instance, hence why I am not changing it! I'm not actually for changing the relative point-in-time allocation of points at all. Where the QRB plays a role is that it is effectively turning what is a doubling in PPD every 18 months into a ^4 change every 18 months; that's a whole other fundamental kettle of fish. Look at the graph on page one: regardless of the QRB, we are still moving along the curve/chessboard. All the QRB does is make this discussion ever more pertinent. In my mind, let's create a solution to one problem, putting the QRB to one side, make whatever changes that one problem needs, and then look at solving the next. If we try to solve them all at the same time, we'll get nowhere but personal criticism!
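For what it's worth, the QRB bonus formula as published in the points FAQ is credit = base_points × max(1, √(k × deadline / elapsed)), which implies PPD scales with elapsed time to the power −1.5; a doubling of machine speed then multiplies PPD by about 2^1.5 ≈ 2.83 rather than 4. A rough sketch, with all the specific numbers (base points, k, deadline) purely hypothetical:

```python
import math

# Sketch of how the QRB amplifies hardware speed-ups, using the published
# bonus formula: credit = base * max(1, sqrt(k * deadline / elapsed)).
# Base points, k, and deadline below are made-up illustrative values.

def qrb_ppd(base_points, k, deadline_days, elapsed_days):
    bonus = max(1.0, math.sqrt(k * deadline_days / elapsed_days))
    credit = base_points * bonus
    return credit / elapsed_days  # points per day

slow = qrb_ppd(1000, 5, 6.0, 2.0)   # WU returned in 2 days
fast = qrb_ppd(1000, 5, 6.0, 1.0)   # same WU on hardware twice as fast
print(fast / slow)  # ~2.83, i.e. 2 ** 1.5
```

That is still superlinear, which is the whole point of the QRB, but it is a fixed exponent on speed, not an extra compounding rate per year.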
k1wi
Posts: 909
Joined: Tue Sep 22, 2009 10:48 pm

Re: PPD Bonus Scheme

Post by k1wi »

Add: actually, I don't think my final paragraph is correct; I don't think the QRB is turning it into a ^4 change. Furthermore, I suspect that we can retain the principles behind the QRB and its intended outcomes, and still resolve the issue, if we solve the fundamental problem I am attempting to solve.