Bigadv points change


mdk777
Posts: 480
Joined: Fri Dec 21, 2007 4:12 am

Re: point system is getting ridiculous...

Post by mdk777 »

And you try to make it less dire than it really is.

1. The FAQ I linked stated the i5 PPD was 6189 PPD. 850,000 PPD / 6000 PPD = about 140 systems to make that much PPD. How can you say the bonus should be a lot higher if the FAQ states the PPD IS 6189 PPD?
Answered here
What, exactly, is the relative value of the various CPU clients?

THIS FAQ gives us some indication.

classic PPD = 100
SMP PPD = 1760
bigadv PPD = 2640

This would mean the classic to bigadv ratio is 1:26.4

If we divide the aforementioned 48-core bigadv machine's PPD (850,000) by that ratio we get the equivalent classic PPD value, which is 32,196.97 PPD

So how many classic folders would it take to get that PPD? I have an AMD @2.8 GHz that gets 575.96 PPD on classic Project 10720. That works out to 56 classic clients.

Does a single bigadv client = 56 classic clients?

Is a folder that turns in one massive WU every 20 hours equal to 56 Classic clients that each turn in one WU every 3.74 days?

In the time those 56 classic clients each complete one WU, the bigadv machine will finish 4.5 WUs

Does that single bigadv WU = 12.5 classic WUs?

When you separate the points from the time and the work done in that time, you get very skewed results.
This is the second time you have made this error in trying to show how ridiculous the increase in points has become.

By definition, ANY time bonus whatsoever will result in an exponential increase in PPD :!:

From the way you present the disparity in PPD, the only conclusion is that you are against any kind of QRB.

However, you say that you do support it, and supported it when it was announced. Hence, since nearly the very start of this thread, VJ has asked people to crank the numbers, show the equations, and defend the results, rather than just pointing out the obvious: that the QRB returns exponential increases in points as return times shrink. :mrgreen:
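Taking that invitation literally: here is a minimal sketch (Python), assuming the PointsNew form final = base * max(1, sqrt(k * deadline / days)). The base, k, and deadline values below are made up for illustration; only the shape of the curve matters.

Code: Select all
import math

def qrb_points(base, k, deadline_days, days_taken):
    """Points credited for one WU under the quick-return bonus."""
    return base * max(1.0, math.sqrt(k * deadline_days / days_taken))

def ppd(base, k, deadline_days, days_taken):
    """Points per day at a steady rate of one WU per days_taken."""
    return qrb_points(base, k, deadline_days, days_taken) / days_taken

# Halving the return time multiplies points/WU by sqrt(2) (~1.41x)
# and PPD by 2**1.5 (~2.83x) -- a power law in speed, which is what
# this thread loosely calls "exponential".
for days in (4.0, 2.0, 1.0, 0.5):
    print(f"{days:4.1f} days -> {qrb_points(1000, 2, 6, days):7.0f} pts/WU,"
          f" {ppd(1000, 2, 6, days):8.0f} PPD")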

[Image: graph of the QRB points curve]

I really really do not like the shape of this curve :!: :lol:

However, just because I dislike exponential functions does not mean that they do not accurately reflect certain phenomena. :wink:
Transparency and Accountability, the necessary foundation of any great endeavor!
Amaruk
Posts: 254
Joined: Fri Jun 20, 2008 3:57 am
Location: Watching from the Woods

Re: point system is getting ridiculous...

Post by Amaruk »

7im wrote:Please summarize why your formula is better.
It's not really my formula. ;)

There are only a few actual formulas in the previous 15 pages, and no one has yet taken the time to flesh any of them out. I simply took what appeared to be the best one and did the math.

As for it being 'better', that is not for me to decide. That responsibility lies solely with the Pande Group.

My purpose is to provide the data necessary to allow a debate of the above formula based on fact.



MtM wrote:It does fix the current problem some seem to have with the exponential increase in value with increasingly shorter return times...
I must confess that it really doesn't even do that. :(


When I get the time I will run the numbers on changing the benchmark machine.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: point system is getting ridiculous...

Post by MtM »

Doesn't it 'fix' it by limiting the divergence to be expected between the average returns and the fastest returns? You're not getting that far to the right of the curve if you bench the project on comparable hardware.

Doesn't the whole problem originate from the difference in performance between the i5 and the setups actually running bigadv?

If you don't add diversification, then you need to look at something 7im said before: use a cap based on actual performance ( so not cores, please; use something like flops, but in the form of some real f@hcore action so it reflects the proper expected performance ). That cap limits people's interest in getting as far right as possible, so I don't feel it's the best method. I think the formula shows that we should encourage people to get as far to the right as possible.

Adding a benchmark machine makes that easier to do by splitting off the really different bigadv client from the normal smp/classic clients ( and maybe it should even be done between classic and smp ). You would control the extremes much better, and you can still count on people wanting to show off their fastest machines on the folding@home project.

Edit: spelling + addition

If you accept the current points as valid, anyone familiar enough with the actual systems running bigadv ( and I do mean the real numbers, total numbers ) could say what the average configuration should look like and determine the new base points using the existing formula ( so that the points awarded to such a system remain equal to what it is getting now ). The issue with putting those numbers to it is that 7im will argue that it doesn't change anything about the extreme differences between normal smp and bigadv.

And I can only say that holds until PG decides the differences aren't right and lowers the base points for bigadv... the ball is in their court, not ours; how can we really judge how they should value time? A lot of people have argued this, but I must repeat: anything which is aimed at promoting faster returns needs to be exponential. You can flatten the curve only if you know for sure that the value on the y-axis isn't as important as the line shows... the only one who really commented on this is VP himself, and he confirmed the original formula is correct. He would consider any alternative, but has stated his reason for doing so would be to ensure there wouldn't be a lot of people feeling treated unfairly. Changing the QRB formula in that light should be avoided, I feel, and I think at least some would agree.

The thing diversification in the benchmark machines will do is ensure no one in a given group gets too far outside the expected range.

Does that fix how people feel when comparing their PPD from a quad core running smp with someone running a quad 6-core Intel setup? Probably not. But fixing epeen envy, sorry for the bluntness, isn't the task at hand, certainly not if it means artificially lowering the value of the top producers with a cap which prevents people from getting as far right as possible. Only PG could at some point lower the base points for bigadv so that it isn't awarded as many points, and if they don't, then I don't think it's because they don't know people are wondering whether it is a true reflection of the contributions made. All I have seen so far are changes to bigadv projects which were out of range with other bigadv projects, not an adjustment downwards for all bigadv projects. If there has been one, it indicates another isn't that unlikely; if there has not, it should show that after this many pages of discussion, no one has decided their worth is reflected wrongly. Maybe it even indicates adding a new machine is not needed; maybe it says the right side of the curve does its work as intended. However unfair it all could seem to someone disappointed with their own points production, it's more unjust to take away someone else's contribution to make that other person feel better.

I have to apologize in advance; I did a reformat and have no spellchecker installed yet :oops:

Edit: more spelling
7im
Posts: 10179
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona
Contact:

Re: point system is getting ridiculous...

Post by 7im »

mdk777 wrote:
And you try to make it less dire than it really is.

1. The FAQ I linked stated the i5 PPD was 6189 PPD. 850,000 PPD / 6000 PPD = about 140 systems to make that much PPD. How can you say the bonus should be a lot higher if the FAQ states the PPD IS 6189 PPD?
Answered here
What, exactly, is the relative value of the various CPU clients?

THIS FAQ gives us some indication.

classic PPD = 100
SMP PPD = 1760
bigadv PPD = 2640

snip...
Amaruk was wrong, so now you're doubly wrong. Not answered.

As I keep saying... check your facts.

CPU is 110 PPD. 220 PPD with Big WUs.
SMP WAS 1760 PPD, on the OLD benchmark system, not on the new i5. You still haven't read that FAQ I linked? Try again. And it was only the base PPD, not with QRB.
bigadv PPD = 2640

That last one is also the old number, from before the QRB.

New numbers are...
the new quad-core base PPD [is] 1130
Our Core i5 benchmark machine gets 6189 PPD.
(with bonus)

http://folding.stanford.edu/English/FAQ-PointsNew

Although it seems that the "1.5x the SMP points" thing for bigadv has changed to the formula we've debated over the last few pages. ;)


@ Amaruk. Okay, not your formula. ;) That credit probably goes to ChasR.

Still, if you put it forth, with examples, you must have an opinion on it. What is your opinion? How is this formula change an improvement?
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Haitch
Posts: 34
Joined: Tue Dec 04, 2007 4:34 pm

Re: point system is getting ridiculous...

Post by Haitch »

I've been plugging numbers into the formulas, and thinking about the whole situation, and my conclusion is nothing needs to be changed.

The two main reasons for that:

1) People are looking at PPD rather than points per WU. The QRB is designed to encourage the quick return of WUs. Take this scenario:

I rent a 480-core supercomputer that can knock out a 6903 at a 1:00 TPF. In the 24 hours I have it, I can do 14 of them, for an impressive 33,143,227 points for the day.

After my rental is over I go back to using my usual 12-core, which does a 40:00 TPF on a 6903. Those same 14 units take me 40 days to complete, for 5,240,400 points.

So the supercomputer gets 6x the points for doing the work 40x as fast? Does that seem unreasonable to anyone? It doesn't to me, and apparently it doesn't to PG.

Extrapolating this out to where the supercomputer can do a 6903 at a 1 second TPF, you might bitch about the 15.5 billion points for the day, but it's still only 65x as many points for doing the work 2,400x faster (it'll take the 12-core 6.5 years to do the same work ....)

2) I've heard it said repeatedly that the curve doesn't scale up to the monster machines. Well, after plugging in the numbers on a 6903, I can tell you it scales pretty much perfectly.

Every 50% decrease in TPF equates to a ~41% increase in points FOR THE WU. Increase the turnaround speed by 100% and you get 41% more points for the WU - and the points per WU, not the PPD, are the measure of the science.

A 50% decrease in TPF is equal to a doubling of CPU performance, and the 41% holds whether it's 80:00 to 40:00 or 1:15 to 0:38. Is a 41% increase in points for a WU reasonable for a doubling of CPU resources? Hell - I'd have given it more.

It scales the same going from 1P -> 2P, 2P -> 4P ... 24P -> 48P, 48P -> 96P ....
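For anyone who wants to check that arithmetic, a minimal sketch (Python), again assuming points = base * max(1, sqrt(k * deadline / days)). The base, k, and deadline values are placeholders, not the real 6903 constants; the ratios fall out of the square root regardless.

Code: Select all
import math

BASE, K, DEADLINE = 8000.0, 2.0, 10.0   # illustrative values only

def wu_points(days):
    """Credit for one WU returned after `days` days."""
    return BASE * max(1.0, math.sqrt(K * DEADLINE / days))

fast_days = 100 / (60 * 24)   # 1:00 TPF x 100 frames -> ~0.07 days/WU
slow_days = fast_days * 40    # 40:00 TPF -> ~2.78 days/WU

per_wu = wu_points(fast_days) / wu_points(slow_days)
per_day = per_wu * 40         # the fast machine also does 40x the WUs/day
print(f"points/WU ratio: {per_wu:.2f}x")   # ~6.32x = sqrt(40)
print(f"PPD ratio:       {per_day:.0f}x")  # ~253x = 40**1.5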


So, for me anyway, the current system makes sense and doesn't need tweaking at all.

H.

edit - added extrapolation.
Last edited by Haitch on Wed Jun 22, 2011 10:25 pm, edited 1 time in total.
VijayPande
Pande Group Member
Posts: 2058
Joined: Fri Nov 30, 2007 6:25 am
Location: Stanford

Re: point system is getting ridiculous...

Post by VijayPande »

An update on what's been going on internally at Stanford: somewhat paralleling this thread, we've been discussing this internally, taking in comments made here and in other threads. We have a meeting scheduled for Monday and I have some ideas to propose. If the FAH/Pande Group likes the proposal, we'll run it by a sampling of donors to get feedback, and if they like it, we'll make a more public announcement of what we have in mind. This may take a few iterations of back and forth, but my main point here is that we are actively working on this issue. Another aspect of this is that donors should be prepared for changes in the point system that will result from this (and this often affects donors in different ways, some positively, some negatively).

While I am sure the new plan, whatever it is, won't solve all issues, I agree there are some fundamental issues that need to be resolved and that likely can be with some changes.
Prof. Vijay Pande, PhD
Departments of Chemistry, Structural Biology, and Computer Science
Chair, Biophysics
Director, Folding@home Distributed Computing Project
Stanford University
ChasR
Posts: 402
Joined: Sun Dec 02, 2007 5:36 am
Location: Atlanta, GA

Re: point system is getting ridiculous...

Post by ChasR »

Haitch wrote:I've been plugging numbers into the formulas, and thinking about the whole situation, and my conclusion is nothing needs to be changed.

snip...
I'm sorry, this just doesn't compute. PPD is the measure of performance, not points per WU. Your rental supercomputer is 40x as fast as your 12-core yet is awarded 252x the PPD.
Haitch
Posts: 34
Joined: Tue Dec 04, 2007 4:34 pm

Re: point system is getting ridiculous...

Post by Haitch »

ChasR wrote:
I'm sorry, this just doesn't compute. PPD is the measure of performance not points per WU. Your rental supercomputer is 40x as fast as your 12 core yet is awarded 252x the ppd.
But WUs completed are the measure of performance for PG - more WUs completed more quickly is more valuable to them, hence the QRB. Getting the same amount of work done in 1/40th of the time is worth 6x as many points, earned in 1/40th the time.

Ask the researchers at PG - is a project completed in 10 days 252x more valuable than the same project taking 400 days? I think they'd say it's even more valuable.

H.
Haitch
Posts: 34
Joined: Tue Dec 04, 2007 4:34 pm

Re: point system is getting ridiculous...

Post by Haitch »

Let me try to put this another way.

Points are the measure of science that PG assigns to each WU you contribute. They award a sliding scale of points depending on how fast you return it.

Getting it back twice as fast is worth 41% more.
Getting it back in a quarter of the time is worth 98% more.
Getting it back 40x as fast is worth 6.5x as much.
Getting it back 2,400x as fast is worth 65x as much.
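Those four ratios track a pure square-root bonus: points per WU scale with the square root of the speed-up. A quick sketch; because it ignores the real project constants and the max(1, ...) floor, the last line lands near 49x rather than the 65x quoted above.

Code: Select all
import math

# points/WU under the QRB grow roughly with sqrt(speedup)
for speedup in (2, 4, 40, 2400):
    print(f"{speedup:5d}x faster -> ~{math.sqrt(speedup):5.2f}x points per WU")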

If your folding box, running twice as fast as mine, contributes three times as much science (points), you get three times the PPD.

If you contribute 252x as much science by running 40x faster, you earn 252x PPD.

It's NOT "you're twice as fast as me, therefore you earn twice what I do"; it's that the science you contribute, as judged by PG, is three times as valuable as mine over time.

You get 41% more points for a WU than I do because you fold it 2x faster than I do; you also fold 2x the WUs I do because you're twice as fast as me. You get a linear bonus for folding faster than me, plus an additional exponential bonus because PG rates those faster-returned WUs as worth more per WU. Isn't this exactly what PG stated?

As I see it, there were no issues with bigadv, and the really-bigadv units, until the 48-core folders turned up scoring major points. If you assume that a) the base points of bigadv and really-bigadv are OK, and there is no evidence against that, and b) the 41% bonus per 50% TPF reduction is OK, again no evidence against that, then the only issue is the really high PPD these machines generate. But again, ask the researchers whether the improvements in TPF are worth the points increase. I continue to believe that they would assign higher bonus factors than PG assigns. Basically, it's TPF and PPD envy.

H.

typo 1 - extrapolation
typo2 - actual typo
Last edited by Haitch on Thu Jun 23, 2011 2:37 am, edited 3 times in total.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: point system is getting ridiculous...

Post by MtM »

A graphical illustration:

[Image: graph of the QRB points curve]

The range you would like is the one on the left; the one on the right is the one you might not want.

What happens when you flatten the curve? You lower the increases on the left side where this is not wanted.

Anything you do with this curve leaves someone unhappy; for me the question is what the value of time really is. Any formula would be tied to a specific range of such a curve. But you can't keep one benchmark machine if you know the expected performance far exceeds the range you want, while you also want to make it seem as profitable as possible to decrease your return times. You could also stick with one curve, move far to the left, and lower the base points back to the days where 100 PPD was hard to get; it would be much easier, but might not incite enough donors to upgrade and lower their TPF, since the perceived gains would be way less.
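To put rough numbers on that flattening trade-off, a sketch comparing the sqrt bonus against a flatter cube-root variant. The k and deadline constants are illustrative, and the cube root is my own stand-in, not any proposal from this thread.

Code: Select all
import math

K, DEADLINE = 2.0, 6.0   # illustrative constants
for days in (6.0, 3.0, 1.0, 0.25):
    ratio = K * DEADLINE / days
    print(f"{days:5.2f} days: sqrt bonus {math.sqrt(ratio):5.2f}x,"
          f" cube-root bonus {ratio ** (1/3):5.2f}x")
# Flattening shrinks the bonus everywhere, including the mid-range
# where most donors sit, not just at the extreme right.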
Leonardo
Posts: 260
Joined: Tue Dec 04, 2007 5:09 am
Hardware configuration: GPU slots on home-built, purpose-built PCs.
Location: Eagle River, Alaska

Re: point system is getting ridiculous...

Post by Leonardo »

VijayPande wrote:An update on what's been going on internally at Stanford: somewhat paralleling this thread, we've been discussing this internally...we are actively working on this issue...[it] won't solve all issues, I agree there are some fundamental issues that need to be resolved and that likely can be with some changes.
Dr. Pande, thank you very much for commenting. We trust you and the Pande Group. I think the aggregate of the concerns expressed in this thread can be distilled down to this: we ask that the Pande Group maintain a points system that strikes an optimal set of compromises, reflecting the value of contributions to science, cultivating high-producing members, and growing the project by attracting new, reliable members.
Amaruk
Posts: 254
Joined: Fri Jun 20, 2008 3:57 am
Location: Watching from the Woods

Re: point system is getting ridiculous...

Post by Amaruk »

To be honest 7im, I did play around with a number of variations on the current points formula shortly after it was unveiled last year.
At the time it was purely an intellectual exercise. What can I say, I like numbers.

One of those formulas was essentially the same as the one I posted above. Strictly speaking, in literal terms one would consider it mine since I am the one who wrote it. But in a philosophical sense it is not mine, as I personally don't have any issues with the current points system. ;)

I'm playing a bit of a Devil's advocate here, because in a sense I am arguing for a position that I do not personally hold. The problem is that while a number of individuals have voiced concerns over the current points system, no one has really taken the time to come up with a complete, working proposal. Capping schemes and arbitrarily modified values are not a viable solution. The points system must be viewed as a whole, with an understanding of how it affects all clients. The current formula is actually a fairly elegant solution, IMHO.

While I do not agree with your position (that the points system needs to change), I do recognize that there is some validity to it. But without any potential new formula, there can be no solid basis for discussion. In essence, those asking for change lose by default without a good proposal. Unfortunately I did not save any of my work from last year, so I had to break out the calculator and crunch numbers for a few hours to regenerate the data. The result is the proposed formula outlined above.
7im wrote:Still, if you put it forth, with examples, you must have an opinion on it. What is your opinion? How is this formula change an improvement?
I think I've made my opinion clear. Rather than address potential improvements, I'll focus on its strengths.

First, it's simple. Single modification to formula, single data point modified for each WU.

Second, it's universal. It applies to all clients and fits into the current unified benchmark scheme.

Lastly, and perhaps most important, it provides the data needed to form a basis for statistical analysis.


For example, here is a graph showing the differences between the two formulas:

[Image: graph comparing the two formulas (red and green curves)]
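For anyone who wants to regenerate that kind of comparison, a sketch. The flatter curve below is a hypothetical fourth-root bonus I made up for illustration; Amaruk's actual proposal is on an earlier page and is not reproduced here, and the k/deadline constants are placeholders.

Code: Select all
import math
import matplotlib.pyplot as plt

K, DEADLINE = 2.0, 6.0                                # illustrative constants
days = [d / 10 for d in range(1, 101)]                # 0.1 .. 10 days to return
current = [math.sqrt(K * DEADLINE / d) for d in days] # sqrt QRB bonus
flatter = [(K * DEADLINE / d) ** 0.25 for d in days]  # hypothetical variant

plt.plot(days, current, "r-", label="current sqrt bonus")
plt.plot(days, flatter, "g-", label="hypothetical flatter bonus")
plt.xlabel("days to return WU")
plt.ylabel("bonus multiplier")
plt.legend()
plt.show()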



Given the above, the question is fairly straightforward. Which of these curves best describes the relationship between scientific value and time, the red line or the green one?
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: point system is getting ridiculous...

Post by MtM »

Green and red still have to cover the Sempron running the classic client on the one side and bigadv on the other? How do you think that affects the changes, not at the extreme left ( in the above example ) but in the range where you would expect the largest part of the donors to be? Would it still give enough bonus for quicker return times?

No one seems to really give me an answer to my questions; either I'm just so wrong that no one feels called to point out the obvious, or I'm not making sense to anyone but myself at this point :oops:
Grandpa_01
Posts: 1122
Joined: Wed Mar 04, 2009 7:36 am
Hardware configuration: 3 - Supermicro H8QGi-F AMD MC 6174=144 cores 2.5Ghz, 96GB G.Skill DDR3 1333Mhz Ubuntu 10.10
2 - Asus P6X58D-E i7 980X 4.4Ghz 6GB DDR3 2000 A-Data 64GB SSD Ubuntu 10.10
1 - Asus Rampage Gene III 17 970 4.3Ghz DDR3 2000 2-500GB Segate 7200.11 0-Raid Ubuntu 10.10
1 - Asus G73JH Laptop i7 740QM 1.86Ghz ATI 5870M

Re: point system is getting ridiculous...

Post by Grandpa_01 »

MtM wrote:Green and red still have to cover the Sempron running the classic client on the one side and bigadv on the other? How do you think that affects the changes, not at the extreme left ( in the above example ) but in the range where you would expect the largest part of the donors to be? Would it still give enough bonus for quicker return times?

No one seems to really give me an answer to my questions; either I'm just so wrong that no one feels called to point out the obvious, or I'm not making sense to anyone but myself at this point :oops:
You forgot #3: nobody knows the answer to the question.
2 - SM H8QGi-F AMD 6xxx=112 cores @ 3.2 & 3.9Ghz
5 - SM X9QRI-f+ Intel 4650 = 320 cores @ 3.15Ghz
2 - I7 980X 4.4Ghz 2-GTX680
1 - 2700k 4.4Ghz GTX680
Total = 464 cores folding
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: point system is getting ridiculous...

Post by MtM »

Well, we can speculate as has been done till now; 7im wanted us to, and we haven't been asked to stop. In fact, the mention that this thread is being followed should encourage people not to stop.

If we speculate yes, the answer is 'simple': a flatter curve. If we speculate no, what then? Everyone knows the answer I have; I want to hear from people why I'm wrong. Or why they don't think I'm wrong but still have another solution. I'm very interested in that.

Interested enough to try and make it easier for people to give me the reasons I'm wrong... it's just a form with some controls; the project-info side I already have, so dropping it in is 5 minutes, but all the rest I have to build from scratch. The most challenging part will be handling the mathematical equation solving ( though it looks like a spreadsheet is a good tool for that ). If that doesn't work, parsing the string with its operators and values, and using something like an external eval function, might be needed.

I want to be able to compare different project-info sets using different QRB formulas with different reference system(s), and have it output the graphical images and CSV data.

The more I think about it, the more I think it might not be as simple as I hoped when I started putting some controls on a form :lol: ;)


[Image: screenshot of the comparison tool's form]


Hmm.. if a spreadsheet doesn't work, it's easier to use Office and some VBA forms.
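In the meantime, one way to do the formula-plugging without writing a full parser: a sketch in Python rather than VBA, evaluating a user-supplied QRB expression with a restricted eval. The variable names (base, k, deadline, days) are my assumption of what such a formula needs, not anything taken from the actual clients.

Code: Select all
import math

def eval_formula(expr, **variables):
    """Evaluate a points formula such as
    'base * max(1, sqrt(k * deadline / days))' with builtins disabled."""
    allowed = {"sqrt": math.sqrt, "max": max, "min": min, "log": math.log}
    return eval(expr, {"__builtins__": {}}, {**allowed, **variables})

current = "base * max(1, sqrt(k * deadline / days))"
for days in (0.5, 1.0, 2.0, 4.0):
    pts = eval_formula(current, base=1000, k=2, deadline=6, days=days)
    print(f"{days:4.1f} days -> {pts:7.0f} points")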