Ideas for new point system

Moderators: Site Moderators, FAHC Science Team

Qinsp
Posts: 216
Joined: Sun Oct 17, 2010 2:34 pm

Re: Ideas for new point system

Post by Qinsp »

Like I said, it's not important, but saying it doesn't exist is weird.

PS - Please clue me in on the hardware that will get the same PPD on P8101 and P8102. I'll go out and config a system to match exactly.

PPD ain't a big deal.

Correct response is:

Yes, the PPD varies a lot between WU's. While we try to keep it equal, it doesn't always work. The good news is that regardless of PPD, all the work is put to good use.
Quality Inspection - Corona, CA, USA
Dimensional Inspection Laboratory
Pat McSwain, President
mdk777
Posts: 480
Joined: Fri Dec 21, 2007 4:12 am

Re: Ideas for new point system

Post by mdk777 »

That variation exists because your hardware is different than the standard benchmark machine. If you have the exact same system as the standard benchmark machine, you will always get the same PPD regardless of what Projects you are running.
Come now. :?:

It is fine to express this as an ideal.

However, everyone who has folded for any period of time knows that the ideal is not reality.

Over the years, I have posted examples of WUs that were well beyond 30% variation.

Let's all agree that the point system is more art than science. It has many variables and many complexities.
There is nothing wrong with admitting that variation is inherent in the process.

I agree that starting a six sigma program to reduce that variation is probably not the best utilization of resources.

Like Qinsp implies, denying reality to defend an ideal is likewise a waste of time. :lol:
Transparency and Accountability, the necessary foundation of any great endeavor!
Joe_H
Site Admin
Posts: 7927
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
Location: W. MA

Re: Ideas for new point system

Post by Joe_H »

The reality is that the base points can fairly easily be shown to fall within that +/-10% range. The variation, once you get away from the reference benchmark machine, comes from getting the multiple parameters of the QRB formula for the differing projects to give similar results far from the baseline performance. Mathematically that is not easy; I am at times surprised they manage as well as they do. I mentioned the issue to my son, who is a statistics major, and he was glad it was someone else's problem.
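For reference, the bonus formula usually quoted for QRB projects has the form credit = base * max(1, sqrt(k * deadline / elapsed)); here is a minimal sketch of that form (the k value, deadline and base points below are invented for illustration, not taken from any real project):

[code]
import math

def qrb_credit(base_points, k, deadline_days, elapsed_days):
    # Commonly quoted QRB form: credit = base * max(1, sqrt(k * deadline / elapsed))
    bonus = math.sqrt(k * deadline_days / elapsed_days)
    return base_points * max(1.0, bonus)

# Hypothetical project: 10,000 base points, 6-day deadline, k = 2 (all made up)
for elapsed in (0.5, 1.0, 2.0, 4.0):
    credit = qrb_credit(10000, 2, 6, elapsed)
    print(f"finished in {elapsed} days -> credit {credit:.0f}, PPD {credit / elapsed:.0f}")
[/code]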

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
PantherX
Site Moderator
Posts: 6986
Joined: Wed Dec 23, 2009 9:33 am
Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB

Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
Location: Land Of The Long White Cloud
Contact:

Re: Ideas for new point system

Post by PantherX »

Last time I checked (a while back), there was a detailed specification of the benchmark machine. However, now it seems that it's very brief:
We have a single benchmark machine, its most important component is its processor: a Intel(R) Core(TM) i5 CPU 750 @ 2.67GHz. The machine's OS is Linux
http://folding.stanford.edu/English/FAQ ... sNew#ntoc9

I am not saying that the current system is perfect, nor am I denying the shortcomings. They exist and we are all aware of that. Having said that, they are working towards a points system that is more in line with their philosophy of equal points for equal work:
Note that GPU projects are now being benchmarked on the same machine, but using that machine's CPU. By using the same hardware, we want to preserve our goal of "equal pay for equal work". Our GPU methods have advanced to the point such that, with GPU FahCore 17, we can run any computation that we can do on the CPU on the GPU. Therefore we've unified the benchmarking scheme so that both GPU and CPU projects use the same "yardstick", which is our i5 benchmark CPU.
http://folding.stanford.edu/English/FAQ ... sNew#ntoc9

As Joe_H stated, the base points do fall within the 10% range. However, that difference increases once the QRB comes into play, and that might be an issue. If you do have valid solutions (which haven't already been suggested here), I am sure that PG Members will consider them :)
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time

Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
Grandpa_01
Posts: 1122
Joined: Wed Mar 04, 2009 7:36 am
Hardware configuration: 3 - Supermicro H8QGi-F AMD MC 6174=144 cores 2.5Ghz, 96GB G.Skill DDR3 1333Mhz Ubuntu 10.10
2 - Asus P6X58D-E i7 980X 4.4Ghz 6GB DDR3 2000 A-Data 64GB SSD Ubuntu 10.10
1 - Asus Rampage Gene III 17 970 4.3Ghz DDR3 2000 2-500GB Segate 7200.11 0-Raid Ubuntu 10.10
1 - Asus G73JH Laptop i7 740QM 1.86Ghz ATI 5870M

Re: Ideas for new point system

Post by Grandpa_01 »

The 10% is not really 10%; it is plus or minus 10% at base, which is actually a 20% variation at baseline, and when you add QRB to some of these it can get quite out of range. I do believe the standard should be raised and maybe a little more time taken in benching. Recently they have been listening to the beta team a little more and adjusting accordingly, which is a good thing. All in all the point system works no matter what we all think about it.
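As a back-of-the-envelope illustration of that first point (all numbers invented), a plus or minus 10% tolerance on base points is a 20% band from bottom to top, and a QRB multiplier stretches the absolute gap even though the percentage stays the same:

[code]
base = 10000                          # hypothetical base credit
low, high = base * 0.9, base * 1.1    # +/-10% tolerance -> 20% band
print(high - low)                     # 2000-point spread at baseline

bonus = 5.0                           # hypothetical QRB multiplier on a fast rig
print(high * bonus - low * bonus)     # 10000-point spread once the bonus applies
[/code]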

I am pretty sure it does what it was designed to do, which is to give us some sort of direction as to what is needed and which way to go when we get new equipment. As far as I am concerned there is no need to change a system that works the way it was intended to work. It may need some fine tuning but that is about it. :wink:
2 - SM H8QGi-F AMD 6xxx=112 cores @ 3.2 & 3.9Ghz
5 - SM X9QRI-f+ Intel 4650 = 320 cores @ 3.15Ghz
2 - I7 980X 4.4Ghz 2-GTX680
1 - 2700k 4.4Ghz GTX680
Total = 464 cores folding
7im
Posts: 10179
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona
Contact:

Re: Ideas for new point system

Post by 7im »

Points are the one topic FAH gets the most feedback about.

Let's remember the QRB points curve is exponential, so even a very small variation in base points gets greatly enlarged in the QRB total over a very small time scale. This is especially exaggerated if your system is much faster than the benchmark computer because your system hits the slope on a much steeper part of the points curve. Even simple WU variations within the same project can cause points variations in the 100s to 1000s of PPD.
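To put rough numbers on that (assuming the usual credit = base * sqrt(k * deadline / elapsed) form and a machine well past the bonus threshold; the project values below are invented), PPD falls off with elapsed time to the -1.5 power, so a WU that runs 10% longer costs roughly 13% of the PPD:

[code]
import math

def ppd(base, k, deadline_days, elapsed_days):
    # assumes the machine is fast enough that the bonus applies
    credit = base * math.sqrt(k * deadline_days / elapsed_days)
    return credit / elapsed_days      # PPD falls off as elapsed**-1.5

nominal = ppd(10000, 2, 6, 1.0)
slower = ppd(10000, 2, 6, 1.1)        # same WU taking 10% longer
print(f"PPD drop: {(1 - slower / nominal) * 100:.1f}%")   # about 13%
[/code]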

Those facts are not a denial of any potential problems. However, until those facts are understood very clearly (which they often are not by newcomers) it is very difficult to discuss/debate the points system on an equal and informed basis.

FAH has had 10+ years to find a better points system. And so far, no suggested change to one part has proven a greater good for all parts. A lot of smart people have tried. Most suggestions add to one group of folders while taking away from other groups. Most points changes are dictated by changes in the science and hardware, and science equating to points as the primary standard isn't about to change any time soon.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Qinsp
Posts: 216
Joined: Sun Oct 17, 2010 2:34 pm

Re: Ideas for new point system

Post by Qinsp »

Points are the distance from the carrot to the mouth on the donkey. It controls the donkey's speed. The donkey's concern about this distance is natural.

Did a quick check.

AFAIK it's not the non-linear curve that is at fault, nor PPD, nor variation in hardware.
Using 6 different kinds of HW, both Intel and AMD, there is a 19% to 28% variation in TPF on jobs with identical deadlines vs. the average, on BigAdv. This is using P8103 as the average and P8101 as the worst-case scenario. When comparing the two extremes (8101-8102) the numbers are higher.

This is not unique to BigAdv, it's just easier to monitor.

And again, no biggie, but it's not the test hardware, not the non-linear QRB, or anything else but base points vs TPF vs deadline.
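For anyone curious, the comparison above is nothing fancier than ratios of observed TPF against a reference project on the same box; something like this (the TPF values here are made up, not my actual logs):

[code]
# Hypothetical TPFs (minutes) for two projects with identical deadlines
tpf = {
    "box1": {"P8103": 30.0, "P8101": 37.0},
    "box2": {"P8103": 25.0, "P8101": 31.5},
}
for box, times in tpf.items():
    deviation = (times["P8101"] / times["P8103"] - 1) * 100
    print(f"{box}: P8101 TPF is {deviation:.0f}% longer than P8103")
[/code]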

Yes, the points system works. No, it's not mission critical to tune it better.

Oddly enough, I use the 8101 as my metric for configuring hardware. It runs nice, and pops up often. But saying it does run within ±10% of the average on some kind of system is probably false.

Sadly, while I have i5s, I'm not smart enough to run a P8101 and P8103 on them. Nor would it represent the correct system for these jobs anyhow.

Bumping base points for 8101 should have been done during Beta. While it would suck for me (all my calcs are on 8101), bumping base points would make a lot of folks happy.
Quality Inspection - Corona, CA, USA
Dimensional Inspection Laboratory
Pat McSwain, President
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Ideas for new point system

Post by bruce »

Uniprocessor WUs were benchmarked on a P4. The FahCore went through several revisions, some of which altered the speed and others of which did not.

Recent SMP WUs have been benchmarked on the i5 and once again, the FahCore(s) have gone through several revisions.

In both cases, baseline points are established on the benchmark machine with whatever version of the FahCore is current. Now suppose a revised FahCore is faster. Those who are running WUs that were benchmarked under the old core will be pleased since their (baseline) PPD will go up. New projects will be added after that and they'll be benchmarked on the new FahCore so their benchmark will NOT get the bump in PPD. Eventually, old projects will finish and the "proper" PPD will apply to all active projects.

The same thing happens with GPU projects and BA projects. If a new FahCore is introduced that changes the speed, some projects will be faster than others. Old projects are (almost) never re-benchmarked using the new FahCore. [Oh, and a new FahCore can be either faster or slower ... depending on the scientific reason for the new FahCore, though most will be about the same speed.]
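A toy example of the effect described above (the target PPD, benchmark time, and speed-up are invented): an old project keeps the base points assigned under the old core, so a faster core raises its PPD, while a project benchmarked on the new core is already scaled to the faster time and lands back on the target.

[code]
target_ppd = 10000        # hypothetical PPD the benchmark machine aims for
old_core_time = 0.10      # days the benchmark run took on the old core (made up)
speedup = 1.2             # suppose a revised FahCore is 20% faster

# Project A was benchmarked on the old core: base = target * time measured then.
base_a = target_ppd * old_core_time
# After the core update the same WU finishes sooner, so donors see a PPD bump:
ppd_a = base_a / (old_core_time / speedup)            # 12000

# Project B is benchmarked on the new core, so its base points are set against
# the faster time and its PPD stays at the target with no bump:
base_b = target_ppd * (old_core_time / speedup)
ppd_b = base_b / (old_core_time / speedup)            # 10000

print(ppd_a, ppd_b)
[/code]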
johnerz
Posts: 30
Joined: Thu Jun 19, 2008 3:21 pm
Hardware configuration: Intel 2600K @ stock
EVGA GTX 970 FTW+ @ stock
8gb Samsung green @ 1.55v 2040 9,10,10,1T
Asus P67 Sabertooth bios version 3209
Corsair hx 1000 psu
WD Black 500 GB
Win 7 64, updated,Microsoft Security Essentials - updated daily

SupermicroH8QGi+-F, 4 X AMD 6168 @ 1.9 no OC
Corsair HX 850 PSU 16 x 2GB HyperX 1600 ram
Ubuntu 12.04, using the musky/tear mods

Updated 03 Feb 2015

Re: Ideas for new point system

Post by johnerz »

bruce wrote:Uniprocessor WUs were benchmarked on a P4. The FahCore went through several revisions, some of which altered the speed and others of which did not.

Recent SMP WUs have been benchmarked on the i5 and once again, the FahCore(s) have gone through several revisions.

In both cases, baseline points are established on the benchmark machine with whatever version of the FahCore is current. Now suppose a revised FahCore is faster. Those who are running WUs that were benchmarked under the old core will be pleased since their (baseline) PPD will go up. New projects will be added after that and they'll be benchmarked on the new FahCore so their benchmark will NOT get the bump in PPD. Eventually, old projects will finish and the "proper" PPD will apply to all active projects.

The same thing happens with GPU projects and BA projects. If a new FahCore is introduced that changes the speed, some projects will be faster than others. Old projects are (almost) never re-benchmarked using the new FahCore. [Oh, and a new FahCore can be either faster or slower ... depending on the scientific reason for the new FahCore, though most will be about the same speed.]


This implies that the BA work units are also benchmarked. Can you confirm this, and if they are not, how do you pull the baseline points up?

Generally I agree with Qinsp on this issue
johnerz

Intel 2600K @ stock
EVGA 670 FTW @ stock
12GB 1600
Asus P67 Sabertooth bios version 3209
Corsair hx 1000 psu
WD Black 500 GB

Win 7 64, updated
Microsoft Security Essentials - updated daily

Updated 4 Dec 2012
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Ideas for new point system

Post by bruce »

BA and GPU projects are also benchmarked. I'm not sure about the details, though.

Baseline points are shown in http://fah-web.stanford.edu/psummary.html ("Credit")
johnerz
Posts: 30
Joined: Thu Jun 19, 2008 3:21 pm
Hardware configuration: Intel 2600K @ stock
EVGA GTX 970 FTW+ @ stock
8gb Samsung green @ 1.55v 2040 9,10,10,1T
Asus P67 Sabertooth bios version 3209
Corsair hx 1000 psu
WD Black 500 GB
Win 7 64, updated,Microsoft Security Essentials - updated daily

SupermicroH8QGi+-F, 4 X AMD 6168 @ 1.9 no OC
Corsair HX 850 PSU 16 x 2GB HyperX 1600 ram
Ubuntu 12.04, using the musky/tear mods

Updated 03 Feb 2015

Re: Ideas for new point system

Post by johnerz »

OK thanks, I was aware of the psummary page.

My understanding was (tbh I can't remember where I picked up the info) that BA units have not been benchmarked, and I was looking for some confirmation of that, and if they had not been benchmarked, how the points were set. As you say they have been, so that's a good part answered :)
johnerz

Intel 2600K @ stock
EVGA 670 FTW @ stock
12GB 1600
Asus P67 Sabertooth bios version 3209
Corsair hx 1000 psu
WD Black 500 GB

Win 7 64, updated
Microsoft Security Essentials - updated daily

Updated 4 Dec 2012
PantherX
Site Moderator
Posts: 6986
Joined: Wed Dec 23, 2009 9:33 am
Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB

Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
Location: Land Of The Long White Cloud
Contact:

Re: Ideas for new point system

Post by PantherX »

Regarding bigadv benchmark, here you go:
kasson wrote:The deadlines were by benchmark numbers, but again the standard benchmark machine has been less than predictive for bigadv performance. What I did was set initial numbers via standard benchmarking (with bigadv numbers) and then adjust PPD to match 8103 on a bigadv-capable machine. Since PPD don't scale perfectly, we may have to adjust a bit more in testing.
Source -> viewtopic.php?p=240481#p240481
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time

Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
Post Reply