point system is getting ridiculous...
Posted: Sat Jun 04, 2011 5:11 pm
by soya_crack
I mean seriously, the point system was ridiculous before, but with these big betas it's getting even more absurd.
If I'm right, an SR-2 rig is doing 250k PPD on these bigbeta units. That means it is credited with ~13x as much science as a GTX 580 does. If I follow hardware development correctly, a 580 actually does something like 13x more work than a Xeon rig (looking at GPU computing, to which F@h surely belongs).
Please get this right or you will discourage a lot of young folders.
I seriously like F@h for its technology, but the point system is like WTF?
feel free to discuss
soya
Re: point system is getting ridiculous...
Posted: Sat Jun 04, 2011 6:00 pm
by Grandpa_01
A GPU is not even coming close to doing the same amount of work as the CPUs are. Below is a list of WUs and their atom counts; as you can see, there is quite a difference. And by the way, a GTX 580 does not come close to the cost of a rig that is required to fold the new bigadv WUs. Given that difference in WU sizes, if we use your theory of value for work completed, even a plain smp WU should receive 55x as many points as a GPU WU (around 28,000 each).
The largest GPU WU is 1392 atoms
p5716_ACBP_ff03_300K 1392 atoms
The smallest SMP WU is 76900 atoms
p6020_Protein in POPC 76900 atoms
The smallest regular bigadv WU is 1098185 atoms
p2686_SINGLE VESICLE in water 1098185 atoms
The new larger bigadv is 2533797 atoms
6903 ha_shooting 2533797 atoms
Re: point system is getting ridiculous...
Posted: Sat Jun 04, 2011 6:08 pm
by GreyWhiskers
I keep looking at the Client statistics by OS page, and presume that the point system is designed to encourage a relatively small number of the Windows/Mac/Linux population to provide the resources to execute enormous projects that belie the difference in TFLOPS production. From the stats below, I see that 434,534 active CPUs are contributing 681 x86 TFLOPS. The GPU and PS3 contributors produce enormously more TFLOPS with only 52,789 active "CPUs".
I'm not sure where I'm going with this - but I keep trying to understand the motivational structure that the metrics and point system have ended up with. I guess the GPU/PS3 population wouldn't contribute even more GPUs/PS3s given more motivation. This thread explored that subject.
Is Big Adv possible on GPU3? {Nope, Bigadv is for CPUs only}
This is folding at HOME, and the great majority of the contributions are from people contributing cycles - either spare or purposely dedicated - from their own resources, or during dead time in school computer labs. There probably aren't very many who have a stack of quad-socket Xeon or Opteron servers in their dens. But the science does benefit from, and needs, such machines to solve some of the hardest individual problems.
Code: Select all
OS Type            Native TFLOPS*   x86 TFLOPS*   Active CPUs   Total CPUs
Windows                 331              331          318,074    3,803,531
Mac OS X/PowerPC          3                3            3,963      145,639
Mac OS X/Intel          126              126           30,624      149,484
Linux                   221              221           81,873      622,885
ATI GPU               1,784            1,882           12,566      166,397
NVIDIA GPU            2,491            5,256           15,669      257,935
PLAYSTATION®3           692            1,460           24,554    1,118,550
Total                 5,648            9,279          465,979    6,264,421
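A back-of-the-envelope on that imbalance, using only the figures in the table above (per-active-client averages only; this is an editorial sketch, not part of the stats page):
Code: Select all
# x86 TFLOPS per active client, CPU platforms vs. GPU/PS3 platforms,
# computed from the OS statistics table above.
cpu_tflops  = 331 + 3 + 126 + 221                # 681
cpu_clients = 318_074 + 3_963 + 30_624 + 81_873  # 434,534
gpu_tflops  = 1_882 + 5_256 + 1_460              # 8,598
gpu_clients = 12_566 + 15_669 + 24_554           # 52,789

print(1e3 * cpu_tflops / cpu_clients)  # ~1.6 GFLOPS per CPU client
print(1e3 * gpu_tflops / gpu_clients)  # ~163 GFLOPS per GPU/PS3 client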
Re: point system is getting ridiculous...
Posted: Sat Jun 04, 2011 9:41 pm
by John_Weatherman
USA Today, April 13, 2011, "PC market takes beating from iPads": "The data also mark the first year-over-year worldwide PC decline in six quarters. The story for PCs in the United States was even more grim: a 6.1% decline from a year ago." So we're moving to smaller, more energy-efficient machines, and Stanford's trying to run bigger and bigger WUs. One can only hope that increased computational power will compensate for a reduced number of machines. How it will turn out if discrete GPUs are replaced by CPU chips with built-in GPUs, only time will tell.
Re: point system is getting ridiculous...
Posted: Sun Jun 05, 2011 12:09 am
by Dinkydau
Why do big CPU WUs give so many points while small ones (the standard single-CPU client) don't?
By the way, my GPU earns many more points working 24/7 than my CPU does running the SMP client on 4 cores at 2.66 GHz.
The GPU, an Nvidia 8800 GTS 512MB, gets about 5,000 points per day; SMP is more like 1,700. I could be 200 points off, but it's a lot less, not 13 times more.
Re: point system is getting ridiculous...
Posted: Sun Jun 05, 2011 3:38 am
by John_Weatherman
See viewtopic.php?t=10697 for an explanation of the big WUs.
Re: point system is getting ridiculous...
Posted: Sun Jun 05, 2011 5:33 am
by 7im
I think we should go back to the original QRB where the bonus was capped at 10x the base points. If that's not enough incentive, then I don't know what would ever be enough.
Or maybe 1x per real CPU core. A 12-core would have a 12x bonus cap. Too simple I guess...
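As a rough sketch of how those two caps would interact - a minimal sketch only, assuming the published QRB formula credit = base × max(1, √(k × deadline ⁄ elapsed)); the k, deadline, and base-point values below are made up for illustration:
Code: Select all
from math import sqrt

# Hypothetical capped Quick Return Bonus. Assumes the published QRB
# formula credit = base * max(1, sqrt(k * deadline / elapsed)); the
# project numbers used below are illustrative, not real values.
def capped_credit(base_points, k, deadline_days, elapsed_days,
                  cap=10.0, cores=None, per_core_cap=None):
    multiplier = max(1.0, sqrt(k * deadline_days / elapsed_days))
    if cores is not None and per_core_cap is not None:
        cap = cores * per_core_cap  # e.g. 1x bonus cap per real core
    return base_points * min(multiplier, cap)

# Uncapped multiplier here works out to ~17.8x:
print(capped_credit(1000, k=26.4, deadline_days=6, elapsed_days=0.5))  # 10x cap -> 10000
print(capped_credit(1000, k=26.4, deadline_days=6, elapsed_days=0.5,
                    cores=12, per_core_cap=1.0))                       # 12x cap -> 12000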
Re: point system is getting ridiculous...
Posted: Sun Jun 05, 2011 6:35 am
by soya_crack
@grandpa
I don't measure science by calculated atoms. That would be too easy I guess.
@John_Weatherman
Seriously, that has nothing to do with the topic. Servers won't get replaced by iPads.
@Dinkydau
Yeah, basically everything I said is valid for the normal SMP client, too.
@John_Weatherman
They can explain it to me all they want; the point system is still absurd.
@7im
I like the cap-limit idea. I think 10x would still be a good motivation to fold bigger units.
Re: point system is getting ridiculous...
Posted: Sun Jun 05, 2011 7:45 am
by k1wi
I didn't realise the cap limit was removed... I guess the main "issue" with the QRB for bigadv and big-bigadv is that, for a given i7 @ 4.2GHz:
the QRB 'bonus' for bigadv is 100% higher than that of smp
the QRB 'bonus' for big-bigadv is at this stage 80% higher again than bigadv (I know this is still in testing)...
The bonus of smp over uniprocessor is a bit more fraught - now that we're moving to the A4 'unified' core for uniprocessor & smp, I can see the validity in running 1x smp client over 4x uniprocessor clients (four times quicker return). I guess this won't be an issue for me once all projects (or perhaps all new uniprocessor projects) are eventually moved over to the A4 core.
Take into account the removed cap and you have some serious point inflation. I know that point inflation occurs with Moore's Law, but this greatly accelerates it. I would argue that bigadv isn't particularly onerous any more (particularly since the high memory usage was solved), and I don't think it really qualifies for such a big 'premium' over regular SMP. The same can probably be said, to a lesser degree, for big-bigadv; the premium certainly shouldn't be as big as it is.
I don't mind people with serious hardware getting serious points, but perhaps the 'premium' could work like this: the 10x cap applies within each 'scale', so that if your 4P machine is maxed out at the 10x bonus on regular smp work units, there is an incentive to switch to bigadv or big-bigadv. The points it would earn on big-bigadv would be roughly equal to what it would earn on regular smp with no cap (because it still falls under the 10x bonus for big-bigadv...). I don't know if that makes sense, so here's a worked example:
i.e. Machine A earns 100,000 PPD raw on smp, which equals a hypothetical 20x QRB.
Machine A running smp therefore earns only 50,000 PPD after the 10x QRB cap is applied (we're talking hypothetical numbers here).
If Machine A switches to bigadv, it earns 100,000 PPD raw, which equals a hypothetical 13x QRB.
Machine A running bigadv earns 80,000 after the 10x QRB cap is applied.
If Machine A switches again to big-bigadv, it earns 100,000 PPD raw, which equals a hypothetical 6x QRB.
Machine A running big-bigadv earns the full 100,000 after the 10x QRB cap is applied, because the cap never kicked in...
That would still provide an incentive to fold the bigger projects and would still encourage people to purchase the big rigs (because as long as there is a QRB, the incentive is there to return projects quickly, and a faster machine will always return a project faster). A rough sketch of that tiered cap is below.
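A minimal sketch of the tiered cap, using the hypothetical numbers above (note the 13x bigadv case actually computes to ~76,900 PPD; the 80,000 above is just a round figure):
Code: Select all
CAP = 10.0  # the same 10x bonus cap, applied within each 'scale'

def capped_ppd(raw_ppd, implied_multiplier, cap=CAP):
    # If the implied QRB multiplier exceeds the cap, scale the PPD
    # down proportionally; otherwise the machine keeps its raw PPD.
    if implied_multiplier <= cap:
        return raw_ppd
    return raw_ppd * cap / implied_multiplier

machine_a = {                       # (raw PPD, implied QRB multiplier)
    "smp":        (100_000, 20.0),  # capped: 20x > 10x
    "bigadv":     (100_000, 13.0),  # capped: 13x > 10x
    "big-bigadv": (100_000,  6.0),  # under the cap, keeps full PPD
}
for scale, (raw, mult) in machine_a.items():
    print(scale, round(capped_ppd(raw, mult)))
# smp 50000, bigadv 76923, big-bigadv 100000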
Regarding GPUs, they're going to be well out of favour until they too get a QRB (the logical next step, one would imagine, now that it has been rolled out to some new uniprocessor projects). I'd love to hear feedback as to whether there is an intention to roll out a GPU QRB and where the PPD will come out, i.e. will 1 TFLOP (native/x86) of GPU = 1 TFLOP of CPU in terms of PPD? Or are GPUs not producing science that is as 'valuable'?
Re: point system is getting ridiculous...
Posted: Sun Jun 05, 2011 9:23 am
by John_Weatherman
Servers won't get replaced by iPads, but people buying iPads instead of PCs/laptops won't be contributing to Folding@home - which was my point. Therefore the top-end machines will have to do more of the science in the future, so I understand the extra incentive. As for the points: I've done over 1,000 CPU WUs since 2006, and now one machine can earn more points in one day than I've accumulated in 5 years.
Re: point system is getting ridiculous...
Posted: Sun Jun 05, 2011 10:16 am
by soya_crack
Yeah, but that is a fact based on technology advancement, not on the point system.
A year ago, before bigadv was introduced, a Nehalem did about ~16k PPD. I think the GTX 285 was the top card in those days; it did about 10k PPD.
Then all of a sudden bigadv was released, and a Nehalem did 35k PPD from one day to the next. That's the problem I am talking about. As Westmere was introduced, the bonus got sicker and sicker. These new big betas are driving it to the absurd.
Re: point system is getting ridiculous...
Posted: Sun Jun 05, 2011 2:26 pm
by orion
7im wrote:I think we should go back to the original QRB where the bonus was capped at 10x the base points. If that's not enough incentive, then I don't know what would ever be enough.
Or maybe 1x per real CPU core. A 12-core would have a 12x bonus cap. Too simple I guess...
I agree with you on this. 48 real cores would have a 48x bonus cap.
So put it forth to the DAP and let us know how it goes.
Re: point system is getting ridiculous...
Posted: Sun Jun 05, 2011 3:20 pm
by Grandpa_01
soya_crack wrote:@grandpa
I don't measure science by calculated atoms. That would be too easy I guess.
@John_Weatherman
Seriously, that has nothing to do with the topic. Servers won't get replaced by iPads.
@Dinkydau
Yeah, basically everything I said is valid for the normal SMP client, too.
@John_Weatherman
They can explain it to me all they want; the point system is still absurd.
@7im
I like the cap-limit idea. I think 10x would still be a good motivation to fold bigger units.
So I am trying to figure out what you are basing your argument on. It clearly isn't the amount of science done: compare the newer OpenMM GPU WUs with the new bigadv (new bigadv = 2,533,797 atoms; OpenMM = 292 atoms, worth 912 points). Divide the new bigadv's 2,533,797 atoms by the new GPU's 292 and you find that 8,677 new GPU WUs have to be completed to equal 1 new bigadv (GPU science completed = bigadv science completed). So 8,677 GPU WUs x 912 points each = 7,913,424 points. Measured by the amount of science completed, it is the GPU's point value that is absurd; it should be around 120 points. So that argument is unfounded. Is your argument based on emotion, or what?
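That arithmetic, as a quick sanity check (the atom counts and the 912-point GPU credit are the figures from this post; nothing else is assumed):
Code: Select all
# Back-of-the-envelope check of the atoms-for-points comparison above.
bigadv_atoms = 2_533_797  # new bigadv WU (6903 ha_shooting)
gpu_atoms    = 292        # new OpenMM GPU WU
gpu_points   = 912        # credit per OpenMM GPU WU

wus_per_bigadv = round(bigadv_atoms / gpu_atoms)
print(wus_per_bigadv)               # 8677 GPU WUs per bigadv, by atom count
print(wus_per_bigadv * gpu_points)  # 7913424 points for the same "science"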
Do not get me wrong here: I believe the points on these are a little high too, but that is an emotional response, knowing that some people are going to be upset about the point spread. In saying that, I also wish to point out that I am sitting here looking at around $15,000 worth of equipment, using about $175 a month worth of electricity, to fold those new bigadv WUs.
Re: point system is getting ridiculous...
Posted: Sun Jun 05, 2011 3:43 pm
by mdk777
John_Weatherman wrote:Servers won't get replaced by iPads, but people buying iPads instead of PCs/laptops won't be contributing to Folding@home - which was my point.
Valid point in the long run. As the cloud does the heavy lifting, "free" client cycles are going to become scarce.
My Android phone does better voice recognition than any computer I have ever owned.
In the long run, distributed computing will face a wall unless people can donate their unused "cloud" compute capacity.
Who wants to guess on the time frame? 5 ...10 years from now?
Re: point system is getting ridiculous...
Posted: Sun Jun 05, 2011 4:22 pm
by k1wi
Grandpa_01 wrote:So I am trying to figure out what you are basing your argument on. It clearly isn't the amount of science done: compare the newer OpenMM GPU WUs with the new bigadv (new bigadv = 2,533,797 atoms; OpenMM = 292 atoms, worth 912 points). Divide the new bigadv's 2,533,797 atoms by the new GPU's 292 and you find that 8,677 new GPU WUs have to be completed to equal 1 new bigadv (GPU science completed = bigadv science completed). So 8,677 GPU WUs x 912 points each = 7,913,424 points. Measured by the amount of science completed, it is the GPU's point value that is absurd; it should be around 120 points. So that argument is unfounded. Is your argument based on emotion, or what?
I thought it was always unwise to measure purely on the number of atoms - what if the GPU work units ran 7 times more steps?
Plus, OpenMM is newer code than bigadv, and they've mentioned they're using projects with small atom counts to test it. I guess you chose those over GPU2 on that basis, so that it supports your claim (measuring just on atoms)?
I guess what you're saying, grandpa, is that GPUs are terribly inefficient at realising their theoretical computational power, and are thus being overcompensated in points?