
Re: Unbalanced Scoring

Posted: Sat Aug 09, 2008 2:55 am
by JBurton57
FordGT90Concept wrote: You're basically telling me there is a flaw in the system then. WUs should be prioritized, and high-priority WUs should end up on the fastest hardware available to calculate them. The Pentium IIs should only receive low-priority WUs. As a contributor, that really isn't my problem. If I make my Pentium II available for folding, Pande Group has to decide how to best utilize it, if at all. The software/service itself should notify the contributor after X number of days of not being assigned work that it is better to just stop contributing via that hardware. Again, I just make the time available; I don't have anything to do with how it is used.
No, that's not what I'm telling you. The original question was whether or not a 386 folding is better than not folding. To which I believe you said something to the effect of, "Is something not better than nothing?"

My answer is that there are cases where something is worse than nothing. A 386 is a good example, because it can't complete the WUs on time.

It might be nice if we had WUs of such low priority that a 386 could crunch them out, but I doubt that'll ever happen due to the nature of the work. A protein that a 386 could crunch would likely be so small, folded over such a short time, that it wouldn't be scientifically interesting. Or it would take too long to complete the research. For real science to be done, we need proteins that are big, folded over a long period of time, and replicated in sufficient quantity that the results are statistically significant. And it must be done in such a time that researchers can publish.

Maybe WUs like that exist. I'm not a molecular biologist, so I wouldn't know. But to answer the original question, at least right now with the system set up the way it is, there are processors that FAH neither needs nor wants despite the best of intentions.

Re: Unbalanced Scoring

Posted: Sat Aug 09, 2008 3:03 am
by FordGT90Concept
JBurton57 wrote:My answer is that there are cases where something is worse than nothing. A 386 is a good example, because it can't complete the WUs on time.
I'm telling you that it isn't the contributor's fault. Contributors only make hardware available. They have very little say in how it is used.

JBurton57 wrote:It might be nice if we had WUs that were such low-priority that a 386 could crunch them out, but I doubt that'll ever happen due to the nature of the work. A protein that 386s could crunch out would likely be so small, folded over such a short time, that they're not scientifically interesting. Or it would take too long to complete the research. For real science to be done, we need proteins that are big, folded over a long period of time, and replicated in sufficient quantity that the results are statistically significant. And it must be done in such a time that researchers can publish.
If the servers were designed to prioritize work to best utilize the available hardware, what you are talking about wouldn't be an issue at all. If the system were actually smart in assigning work, the only way a 386 could be assigned work is if everything faster was already in use. In which case, something is better than nothing.


So, if getting work done quickly is the priority, why isn't the system designed to prioritize WUs? And again, this is out of my control. You can't blame someone on a 386 for holding up linear processing when the application (client and server) should have been developed to assume there are high-end computers and low-end computers out there. That is a serious design flaw which, in no way, should land in a donor's lap.

Re: Unbalanced Scoring

Posted: Sat Aug 09, 2008 3:19 am
by mdk777
If the system were actually smart in assigning work, the only way a 386 could be assigned work is if everything faster was already in use. In which case, something is better than nothing.
See, you are just being contrary again.

“If ifs and ands were pots and pans there'd be no work for tinkers”

The servers are not all-knowing neural nets that know when you're sleeping and awake. If they had that kind of computing power, why would they even need a DC project?

Your arguments are just hypothetical wishes.

LEARN TO DEAL WITH REALITY!

Embrace the truth and the truth will set you free from your fantasy land of wishful thinking.

Re: Unbalanced Scoring

Posted: Sat Aug 09, 2008 3:22 am
by JBurton57
FordGT90Concept wrote: If the servers were designed to work in a prioritizing way to best utilize the available hardware, what you are talking about wouldn't be an issue at all. If the system were actually smart in assigning work, the only way a 386 could be assigned work is if everything faster was already in use. In which case, something is better than nothing.
But does the system know what's available? Is it even possible for it to know? Sure, I'm folding with my computer today. But will I tomorrow? Maybe tomorrow I'm going on vacation, and I'm unplugging my computer. Or maybe I'm folding on a laptop, and I'm away from a wall socket. Or maybe my computer crashes. A distributed computing program can't base its decisions on whether or not computers are going to be available tomorrow, because there is no way to know. Or a new computer might enter the mix. What if I buy a new computer with 3 x 9800GTXs? The system doesn't know those computers are coming online tomorrow.

How is a system going to schedule hardware that may or may not exist, day-to-day? Or hardware that has yet to come into existence?

Re: Unbalanced Scoring

Posted: Sat Aug 09, 2008 4:01 am
by FordGT90Concept
mdk777 wrote:The servers are not all knowing neural nets, knowing when you're sleeping and awake. If they had that kind of computing power, why would they even need a DC project?

Your arguments are just hypothetical wishes,

LEARN TO DEAL WITH REALITY!

Embrace the truth and the truth will set you free from your fantasy land of wishful thinking.
I'm a programmer and database designer. I could code that kind of system with my eyes closed. When a client requests work from the server, it only has to state the processor (if a CPU client), video adapter (if a GPU client), and the amount of memory allotted to it (gathered during setup). The server takes these figures, runs them against a database (perhaps built from past WU work on similar hardware), and hands out work that is suited to the hardware. I doubt any contributor would have a problem divulging that information to the F@H project, as it vastly increases the productivity of the system. I assure you, this is completely within the realm of reality. In fact, it is quite humorous that such a system hasn't already been implemented. It is simple, effective, and doesn't conflict with privacy interests.

Pretty much all the information needed for CPUs can be taken from the system environment variables PROCESSOR_LEVEL, PROCESSOR_REVISION, and PROCESSOR_ARCHITECTURE. Version 6 of the software might already collect this information. I know the GPU2 client does identify the graphics card.
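The lookup he describes could be sketched roughly like this (a minimal illustration, not the actual F@H assignment logic; the tier table and every entry in it are invented for the example):

```python
import os

# Hypothetical speed tiers keyed by (architecture, processor level);
# a real system would build this from a database of past WU timings.
SPEED_TIERS = {
    ("AMD64", 6): "high",
    ("x86", 6): "medium",
    ("x86", 5): "low",  # Pentium-class hardware
}

def client_profile():
    """Read the standard Windows processor environment variables."""
    return {
        "arch": os.environ.get("PROCESSOR_ARCHITECTURE", "x86"),
        "level": int(os.environ.get("PROCESSOR_LEVEL", "5")),
        "revision": os.environ.get("PROCESSOR_REVISION", ""),
    }

def pick_queue(profile):
    """Map the reported hardware to a WU priority queue; unknown hardware gets low-priority work."""
    return SPEED_TIERS.get((profile["arch"], profile["level"]), "low")
```

A client would call `client_profile()` once at startup and send the result with each work request.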

Re: Unbalanced Scoring

Posted: Sat Aug 09, 2008 4:02 am
by Clintonio
FordGT90Concept wrote:
Clintonio wrote:Now, do I care about points? No. To me they're just a little reminder of how much I have done, not a way to beat everyone else or get the most points.
You only get points if a) it was folding and b) the work was turned in.
Thanks for pointing out the obvious and skipping the rest of my post.

Re: Unbalanced Scoring

Posted: Sat Aug 09, 2008 4:13 am
by FordGT90Concept
JBurton57 wrote:Sure, I'm folding with my computer today. But will I tomorrow? Maybe tomorrow I'm going on vacation, and I'm unplugging my computer. Or maybe I'm folding on a laptop, and I'm away from a wall socket. Or maybe my computer crashes.
That is an issue with donated time no matter what. If you are intelligently assigning work to hardware more than capable of handling it in a timely fashion, you can reasonably expect the work to be handed in within X number of hours. If it is not handed in, it can be handed off to another client. Whichever client finishes first is used, and as the others turn in later, the initial results can be double-checked against the tardy results. All results are used, and it performs about as fast as feasibly possible on the ultra-high-priority WUs.

JBurton57 wrote:What if I buy a new computer, with 3x9800GTXs? It doesn't know that the computers are coming online tomorrow.

How is a system going to schedule hardware that may or may not exist, day-to-day? Or hardware that has yet to come into existence?
The WUs are given to the best available hardware for the job. If that 3 x 9800 GTX system is ready and able, it would be given a high-priority WU. If a high-priority WU is not available, it would be given a lower-priority WU, knowing it should be completed more quickly. On high-priority WUs, should the deadline not be met, the work is given to another client. If neither meets the new deadline, it is given to a third available client, and so on.

I would try to avoid keeping records on users because that is a little too personal and can make the system biased towards contributors; however, such is the way to get maximized performance out of distributed computing.
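The assign-then-reissue scheme sketched in the two posts above might look something like the following (purely illustrative; all names, structures, and timings are hypothetical and this is nowhere near how the real assignment servers work):

```python
import heapq

def assign_work(work_units, clients):
    """Hand the highest-priority WU to the fastest idle client.

    work_units: list of (priority, wu_id), lower number = higher priority.
    clients: list of (speed, client_id), higher speed = faster.
    Returns a list of (wu_id, client_id) assignments.
    """
    heapq.heapify(work_units)
    for speed, client_id in sorted(clients, reverse=True):  # fastest first
        if not work_units:
            break
        _, wu_id = heapq.heappop(work_units)
        yield (wu_id, client_id)

def reissue(assignment, hours_out, deadline_hours, idle_clients):
    """If a WU is past its deadline, hand a copy to the next idle client.

    Returns the new assignment, or None if the WU is still within deadline.
    Late results can still be kept to double-check the first ones in.
    """
    wu_id, _ = assignment
    if hours_out > deadline_hours and idle_clients:
        return (wu_id, idle_clients.pop(0))
    return None
```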

Clintonio wrote:Thanks for pointing out the obvious and skipping the rest of my post.
I read the whole thing but only felt compelled to reply to that little bit. Sometimes the most obvious of things fail to catch our attention just because they aren't "out of place," so to speak. A lot of arguments on the Internet have been waged because one party did not catch what the other party thought was obvious. Better safe than sorry.

Re: Unbalanced Scoring

Posted: Sat Aug 09, 2008 4:35 am
by mdk777
I'm a programmer and database designer. I could code that kind of system with my eyes closed.
Yet you don't think faster and better hardware should get more points?

All I can say is you have issues with technology that go much deeper than can be solved in this forum.

The world is not "fair" in the sense that we always get what we wish.

If you are looking to right some great societal injustice, the point system on a contributed computer system is not really the best place to start.

Have you thought of protesting against the killing of baby fur seals, or the occupation of Tibet?

The flat earth society is always looking for volunteers!

Re: Unbalanced Scoring

Posted: Sat Aug 09, 2008 4:47 am
by FordGT90Concept
mdk777 wrote:Yet you don't think faster and better hardware should get more points?
I said it before and I'll say it again: I really don't care about points--I care about donated time.

mdk777 wrote:The world is not "fair" in the sense that we always get what we wish.
My requests are not unreasonable. If fully implemented (basically F@H 2.0) it could yield a substantial gain in overall productivity. Obviously if no one is prepared to make those changes then these are notes that should go in the books for a future version.

mdk777 wrote:If you are looking to right some great societal injustice, the point system on a contributed computer system is not really the best place to start.

Have you thought of protesting against the killing of baby fur seals, or the occupation of Tibet?

The flat earth society is always looking for volunteers!
I think everything happens because of the events that precede it. I'm here because I spotted what I believe to be an issue which could and should be addressed. If people kill seals for their fur, there is probably a reason. Likewise, China has a reason, which they feel is not unreasonable, to occupy Tibet. The Flat Earth Society is pretty close to the pinnacle of human ignorance.

If there is one thing I have learned in life, it is that there is no such thing as right and wrong, only consequences.

Re: Unbalanced Scoring

Posted: Sat Aug 09, 2008 9:35 am
by HuntWarrior
Gentlemen,

With great interest I read this thread.

I see it very simply (and I am a simple man).

The amount of science is most important to me, so I invest in my computer to deliver.

Points are stimulating, but not the main object (for me) to contribute.

Points are important for a lot of other folders.

There will always be people who doubt the fairness of the system. Stimulate and educate folders (as you do); motivation comes not only from points, but from a clear explanation of what is being done for which disease, or at least a basic, non-scientific explanation.

For instance, Alzheimer's is the main research direction at the moment; what do we hear about it? Next to nothing..........
Huntington's? We do not hear a thing; still waiting for a paper (1995, I believe).

Leave the point system as it is and do more about:

Information gives more Motivation!

Re: Unbalanced Scoring

Posted: Sun Aug 10, 2008 3:29 am
by Mitsimonsta
FordGT90Concept wrote:I said it before and I'll say it again: I really don't care about points--I care about donated time.
Well, I hate to break it to you, but you will never get what you consider a fair and balanced points system unless everyone uses the same specc'd computer with no overclocks allowed, all running Windows XP and only running the V6 Uniprocessor client. Only then could you have a WU worth 1 point, with the points awarded reflecting donated time (i.e. 1 WU/machine/day or whatever).

Here's a pic from a conference presentation made by Pande Group at the announcement of the nVidia client: http://xtreview.com/images/folding@Home ... rd%204.jpg

Fact is that the SMP client does a lot more science than the Uniprocessor client. It can generate a longer simulation time than the Uniprocessor (approx 10x; I am sure 7im will correct me, but with the standard client benched at 100 ppd and the SMP benched at 1,000 ppd, this kind of holds true). The nVidia GPU client can get about 20x the simulation time (that is, science crunched) of the SMP client in a day, and about 5x that of a PS3's daily simulation time.

Now, Stanford has announced they are basing their points on scientific output and/or value. If the nVidia GPUs produce 20x the simulation time per day of an SMP client but the results are only half as valuable, that still makes 10x more output, and the nVidia units should be benchmarked at 10,000 points per day (SMP is benched at 1,000 ppd), not the current 1,500 ppd. Even if the results are only 1/4 as valuable scientifically, they would be benched at 5,000 ppd.
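Spelled out, the arithmetic in that paragraph is just a product of three factors (the numbers come from the post itself; the function name is invented for illustration):

```python
def benchmark_ppd(base_ppd, speedup, relative_value):
    """Points per day if points scale with simulation speed times scientific value."""
    return base_ppd * speedup * relative_value

smp_ppd = 1_000   # SMP benchmark from the post
gpu_speedup = 20  # GPU does ~20x the simulation time of SMP

half_value = benchmark_ppd(smp_ppd, gpu_speedup, 0.5)      # results half as valuable -> 10,000 ppd
quarter_value = benchmark_ppd(smp_ppd, gpu_speedup, 0.25)  # a quarter as valuable -> 5,000 ppd
```

Either way, both figures come out well above the 1,500 ppd the GPU client is actually benched at.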

Based on the PS3 doing ~1,000 ppd and the nVidia card doing 5x the output with a similar scientific value, they should be benched at 5,000 ppd. And yet they are only benched at 1,500 ppd. And you think that Stanford has 'disowned CPU folders'??? Wake up to yourself... they should be favouring GPUs over CPUs even more and giving the GPUs even more points. They are not disowning CPU folders but under-valuing the GPU in order to look after their SMP folders, who have a wider range of ability than GPUs but are slower. Based on nVidia units that SHOULD be benchmarked at 5Kppd, my 88GT/98GTs should be getting over 15Kppd each. Do you think that would be fair compared to what you have now?

Yes, every contribution is valued, but some contributions are more valuable to the project than others.
FordGT90Concept wrote:If fully implemented (basically F@H 2.0) it could yield a substantial gain in overall productivity.
No, it would not. It would see much slower machines crunching units, which slows the return of the results back to Stanford. These results determine the next crop of work units to be sent out within that project number. So what you are proposing would actually SLOW the whole project. The idea behind F@H is speed: do it distributed instead of running a supercomputer for 20 years.

Not that I wish to open the carbon-footprint discussion either, but it would see slower machines doing less science per unit of electricity, therefore lowering efficiency. I'd rather have a GPU chewing twice the power for 6 times the scientific output than two CPUs running on the same amount of electricity for less output.

Re: Unbalanced Scoring

Posted: Sun Aug 10, 2008 4:49 am
by FordGT90Concept
Mitsimonsta wrote:Not that I wish to open the Carbon-footprint discussion either, but it would see slower machines doing less science per unit of electricity, therefore lowering efficiencies. I'd rather a GPU chewing twice the power for 6 times the scientific output than two CPU's running with the same amount of electricity for less output.
Had you actually read what I said, you would understand the software would be instructing users to no longer fold on slower computers by not assigning them work for, say, 30 days...


You need to read more than just the most recent post. :P

Re: Unbalanced Scoring

Posted: Sun Aug 10, 2008 6:18 am
by Mitsimonsta
It would be helpful if you had opened your eyes to what others were saying in the beginning, instead of being so blinkered by the perceived injustice of the current points situation; then we may have had a resolution to the issue.

You seemed adamant that points should be about the time donated regardless of the hardware the client runs on. When the GPUs donate more simulation time than anything else by a massive margin, I think the current points system is just fine, and verging on not rewarding GPU folders enough. All I saw from you was that it was unfair that your '11 CPUs' (no, you have 4 CPUs, 11 cores) were unable to compete with an 8800GTS/320Mb. Through the whole thread you come off as bitter that a single-core P4 machine worth about $200 secondhand and a $100 GPU can outpoint your Clovertowns.

For once, I actually whole-heartedly agree with 7im's reply to your OP, and he actually did write it in a nice way instead of his usual 'This is the way it is, so tough luck' attitude. The world has gone pear shaped now. :?

If you have folders like the Clovertowns and are not running the SMP client, and are now complaining that you are not getting enough points, then it is your own fault. Stop acting like a petulant child stamping their foot and complaining that it is not fair. You are comparing a high-performance GPU client to a standard CPU client, which is hardly a fair comparison. They attract bonus points for their 'betaness' and lack of maturity (hence needing a large amount of monitoring and hand-holding by the donor), and they also provide a massive computational boost to the project. I think that the extra effort (daily checking, updating, new cores, etc.) is being rewarded. You simply set and forget and let it crunch... well, that should be worth less in my book. You are not putting the time into your machines to keep them running and contributing to the project. Oh, hang on, that was your point to start with. Oh well, was it nice to be pwned?

Your Clovers should get somewhere over 2,500 ppd each on a single SMP client, so your first point about needing 6x X5482 CPUs to compete is factually incorrect. You could spend about $1,000 on a Q6600 system with an 8800GS, overclock both a bit, and get 7Kppd from a GPU client and an SMP client running on the remaining 3 cores, using a hell of a lot less power than your rack of X5482s. What is stopping you from going out and grabbing an 8800GS/9600GSO for about $100 and adding 4Kppd to your output? NOTHING. If you can't beat them, join them. I am not all that happy that my highly OC'd quads (that I sunk a lot of money into, BTW) were being killed by a single 8800GT. So I went and bought a few GPUs, dropped the second SMP instance, and now my boxen do double their previous output.

I did read most of the thread yesterday, but obviously we have moved on a bit. Now you have morphed your temper tantrum into the old 'client benchmarking' issue, where the server does not really understand whether the client can finish the WU within the deadlines it is being assigned... or find the best WU for the hardware. This argument has been going on for years, ever since I have been contributing, and for years before that. Since I moved to SMP (and now GPU as well) exclusively at home over 12 months ago, I have never had much issue with WU assignments, since by using the clients you agree that they have short deadlines. Sure, some are very tight, but most C2Ds will achieve them. They also tend to give longer-deadline units to dual-core machines, which is perfect.

I have about a dozen work boxes (P4 2.4GHz/512Mb) running V6 clients with zero issues... they just soak up some more spare cycles for the project. They certainly do complete the WUs dished out well within deadlines, but there is one simple solution for slow machines: tighten deadlines so that lesser machines cannot achieve them, then lock that Install ID from being assigned any more WUs after 5 missed deadlines. This has the effect of requiring a new install of the client to return it to active duty. I'd also like to see V5 clients not assigned any more work units in about 6 months' time, too.
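The five-strikes lockout rule proposed here is simple enough to sketch (an illustrative counter per install ID; not how Stanford's servers actually work, and the names are made up):

```python
MAX_MISSED = 5  # missed deadlines allowed before lockout, per the post

class InstallRegistry:
    """Track missed deadlines per install ID and lock out repeat offenders."""

    def __init__(self):
        self.missed = {}   # install_id -> count of missed deadlines
        self.locked = set()

    def record_miss(self, install_id):
        """Count a missed deadline; lock the ID once it hits the limit."""
        self.missed[install_id] = self.missed.get(install_id, 0) + 1
        if self.missed[install_id] >= MAX_MISSED:
            self.locked.add(install_id)

    def can_assign(self, install_id):
        """Locked IDs get no more WUs until the client is reinstalled (new ID)."""
        return install_id not in self.locked
```

Reinstalling the client would generate a fresh install ID, which is exactly the "return it to active duty" path described above.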

Re: Unbalanced Scoring

Posted: Sun Aug 10, 2008 7:50 am
by Guru
Mitsimonsta wrote:For once, I actually whole-heartedly agree with 7im's reply to your OP, and he actually did write it in a nice way instead of his usual 'This is the way it is, so tough luck' attitude. The world has gone pear shaped now. :?
It's retarded that people have been complaining about this for a long time and everyone is still sitting on their hands... If there wasn't a problem, people wouldn't be complaining...
Mitsimonsta wrote:You are comparing a high performance client GPU client to a standard CPU client which is hardly a fair comparison.
Yet the scores are supposed to be fair, when the scoring system hasn't been modified to compensate for this? Um, yea..... >.>
Mitsimonsta wrote:Your Clovers should get somewhere over 2500ppd each on a single SMP client, so your first point about needing 6x X5482 CPU's to compete is factually incorrect. You could spend about $1000 on a Q6600 system with a 8800GS, overclock both a bit and get 7Kppd from a GPU client and an SMP running on the remaining 3 cores, and useing a hell of alot less power than your rack of X5482's. What is stopping you from going out and grabbing an 8800GS/9600GSO for about $100 and adding 4Kppd to your output? NOTHING. If you can't beat them, join them. I am not all that happy that my highly OC'd quads (that I sunk alot of money into BTW) were being killed by a single 8800GT. So I went and bought a few GPU's, dropped the second SMP instance off and now my boxen do double their previous output.
You're completely missing the point. I don't think very many people here are confused as to how you can get more points. The problem is that the software isn't maximizing the strength of the processors, nor is the point system a fair comparison.
Mitsimonsta wrote:Now you have morphed your temper tantrum into the old 'client benchmarking' issue where the server does not really understand if the client can finish the WU within deadlines that it is being assigned... or finding the best WU for the hardware.
lol It's a temper tantrum because the scoring system is broken and someone disagrees with you? lol...
Mitsimonsta wrote:I have about a dozen work boxes (P4-2.4Ghz/512Mb) running V6 clients with zero issue... they just soak up some more spare cycles to the project. They certainly do complete the WU's dished out well within deadlines, but there is one simple solution to slow machines. Tighten deadlines so that lesser machines cannot achieve them, then lock that Install ID from being assigned any more WU's after 5 missed deadlines. This has the affect of requiring a new install of the client to return it to active duty. I'd also like to see V5 clients not assigned any more work units in about 6 months time too.
How does preventing the machines from doing work help find a cure for cancer? If client A is a GPU and client B is a CPU, client A is going to complete a certain number of WUs regardless of whether or not client B is working on WUs. However, if you take client B away, that's a certain number of WUs that aren't going to contribute to the project. How does reducing the amount of work done help? That doesn't make any sense. Is scoring more important than finding a cure? You guys are hopeless.

Re: Unbalanced Scoring

Posted: Sun Aug 10, 2008 8:49 am
by John Naylor
Guru wrote:The problem is that the software isn't maximizing the strength of the processors
It is! The Pande Group took the fastest unoptimised (as in... no SIMD optimisations) MD package on the planet at the start of the project (TINKER) and then rebuilt it by hand to make it faster still. Then when it became viable to use Gromacs as more processors began to include SSE processing units, they took that and did the same. They also did the same for specialist packages which would further their research (AMBER, CPMD [for QMD]). It runs on the SSE units of a processor if possible (huge speed boost). Where possible it runs on the SSE2 units of a processor (twice as fast as SSE). The Pande Group have done everything they can to use not only as much of the processor as possible but also to use what they are lent as efficiently as possible. It would be counterproductive for them not to do that, so they have done it. And if anyone develops an MD package which is faster (but still useful for the Pande Group's research) then no doubt they will try and get permission to make a new core out of that. The software does maximise the strength of the processors.