Re: Timeout/Expiration limits and specific work units.
Posted: Thu Feb 06, 2014 1:09 am
by twizzle
7im wrote:...
FAH looks at your hardware config and client settings, and sends work that it expects the client can complete by folding an average number of hours per day. If you have slower hardware, only fold part time, or run other applications at the same time, the definition of average is not the same as if you fold on faster hardware, fold full time, and don't run other applications. That puts the time management in the hands of the donor.
...
- If FAH looks at the hardware config, why would "slower hardware" cause problems?
- Why has my old P4 been assigned a WU that requires 24x7 processing to meet the timeout?
- Where is "average" defined?
- I've been folding for about 6 months, and difficulty meeting the timeout is a recent issue. Did someone move the goal posts?
Re: Timeout/Expiration limits and specific work units.
Posted: Thu Feb 06, 2014 6:24 am
by codysluder
The goal posts have not been moved for uniprocessor / dual processor assignments.
Remember that the old P4 has hyperthreading, which means it's a single-core system masquerading as a dual-core system. If you treat it as a single-core system, it will not need to run 24x7, although there is still a certain minimum number of hours per day required to meet the deadlines, though I don't remember how many. If you treat it as a dual-core system, it will be assigned more difficult assignments with shorter deadlines, even though you don't have a system that runs twice as fast.
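The arithmetic behind that "minimum number of hours per day" is straightforward. Here's a rough illustration; the frame counts, frame times, and timeout below are made-up numbers, not FAH's actual benchmark formula:

```python
# Illustrative only: hypothetical numbers, not FAH's actual benchmark formula.

def min_hours_per_day(frames, frame_time_hours, timeout_days):
    """Hours of folding needed per day to finish a WU before its timeout."""
    total_hours = frames * frame_time_hours
    return total_hours / timeout_days

# Example: a 100-frame WU at 1.2 hours per frame with a 6-day timeout
# needs 120 hours of compute in 6 days, i.e. 20 hours of folding per day.
print(min_hours_per_day(100, 1.2, 6))  # -> 20.0
```

The same arithmetic shows why part-time folding fails on "harder" assignments: halve the available hours per day and the frame time you can tolerate halves too.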
Re: Timeout/Expiration limits and specific work units.
Posted: Thu Feb 06, 2014 8:48 pm
by 7im
The goal posts haven't moved for any work units.
The deadline and benchmark formulas are documented on the Points FAQ.
Re: Timeout/Expiration limits and specific work units.
Posted: Mon Feb 10, 2014 1:01 am
by twizzle
I read the FAQs; there's nothing that I would call "definitive" re. expected hours, hardware selection, etc. After going through the logs on the three machines I "donate" time from, only the 24x7 box has had much chance of completing work recently. The dual AMD box at home (which probably gets more hours than the work box, because people turn it on in the morning and leave it on all day) missed the deadline because the WU was processing at ~1% per hour.
I've now reconfigured ALL of my boxes as single-CPU slots; the timeout/expiration times are significantly more realistic for a "home" (non-24x7) user - and isn't that the whole point of "folding@home"? One interesting result is that on my work desktop, running three slots means that all four processors are fully utilized - but the box is still responsive, which isn't what was happening when it was configured as a single slot with NT=2 or NT=3.
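For anyone wanting to try the same thing, a V7 client set up with multiple single-CPU slots looks roughly like this in config.xml. The slot ids and count here are illustrative; treat this as a sketch to adapt, not a drop-in config:

```xml
<config>
  <!-- Three independent single-CPU slots instead of one SMP slot.
       Each slot fetches its own uniprocessor WU with its own deadline. -->
  <slot id="0" type="CPU">
    <cpus v="1"/>
  </slot>
  <slot id="1" type="CPU">
    <cpus v="1"/>
  </slot>
  <slot id="2" type="CPU">
    <cpus v="1"/>
  </slot>
</config>
```

The trade-off is the one discussed earlier in the thread: uniprocessor WUs have longer, more forgiving deadlines, at the cost of the bonus points an SMP slot can earn.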
Re: Number of active folders going down?
Posted: Mon Feb 10, 2014 1:11 am
by twizzle
Out of interest - and because I don't know where to look - what percentage of WUs being assigned are missing the timeout/expiration, by month? I've only been folding since last August but have moved up into the top 10% of contributors - but recently most of my efforts have been wasted because the WUs are missing both the timeout and expiration deadlines. I've just reconfigured my three boxes as multiple single-CPU slots to stop them being assigned workloads which would require 24x7 availability to complete within the timeout.
Re: Timeout/Expiration limits and specific work units.
Posted: Mon Feb 10, 2014 1:28 am
by P5-133XL
I know of no document like that accessible to the public.
The best I can do is give a lifetime % success rate for a specific passkey, and that requires that you PM a moderator with your folding username and passkey.
Re: Number of active folders going down?
Posted: Mon Feb 10, 2014 2:14 am
by PantherX
twizzle wrote:...what percentage of WUs being assigned are missing the timeout/expiration, by month?...
There isn't any listing available by month; however, you could possibly find a value (not as a percentage) on a per-server basis if you visit the Server Status page (http://fah-web.stanford.edu/pybeta/serverstat.html) and look under the WUs E column, which shows the number of WUs that have expired (crossed the Final Deadline) and will be reissued.
Re: Timeout/Expiration limits and specific work units.
Posted: Mon Feb 10, 2014 5:48 am
by twizzle
I'd actually made that post in the "number of active folders going down?" thread, as I was wondering if this was because the failure rate had increased, leading to people dropping out. i.e. if FAHClient v6 defaults to single threading and works, but people then install FAHClient v7 with SMP and it stops "producing the goods", the work won't be carried out. And with the release of Core17 for GPUs, this stopped my boxes dead - so I stopped providing GPU slots.
Since making the post, however, I found the points report and see that the top few percent of folders are providing almost all of the system resources. So... people like me with some spare processing time don't really affect the throughput, given that the people at the pointy end contribute more in a few hours than I have contributed in six months.
A quick look at the stats from the daily user summary: 4.0% of contributors supplied 82.1% of the "new credit". But given that most of the contributions are from GPUs, it would be very interesting to see how much home users are contributing to CPU/GPU folding, and whether the loss of contributors is having a real effect on the ability to carry out research.
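The share calculation itself is trivial to reproduce from any summary dump. The figures below are invented placeholders chosen to land on the same 4.0%/82.1% shape, not the real daily summary data:

```python
# Hypothetical figures for illustration; not the actual daily summary data.
contributors = 200_000          # total active contributors
top_contributors = 8_000        # the high-output minority
top_credit = 82_100_000         # credit earned by that minority
total_credit = 100_000_000      # credit earned by everyone

top_share_of_users = 100 * top_contributors / contributors
top_share_of_credit = 100 * top_credit / total_credit
print(f"{top_share_of_users:.1f}% of users earned {top_share_of_credit:.1f}% of credit")
# -> 4.0% of users earned 82.1% of credit
```

Splitting the same totals by client type (CPU vs GPU slots) would answer the home-user question, but that breakdown isn't in the public summary.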
Re: Timeout/Expiration limits and specific work units.
Posted: Mon Feb 10, 2014 6:19 am
by PantherX
twizzle wrote:I'd actually made that post in the "number of active folders going down?" thread, as I was wondering if this was because the failure rate had increased leading to people dropping out. ie. If Fahclient V6 defaults to single threading and works, then people install fahclient7 with SMP and it stops "producing the goods" as it were, the work won't be carried out. And with the release of Core17 for GPU's, this stopped my boxes dead - so I stopped providing GPU slots...
No single factor can account for the huge drop in active folders. There are many (already mentioned in that thread). This could play a part, but to what extent is anybody's guess. If you want to get non-FahCore_17 WUs on your GPU, you can still download the GPU3 v6.41 client (http://folding.stanford.edu/home/download2011). From what I have read online, it seems that the upgrade from v6 to v7 happens only because of new features or bug fixes, so there are still plenty of donors using v6. Moreover, for new donors, there is a new release being tested which makes it easy for them to see how their system is performing (viewtopic.php?f=94&t=25657).
twizzle wrote:...Since making the post, however, I found the points report and see that the top few percent of folders are providing almost all of the system resources. So... people like me with some spare processing time don't really affect the throughput given that the people at the pointy end contribute more in a few hours than I have contributed in six months...
What I would say is that the science counts. You are contributing valid science, just like any other donor, so all your contributions are valued. To put it another way, a billionaire could donate a million dollars to a charity while the average Joe gives a few bucks; the bottom line is that both made a contribution and thus made a difference. That is a much better option than not giving at all.
twizzle wrote:...But given that most of the contributions are from GPU's, it would be very interesting to see how much home users are contributing to CPU/GPU folding, and if the loss of contributors is having a real effect on the ability to carry out research.
The definition of a home user varies from person to person: a home system can be an ultra-portable for one but a multi-GPU rig for another. Since the total FLOPS are roughly constant, I would say that so far it hasn't impacted the ability to do science (this might be an oversimplification).
Re: Timeout/Expiration limits and specific work units.
Posted: Tue Feb 11, 2014 12:44 am
by twizzle
PantherX wrote:This could play a part but to what extent, is anybody's guess.
And there is the problem - the data to measure what is going on across the whole community doesn't seem to exist.
I contribute three machines, and two of them have been dumping work units because they miss the expiration time - what percentage of contributors are having the same issues? If I hadn't checked the logs to see how FAHClient was coping with a 2-way VM running on the box, I wouldn't have noticed that there was a problem. How many users out there are looking at their logs or checking their stats?
As a comparison, my IBM z196 mainframe cuts ~30 GB of performance data a day. Within reason, we try to make everything measurable, because we have thousands of users out there who will complain that the "system is slow" and we have to be able to prove them wrong... and often they are right. We generate reports that look for trend changes to see (for example) whether CPU usage, end-user wait times, and network transport times are changing - because change is constant, and it's usually not for the better.
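The trend reports described above boil down to comparing a recent window of a metric against the baseline that preceded it. A minimal sketch of that idea (window size and threshold are arbitrary choices, not anything FAH or z/OS actually uses):

```python
# Minimal trend-change check: compare the mean of the most recent window
# against the mean of the window immediately before it.
def trend_changed(samples, window=5, threshold=0.2):
    """True if the recent window's mean differs from the baseline
    window's mean by more than `threshold` (as a fraction)."""
    if len(samples) < 2 * window:
        return False  # not enough history to compare
    baseline = sum(samples[-2 * window:-window]) / window
    recent = sum(samples[-window:]) / window
    if baseline == 0:
        return recent != 0
    return abs(recent - baseline) / baseline > threshold

cpu_usage = [40, 41, 39, 40, 40, 55, 58, 60, 57, 59]  # % busy per interval
print(trend_changed(cpu_usage))  # -> True: a sustained jump flags a change
```

Applied to per-server WU counts from the Server Status page, even something this crude would surface a month where expirations spiked.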
Re: Timeout/Expiration limits and specific work units.
Posted: Tue Feb 11, 2014 2:04 am
by bruce
Actually, the data does exist, but it takes a trained eye to interpret it.
Take a look at http://fah-web.stanford.edu/pybeta/serverstat.html. Appropriate columns have totals, and if you find a total that is unusual, you can easily work back to see which server is doing something strange. Of course, from the numbers you see there it's impossible to pick out a detail as small as what's happening on a specific home computer, but the system-wide trends are there once you get used to picking out the right numbers.
Re: Timeout/Expiration limits and specific work units.
Posted: Thu Apr 24, 2014 1:05 am
by twizzle
Thread resurrection.
Just an update: on my dual-core and PIII machines, I changed the CPU slot to be uniprocessor, then overrode the NT parameter using the expert configuration setting to NT=2. On my work desktop, the CPU slot is set to two threads and I override it to NT=4. BUT - if it's a Core A3 WU and I want to use VMs, I have to make it NT=2 or turn folding off, because more often than not it will cause a BSOD in a Win7 VM (watchdog timer). With these changes, everything has been smooth sailing and no more dumped WUs.
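For anyone wanting to replicate the override described above, it can be expressed in config.xml roughly as below. I'm assuming the V7 `extra-core-args` slot option and a core that accepts an `-nt` thread-count flag (the "NT parm" from the post); both are assumptions on my part, so treat this as a sketch rather than a verified recipe:

```xml
<config>
  <!-- Uniprocessor slot (so it gets uniprocessor assignments/deadlines),
       but the core is asked for 2 threads via extra arguments.
       "extra-core-args" and "-nt 2" are assumptions based on the post. -->
  <slot id="0" type="CPU">
    <cpus v="1"/>
    <extra-core-args v="-nt 2"/>
  </slot>
</config>
```

The effect, as described in the post, is uniprocessor-length deadlines while still using more than one hardware thread when the core supports it.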