
Work units going slowly, p3064_lambda5_2003 & CMF in water

Posted: Tue Feb 23, 2010 8:02 pm
by gannett
Hi

These work units, p3064_lambda5_2003 (Run 2, Clone 400, Generation 36) and project 5101, "CMF in water" (Run 0, Clone 69, Generation 128),
seem to be getting low CPU utilization and are taking about 1000 s per %. Normally this system gets a % done in 500 s or less.
p3064_lambda5_2003 (Run 2, Clone 400, Generation 36)

Code: Select all

 From InCrease Queue Dump  
   Index 2: folding now 1753.00 pts (63.365 pt/hr) 3.12 X min speed; 99% complete
   server: 171.64.65.63:8080; project: 3064, "p3064_lambda5_2003"
   Folding: run 2, clone 400, generation 36; benchmark 0; misc: 500, 200, 12 (le)
   issue: Mon Feb 22 16:02:11 2010; begin: Mon Feb 22 16:03:08 2010
   expect: Tue Feb 23 19:43:01 2010; due: Fri Feb 26 06:27:08 2010 (4 days)
   preferred: Wed Feb 24 11:15:08 2010 (43 hours)
   core URL: http://www.stanford.edu/~pande/Linux/AMD64/Core_a1.fah (V1.74)
   core number: 0xa1; core name: GRO-SMP
   CPU: 16,0 AMD64; OS: 4,0 Linux
   smp cores: 4; cores to use: 4
   tag: P3064R2C400G36
   flops: 1063664286 (1063.664286 megaflops)
   memory: 3232 MB
   client type: 3 Advmethods
   assignment info (le): Mon Feb 22 16:01:58 2010; B84C6036
   CS: 171.67.108.17; P limit: 524286976
   user: Gannett; team: 1971; ID: ?????????????  ; mach ID: 4
   work/wudata_02.dat file size: 609920; WU type: Folding@Home
The WU gave 3.396 GFLOPS on a 3 GHz Q6600 quad core that usually gives about 8 GFLOPS or higher. CPU utilization was 50/30/30/30% across the four FahCore_a1.exe processes when I looked.
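To put numbers on the slowdown, here is a minimal sketch (Python) of the frame-time-to-PPD arithmetic. The 1753 pts and the ~1000 s vs. ~500 s per % come from the dump and post above; the 100-frames-per-WU figure is an assumption.

Code: Select all

# Frame-time -> PPD arithmetic. Numbers from the post: 1753 pts for the
# p3064 WU, ~1000 s per % now vs. ~500 s per % normally. Assumes one
# WU is 100 frames (1 frame = 1%).
def ppd(points, sec_per_frame, frames=100):
    days_per_wu = frames * sec_per_frame / 86400.0
    return points / days_per_wu

print(round(ppd(1753, 1000)))  # ~1515 PPD at the slow rate
print(round(ppd(1753, 500)))   # ~3029 PPD at this box's usual rate

Halving the frame time doubles the PPD, which is why the slow frames sting.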

I caught this next one after the unit changeover.

Code: Select all

Index 3: folding now 2165.00 pts
   server: 171.64.65.64:8080; project: 5101, "CMF in water"
   Folding: run 0, clone 69, generation 128; benchmark 0; misc: 500, 200, 12 (le)
   issue: Tue Feb 23 19:47:41 2010; begin: Tue Feb 23 19:48:52 2010
   due: Sun Feb 28 19:48:52 2010 (5 days)
   preferred: Sun Feb 28 19:48:52 2010 (5 days)
   core URL: http://www.stanford.edu/~pande/Linux/AMD64/Core_a1.fah (V1.74)
   core number: 0xa1; core name: GRO-SMP
   CPU: 16,0 AMD64; OS: 4,0 Linux
   smp cores: 4; cores to use: 4
   tag: P5101R0C69G128
   flops: 1062926990 (1062.926990 megaflops)
   memory: 3232 MB
   client type: 3 Advmethods
   assignment info (le): Tue Feb 23 19:47:28 2010; B84FEBEF
   CS: 171.67.108.25; P limit: 524286976
   user: Gannett; team: 1971; ID: ??????   ; mach ID: 4
   work/wudata_03.dat file size: 739676; WU type: Folding@Home
Looks like it is going the same way, with top showing 38-54% CPU utilization across the quad core. Is this normal for these WUs?
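If you want to watch the per-process numbers without sitting on top, here is a minimal sketch (Python, assuming the third-party psutil package; matching on the "FahCore" process name is also an assumption) that samples each folding core's CPU%:

Code: Select all

import time
import psutil  # third-party: pip install psutil

# Print CPU% of every FahCore process once per pass; matching on the
# "FahCore" name is an assumption (here the processes are FahCore_a1).
while True:
    for p in psutil.process_iter(['pid', 'name']):
        name = p.info['name'] or ''
        if 'FahCore' in name:
            try:
                # percent of one core over a 1 s sample window;
                # ~100 means that process is keeping a core busy
                print(p.info['pid'], name, p.cpu_percent(interval=1.0))
            except psutil.NoSuchProcess:
                pass  # core exited between listing and sampling
    time.sleep(5)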

Thanks

Gannett

Re: Work units going slowly, p3064_lambda5_2003 & CMF in water

Posted: Tue Feb 23, 2010 9:47 pm
by Nathan_P
Unfortunately, yes, this is normal performance. The a1 core is old and inefficient compared to a2 and a3; all we can do is fold these units as fast as possible and get them out of the system. To give you some idea: on Windows, my Athlon 620 folds at 4000 PPD on a2 and 3500 PPD on a3, but only 1100 PPD on a1, and the a1 unit took 40 hours to complete. Needless to say, I was not amused :(

Re: Work units going slowly, p3064_lambda5_2003 & CMF in water

Posted: Tue Feb 23, 2010 9:56 pm
by gannett
Thanks Nathan_P.
I'll squeeze in a small -oneunit run to soak up that bandwidth. I added a foldable GTX 275 GPU to the system a few weeks ago, and it did really well over the last weekend. There was none of the usual delay between unit completion, upload, and getting started on the next unit. That, plus 58 s per %: it was rocking.
Gannett

Re: Work units going slowly, p3064_lambda5_2003 & CMF in water

Posted: Tue Feb 23, 2010 11:45 pm
by bruce
gannett wrote:There was none of the usual delay between unit completion, upload, and getting started on the next unit. That, plus 58 s per %: it was rocking.
Gannett
I'm glad things are going well for you . . . but I wouldn't call the delay "usual".

There was a major disruption in the way the servers handled GPU results beginning a couple of weeks ago. In the 10-year history of FAH there have been occasional minor disruptions, but I've never seen anything this bad.