
Project: 6067 (Run 0, Clone 76, Gen 207) EUE

Posted: Sun Feb 20, 2011 11:40 pm
by gannett

Code: Select all

[11:04:32] Project: 6067 (Run 0, Clone 76, Gen 207)
[11:04:32] 
[11:04:32] Assembly optimizations on if available.
[11:04:32] Entering M.D.
Starting 2 threads
NNODES=2, MYRANK=1, HOSTNAME=thread #1
NNODES=2, MYRANK=0, HOSTNAME=thread #0
Reading file work/wudata_03.tpr, VERSION 4.0.99_development_20090605 (single precision)
Making 1D domain decomposition 2 x 1 x 1
starting mdrun 'Mutant_scan'
104000008 steps, 208000.0 ps (continuing from step 103500008, 207000.0 ps).
[11:04:38] Completed 0 out of 500000 steps  (0%)
[11:21:01] Completed 5000 out of 500000 steps  (1%)
[11:37:24] Completed 10000 out of 500000 steps  (2%)

-------------------------------------------------------
Program mdrun, VERSION 4.0.99-dev-20100429-354a3
Source code file: /Users/kasson/a3_devnew/gromacs/src/mdlib/pme.c, line: 563

Fatal error:
4 particles communicated to PME node 0 are more than 2/3 times the cut-off out of the domain decomposition cell of their charge group in dimension x
This usually means that your system is not well equilibrated
For more information and tips for trouble shooting please check the GROMACS website at
http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------

Thanx for Using GROMACS - Have a Nice Day

[11:52:08] mdrun returned 255
[11:52:08] Going to send back what have done -- stepsTotalG=500000
[11:52:08] Work fraction=0.0290 steps=500000.
[11:52:12] logfile size=13136 infoLength=13136 edr=0 trr=25
[11:52:12] logfile size: 13136 info=13136 bed=0 hdr=25
[11:52:12] - Writing 13674 bytes of core data to disk...
[11:52:12]   ... Done.
[11:52:13] 
[11:52:13] Folding@home Core Shutdown: EARLY_UNIT_END
[11:52:13] CoreStatus = 72 (114)
[11:52:13] Sending work to server
[11:52:13] Project: 6067 (Run 0, Clone 76, Gen 207)
I got the above yesterday on the same machine (recently upgraded to Snow Leopard) that failed earlier. Over the weekend it had been stable, so it isn't completely broken.

[23:37:18] Project: 6025 (Run 1, Clone 155, Gen 374)
[03:49:56] Folding@home Core Shutdown: FINISHED_UNIT
[03:49:56] Project: 6025 (Run 1, Clone 155, Gen 374)

[04:09:09] Project: 6024 (Run 0, Clone 88, Gen 379)
[07:11:17] Folding@home Core Shutdown: FINISHED_UNIT
[07:11:18] Project: 6024 (Run 0, Clone 88, Gen 379)

[07:27:38] Project: 6050 (Run 1, Clone 170, Gen 268)
[11:02:57] Folding@home Core Shutdown: FINISHED_UNIT
[11:02:57] Project: 6050 (Run 1, Clone 170, Gen 268)

[11:04:32] Project: 6067 (Run 0, Clone 76, Gen 207)
[11:52:13] Folding@home Core Shutdown: EARLY_UNIT_END
[11:52:13] Project: 6067 (Run 0, Clone 76, Gen 207)
[11:52:27] Project: 6025 (Run 0, Clone 178, Gen 457)
up to 46 %
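
If it's useful, the summary above can be pulled straight out of FAHlog.txt with a grep; just a rough sketch, assuming the v6 log wording shown in this post:

Code: Select all

# Rough sketch: list each WU's "Project:" lines next to the "Core Shutdown:"
# line that ends it (assumes the v6 FAHlog.txt wording quoted in this post).
grep -E "Project: |Folding@home Core Shutdown:" FAHlog.txt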

Let me know if you want me to retire this C2D Mac Mini from folding. I don't want it doing bad science, and I have some other bigger boxes folding perfectly.

Gannett

Re: Project: 6067 (Run 0, Clone 76, Gen 207) EUE

Posted: Sun Feb 20, 2011 11:58 pm
by bruce
You didn't post the part of the log that says:

*------------------------------*
Folding@Home Gromacs SMP Core
Version X.XX (XXX. XX, 20XX)

Re: Project: 6067 (Run 0, Clone 76, Gen 207) EUE

Posted: Mon Feb 21, 2011 8:24 am
by gannett
Sorry about that, Bruce. The C2D Mac Mini is running:

$ grep Version FAHlog.txt
Folding@Home Client Version 6.29r1
[23:41:24] Version 2.21 (Mar 7 2010)
[23:59:16] Version 2.21 (Mar 7 2010)
... repeated for all the units.
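
If it helps, a couple of lines of context on the same grep also pulls in the core banner you asked about (a quick sketch, assuming the banner matches the form you quoted):

Code: Select all

# Show the two lines above each core "Version 2.21" line, i.e. the
# "Folding@Home Gromacs SMP Core" banner (assumes that banner format).
grep -B 2 "Version 2.21" FAHlog.txt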

Gannett

Re: Project: 6067 (Run 0, Clone 76, Gen 207) EUE

Posted: Mon Feb 21, 2011 10:19 am
by PantherX
This is what you got:
Hi Gannett (team 1971),
Your WU (P6067 R0 C76 G207) was added to the stats database on 2011-02-20 04:10:43 for 13.94 points of credit.
However, another donor completed it successfully, so it isn't a bad WU.

Re: Project: 6067 (Run 0, Clone 76, Gen 207) EUE

Posted: Mon Feb 21, 2011 7:00 pm
by bruce
Development is planning to upgrade that version of the core to one which is expected to solve a variety of issues. Unfortunately, I have no prediction of when that might be available. Please bear with us.