Page 8 of 13

Re: List of SMP WUs with the "1 core usage" issue

Posted: Mon Aug 31, 2009 6:26 pm
by ChasR
Project: 2677 (Run 7, Clone 25, Gen 40)
Project: 2671 (Run 52, Clone 43, Gen 82)

Re: List of SMP WUs with the "1 core usage" issue

Posted: Tue Sep 01, 2009 12:22 am
by uncle fuzzy
Already on the list

Project: 2669 (Run 3, Clone 67, Gen 66)

Re: List of SMP WUs with the "1 core usage" issue

Posted: Tue Sep 01, 2009 1:45 am
by HayesK
p2671 (52-43-82) caught on a different rig today; also popular with several other posters
p2671 (12-40-88)

Re: List of SMP WUs with the "1 core usage" issue

Posted: Tue Sep 01, 2009 1:48 am
by Bartzero
Project: 2677 (Run 12, Clone 57, Gen 35)

Re: List of SMP WUs with the "1 core usage" issue

Posted: Tue Sep 01, 2009 6:57 am
by 58Enfield
Two more; the 1st is a repeat that has been reported multiple times:

Project: 2671 (R52 C43 G82)

The 2nd may be new, I'm not sure:

Project: 2677 (R36 C86 G38)

Re: List of SMP WUs with the "1 core usage" issue

Posted: Tue Sep 01, 2009 8:18 am
by bollix47
Project: 2671 (Run 24, Clone 41, Gen 91)

compressed_data_size=1492108

Re: List of SMP WUs with the "1 core usage" issue

Posted: Tue Sep 01, 2009 12:34 pm
by SKeptical_Thinker
Found this thread after posting elsewhere:

Project: 2677 (Run 38, Clone 44, Gen 31)

Re: List of SMP WUs with the "1 core usage" issue

Posted: Tue Sep 01, 2009 5:23 pm
by pvh
2677 (Run 30, Clone 77, Gen 33)
size: 1495676

Re: List of SMP WUs with the "1 core usage" issue

Posted: Tue Sep 01, 2009 7:26 pm
by road-runner
2671 (Run 52, Clone 43, Gen 82)

Code: Select all

[18:15:39] Thank you for your contribution to Folding@Home.
[18:15:39] + Number of Units Completed: 293

[18:15:41] - Preparing to get new work unit...
[18:15:41] Cleaning up work directory
[18:15:41] + Attempting to get work packet
[18:15:41] - Connecting to assignment server
[18:15:44] - Successful: assigned to (171.67.108.24).
[18:15:44] + News From Folding@Home: Welcome to Folding@Home
[18:15:45] Loaded queue successfully.
[18:15:56] + Closed connections
[18:15:56] 
[18:15:56] + Processing work unit
[18:15:56] At least 4 processors must be requested; read 1.
[18:15:56] Core required: FahCore_a2.exe
[18:15:56] Core found.
[18:15:56] Working on queue slot 02 [September 1 18:15:56 UTC]
[18:15:56] + Working ...
[18:15:56] 
[18:15:56] *------------------------------*
[18:15:56] Folding@Home Gromacs SMP Core
[18:15:56] Version 2.07 (Sun Apr 19 14:51:09 PDT 2009)
[18:15:56] 
[18:15:56] Preparing to commence simulation
[18:15:56] - Ensuring status. Please wait.
[18:15:56] Called DecompressByteArray: compressed_data_size=1506342 data_size=24008597, decompressed_data_size=24008597 diff=0
[18:15:56] - Digital signature verified
[18:15:56] 
[18:15:56] Project: 2671 (Run 52, Clone 43, Gen 82)
[18:15:56] 
[18:15:56] Assembly optimizations on if available.
[18:15:56] Entering M.D.
[18:16:06] Run 52, Clone 43, Gen 82)
[18:16:06] 
[18:16:06] Entering M.D.
NNODES=4, MYRANK=2, HOSTNAME=Apollo-Quad-Office
NNODES=4, MYRANK=3, HOSTNAME=Apollo-Quad-Office
NNODES=4, MYRANK=1, HOSTNAME=Apollo-Quad-Office
NNODES=4, MYRANK=0, HOSTNAME=Apollo-Quad-Office
NODEID=2 argc=20
NODEID=3 argc=20
NODEID=0 argc=20
                         :-)  G  R  O  M  A  C  S  (-:

                   Groningen Machine for Chemical Simulation

                 :-)  VERSION 4.0.99_development_20090307  (-:


      Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
       Copyright (c) 1991-2000, University of Groningen, The Netherlands.
             Copyright (c) 2001-2008, The GROMACS development team,
            check out http://www.gromacs.org for more information.


                                :-)  mdrun  (-:

Reading file work/wudata_02.tpr, VERSION 3.3.99_development_20070618 (single precision)
NODEID=1 argc=20
Note: tpx file_version 48, software version 64

NOTE: The tpr file used for this simulation is in an old format, for less memory usage and possibly more performance create a new tpr file with an up to date version of grompp

Making 1D domain decomposition 1 x 1 x 4
starting mdrun '22887 system in water'
20750000 steps,  41500.0 ps (continuing from step 20500000,  41000.0 ps).

-------------------------------------------------------
Program mdrun, VERSION 4.0.99_development_20090307
Source code file: nsgrid.c, line: 357

Range checking error:
Explanation: During neighborsearching, we assign each particle to a grid
based on its coordinates. If your system contains collisions or parameter
errors that give particles very high velocities you might end up with some
coordinates being +-Infinity or NaN (not-a-number). Obviously, we cannot
put these on a grid, so this is usually where we detect those errors.
Make sure your system is properly energy-minimized and that the potential
energy seems reasonable before trying again.

Variable ci has value -2147483269. It should have been within [ 0 .. 9464 ]

For more information and tips for trouble shooting please check the GROMACS Wiki at
http://wiki.gromacs.org/index.php/Errors
-------------------------------------------------------

Thanx for Using GROMACS - Have a Nice Day

Error on node 0, will try to stop all the nodes
Halting parallel program mdrun on CPU 0 out of 4

gcq#0: Thanx for Using GROMACS - Have a Nice Day

[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0

-------------------------------------------------------
Program mdrun, VERSION 4.0.99_development_20090307
Source code file: nsgrid.c, line: 357

Range checking error:
Explanation: During neighborsearching, we assign each particle to a grid
based on its coordinates. If your system contains collisions or parameter
errors that give particles very high velocities you might end up with some
coordinates being +-Infinity or NaN (not-a-number). Obviously, we cannot
put these on a grid, so this is usually where we detect those errors.
Make sure your system is properly energy-minimized and that the potential
energy seems reasonable before trying again.

Variable ci has value -2147483611. It should have been within [ 0 .. 256 ]

For more information and tips for trouble shooting please check the GROMACS Wiki at
http://wiki.gromacs.org/index.php/Errors
-------------------------------------------------------

Thanx for Using GROMACS - Have a Nice Day

Error on node 3, will try to stop all the nodes
Halting parallel program mdrun on CPU 3 out of 4

gcq#0: Thanx for Using GROMACS - Have a Nice Day

[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
[0]0:Return code = 255
[0]1:Return code = 0, signaled with Quit
[0]2:Return code = 0, signaled with Quit
[0]3:Return code = 255
[18:16:18] CoreStatus = FF (255)
[18:16:18] Sending work to server
[18:16:18] Project: 2671 (Run 52, Clone 43, Gen 82)
[18:16:18] - Error: Could not get length of results file work/wuresults_02.dat
[18:16:18] - Error: Could not read unit 02 file. Removing from queue.
[18:16:18] - Preparing to get new work unit...
[18:16:18] Cleaning up work directory
[18:16:18] + Attempting to get work packet
[18:16:18] - Connecting to assignment server
[18:16:22] - Successful: assigned to (171.67.108.24).
[18:16:22] + News From Folding@Home: Welcome to Folding@Home
[18:16:22] Loaded queue successfully.
[18:16:34] + Closed connections
[18:16:39] 
[18:16:39] + Processing work unit
[18:16:39] At least 4 processors must be requested; read 1.
[18:16:39] Core required: FahCore_a2.exe
[18:16:39] Core found.
[18:16:39] Working on queue slot 03 [September 1 18:16:39 UTC]
[18:16:39] + Working ...
[18:16:39] 
[18:16:39] *------------------------------*
[18:16:39] Folding@Home Gromacs SMP Core
[18:16:39] Version 2.07 (Sun Apr 19 14:51:09 PDT 2009)
[18:16:39] 
[18:16:39] Preparing to commence simulation
[18:16:39] - Ensuring status. Please wait.
[18:16:48] - Looking at optimizations...
[18:16:48] - Working with standard loops on this execution.
[18:16:48] - Files status OK
[18:16:49] - Expanded 1506342 -> 24008597 (decompressed 1593.8 percent)
[18:16:49] Called DecompressByteArray: compressed_data_size=1506342 data_size=24008597, decompressed_data_size=24008597 diff=0
[18:16:49] - Digital signature verified
[18:16:49] 
[18:16:49] Project: 2671 (Run 52, Clone 43, Gen 82)
[18:16:49] 
[18:16:49] Entering M.D.
NNODES=4, MYRANK=1, HOSTNAME=Apollo-Quad-Office
NNODES=4, MYRANK=2, HOSTNAME=Apollo-Quad-Office
NNODES=4, MYRANK=3, HOSTNAME=Apollo-Quad-Office
NNODES=4, MYRANK=0, HOSTNAME=Apollo-Quad-Office
NODEID=0 argc=20
                         :-)  G  R  O  M  A  C  S  (-:

                   Groningen Machine for Chemical Simulation

                 :-)  VERSION 4.0.99_development_20090307  (-:


      Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
       Copyright (c) 1991-2000, University of Groningen, The Netherlands.
             Copyright (c) 2001-2008, The GROMACS development team,
            check out http://www.gromacs.org for more information.


                                :-)  mdrun  (-:

Reading file work/wudata_03.tpr, VERSION 3.3.99_development_20070618 (single precision)
NODEID=2 argc=20
NODEID=3 argc=20
NODEID=1 argc=20
Note: tpx file_version 48, software version 64

NOTE: The tpr file used for this simulation is in an old format, for less memory usage and possibly more performance create a new tpr file with an up to date version of grompp

Making 1D domain decomposition 1 x 1 x 4
starting mdrun '22887 system in water'
20750000 steps,  41500.0 ps (continuing from step 20500000,  41000.0 ps).

-------------------------------------------------------
Program mdrun, VERSION 4.0.99_development_20090307
Source code file: nsgrid.c, line: 357

Range checking error:
Explanation: During neighborsearching, we assign each particle to a grid
based on its coordinates. If your system contains collisions or parameter
errors that give particles very high velocities you might end up with some
coordinates being +-Infinity or NaN (not-a-number). Obviously, we cannot
put these on a grid, so this is usually where we detect those errors.
Make sure your system is properly energy-minimized and that the potential
energy seems reasonable before trying again.

Variable ci has value -2147483611. It should have been within [ 0 .. 256 ]

For more information and tips for trouble shooting please check the GROMACS Wiki at
http://wiki.gromacs.org/index.php/Errors
-------------------------------------------------------

Thanx for Using GROMACS - Have a Nice Day


-------------------------------------------------------
Program mdrun, VERSION 4.0.99_development_20090307
Source code file: nsgrid.c, line: 357

Range checking error:
Explanation: During neighborsearching, we assign each particle to a grid
based on its coordinates. If your system contains collisions or parameter
errors that give particles very high velocities you might end up with some
coordinates being +-Infinity or NaN (not-a-number). Obviously, we cannot
put these on a grid, so this is usually where we detect those errors.
Make sure your system is properly energy-minimized and that the potential
energy seems reasonable before trying again.

Variable ci has value -2147483269. It should have been within [ 0 .. 9464 ]

For more information and tips for trouble shooting please check the GROMACS Wiki at
http://wiki.gromacs.org/index.php/Errors
-------------------------------------------------------

Thanx for Using GROMACS - Have a Nice Day

Error on node 3, will try to stop all the nodes
Halting parallel program mdrun on CPU 3 out of 4
Error on node 0, will try to stop all the nodes
Halting parallel program mdrun on CPU 0 out of 4

gcq#0: Thanx for Using GROMACS - Have a Nice Day

[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0

gcq#0: Thanx for Using GROMACS - Have a Nice Day

[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
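
For anyone curious why the reported ci values sit near -2147483648: the error text above says coordinates that have blown up to NaN or +/-Infinity cannot be mapped to a grid cell. Here is a minimal C sketch (not GROMACS source; the coordinate and cell size are made up for illustration) of how converting such a coordinate to an integer grid index typically produces INT_MIN on x86, which matches the huge negative ci values in these logs.

Code: Select all

/* Hypothetical sketch, not GROMACS code: what happens when a NaN/Inf
 * particle coordinate is turned into an integer grid index. On x86 the
 * float-to-int conversion of NaN or +/-Inf typically yields INT_MIN
 * (-2147483648), so any index derived from it ends up as a huge negative
 * number like the ci values above, far outside the range [0 .. 9464]. */
#include <limits.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    float bad_coord = NAN;      /* coordinate blown up by a collision or parameter error */
    float cell_size = 0.5f;     /* assumed grid spacing, for illustration only */

    int ci = (int)(bad_coord / cell_size);   /* grid index computed from the bad coordinate */

    printf("ci = %d\n", ci);                 /* typically prints -2147483648 on x86 */
    printf("INT_MIN = %d\n", INT_MIN);
    return 0;
}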



Re: List of SMP WUs with the "1 core usage" issue

Posted: Tue Sep 01, 2009 11:50 pm
by ikerekes
Project: 2671 (Run 12, Clone 40, Gen 88)
compressed_data_size=1506846

Re: List of SMP WUs with the "1 core usage" issue

Posted: Wed Sep 02, 2009 1:02 am
by geokilla
I got two of them but only remembered to write down one. My PPD went from the usual 1600 while idle to 300 PPD. I use the Notfred client.

Code: Select all

--- Opening Log file [August 29 16:14:21] 


# SMP Client ##################################################################
###############################################################################

                       Folding@Home Client Version 6.02

                          http://folding.stanford.edu

###############################################################################
###############################################################################

Launch directory: /etc/folding/1
Executable: ./fah6
Arguments: -local -forceasm -smp 2 

Warning:
 By using the -forceasm flag, you are overriding
 safeguards in the program. If you did not intend to
 do this, please restart the program without -forceasm.
 If work units are not completing fully (and particularly
 if your machine is overclocked), then please discontinue
 use of the flag.

[16:14:21] - Ask before connecting: No
[16:14:21] - User name: geokilla (Team 38296)
[16:14:21] - User ID: 1F0664FF1ECA36E1
[16:14:21] - Machine ID: 1
[16:14:21] 
[16:14:21] Loaded queue successfully.
[16:14:21] 
[16:14:21] + Processing work unit
[16:14:21] At least 4 processors must be requested.Core required: FahCore_a2.exe
[16:14:21] Core not found.
[16:14:21] - Core is not present or corrupted.
[16:14:21] - Attempting to download new core...
[16:14:21] + Downloading new core: FahCore_a2.exe
[16:14:22] + 10240 bytes downloaded
[16:14:22] + 20480 bytes downloaded
[16:14:22] + 30720 bytes downloaded
[16:14:22] + 40960 bytes downloaded
[16:14:22] + 51200 bytes downloaded
[16:14:22] + 61440 bytes downloaded
[16:14:22] + 71680 bytes downloaded
[16:14:22] + 81920 bytes downloaded
[16:14:22] + 92160 bytes downloaded
[16:14:22] + 102400 bytes downloaded
[16:14:22] + 112640 bytes downloaded
[16:14:22] + 122880 bytes downloaded
[16:14:22] + 133120 bytes downloaded
[16:14:22] + 143360 bytes downloaded
[16:14:22] + 153600 bytes downloaded
[16:14:22] + 163840 bytes downloaded
[16:14:22] + 174080 bytes downloaded
[16:14:22] + 184320 bytes downloaded
[16:14:22] + 194560 bytes downloaded
[16:14:22] + 204800 bytes downloaded
[16:14:22] + 215040 bytes downloaded
[16:14:22] + 225280 bytes downloaded
[16:14:22] + 235520 bytes downloaded
[16:14:22] + 245760 bytes downloaded
[16:14:22] + 256000 bytes downloaded
[16:14:22] + 266240 bytes downloaded
[16:14:22] + 276480 bytes downloaded
[16:14:22] + 286720 bytes downloaded
[16:14:22] + 296960 bytes downloaded
[16:14:22] + 307200 bytes downloaded
[16:14:22] + 317440 bytes downloaded
[16:14:22] + 327680 bytes downloaded
[16:14:22] + 337920 bytes downloaded
[16:14:22] + 348160 bytes downloaded
[16:14:22] + 358400 bytes downloaded
[16:14:22] + 368640 bytes downloaded
[16:14:22] + 378880 bytes downloaded
[16:14:22] + 389120 bytes downloaded
[16:14:22] + 399360 bytes downloaded
[16:14:22] + 409600 bytes downloaded
[16:14:22] + 419840 bytes downloaded
[16:14:22] + 430080 bytes downloaded
[16:14:22] + 440320 bytes downloaded
[16:14:22] + 450560 bytes downloaded
[16:14:22] + 460800 bytes downloaded
[16:14:22] + 471040 bytes downloaded
[16:14:22] + 481280 bytes downloaded
[16:14:22] + 491520 bytes downloaded
[16:14:22] + 501760 bytes downloaded
[16:14:22] + 512000 bytes downloaded
[16:14:22] + 522240 bytes downloaded
[16:14:22] + 532480 bytes downloaded
[16:14:22] + 542720 bytes downloaded
[16:14:22] + 552960 bytes downloaded
[16:14:22] + 563200 bytes downloaded
[16:14:22] + 573440 bytes downloaded
[16:14:22] + 583680 bytes downloaded
[16:14:22] + 593920 bytes downloaded
[16:14:22] + 604160 bytes downloaded
[16:14:22] + 614400 bytes downloaded
[16:14:22] + 624640 bytes downloaded
[16:14:22] + 634880 bytes downloaded
[16:14:22] + 645120 bytes downloaded
[16:14:22] + 655360 bytes downloaded
[16:14:22] + 665600 bytes downloaded
[16:14:23] + 675840 bytes downloaded
[16:14:23] + 686080 bytes downloaded
[16:14:23] + 696320 bytes downloaded
[16:14:23] + 706560 bytes downloaded
[16:14:23] + 716800 bytes downloaded
[16:14:23] + 727040 bytes downloaded
[16:14:23] + 737280 bytes downloaded
[16:14:23] + 747520 bytes downloaded
[16:14:23] + 757760 bytes downloaded
[16:14:23] + 768000 bytes downloaded
[16:14:23] + 778240 bytes downloaded
[16:14:23] + 788480 bytes downloaded
[16:14:23] + 798720 bytes downloaded
[16:14:23] + 808960 bytes downloaded
[16:14:23] + 819200 bytes downloaded
[16:14:23] + 829440 bytes downloaded
[16:14:23] + 839680 bytes downloaded
[16:14:23] + 849920 bytes downloaded
[16:14:23] + 860160 bytes downloaded
[16:14:23] + 870400 bytes downloaded
[16:14:23] + 880640 bytes downloaded
[16:14:23] + 890880 bytes downloaded
[16:14:23] + 901120 bytes downloaded
[16:14:23] + 911360 bytes downloaded
[16:14:23] + 921600 bytes downloaded
[16:14:23] + 931840 bytes downloaded
[16:14:23] + 942080 bytes downloaded
[16:14:23] + 952320 bytes downloaded
[16:14:23] + 962560 bytes downloaded
[16:14:23] + 972800 bytes downloaded
[16:14:23] + 983040 bytes downloaded
[16:14:23] + 993280 bytes downloaded
[16:14:23] + 1003520 bytes downloaded
[16:14:23] + 1013760 bytes downloaded
[16:14:23] + 1024000 bytes downloaded
[16:14:23] + 1034240 bytes downloaded
[16:14:23] + 1044480 bytes downloaded
[16:14:23] + 1054720 bytes downloaded
[16:14:23] + 1064960 bytes downloaded
[16:14:23] + 1075200 bytes downloaded
[16:14:23] + 1085440 bytes downloaded
[16:14:23] + 1095680 bytes downloaded
[16:14:23] + 1105920 bytes downloaded
[16:14:23] + 1116160 bytes downloaded
[16:14:23] + 1126400 bytes downloaded
[16:14:23] + 1136640 bytes downloaded
[16:14:23] + 1146880 bytes downloaded
[16:14:23] + 1157120 bytes downloaded
[16:14:23] + 1167360 bytes downloaded
[16:14:23] + 1177600 bytes downloaded
[16:14:23] + 1187840 bytes downloaded
[16:14:23] + 1198080 bytes downloaded
[16:14:23] + 1208320 bytes downloaded
[16:14:23] + 1218560 bytes downloaded
[16:14:23] + 1228800 bytes downloaded
[16:14:23] + 1239040 bytes downloaded
[16:14:23] + 1249280 bytes downloaded
[16:14:23] + 1259520 bytes downloaded
[16:14:23] + 1269760 bytes downloaded
[16:14:23] + 1280000 bytes downloaded
[16:14:23] + 1290240 bytes downloaded
[16:14:23] + 1300480 bytes downloaded
[16:14:23] + 1310720 bytes downloaded
[16:14:23] + 1320960 bytes downloaded
[16:14:23] + 1331200 bytes downloaded
[16:14:23] + 1341440 bytes downloaded
[16:14:23] + 1351680 bytes downloaded
[16:14:23] + 1361920 bytes downloaded
[16:14:23] + 1372160 bytes downloaded
[16:14:23] + 1382400 bytes downloaded
[16:14:23] + 1392640 bytes downloaded
[16:14:23] + 1402880 bytes downloaded
[16:14:23] + 1413120 bytes downloaded
[16:14:23] + 1423360 bytes downloaded
[16:14:23] + 1433600 bytes downloaded
[16:14:23] + 1443840 bytes downloaded
[16:14:23] + 1454080 bytes downloaded
[16:14:23] + 1464320 bytes downloaded
[16:14:23] + 1474560 bytes downloaded
[16:14:23] + 1484800 bytes downloaded
[16:14:23] + 1495040 bytes downloaded
[16:14:23] + 1505280 bytes downloaded
[16:14:23] + 1515520 bytes downloaded
[16:14:23] + 1525760 bytes downloaded
[16:14:23] + 1536000 bytes downloaded
[16:14:23] + 1546240 bytes downloaded
[16:14:23] + 1556480 bytes downloaded
[16:14:23] + 1566720 bytes downloaded
[16:14:23] + 1576960 bytes downloaded
[16:14:23] + 1587200 bytes downloaded
[16:14:23] + 1597440 bytes downloaded
[16:14:23] + 1607680 bytes downloaded
[16:14:23] + 1617920 bytes downloaded
[16:14:23] + 1628160 bytes downloaded
[16:14:23] + 1638400 bytes downloaded
[16:14:23] + 1648640 bytes downloaded
[16:14:23] + 1658880 bytes downloaded
[16:14:23] + 1669120 bytes downloaded
[16:14:23] + 1679360 bytes downloaded
[16:14:23] + 1689600 bytes downloaded
[16:14:23] + 1699840 bytes downloaded
[16:14:23] + 1710080 bytes downloaded
[16:14:24] + 1720320 bytes downloaded
[16:14:24] + 1730560 bytes downloaded
[16:14:24] + 1740800 bytes downloaded
[16:14:24] + 1751040 bytes downloaded
[16:14:24] + 1761280 bytes downloaded
[16:14:24] + 1771520 bytes downloaded
[16:14:24] + 1781760 bytes downloaded
[16:14:24] + 1785668 bytes downloaded
[16:14:24] Verifying core Core_a2.fah...
[16:14:24] Signature is VALID
[16:14:24] 
[16:14:24] Trying to unzip core FahCore_a2.exe
[16:14:24] Decompressed FahCore_a2.exe (4382312 bytes) successfully
[16:14:24] + Core successfully engaged
[16:14:29] 
[16:14:29] + Processing work unit
[16:14:29] At least 4 processors must be requested.Core required: FahCore_a2.exe
[16:14:29] Core found.
[16:14:29] Working on Unit 02 [August 29 16:14:29]
[16:14:29] + Working ...
[16:14:29] 
[16:14:29] *------------------------------*
[16:14:29] Folding@Home Gromacs SMP Core
[16:14:29] Version 2.08 (Mon May 18 14:47:42 PDT 2009)
[16:14:29] 
[16:14:29] Preparing to commence simulation
[16:14:29] - Ensuring status. Please wait.
[16:14:39] - Assembly optimizations manually forced on.
[16:14:39] - Not checking prior termination.
[16:14:42] - Expanded 1501263 -> 24031357 (decompressed 1600.7 percent)
[16:14:44] Called DecompressByteArray: compressed_data_size=1501263 data_size=24031357, decompressed_data_size=24031357 diff=0
[16:14:45] - Digital signature verified
[16:14:45] 
[16:14:45] Project: 2677 (Run 19, Clone 34, Gen 26)
[16:14:45] 
[16:14:46] Assembly optimizations on if available.
[16:14:46] Entering M.D.
[16:14:52] Using Gromacs checkpoints
[16:14:56] Multi-core optimizations on
[16:15:02] Resuming from checkpoint
[16:15:03] Verified work/wudata_02.log
[16:15:03] Verified work/wudata_02.trr
[16:15:03] Verified work/wudata_02.xtc
[16:15:03] Verified work/wudata_02.edr
[16:15:34] Completed 20010 out of 250000 steps  (8%)
[18:34:39] Completed 22500 out of 250000 steps  (9%)
[20:46:13] Completed 25000 out of 250000 steps  (10%)
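
A rough back-of-envelope from the log tail above (my arithmetic, not something the client reports): the last two checkpoints show about 2500 steps taking just over 2 hours on one core, so at that pace the full 250,000-step WU would need roughly 100 × 2.2 h ≈ 220 hours, about 9 days, which lines up with the PPD collapse described above.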

Re: List of SMP WUs with the "1 core usage" issue

Posted: Wed Sep 02, 2009 1:35 am
by HayesK
The one-core WUs were pretty tough today. I cleaned up 4 before going to work this a.m. and 5 more this evening. Some of these WUs are getting very popular.
p2671 (R37-C79-G78) same WU 3x in one day (2x on the same rig, even after deleting machine.dat, and 1x on a different rig)
p2671 (R24-C41-G91) same WU on two different rigs
p2677 (R9-C62-G42)
p2677 (R24-C57-G32)
p2671 (R12-C40-G88)
p2671 (R52-C43-G82)

Re: List of SMP WUs with the "1 core usage" issue

Posted: Wed Sep 02, 2009 8:48 am
by bollix47
Project: 2671 (Run 37, Clone 79, Gen 78) .... repeat

compressed_data_size=1513330

Re: List of SMP WUs with the "1 core usage" issue

Posted: Wed Sep 02, 2009 12:55 pm
by Bartzero
P2671 (R37 C79 G78)
p2677 (R7 C74 G39)
p2677 (R5 C54 G34)

Re: List of SMP WUs with the "1 core usage" issue

Posted: Wed Sep 02, 2009 3:03 pm
by ikerekes
Project: 2677 (Run 24, Clone 57, Gen 32)