171.67.108.11 & 171.67.108.21 down
Moderators: Site Moderators, FAHC Science Team
- Posts: 289
- Joined: Sun Dec 02, 2007 4:31 am
- Location: Carrizo Plain National Monument, California
171.67.108.11 & 171.67.108.21 down
GPU work servers: "Could not connect to Work Server (results) 171.67.108.11" and "Could not connect to Work Server 171.67.108.21". The servers have been out now for about 12 hours. Any news?
- Posts: 2948
- Joined: Sun Dec 02, 2007 4:36 am
- Hardware configuration: Machine #1:
Intel Q9450; 2x2GB=8GB Ram; Gigabyte GA-X48-DS4 Motherboard; PC Power and Cooling Q750 PS; 2x GTX 460; Windows Server 2008 X64 (SP1).
Machine #2:
Intel Q6600; 2x2GB=4GB Ram; Gigabyte GA-X48-DS4 Motherboard; PC Power and Cooling Q750 PS; 2x GTX 460 video card; Windows 7 X64.
Machine 3:
Dell Dimension 8400, 3.2GHz P4, 4x512MB RAM, GTX 460 video card, Windows 7 X32
I am currently folding just on the 5x GTX 460s for approx. 70K PPD - Location: Salem, OR USA
Re: 171.67.108.11 & 171.67.108.21 down
Those servers do not look right according to the server status page, so I notified the owner.
Re: 171.67.108.11 & 171.67.108.21 down
Thanks for alerting Stanford of the issue. The .11 server has been in reject mode and all attempts to communicate with .21 show failure in the log files, although the assignment server is still routing to it. I confirm the time frame as starting no later than noon PDT, 17 June.
I'm the same farmpuma from years gone by, but it appears my account went away when the passwords changed to six characters minimum.
- Posts: 289
- Joined: Sun Dec 02, 2007 4:31 am
- Location: Carrizo Plain National Monument, California
Re: 171.67.108.11 & 171.67.108.21 down
Thanks for passing the message on - time to take out the professional's toolkit maybe - a hammer and a screwdriver
Re: 171.67.108.11 & 171.67.108.21 down
John_Weatherman wrote: Thanks for passing the message on - time to take out the professional's toolkit maybe - a hammer and a screwdriver
Naaah. The professional toolkit consists of a pair of steel-toed work boots.
Posting FAH's log:
How to provide enough info to get helpful support.
- Posts: 289
- Joined: Sun Dec 02, 2007 4:31 am
- Location: Carrizo Plain National Monument, California
Re: 171.67.108.11 & 171.67.108.21 down
Both still showing "reject" - has somebody pulled out an RJ11 by mistake?
- Site Admin
- Posts: 7937
- Joined: Tue Apr 21, 2009 4:41 pm
- Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2 - Location: W. MA
Re: 171.67.108.11 & 171.67.108.21 down
If the server status script could not access these servers at all, they would be reported as DOWN. So they are on the network; what the actual problem is has not been reported yet.
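For anyone curious, a minimal sketch of that distinction, assuming the status script simply tries to open a TCP connection and marks a server DOWN only when that fails (the real script, port, and timeout are not published, so treat these as placeholders):
Code:
# Hypothetical probe mirroring the DOWN test described above: a server is
# marked DOWN only if no TCP connection can be opened at all. Port 8080 is
# an assumption; work servers also answer on port 80.
import socket

def probe(host, port=8080, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "up (state unknown)"  # reachable, but may still reject work
    except OSError:
        return "DOWN"                    # unreachable at the network level

for server in ("171.67.108.11", "171.67.108.21"):
    print(server, probe(server))
A server that connects but answers every request with a reject would still show as reachable here, which is why "reject" and "DOWN" are different states on the status page.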
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
- Pande Group Member
- Posts: 2058
- Joined: Fri Nov 30, 2007 6:25 am
- Location: Stanford
Re: 171.67.108.11 & 171.67.108.21 down
We're working on this one now.
Prof. Vijay Pande, PhD
Departments of Chemistry, Structural Biology, and Computer Science
Chair, Biophysics
Director, Folding@home Distributed Computing Project
Stanford University
Re: 171.67.108.11 & 171.67.108.21 down
Problems with uploading here as well.
- Posts: 289
- Joined: Sun Dec 02, 2007 4:31 am
- Location: Carrizo Plain National Monument, California
Re: 171.67.108.11 & 171.67.108.21 down
Both reporting as Down now - is this a step in the right direction?
- Site Admin
- Posts: 7937
- Joined: Tue Apr 21, 2009 4:41 pm
- Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2 - Location: W. MA
Re: 171.67.108.11 & 171.67.108.21 down
My guess is that indicates the actual hardware was shut down, possibly to take care of a component failure. Since Dr. Pande posted that they were looking into it late in the afternoon Stanford time, a service response might not come until the following morning.
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
GPU work units not being assigned from 171.67.108.201
Hi
I have been folding since 2009, and my FAH ID is montague-cripps.
I have struck a problem for the first time.
One of my four folding PCs can no longer download GPU work units from the 171.67.108.201 server. This has lasted some four days, and I cannot find a workaround. The other PCs have no problems with either slot. I have pinged the server and it responds, but every assignment request comes back empty.
The steps I have taken include:
1. reinstalling the Folding@home client
2. deleting the GPU slot (several times)
I can see no way of forcing FAH to log on to a different work server.
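In case it helps, a small, hypothetical helper that summarizes the assignment failures from a saved copy of the log below (the file name log.txt is a placeholder; the message format is taken from the pasted log):
Code:
# Count how often each assignment server failed to hand out work,
# based on the WARNING lines in the FAHClient log.
import re
from collections import Counter

pattern = re.compile(r"Failed to get assignment from '([\d.]+):\d+'")

failures = Counter()
with open("log.txt") as log:   # saved copy of the client log
    for line in log:
        match = pattern.search(line)
        if match:
            failures[match.group(1)] += 1

for server, count in failures.most_common():
    print(f"{server}: {count} failed assignment attempts")
Run on the log below, this prints repeated failures for both 171.67.108.201 and 171.64.65.160.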
The log file is as follows:
Code:
06:32:22:Adding folding slot 01: READY gpu:0:GT215 [GeForce GT 240]
06:32:22:Saving configuration to config.xml
06:32:22:<config>
06:32:22: <!-- Network -->
06:32:22: <proxy v=':8080'/>
06:32:22:
06:32:22: <!-- Slot Control -->
06:32:22: <power v='full'/>
06:32:22:
06:32:22: <!-- User Information -->
06:32:22: <passkey v='********************************'/>
06:32:22: <user v='Montague-Cripps'/>
06:32:22:
06:32:22: <!-- Folding Slots -->
06:32:22: <slot id='0' type='CPU'/>
06:32:22: <slot id='1' type='GPU'/>
06:32:22:</config>
06:32:22:FS00:Shutting core down
06:32:23:WU01:FS01:Connecting to 171.67.108.201:80
06:32:23:WARNING:WU01:FS01:Failed to get assignment from '171.67.108.201:80': Empty work server assignment
06:32:23:WU01:FS01:Connecting to 171.64.65.160:80
06:32:24:WARNING:WU01:FS01:Failed to get assignment from '171.64.65.160:80': Empty work server assignment
06:32:24:ERROR:WU01:FS01:Exception: Could not get an assignment
06:32:24:WU01:FS01:Connecting to 171.67.108.201:80
06:32:25:WARNING:WU01:FS01:Failed to get assignment from '171.67.108.201:80': Empty work server assignment
06:32:25:WU01:FS01:Connecting to 171.64.65.160:80
06:32:25:WARNING:WU01:FS01:Failed to get assignment from '171.64.65.160:80': Empty work server assignment
06:32:25:ERROR:WU01:FS01:Exception: Could not get an assignment
06:32:27:WU00:FS00:0xa4:Client no longer detected. Shutting down core
06:32:27:WU00:FS00:0xa4:
06:32:27:WU00:FS00:0xa4:Folding@home Core Shutdown: CLIENT_DIED
06:32:28:WU00:FS00:FahCore returned: INTERRUPTED (102 = 0x66)
06:32:28:WU00:FS00:Starting
06:32:28:WARNING:WU00:FS00:Changed SMP threads from 4 to 3 this can cause some work units to fail
06:32:28:WU00:FS00:Running FahCore: "C:\Program Files (x86)\FAHClient/FAHCoreWrapper.exe" C:/ProgramData/FAHClient/cores/web.stanford.edu/~pande/Win32/AMD64/Core_a4.fah/FahCore_a4.exe -dir 00 -suffix 01 -version 704 -lifeline 992 -checkpoint 15 -np 3
06:32:28:WU00:FS00:Started FahCore on PID 6920
06:32:28:WU00:FS00:Core PID:6132
06:32:28:WU00:FS00:FahCore 0xa4 started
06:32:28:WU00:FS00:0xa4:
06:32:28:WU00:FS00:0xa4:*------------------------------*
06:32:28:WU00:FS00:0xa4:Folding@Home Gromacs GB Core
06:32:28:WU00:FS00:0xa4:Version 2.27 (Dec. 15, 2010)
06:32:28:WU00:FS00:0xa4:
06:32:28:WU00:FS00:0xa4:Preparing to commence simulation
06:32:28:WU00:FS00:0xa4:- Looking at optimizations...
06:32:28:WU00:FS00:0xa4:- Files status OK
06:32:28:WU00:FS00:0xa4:- Expanded 117132 -> 264000 (decompressed 225.3 percent)
06:32:28:WU00:FS00:0xa4:Called DecompressByteArray: compressed_data_size=117132 data_size=264000, decompressed_data_size=264000 diff=0
06:32:28:WU00:FS00:0xa4:- Digital signature verified
06:32:28:WU00:FS00:0xa4:
06:32:28:WU00:FS00:0xa4:Project: 6370 (Run 58, Clone 49, Gen 25)
06:32:28:WU00:FS00:0xa4:
06:32:28:WU00:FS00:0xa4:Assembly optimizations on if available.
06:32:28:WU00:FS00:0xa4:Entering M.D.
06:32:34:WU00:FS00:0xa4:Using Gromacs checkpoints
06:32:34:WU00:FS00:0xa4:Mapping NT from 3 to 3
06:32:34:WU00:FS00:0xa4:Resuming from checkpoint
06:32:34:WU00:FS00:0xa4:Verified 00/wudata_01.log
06:32:34:WU00:FS00:0xa4:Verified 00/wudata_01.trr
06:32:34:WU00:FS00:0xa4:Verified 00/wudata_01.xtc
06:32:34:WU00:FS00:0xa4:Verified 00/wudata_01.edr
06:32:34:WU00:FS00:0xa4:Completed 1791140 out of 5000000 steps (35%)
06:32:52:Saving configuration to config.xml
06:32:52:<config>
06:32:52: <!-- Network -->
06:32:52: <proxy v=':8080'/>
06:32:52:
06:32:52: <!-- Slot Control -->
06:32:52: <power v='full'/>
06:32:52:
06:32:52: <!-- User Information -->
06:32:52: <passkey v='********************************'/>
06:32:52: <user v='Montague-Cripps'/>
06:32:52:
06:32:52: <!-- Folding Slots -->
06:32:52: <slot id='0' type='CPU'/>
06:32:52: <slot id='1' type='GPU'/>
06:32:52:</config>
06:33:24:WU01:FS01:Connecting to 171.67.108.201:80
06:33:25:WARNING:WU01:FS01:Failed to get assignment from '171.67.108.201:80': Empty work server assignment
06:33:25:WU01:FS01:Connecting to 171.64.65.160:80
06:33:25:WARNING:WU01:FS01:Failed to get assignment from '171.64.65.160:80': Empty work server assignment
06:33:25:ERROR:WU01:FS01:Exception: Could not get an assignment
06:33:43:WU00:FS00:0xa4:Completed 1800000 out of 5000000 steps (36%)
06:35:01:WU01:FS01:Connecting to 171.67.108.201:80
06:35:02:WARNING:WU01:FS01:Failed to get assignment from '171.67.108.201:80': Empty work server assignment
06:35:02:WU01:FS01:Connecting to 171.64.65.160:80
06:35:02:WARNING:WU01:FS01:Failed to get assignment from '171.64.65.160:80': Empty work server assignment
06:35:02:ERROR:WU01:FS01:Exception: Could not get an assignment
06:37:38:WU01:FS01:Connecting to 171.67.108.201:80
06:37:39:WARNING:WU01:FS01:Failed to get assignment from '171.67.108.201:80': Empty work server assignment
06:37:39:WU01:FS01:Connecting to 171.64.65.160:80
06:37:40:WARNING:WU01:FS01:Failed to get assignment from '171.64.65.160:80': Empty work server assignment
06:37:40:ERROR:WU01:FS01:Exception: Could not get an assignment
06:39:26:WU00:FS00:0xa4:Completed 1850000 out of 5000000 steps (37%)
06:41:53:WU01:FS01:Connecting to 171.67.108.201:80
06:41:53:WARNING:WU01:FS01:Failed to get assignment from '171.67.108.201:80': Empty work server assignment
06:41:53:WU01:FS01:Connecting to 171.64.65.160:80
06:41:54:WARNING:WU01:FS01:Failed to get assignment from '171.64.65.160:80': Empty work server assignment
06:41:54:ERROR:WU01:FS01:Exception: Could not get an assignment
06:45:01:WU00:FS00:0xa4:Completed 1900000 out of 5000000 steps (38%)
06:48:44:WU01:FS01:Connecting to 171.67.108.201:80
06:48:45:WARNING:WU01:FS01:Failed to get assignment from '171.67.108.201:80': Empty work server assignment
06:48:45:WU01:FS01:Connecting to 171.64.65.160:80
06:48:45:WARNING:WU01:FS01:Failed to get assignment from '171.64.65.160:80': Empty work server assignment
06:48:45:ERROR:WU01:FS01:Exception: Could not get an assignment
06:50:53:WU00:FS00:0xa4:Completed 1950000 out of 5000000 steps (39%)
Let me know if you need any more information. Grateful for any help.
Re: 171.67.108.11 & 171.67.108.21 down
Welcome to the Folding@home support forum, Nick200.
As you can see, I have moved your post to a thread with similar problems. The work servers that assignment servers 171.67.108.201 & 171.64.65.160 would normally send your client to for appropriate work are currently marked as DOWN on the Server Status page, and PG is working on the situation. There is nothing you can do at this point other than pause your GPU slot and wait for notification in this thread that the servers are working normally again. Make sure you're subscribed to this topic (see link at bottom of page).
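If you prefer to script the pause, the v7 client exposes a command socket on 127.0.0.1:36330. A hedged sketch, assuming the default port, no remote password set, and that your GPU slot is id 01 as in your log; the exact "pause <slot>" syntax is my assumption, and FAHControl does the same thing from the GUI:
Code:
# Pause only the GPU folding slot via FAHClient's local command socket.
import socket

with socket.create_connection(("127.0.0.1", 36330), timeout=5) as conn:
    conn.recv(4096)              # discard the welcome banner
    conn.sendall(b"pause 01\n")  # pause folding slot 01 (the GPU slot)
    conn.sendall(b"exit\n")      # close the command session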
- Posts: 289
- Joined: Sun Dec 02, 2007 4:31 am
- Location: Carrizo Plain National Monument, California
Re: 171.67.108.11 & 171.67.108.21 down
I take it that the size 12 boot to the server didn't work and an expert's been called in (coming sometime between 12 and 3 next week)?
- Posts: 6
- Joined: Fri Mar 12, 2010 10:09 am
- Hardware configuration: Pentium III, 384 MB RAM, Linux Xubuntu 10.04
Re: No appropriate work server was found
Not receiving work.
Pre-Fermi GPUs (2x nV Geforce 9500GT) on Windows XP SP3 (yes, still...)
I know that Core 11 is going end-of-life at some point, but I couldn't find a clear statement on whether it is actually EOL as of today, 2014-06-21.
There is the reminder about aging cores from Aug 23, 2013 ( https://folding.stanford.edu/home/remin ... -core78-2/ ).
Is there no work because the servers are for the moment not distributing work (for whatever reason), or because the Core 11 projects have now finally run out of steam?
Edit/update: ah, OK - my post has been moved to this thread. I looked at the server status before posting and saw some GPU servers, but couldn't work out from the server status page ( http://fah-web.stanford.edu/pybeta/serverstat.html ) which GPU server(s) assign the Core 11 WUs. Would it be fair to work on the assumption that 171.67.108.11 and .21 assign Core 11 WUs?
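If you want to watch for a change without reloading the page by hand, a throwaway sketch that fetches the status page linked above and prints any lines mentioning the two servers (the page layout is not guaranteed, so this just greps the raw HTML):
Code:
# Grep the server status page for the two work servers in question.
from urllib.request import urlopen

URL = "http://fah-web.stanford.edu/pybeta/serverstat.html"
html = urlopen(URL, timeout=10).read().decode("utf-8", errors="replace")

for ip in ("171.67.108.11", "171.67.108.21"):
    hits = [line for line in html.splitlines() if ip in line]
    print(ip, "->", hits if hits else "not found on the page")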
Last edited by herbak on Sat Jun 21, 2014 8:11 pm, edited 1 time in total.