171.67.108.11


BenL-PS3
Posts: 10
Joined: Sun Aug 29, 2010 12:02 pm

171.67.108.11

Post by BenL-PS3 »

Hi

171.67.108.11 has been down for about 4 hours - statuses of Reject, Not accept, Reject and Reject from 02:00 to 05:00 PDT

As a side note, the backup is CS 171.67.108.25 - the notes say not to report these but its status has been Not Accept since 22 June - as far back as the records go!

Cheers

Ben
sortofageek
Site Admin
Posts: 3110
Joined: Fri Nov 30, 2007 8:06 pm
Location: Team Helix
Contact:

Re: 171.67.108.11

Post by sortofageek »

BenL-PS3 wrote: Hi

171.67.108.11 has been down for about 4 hours - statuses of Reject, Not accept, Reject and Reject from 02:00 to 05:00 PDT
Thanks for reporting this. It is likely still before business hours at Stanford. I'll try to keep an eye on this and give the server manager a heads up if this doesn't get attention some time this morning. :)

BenL-PS3 wrote: Hi
As a side note, the backup is CS 171.67.108.25 - the notes say not to report these but its status has been Not Accept since 22 June - as far back as the records go!
... which is the reason the notes say not to report these. :)
sortofageek
Site Admin
Posts: 3110
Joined: Fri Nov 30, 2007 8:06 pm
Location: Team Helix
Contact:

Re: 171.67.108.11

Post by sortofageek »

It may still be too early, but I did send a notification email.
new08
Posts: 188
Joined: Fri Jan 04, 2008 11:02 pm
Hardware configuration: Hewlett-Packard 1494 Win10 Build 1836
GeForce [MSI] GTX 950
Runs F@H Ver7.6.21
[As of Jan 2021]
Location: England

Re: 171.67.108.11 [& 25]

Post by new08 »

I have a unit (project:5767 run:1 clone:65), finished today and only issued yesterday, that is stuck in upload mode.
I thought units weren't sent out without a default CS that worked as a standby? 25 has indeed been out for days.

14:56:24:WU02:FS00:Sending unit results: id:02 state:SEND error:OK project:5767 run:1 clone:65 gen:1970 core:0x11 unit:0x6c7ba46c501afa7b07b2004100011687
14:56:24:WU02:FS00:Uploading 97.48KiB to 171.67.108.11
14:56:24:WU02:FS00:Connecting to 171.67.108.11:8080
14:56:27:WARNING:WU02:FS00:Exception: Failed to send results to work server: Transfer failed
Joe_H
Site Admin
Posts: 7927
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
Location: W. MA

Re: 171.67.108.11

Post by Joe_H »

new08 wrote: I thought units weren't sent out without a default CS that worked as a standby? 25 has indeed been out for days.

No, that has not been a requirement for as long as I can remember. In any case, '25 and the other CSs we say not to report have been inactive for a lot longer than days. The post where this is mentioned dates to Oct. 2010, and as best I can recall they were down even before that.

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
Jesse_V
Site Moderator
Posts: 2850
Joined: Mon Jul 18, 2011 4:44 am
Hardware configuration: OS: Windows 10, Kubuntu 19.04
CPU: i7-6700k
GPU: GTX 970, GTX 1080 TI
RAM: 24 GB DDR4
Location: Western Washington

171.67.108.25 and 171.67.108.11

Post by Jesse_V »

This has been going on for the last four hours or so.

Code:

16:18:56:WU01:FS00:Sending unit results: id:01 state:SEND error:OK project:5768 run:11 clone:44 gen:2909 core:0x11 unit:0x4b83227e501b755e0b5d002c000b1688
16:18:56:WU01:FS00:Uploading 88.60KiB to 171.67.108.11
16:18:56:WU01:FS00:Connecting to 171.67.108.11:8080
16:18:58:WARNING:WU01:FS00:WorkServer connection failed on port 8080 trying 80
16:18:58:WU01:FS00:Connecting to 171.67.108.11:80
16:18:59:WARNING:WU01:FS00:Exception: Failed to send results to work server: Failed to connect to 171.67.108.11:80: No connection could be made because the target machine actively refused it.
16:18:59:WU01:FS00:Trying to send results to collection server
16:18:59:WU01:FS00:Uploading 88.60KiB to 171.67.108.25
16:18:59:WU01:FS00:Connecting to 171.67.108.25:8080
16:19:00:WARNING:WU01:FS00:WorkServer connection failed on port 8080 trying 80
16:19:00:WU01:FS00:Connecting to 171.67.108.25:80
16:19:02:ERROR:WU01:FS00:Exception: Failed to connect to 171.67.108.25:80: No connection could be made because the target machine actively refused it.
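
As a rough illustration of the retry order that log shows - the client tries the work server on port 8080, falls back to port 80, then tries the collection server on the same two ports - here is a minimal sketch in Python. It is not the actual FAHClient code; the hosts and ports are taken from the log above, and everything else is assumed for the example.

Code:

import socket

# Illustrative sketch of the upload order in the log above, not FAHClient source.
# Hosts and ports are the ones from the log; the function names are assumptions.
WORK_SERVER = "171.67.108.11"
COLLECTION_SERVER = "171.67.108.25"
PORTS = (8080, 80)

def can_connect(host, port, timeout=10):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def send_results():
    """Try the work server on 8080 then 80, then fall back to the collection server."""
    for host in (WORK_SERVER, COLLECTION_SERVER):
        for port in PORTS:
            if can_connect(host, port):
                print("connected to %s:%d - would upload the WU results here" % (host, port))
                return True
            print("connection to %s:%d failed, trying the next endpoint" % (host, port))
    # Every endpoint refused the connection; the real client keeps the WU queued
    # and retries the upload later.
    return False

if __name__ == "__main__":
    send_results()

If both servers refuse the connection, as in the log, the work unit stays queued on the client until a later retry succeeds - which is what eventually happened here (see the last post).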
F@h is now the top computing platform on the planet and nothing unites people like a dedicated fight against a common enemy. This virus affects all of us. Let's end it together.
Joe_H
Site Admin
Posts: 7927
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
Location: W. MA

Re: 171.67.108.25 and 171.67.108.11

Post by Joe_H »

Already reported here: viewtopic.php?f=18&t=22165. As mentioned, 171.67.108.25 is one of the collection servers that is no longer in service, and problems with it do not get reported.

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
new08
Posts: 188
Joined: Fri Jan 04, 2008 11:02 pm
Hardware configuration: Hewlett-Packard 1494 Win10 Build 1836
GeForce [MSI] GTX 950
Runs F@H Ver7.6.21
[As of Jan 2021]
Location: England

Re: 171.67.108.25 and 171.67.108.11

Post by new08 »

108.25 may be long dead, but the software still tries to upload to it. I wasted time looking it up, like others. Get a grip, PG!
Joe_H
Site Admin
Posts: 7927
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
Location: W. MA

Re: 171.67.108.25 and 171.67.108.11

Post by Joe_H »

The information is posted in the (Do This First) topic at the top of this group, viewtopic.php?f=18&t=17794. Specific collection servers to ignore are listed in this post, viewtopic.php?f=18&t=17794#p161539. It does not take that much time to find.
7im
Posts: 10179
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona
Contact:

Re: 171.67.108.25 and 171.67.108.11

Post by 7im »

PG is slowly upgrading everything to V7, which works with collection servers.

But a large ship does not turn quickly. It takes time...

Also, if everyone were familiar with the (Do This First) troubleshooting steps, then everyone would know about collection servers and wouldn't need to waste time looking them up. ;)
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
new08
Posts: 188
Joined: Fri Jan 04, 2008 11:02 pm
Hardware configuration: Hewlett-Packard 1494 Win10 Build 1836
GeForce [MSI] GTX 950
Runs F@H Ver7.6.21
[As of Jan 2021]
Location: England

Re: 171.67.108.25 and 171.67.108.11

Post by new08 »

Or to take a redundant server out of service?
Not everyone goes through the forum - there are other access points to the stats!
sortofageek
Site Admin
Posts: 3110
Joined: Fri Nov 30, 2007 8:06 pm
Location: Team Helix
Contact:

Re: 171.67.108.11

Post by sortofageek »

The logs show 171.67.108.11 accepting for the past three hours. Usually, when a server comes back like that, it takes time for it to receive the waiting WUs. Patience is always appreciated. :)
Jesse_V
Site Moderator
Posts: 2850
Joined: Mon Jul 18, 2011 4:44 am
Hardware configuration: OS: Windows 10, Kubuntu 19.04
CPU: i7-6700k
GPU: GTX 970, GTX 1080 TI
RAM: 24 GB DDR4
Location: Western Washington

Re: 171.67.108.11

Post by Jesse_V »

Sorry for not re-reading that Do This First thread. The WU was sent and everything is fine now.
F@h is now the top computing platform on the planet and nothing unites people like a dedicated fight against a common enemy. This virus affects all of us. Let's end it together.