
Re: Failed to connect to 171.64.65.104:80

Posted: Sat Jul 23, 2016 11:01 pm
by kwerboom
I don't want to delete my WU. I just don't want it to disappear when it expires because it never got collected due to a server error.

Re: Failed to connect to 171.64.65.104:80

Posted: Sun Jul 24, 2016 7:59 pm
by MahlRem
It's unfortunate that there's going to be a lot more computing power lost over the next couple of days if this server's issue isn't resolved. The last of the WUs assigned before the server went down on the 17th will be expiring on or before Wednesday while they sit completed and waiting to be turned in.

The WUs are important to me and I hope they're important to the folks receiving them, but it sends an odd message to have a server out of space for a full week.

Re: Failed to connect to 171.64.65.104:80

Posted: Tue Jul 26, 2016 8:04 pm
by foldy
jadeshi says: Sorry for the delay. p92xx should be accepting work requests again.
viewtopic.php?f=24&t=28976

Re: Failed to connect to 171.64.65.104:80

Posted: Tue Jul 26, 2016 8:39 pm
by Joe_H
Looking at the Project Summary page, this appears to involve a move to a different server. So for the moment there are two IP addresses listed for each project. I recall this causing problems for some monitoring tools.

WS 171.64.65.104 is still shown as in Reject status.

Re: Failed to connect to 171.64.65.104:80

Posted: Tue Jul 26, 2016 9:12 pm
by Simplex0
Yes. jadeshi's 'fix' seems to be that the server only sends out new work, because I'm still unable to send in my old result.

Re: Failed to connect to 171.64.65.104:80

Posted: Tue Jul 26, 2016 10:53 pm
by bruce
bs_texas wrote: ^ hey thanks... But I had to search around and found it was actually in C:\Users\bs\AppData\Roaming\FAHClient\work.
foldy wrote: %appdata%\FAHClient\work
The environment variable %appdata% is set to C:\Users\<logon_name>\AppData\Roaming\, so that's the same as what foldy said, provided you're logged on as bs.
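If you want to double-check what %appdata% resolves to on your own machine, here is a quick sketch (Python, assuming a standard Windows install of the v7 client; the folder names are just the defaults):

    import os

    # Expand %appdata% the same way the shell does; with a logon name of "bs"
    # this should resolve to C:\Users\bs\AppData\Roaming
    appdata = os.path.expandvars("%appdata%")
    work_dir = os.path.join(appdata, "FAHClient", "work")

    print(work_dir)
    print(os.path.isdir(work_dir))  # True if the client keeps its work units there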

Re: Failed to connect to 171.64.65.104:80

Posted: Wed Jul 27, 2016 10:50 am
by PS3EdOlkkola
I'm having the same problem as Simplex0: my rigs are correctly downloading, folding and uploading new p92xx assignments. However, the old p92xx work units completed before the server mishap still will not upload; they are stuck getting the same "PLEASE_WAIT (464)" error message on every failed upload attempt. I tried pausing the client and unpausing it to restart the upload, but to no avail. I also tried shutting down and restarting the client, which had no effect either. Is there any fix, or is it time to dump these work units?
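In case it helps with diagnosis, here is a rough sketch of how I have been counting the retries in the log (assumptions: the v7 data directory is %appdata%\FAHClient, the log file is log.txt, and each failed attempt logs a line containing "PLEASE_WAIT"):

    import os

    # Count how many times the client has retried and hit PLEASE_WAIT.
    # Path and log wording are assumptions, not guaranteed for every install.
    log_path = os.path.expandvars(r"%appdata%\FAHClient\log.txt")

    retries = 0
    with open(log_path, errors="ignore") as log:
        for line in log:
            if "PLEASE_WAIT" in line:
                retries += 1

    print("failed upload attempts logged:", retries)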

Re: Failed to connect to 171.64.65.104:80

Posted: Wed Jul 27, 2016 1:25 pm
by Joe_H
As the last WUs downloaded from the 171.64.65.104 WS are approaching the final deadline of 10 days, they should be dumped automatically. As I posted, the projects were moved to a new WS, and the old WUs will not upload to the new WS. Their destination WS and CS would have been set when the WU was downloaded. I do not have further information, but the need to move the projects from this WS suggests to me that there was more wrong than just running out of space.
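For anyone wanting a rough idea of when a stranded WU will be dumped, here is a toy calculation (it assumes the 10-day final deadline mentioned above and that you can read the download time out of your log; the timestamp below is only an example):

    from datetime import datetime, timedelta

    # Toy estimate only: final deadline assumed to be 10 days after download.
    FINAL_DEADLINE = timedelta(days=10)

    downloaded = datetime(2016, 7, 17, 6, 0)   # example download time (UTC)
    expires = downloaded + FINAL_DEADLINE

    print("WU expires:", expires)
    print("already past the deadline:", datetime.utcnow() > expires)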

Re: Failed to connect to 171.64.65.104:80

Posted: Wed Jul 27, 2016 3:54 pm
by MahlRem
Joe_H wrote:they should be dumped automatically
While this looks like what's going to happen, the problem is more a lack of communication about the server status. Not a big deal, but my machines have spent about 30 GB of bandwidth with the client trying to upload these WUs to the listed CS since the WS rejected them. Just a little frustrating to know that I could have manually dumped them if the server was never coming back up on the listed IP addresses.

Re: Failed to connect to 171.64.65.104:80

Posted: Wed Jul 27, 2016 7:25 pm
by kwerboom
Well, my last WU from this server was dumped. Sad. :(

Re: Failed to connect to 171.64.65.104:80

Posted: Wed Jul 27, 2016 7:29 pm
by kwerboom
MahlRem wrote:While this looks like what's going to happen, the problem is more a lack of communication about the server status.
This. I wish someone had given us some kind of ongoing status update to let us know what was going on. That's where most of my frustration comes from.