Problem uploading to 129.74.246.143
Moderators: Site Moderators, FAHC Science Team
Re: Problem uploading to 129.74.246.143
I don't know if it is so for all the others having problems here, but since this problem (of having collection server 0.0.0.0 assigned) has spread to all 4 of my systems, I notice that it is only occurring on the GPUs, not the CPUs. My GPUs are all NVidia: a GT 640 in my oldest system (which replaced an even older, still functioning but very slow GT 5-something card within the last two months or so), a GTX 670M in the laptop (original equipment for it), a GTX 760 in my second-newest i7 system, newly replaced after the older 760's fan failed (more memory, but only 1 fan), and my newest, an i7 Haswell system added in the last week, which started with a GT 730 while I awaited the arrival of the GTX 970. I did dump one job from the 730, leaving that slot blank for a couple of days, because it was going to take 10.5 days to complete and I knew the 970 would arrive within 2 days. Perhaps those recent changes confused your servers, but I doubt I'd have had the same problem as so many others, starting with the supposed problem server (143) above, unless it was by the wildest of coincidences.
All my systems have Asus motherboards, 3 of which were built by Asus; the newest was built by Fry's to my specifications with all Asus parts except the case, power supply, and CPU. At least 3, maybe all 4, of the GT(X) NVidia GPUs are from Asus too--I don't want to dig into the oldest one right now to see whether it's an Asus, MSI, or EVGA 640 in there; my guess is it's either Asus or EVGA. But since most of my NVidia cards are Asus, all are NVidia, and all the hang-ups are on GPUs, you might want to check whether this is only happening with WUs assigned to GPUs, not CPUs, then see whether (if it IS all GPUs) they are all NVidia, and go from there.
-
- Posts: 1094
- Joined: Wed Nov 05, 2008 3:19 pm
- Location: Cambridge, UK
Re: Problem uploading to 129.74.246.143
The client can (for the moment) handle the delayed upload. Normal upload is to the Work Server that issued the job; the Collection Server is only a backup, so absence of a CS is only a problem when the WS itself is having problems. Yes, there should always be a CS, but hey, this world ain't perfect. I don't know why you are not getting new assignments -- the stored, unreturned, completed WUs should not cause that (and do not seem to be doing so for anyone else, myself included). So I doubt that ditching completed units will help you (except possibly coincidentally, by the level of reset involved), but it will certainly harm the science.
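For what it's worth, the fallback order being described boils down to something like this (a minimal Python sketch, not the actual FAHClient code -- upload_result and try_send are made-up names, and the "port 8080, then 80" detail is just what the logs in this thread show):
Code: Select all
import socket

def try_send(server, payload, ports=(8080, 80), timeout=5):
    """Placeholder upload: only checks that a TCP connection can be opened on
    the ports the client is seen using (8080, then 80); a real client would
    then POST the result payload."""
    for port in ports:
        try:
            with socket.create_connection((server, port), timeout=timeout):
                return True
        except OSError:
            continue
    return False

def upload_result(payload, work_server, collection_server="0.0.0.0"):
    """Work Server first; the Collection Server is only a backup, and only if
    one is actually configured (0.0.0.0 means 'no CS assigned')."""
    if try_send(work_server, payload):
        return True
    if collection_server != "0.0.0.0":
        return try_send(collection_server, payload)
    return False  # nothing reachable: keep the WU queued and retry later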
-
- Posts: 35
- Joined: Fri Apr 25, 2014 12:26 am
Re: Problem uploading to 129.74.246.143
Okay, well, your situation is unique.
RABishop wrote: If you look at what I said, in its entirety, you'll see I wasn't saying I'd quit the whole FAH project; but I would be watching for anything assigned to .143 as the work server, since its collection server seems to be 0.0.0.0 in both cases where the upload is stalled. I didn't notice how long THIS computer (my second fastest, second by a lot, since my fastest often doubles the output of this one and the other two combined) might have sat around before downloading a new job and just getting on with it; but I know my laptop (the slowest by far) didn't do so for over 2 hours, because I WAS watching when that problem started.
What I was saying is that I KNOW you guys don't want people mounting and dismounting CPU or GPU slots just to dump a job one thinks will take too long or have too few bonus points associated with it; and that's fair. But when a job is going to act like a Trojan horse and gum up my systems' ability (AFTER it has been completed and should be uploaded) to begin working on a new job, I don't think it is fair for my systems to have to sit entirely idle for any length of time because of a failure in the FAH network to upload the results properly and download a new assignment my CPU/GPU can be working on until the problem is fixed. If that's the case, I think it's fair for me to delete the slot, thus clearing the defective job, and get on with a new job straight away.
If this were only about points to me, I wouldn't be spending the money on electricity, replacing failed GPUs, getting a new Haswell 5830k system with a GTX 970, and running 4 systems around the clock. I need little of this array merely for email and shopping online. I can't write off any of this as charity, nor do I care to; but the points are a way for me to see that I'm contributing, and I see no reason to contribute what amounts to nothing if someone else's system just gets the same job I couldn't upload, does it, and contributes it via another work/collection server pair while my system is clogged due to this snafu. That is a waste of everyone's time and expense. I look forward to the time when FAH runs out of new folding to do, because I am sure by then that many new cures and effective therapies will have been found. Dumping a bad job by deleting a slot isn't simply trying for a better roll of the dice under such conditions. Duplication of effort won't help in this. I'm sure once the weekend is over, this will be solved. But, budgets being tight, this isn't surprising, especially before Thanksgiving week. I'm saying I'll protect my systems from useless work until this is resolved.
My client still downloads and works on WU's even with that one 16Mb Core a4 WU I have stuck.
In fact, it just handed in a Core a4 WU just fine a few hours ago........
Doesn't make sense that your slots would wait for the WU to upload.
Re: Problem uploading to 129.74.246.143
Work is returned to the server that assigned it, not the collection server. The collection server is a backup for when the main server goes down, but not all servers have a backup, so they show 0.0.0.0. As for the server this thread is about, give them some damn time; it's the weekend.
Re: Problem uploading to 129.74.246.143
7im & silver_pharoh,
Again, look at what has been written. I never made any threats (veiled or otherwise) about leaving the F@h project and I do take exception about being accused of it.
If points are not important then why have a points/team system? I personally could not give a to$$ about the points. The only reason they are of importance to me is to push my team up the league table and hopefully increase awareness of the disease (MSA) that I am folding for. However I feel that if donors are willing to contribute their time and money to this effort the least the project leads can do is have a robust system of collecting the results and rewarding the donors.
EVERY project should have a collection server in case the primary server goes down. This should be a mandatory requirement for any project to be allowed to use the F@h network. It is not an unreasonable restriction. If a WU server goes down, the results of assigned WU's will still be collected, which is, after all, the whole point of the system and in the best interests of the research team. It is precisely BECAUSE 24x7x365 support is too costly that a backup should be in place, so that in the event of a failure nobody needs to be on site immediately.
My machine (running v7) is quite happy to queue the completed WU and continue with other work so it is not blocked as some others seem to be. The only issue it causes for me is the loss of QRB points which is frustrating. If machines are being blocked from running other WU's by this issue then valuable computing time is being lost which is bad for everybody.
We would feel more valued as donors if the moderators acknowledged our frustrations and agreed that this situation is unacceptable, rather than taking the attitude that we are being petulant and that the F@h system doesn't guarantee anything anyway.
BP
-
- Posts: 22
- Joined: Sun Jan 26, 2014 1:44 pm
Re: Problem uploading to 129.74.246.143
Yeah, I'm a bit tired of the holier-than-thou attitude myself. Contributors make FAH work. Without us the project would collapse.
Re: Problem uploading to 129.74.246.143
It's now 48 hours since I first reported this problem and not much seems to have been done. I appreciate it's been over the weekend, but server support is not a 9-5 job (I know, I did it for long enough), and I would have thought this was serious enough for people to be looking at it straight away. I'm seriously pissed off about the points I'm losing, but I'm more concerned that there's no apparent action.
-
- Posts: 83
- Joined: Sun Jan 12, 2014 8:17 pm
- Hardware configuration: HP z600 - dual 5650 Xeons (6 cores @ 2.67 GHz x2), 32 GB RAM, GTX 780
- Location: UK
Re: Problem uploading to 129.74.246.143
Just adding another 10p to the pot: I too have a work unit that is waiting to go back to this server... attempt 20 in 5 hrs 18 mins.
Re: Problem uploading to 129.74.246.143
You can add me as having a WU failing to upload to this server (129.74.246.143) since Friday.
Re: Problem uploading to 129.74.246.143
Well, I had what I thought was an issue with my fastest system, but it resolved itself within a few minutes: a failure to upload a completed CPU WU. I still wonder if this is restricted to ONLY GPU-completed units, since the three units I have backlogged are all from my GPUs and none from my CPUs. About 20 minutes ago a GPU WU completed on that same machine, and it uploaded within about 6 minutes; but the GPU WAS prevented during that time from beginning to process the newly downloaded job -- or, at least, it wasn't showing anything happening during that time. I'm not going to sweat about 6 minutes; but the GPU on my laptop WAS held up for about 2.5 hours before it just began processing the new WU, holding the old one for when it can be collected. Maybe it took that long before the automatic decision was made to get the old WU out of the GDDR5 and move it onto the HDD (or into the DDR3, of which little is ever used) so the GPU could work again.
And, I just checked my laptop. Bonus points there are about 1.5 times the base points (1157 base, 3602 total now), which I'm sure must be approaching the minimum bonus at this point. At some point even that small bonus will go away and only base points will be awarded; then no points are awarded after it is timed out and assigned to a new contributor. The GTX 670M in that laptop is no 970, but it usually gets better bonus points than a factor of 1.5, normally around double. So, apparently, the job waiting to be sent is being updated (downgraded) as to its possible point bonuses based on how long it is taking to deliver it.
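For anyone curious, that depreciation matches the published Quick Return Bonus formula as I understand it: points = base x max(1, sqrt(k x deadline / elapsed)), so the bonus shrinks the longer a finished WU sits in the queue. A minimal sketch (the k value and deadline below are made-up illustrative numbers, not the real ones for this project):
Code: Select all
import math

def estimated_points(base_points, k, deadline_days, elapsed_days):
    """Rough sketch of the QRB formula: base * max(1, sqrt(k * deadline / elapsed)).
    The bonus is only paid at all if the WU beats its timeout and a passkey is used."""
    bonus = math.sqrt(k * deadline_days / elapsed_days)
    return round(base_points * max(1.0, bonus))

# Example with a 1157-point base unit: the longer it sits, the smaller the bonus.
for days in (0.5, 1, 2, 4):
    print(days, "days ->", estimated_points(1157, k=2.0, deadline_days=6.0, elapsed_days=days))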
And, I MUST say, I have to agree with some of the comments above about the apparent smugness of the "we aren't promised anything" answer in the FAQ. Who mentioned any guarantees anyway? I too am most concerned about the apparent lack of a RESPONSE that isn't just verbal or smug, but actually fixes the problem.
-
- Posts: 35
- Joined: Fri Apr 25, 2014 12:26 am
Re: Problem uploading to 129.74.246.143
It is not just GPU WU's.
RABishop wrote: Well, I had what I thought was an issue with my fastest system, but it resolved itself within a few minutes: a failure to upload a completed CPU WU. I still wonder if this is restricted to ONLY GPU-completed units, since the three units I have backlogged are all from my GPUs and none from my CPUs. About 20 minutes ago a GPU WU completed on that same machine, and it uploaded within about 6 minutes; but the GPU WAS prevented during that time from beginning to process the newly downloaded job -- or, at least, it wasn't showing anything happening during that time. I'm not going to sweat about 6 minutes; but the GPU on my laptop WAS held up for about 2.5 hours before it just began processing the new WU, holding the old one for when it can be collected. Maybe it took that long before the automatic decision was made to get the old WU out of the GDDR5 and move it onto the HDD (or into the DDR3, of which little is ever used) so the GPU could work again.
And, I just checked my laptop. Bonus points there are about 1.5 times the base points (1157 base, 3602 total now), which I'm sure must be approaching the minimum bonus at this point. At some point even that small bonus will go away and only base points will be awarded; then no points are awarded after it is timed out and assigned to a new contributor. The GTX 670M in that laptop is no 970, but it usually gets better bonus points than a factor of 1.5, normally around double. So, apparently, the job waiting to be sent is being updated (downgraded) as to its possible point bonuses based on how long it is taking to deliver it.
And, I MUST say, I have to agree with some of the comments above about the apparent smugness of the "we aren't promised anything" answer in the FAQ. Who mentioned any guarantees anyway? I too am most concerned about the apparent lack of a RESPONSE that isn't just verbal or smug, but actually fixes the problem.
My rig is still trying to upload my CPU Core a4 WU to this server.
However, this WU is 16.82 MB -- the biggest by far I've received for my CPU...
http://www.overclockers.com/forums/show ... nd-in-WU-s See my post on Overclockers if you want to see my logs and stuff.
It's the same issue - uploads a bit then "Transfer Failed"
Although, this WU is not causing any issues - my CPU and GPU's are folding away and sending in WU's just fine.
It's just that one WU for me.
-
- Posts: 2
- Joined: Mon Nov 24, 2014 2:53 am
Cannot return completed work units to 129.74.246.143
Hi --
I have two completed work units sitting on my two Windows 8.1 machines that have not been sent back for several days. It looks like the server 129.74.246.143 is the culprit in both cases. I have attached the log file for one of these cases. What should I do?
Code: Select all
*********************** Log Started 2014-11-24T02:41:49Z ***********************
02:41:49:************************* Folding@home Client *************************
02:41:49: Website: http://folding.stanford.edu/
02:41:49: Copyright: (c) 2009-2014 Stanford University
02:41:49: Author: Joseph Coffland <joseph@cauldrondevelopment.com>
02:41:49: Args:
02:41:49: Config: C:/Users/Frank/AppData/Roaming/FAHClient/config.xml
02:41:49:******************************** Build ********************************
02:41:49: Version: 7.4.4
02:41:49: Date: Mar 4 2014
02:41:49: Time: 20:26:54
02:41:49: SVN Rev: 4130
02:41:49: Branch: fah/trunk/client
02:41:49: Compiler: Intel(R) C++ MSVC 1500 mode 1200
02:41:49: Options: /TP /nologo /EHa /Qdiag-disable:4297,4103,1786,279 /Ox -arch:SSE
02:41:49: /QaxSSE2,SSE3,SSSE3,SSE4.1,SSE4.2 /Qopenmp /Qrestrict /MT /Qmkl
02:41:49: Platform: win32 XP
02:41:49: Bits: 32
02:41:49: Mode: Release
02:41:49:******************************* System ********************************
02:41:49: CPU: Intel(R) Core(TM) i7-4770S CPU @ 3.10GHz
02:41:49: CPU ID: GenuineIntel Family 6 Model 60 Stepping 3
02:41:49: CPUs: 8
02:41:49: Memory: 15.91GiB
02:41:49: Free Memory: 13.50GiB
02:41:49: Threads: WINDOWS_THREADS
02:41:49: OS Version: 6.2
02:41:49: Has Battery: false
02:41:49: On Battery: false
02:41:49: UTC Offset: -7
02:41:49: PID: 6444
02:41:49: CWD: C:/Users/Frank/AppData/Roaming/FAHClient
02:41:49: OS: Windows 8.1 Pro
02:41:49: OS Arch: AMD64
02:41:49: GPUs: 1
02:41:49: GPU 0: NVIDIA:3 GK107 [GeForce GT 750M]
02:41:49: CUDA: 3.0
02:41:49: CUDA Driver: 6050
02:41:49:Win32 Service: false
02:41:49:***********************************************************************
02:41:49:<config>
02:41:49: <!-- Network -->
02:41:49: <proxy v=':8080'/>
02:41:49:
02:41:49: <!-- Slot Control -->
02:41:49: <power v='full'/>
02:41:49:
02:41:49: <!-- User Information -->
02:41:49: <passkey v='********************************'/>
02:41:49: <user v='helioseism'/>
02:41:49:
02:41:49: <!-- Folding Slots -->
02:41:49: <slot id='0' type='GPU'/>
02:41:49: <slot id='1' type='CPU'/>
02:41:49:</config>
02:41:49:Trying to access database...
02:41:50:Successfully acquired database lock
02:41:50:Enabled folding slot 00: READY gpu:0:GK107 [GeForce GT 750M]
02:41:50:Enabled folding slot 01: READY cpu:7
02:41:51:WU01:FS00:Starting
02:41:51:WU01:FS00:Running FahCore: "C:\Program Files (x86)\FAHClient/FAHCoreWrapper.exe" C:/Users/Frank/AppData/Roaming/FAHClient/cores/web.stanford.edu/~pande/Win32/AMD64/NVIDIA/Fermi/Core_17.fah/FahCore_17.exe -dir 01 -suffix 01 -version 704 -lifeline 6444 -checkpoint 15 -gpu 0 -gpu-vendor nvidia
02:41:51:WU01:FS00:Started FahCore on PID 6648
02:41:51:WU01:FS00:Core PID:5356
02:41:51:WU01:FS00:FahCore 0x17 started
02:41:51:WU02:FS01:Sending unit results: id:02 state:SEND error:NO_ERROR project:7087 run:0 clone:180 gen:1 core:0xa4 unit:0x000000010001329c546e693ff17c9e57
02:41:51:WU02:FS01:Uploading 12.70MiB to 129.74.246.143
02:41:51:WU03:FS01:Starting
02:41:51:WU02:FS01:Connecting to 129.74.246.143:8080
02:41:51:WU03:FS01:Running FahCore: "C:\Program Files (x86)\FAHClient/FAHCoreWrapper.exe" C:/Users/Frank/AppData/Roaming/FAHClient/cores/web.stanford.edu/~pande/Win32/AMD64/Core_a4.fah/FahCore_a4.exe -dir 03 -suffix 01 -version 704 -lifeline 6444 -checkpoint 15 -np 7
02:41:51:WU03:FS01:Started FahCore on PID 5436
02:41:51:WU03:FS01:Core PID:5444
02:41:51:WU03:FS01:FahCore 0xa4 started
02:41:52:WU03:FS01:0xa4:
02:41:52:WU03:FS01:0xa4:*------------------------------*
02:41:52:WU03:FS01:0xa4:Folding@Home Gromacs GB Core
02:41:52:WU03:FS01:0xa4:Version 2.27 (Dec. 15, 2010)
02:41:52:WU03:FS01:0xa4:
02:41:52:WU03:FS01:0xa4:Preparing to commence simulation
02:41:52:WU03:FS01:0xa4:- Ensuring status. Please wait.
02:41:52:WU01:FS00:0x17:*********************** Log Started 2014-11-24T02:41:51Z ***********************
02:41:52:WU01:FS00:0x17:Project: 13001 (Run 481, Clone 2, Gen 42)
02:41:52:WU01:FS00:0x17:Unit: 0x00000045538b3db7532c847615962aaf
02:41:52:WU01:FS00:0x17:CPU: 0x00000000000000000000000000000000
02:41:52:WU01:FS00:0x17:Machine: 0
02:41:52:WU01:FS00:0x17:Digital signatures verified
02:41:52:WU01:FS00:0x17:Folding@home GPU core17
02:41:52:WU01:FS00:0x17:Version 0.0.52
02:41:52:WU01:FS00:0x17: Found a checkpoint file
02:41:52:WARNING:WU02:FS01:WorkServer connection failed on port 8080 trying 80
02:41:52:WU02:FS01:Connecting to 129.74.246.143:80
02:41:54:WARNING:WU02:FS01:Exception: Failed to send results to work server: Failed to connect to 129.74.246.143:80: No connection could be made because the target machine actively refused it.
02:41:54:WU02:FS01:Sending unit results: id:02 state:SEND error:NO_ERROR project:7087 run:0 clone:180 gen:1 core:0xa4 unit:0x000000010001329c546e693ff17c9e57
02:41:55:WU02:FS01:Uploading 12.70MiB to 129.74.246.143
02:41:55:WU02:FS01:Connecting to 129.74.246.143:8080
02:42:01:WU03:FS01:0xa4:- Looking at optimizations...
02:42:01:WU03:FS01:0xa4:- Working with standard loops on this execution.
02:42:01:WU03:FS01:0xa4:- Previous termination of core was improper.
02:42:01:WU03:FS01:0xa4:- Files status OK
02:42:01:WU03:FS01:0xa4:- Expanded 923652 -> 1527780 (decompressed 165.4 percent)
02:42:01:WU03:FS01:0xa4:Called DecompressByteArray: compressed_data_size=923652 data_size=1527780, decompressed_data_size=1527780 diff=0
02:42:01:WU03:FS01:0xa4:- Digital signature verified
02:42:01:WU03:FS01:0xa4:
02:42:01:WU03:FS01:0xa4:Project: 9011 (Run 697, Clone 0, Gen 119)
02:42:01:WU03:FS01:0xa4:
02:42:01:WU03:FS01:0xa4:Entering M.D.
02:42:04:WARNING:WU02:FS01:WorkServer connection failed on port 8080 trying 80
02:42:04:WU02:FS01:Connecting to 129.74.246.143:80
02:42:07:WU03:FS01:0xa4:Using Gromacs checkpoints
02:42:07:WU03:FS01:0xa4:Mapping NT from 7 to 7
02:42:07:WU03:FS01:0xa4:Resuming from checkpoint
02:42:08:WU03:FS01:0xa4:Verified 03/wudata_01.log
02:42:08:WU03:FS01:0xa4:Verified 03/wudata_01.trr
02:42:08:WU03:FS01:0xa4:Verified 03/wudata_01.xtc
02:42:08:WU03:FS01:0xa4:Verified 03/wudata_01.edr
02:42:08:WU03:FS01:0xa4:Completed 43795 out of 250000 steps (17%)
02:42:17:WARNING:WU02:FS01:Exception: Failed to send results to work server: Failed to connect to 129.74.246.143:80: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
02:42:54:WU02:FS01:Sending unit results: id:02 state:SEND error:NO_ERROR project:7087 run:0 clone:180 gen:1 core:0xa4 unit:0x000000010001329c546e693ff17c9e57
02:42:55:WU03:FS01:0xa4:Completed 45000 out of 250000 steps (18%)
02:42:55:WU02:FS01:Uploading 12.70MiB to 129.74.246.143
02:42:55:WU02:FS01:Connecting to 129.74.246.143:8080
02:42:57:WARNING:WU02:FS01:WorkServer connection failed on port 8080 trying 80
02:42:57:WU02:FS01:Connecting to 129.74.246.143:80
02:43:06:WARNING:WU02:FS01:Exception: Failed to send results to work server: Failed to connect to 129.74.246.143:80: No connection could be made because the target machine actively refused it.
02:44:32:WU03:FS01:0xa4:Completed 47500 out of 250000 steps (19%)
02:44:32:WU02:FS01:Sending unit results: id:02 state:SEND error:NO_ERROR project:7087 run:0 clone:180 gen:1 core:0xa4 unit:0x000000010001329c546e693ff17c9e57
02:44:32:WU02:FS01:Uploading 12.70MiB to 129.74.246.143
02:44:32:WU02:FS01:Connecting to 129.74.246.143:8080
02:44:33:WARNING:WU02:FS01:WorkServer connection failed on port 8080 trying 80
02:44:33:WU02:FS01:Connecting to 129.74.246.143:80
02:44:35:WARNING:WU02:FS01:Exception: Failed to send results to work server: Failed to connect to 129.74.246.143:80: No connection could be made because the target machine actively refused it.
02:45:12:WU01:FS00:0x17:Completed 3625000 out of 5000000 steps (72%)
02:45:12:WU01:FS00:0x17:Temperature control disabled. Requirements: single Nvidia GPU, tmax must be < 110 and twait >= 900
02:46:03:WU03:FS01:0xa4:Completed 50000 out of 250000 steps (20%)
02:47:09:WU02:FS01:Sending unit results: id:02 state:SEND error:NO_ERROR project:7087 run:0 clone:180 gen:1 core:0xa4 unit:0x000000010001329c546e693ff17c9e57
02:47:09:WU02:FS01:Uploading 12.70MiB to 129.74.246.143
02:47:09:WU02:FS01:Connecting to 129.74.246.143:8080
02:47:22:WARNING:WU02:FS01:WorkServer connection failed on port 8080 trying 80
02:47:22:WU02:FS01:Connecting to 129.74.246.143:80
02:47:26:WARNING:WU02:FS01:Exception: Failed to send results to work server: Failed to connect to 129.74.246.143:80: No connection could be made because the target machine actively refused it.
02:47:38:WU03:FS01:0xa4:Completed 52500 out of 250000 steps (21%)
02:49:18:WU03:FS01:0xa4:Completed 55000 out of 250000 steps (22%)
02:50:54:WU03:FS01:0xa4:Completed 57500 out of 250000 steps (23%)
02:51:23:WU02:FS01:Sending unit results: id:02 state:SEND error:NO_ERROR project:7087 run:0 clone:180 gen:1 core:0xa4 unit:0x000000010001329c546e693ff17c9e57
02:51:23:WU02:FS01:Uploading 12.70MiB to 129.74.246.143
02:51:23:WU02:FS01:Connecting to 129.74.246.143:8080
02:51:25:WARNING:WU02:FS01:WorkServer connection failed on port 8080 trying 80
02:51:25:WU02:FS01:Connecting to 129.74.246.143:80
02:51:34:WARNING:WU02:FS01:Exception: Failed to send results to work server: Failed to connect to 129.74.246.143:80: No connection could be made because the target machine actively refused it.
02:52:26:WU03:FS01:0xa4:Completed 60000 out of 250000 steps (24%)
02:54:01:WU03:FS01:0xa4:Completed 62500 out of 250000 steps (25%)
02:55:36:WU03:FS01:0xa4:Completed 65000 out of 250000 steps (26%)
02:57:11:WU03:FS01:0xa4:Completed 67500 out of 250000 steps (27%)
02:58:15:WU02:FS01:Sending unit results: id:02 state:SEND error:NO_ERROR project:7087 run:0 clone:180 gen:1 core:0xa4 unit:0x000000010001329c546e693ff17c9e57
02:58:15:WU02:FS01:Uploading 12.70MiB to 129.74.246.143
02:58:15:WU02:FS01:Connecting to 129.74.246.143:8080
02:58:16:WARNING:WU02:FS01:WorkServer connection failed on port 8080 trying 80
02:58:16:WU02:FS01:Connecting to 129.74.246.143:80
02:58:18:WARNING:WU02:FS01:Exception: Failed to send results to work server: Failed to connect to 129.74.246.143:80: No connection could be made because the target machine actively refused it.
02:58:44:WU03:FS01:0xa4:Completed 70000 out of 250000 steps (28%)
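As an aside, you can count how many times the client has already retried a stuck unit straight from FAHClient's log.txt (kept in the data folder shown on the CWD line above). A small, unofficial sketch -- the unit ID and log path below are simply the ones from this log; adjust them for your own setup:
Code: Select all
LOG = r"C:\Users\Frank\AppData\Roaming\FAHClient\log.txt"
UNIT = "0x000000010001329c546e693ff17c9e57"

attempts = failures = 0
with open(LOG, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        if UNIT in line and "Sending unit results" in line:
            attempts += 1
        if "Failed to send results to work server" in line:
            failures += 1

print(f"{attempts} upload attempts, {failures} failed so far")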
-
- Site Admin
- Posts: 7927
- Joined: Tue Apr 21, 2009 4:41 pm
- Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
- Location: W. MA
Re: Problem uploading to 129.74.246.143
The problem has been reported to the manager of the server. No information has come back yet on when the server might be fixed. But since it is the weekend, it will probably be fixed Monday.
helioseism wrote: I have attached the log file for one of these cases. What should I do?
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
-
- Posts: 2
- Joined: Mon Nov 24, 2014 2:53 am
Re: Problem uploading to 129.74.246.143
Thanks for the quick reply!
Joe_H wrote: The problem has been reported to the manager of the server. No information has come back yet on when the server might be fixed. But since it is the weekend, it will probably be fixed Monday.
helioseism wrote: I have attached the log file for one of these cases. What should I do?
Re: Problem uploading to 129.74.246.143
I've also got several WU's waiting to be uploaded to this server; they're now almost worthless due to the QRB value depreciating. It's breaking my balls.