Can anyone help with this? It's popping up in my log at this point on 3 of my 4 clients (and they aren't getting work). It happened the other day too, when I posted about an "assigned to 0.0.0.0" problem.
[15:45:29] + Attempting to get work packet
[15:45:29] - Connecting to assignment server
[15:45:29] Connecting to http://assign.stanford.edu:8080/
[15:45:30] Posted data.
[15:45:30] Initial: 0000; - Successful: assigned to (0.0.0.0).
[15:45:30] + News From Folding@Home: Welcome to Folding@Home
[15:45:30] Work Unit has an invalid address.
[15:45:30] - Error: Attempt #26 to get work failed, and no other work to do.
Waiting before retry.
It seems to affect all clients on all machines, so it's not limited to this one.
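In case anyone wants to check their own machines, here's a rough little Python sketch (nothing official, just a quick grep-style check) that scans a client's FAHlog.txt for the "assigned to (0.0.0.0)" and "invalid address" lines quoted above. The default log filename is an assumption; pass each client's log path on the command line.

# check_log.py - rough sketch, not an official FAH tool.
# Scans a client's FAHlog.txt for the "assigned to (0.0.0.0)" /
# "Work Unit has an invalid address" lines shown above.
import re
import sys

# Default filename is an assumption; pass each client's log path as an argument.
log_path = sys.argv[1] if len(sys.argv) > 1 else "FAHlog.txt"

patterns = [
    re.compile(r"assigned to \(0\.0\.0\.0\)"),
    re.compile(r"Work Unit has an invalid address"),
]

hits = 0
with open(log_path, "r", errors="replace") as log:
    for line in log:
        if any(p.search(line) for p in patterns):
            hits += 1
            print(line.rstrip())

print(f"{hits} bad-assignment lines found in {log_path}")

Run it once per client directory, e.g. python check_log.py /path/to/client1/FAHlog.txt.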
$ traceroute assign.stanford.edu
traceroute to assign.stanford.edu (171.65.103.93), 30 hops max, 40 byte packets
1 router (192.168.1.2) 1.462 ms 1.984 ms 2.550 ms
2 10.238.156.1 (10.238.156.1) 15.699 ms 15.940 ms 16.219 ms
3 gsr01-sh.blueyonder.co.uk (62.30.241.65) 16.545 ms 16.779 ms 19.569 ms
4 62.30.252.93 (62.30.252.93) 28.364 ms 28.869 ms 29.111 ms
5 62.30.252.57 (62.30.252.57) 27.948 ms 28.557 ms 29.312 ms
6 lee-bb-b-ge-110-0.inet.ntl.com (195.182.178.125) 27.022 ms 25.883 ms 25.745 ms
7 lee-bb-a-ae0-0.inet.ntl.com (62.253.187.185) 24.327 ms 19.116 ms 19.082 ms
8 bre-bb-b-so-200-0.inet.ntl.com (213.105.175.26) 19.595 ms 19.997 ms 20.220 ms
9 telc-ic-1-as0-0.inet.ntl.com (62.253.185.74) 21.054 ms 21.292 ms 21.512 ms
10 te1-3.ccr01.lon04.atlas.cogentco.com (130.117.15.69) 27.037 ms 27.276 ms 27.504 ms
11 te4-3.mpd01.lon01.atlas.cogentco.com (130.117.3.189) 28.260 ms 28.491 ms te7-3.mpd01.lon01.atlas.cogentco.com (130.117.3.185) 27.886 ms
12 * * *
13 te3-2.mpd01.bos01.atlas.cogentco.com (130.117.0.185) 92.203 ms te4-4.ccr02.bos01.atlas.cogentco.com (130.117.0.46) 90.552 ms te3-2.mpd01.bos01.atlas.cogentco.com (130.117.0.185) 98.025 ms
14 * te2-2.ccr02.ord01.atlas.cogentco.com (154.54.6.22) 111.793 ms *
15 te7-2.mpd01.mci01.atlas.cogentco.com (154.54.2.189) 128.707 ms 128.995 ms te8-3.ccr02.mci01.atlas.cogentco.com (154.54.7.165) 126.695 ms
16 te2-4.mpd01.sfo01.atlas.cogentco.com (154.54.24.105) 164.622 ms te2-2.ccr02.sfo01.atlas.cogentco.com (154.54.6.42) 159.121 ms 159.413 ms
17 vl3490.mpd01.sjc04.atlas.cogentco.com (154.54.2.166) 163.353 ms 164.134 ms 162.796 ms
18 Stanford_University2.demarc.cogentco.com (66.250.7.138) 321.410 ms 307.920 ms 293.236 ms
19 bbr2-rtr.Stanford.EDU (171.64.1.152) 168.225 ms 168.263 ms 170.010 ms
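And to rule out a local network or DNS problem, here is a minimal reachability sketch against the host and port taken from the log above. It only does a DNS lookup and a plain TCP connect; it doesn't speak the FAH protocol at all.

# assign_check.py - rough reachability sketch, not an FAH diagnostic tool.
# Does a DNS lookup and a plain TCP connect to the assignment server on
# port 8080, i.e. the host and port that appear in the log above.
import socket

HOST = "assign.stanford.edu"
PORT = 8080

try:
    addr = socket.gethostbyname(HOST)
    print(f"{HOST} resolves to {addr}")
except socket.gaierror as exc:
    raise SystemExit(f"DNS lookup failed: {exc}")

try:
    with socket.create_connection((addr, PORT), timeout=10):
        print(f"TCP connect to {addr}:{PORT} succeeded")
except OSError as exc:
    print(f"TCP connect to {addr}:{PORT} failed: {exc}")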
Please post your client type, client settings, OS, and CPU info. Stanford may have run out of WUs for your specific setup. If so, Stanford can use that info to add more work.
It was happening to me as well on one of my Linux consoles. I merely tweaked the config temporarily to fetch projects without deadlines and she immediately found a nice 3906 Double Gromacs B Kasson special worth 310p.
Tobit wrote: It was happening to me as well on one of my Linux consoles. I merely tweaked the config temporarily to fetch projects without deadlines and she immediately found a nice 3906 Double Gromacs B Kasson special worth 310p.
That's odd. Normally, you don't get any WUs when you set the client to request WUs with no deadlines. Are you sure that is the setting you changed? Or did you mean "no preference" instead?
Machine 1:
AMD Phenom X4 9600 @ Stock, 4GB DDR2-667
CentOS 5.1 x86
4x Linux client v5.04
Flags: forceasm, advmethods (I've tried without advmethods and without the other setting), verbosity 9
Mod Edit: Be very careful. 1st rule of fight club. Thanks for the info.
7im wrote: That's odd. Normally, you don't get any WUs when you set the client to request WUs with no deadlines. Are you sure that is the setting you changed? Or did you mean "no preference" instead?
Request work units without deadlines (no/yes) [yes]?
I said 'yes' instead of the default 'no'. That was the only change I made, and the client went off and was assigned a 3906 immediately.
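If anyone is curious exactly which line in client.cfg that prompt flips, a throwaway sketch like this will show it: save a copy of client.cfg before re-running the configuration, answer the prompt, then diff the two. The file names here are just assumptions.

# cfg_diff.py - throwaway sketch; the file names are assumptions.
# Copy client.cfg to client.cfg.before, re-run the client's configuration
# and answer the no-deadline prompt, then run this to see what changed.
import difflib
from pathlib import Path

before = Path("client.cfg.before").read_text().splitlines()
after = Path("client.cfg").read_text().splitlines()

for line in difflib.unified_diff(before, after,
                                 fromfile="client.cfg.before",
                                 tofile="client.cfg",
                                 lineterm=""):
    print(line)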
PS: There are a whole mess of new WUs in the pipeline. We're doing QA on them, but our hope is that this new project will solve this issue for good. It's more than 1M WUs and will be split across two servers. The size of that project should take care of this issue for quite a while.
VijayPande wrote: It's more than 1M WUs and will be split across two servers. The size of that project should take care of this issue for quite a while.
Understatement of the year there. The phrase "using a sledgehammer to crack a nut" comes to mind. Are these standard WUs or the Big variety? (I'd guess standard, but I'll ask anyway.)
We'd like to eliminate this problem as it's a nasty one for donors. 1M WUs is a lot, but I want to leave enough room that this is something we can all stop worrying about. The new project is pretty cool. It's designed to sit on the back burner soaking up excess clients (from the other ongoing projects) and in a year produce some really neat results (if all goes right).