GPUs stuck on READY

If you're new to FAH and need help getting started or you have very basic questions, start here.

Moderators: Site Moderators, FAHC Science Team

maj28
Posts: 29
Joined: Mon Apr 20, 2015 3:13 pm

GPUs stuck on READY

Post by maj28 »

Hello, first time poster, short time reader, having a very frustrating issue here:

Running the V7 client on an Athlon II x4 640 with W7 and GeForce 9400 GT and 9500 GT cards (aka G96), and seeing the following symptoms:
1. GPUs appear in slots 1 and 2 as "READY", but never fold anything (despite speed/priority settings)
2. GPU IPs show as 0.0.0.0 and 0.0.0.0
3. CPU receives work fine and processes fine
4. Confirmed the GPUs are in GPU.txt
5. Confirmed via direct download from nvidia and Device Manager that drivers are up to date.
6. Reinstalled the FAH client to no avail, and also toyed around with GPU Index; no dice.

At a total loss, feel like I'm missing something obvious here.

Code:

15:28:21:Adding folding slot 01: READY gpu:0:G96 [GeForce 9500 GT]
15:28:21:Removing old file 'configs/config-20150420-130152.xml'
15:28:21:Saving configuration to config.xml
15:28:21:<config>
15:28:21:  <!-- Folding Slot Configuration -->
15:28:21:  <max-packet-size v='big'/>
15:28:21:
15:28:21:  <!-- Network -->
15:28:21:  <proxy v=':8080'/>
15:28:21:
15:28:21:  <!-- Slot Control -->
15:28:21:  <power v='full'/>
15:28:21:
15:28:21:  <!-- User Information -->
15:28:21:  <passkey v='********************************'/>
15:28:21:  <team v='111065'/>
15:28:21:  <user v='maj28'/>
15:28:21:
15:28:21:  <!-- Work Unit Control -->
15:28:21:  <next-unit-percentage v='90'/>
15:28:21:
15:28:21:  <!-- Folding Slots -->
15:28:21:  <slot id='0' type='CPU'>
15:28:21:    <cpus v='4'/>
15:28:21:  </slot>
15:28:21:  <slot id='1' type='GPU'/>
15:28:21:</config>
15:28:28:WU06:FS01:Connecting to 171.67.108.200:80
15:28:29:WARNING:WU06:FS01:Failed to get assignment from '171.67.108.200:80': Empty work server assignment
15:28:29:WU06:FS01:Connecting to 171.67.108.204:80
15:28:30:WARNING:WU06:FS01:Failed to get assignment from '171.67.108.204:80': Empty work server assignment
15:28:30:ERROR:WU06:FS01:Exception: Could not get an assignment
15:28:30:WU06:FS01:Connecting to 171.67.108.200:80
15:28:31:WARNING:WU06:FS01:Failed to get assignment from '171.67.108.200:80': Empty work server assignment
15:28:31:WU06:FS01:Connecting to 171.67.108.204:80
15:28:32:WARNING:WU06:FS01:Failed to get assignment from '171.67.108.204:80': Empty work server assignment
15:28:32:ERROR:WU06:FS01:Exception: Could not get an assignment
-jason

Mod edit: Changed Quote tags to Code tags around log file
7im
Posts: 10179
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona
Contact:

Re: GPUs stuck on READY

Post by 7im »

Hello Jason, welcome to the folding support forum.

Change the max packet size to small, and the next unit percentage to 100.

You might also need to reserve one CPU core to feed the GPUs with data. Try it both ways to see how it affects the total PPD.
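For reference, those settings live in config.xml (also editable through FAHControl's Configure dialog); a sketch based on the config shown in the log above, with the CPU slot dropped to 3 threads as an illustration of leaving one core free for the GPUs:

```xml
<config>
  <!-- Smaller work packets, suitable for older GPUs -->
  <max-packet-size v='small'/>

  <!-- Finish the current unit before downloading the next -->
  <next-unit-percentage v='100'/>

  <!-- Leave one of the four cores free to feed the GPU slots -->
  <slot id='0' type='CPU'>
    <cpus v='3'/>
  </slot>
  <slot id='1' type='GPU'/>
</config>
```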
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Joe_H
Site Admin
Posts: 7939
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
Location: W. MA

Re: GPUs stuck on READY

Post by Joe_H »

If you read through enough posts related to your older video cards, you'll find a number that mention the end of Core_11 work last August. Those projects were the main source of assignments for pre-Fermi nVidia GPUs. Some pre-Fermi cards can successfully fold WUs from projects that use Core_15, but not all. There was a post recently connected to this: use the parameter max-packet-size=small to indicate that you want to fold Core_15 WUs. You will need to monitor your system to see whether your cards can process these WUs.

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
maj28
Posts: 29
Joined: Mon Apr 20, 2015 3:13 pm

Re: GPUs stuck on READY

Post by maj28 »

Thanks folks, looks like at this point they are still not pulling anything despite the changes to 'small' and '100'.

I'm maybe looking at the EVGA GT 740 at this point, since it sounds like these aren't going to work; I definitely had reservations about these older cards working.

Thanks for the very quick responses!
Joe_H
Site Admin
Posts: 7939
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
Location: W. MA

Re: GPUs stuck on READY

Post by Joe_H »

Since you have pre-Fermi cards, you may also need to add the client option "client-type=beta".
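As a sketch, using the same option syntax as the config.xml shown in the log above (this can also be added as an extra client option via FAHControl's Configure dialog):

```xml
<!-- Allow beta work unit assignments (needed for some older GPUs) -->
<client-type v='beta'/>
```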

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
maj28
Posts: 29
Joined: Mon Apr 20, 2015 3:13 pm

Re: GPUs stuck on READY

Post by maj28 »

Will also try that in the meantime, let's see...
maj28
Posts: 29
Joined: Mon Apr 20, 2015 3:13 pm

Re: GPUs stuck on READY

Post by maj28 »

That seemed to do it, but it's showing an ETA of 38 days, so it's probably going to be a no-go. Been itching for a new GPU anyway.
Napoleon
Posts: 887
Joined: Wed May 26, 2010 2:31 pm
Hardware configuration: Atom330 (overclocked):
Windows 7 Ultimate 64bit
Intel Atom330 dualcore (4 HyperThreads)
NVidia GT430, core_15 work
2x2GB Kingston KVR1333D3N9K2/4G 1333MHz memory kit
Asus AT3IONT-I Deluxe motherboard
Location: Finland

Re: GPUs stuck on READY

Post by Napoleon »

The ETA may need a few more completed frames to settle down to a more reasonable estimate. The 9400GT and 9500GT *are* able to complete core_15 WUs well within preferred deadlines (assuming they get assignments), but the PPD/W is pretty low.

You might want to consider a 750Ti; I'd think it provides more "bang for the buck" than a 740. There are 750Ti cards which do not require external PCIE power, in case that is an issue.
everyman
Posts: 27
Joined: Fri Aug 08, 2008 4:15 am
Hardware configuration: Toshiba X205-SLi1 using nVidia CUDA drivers version 177.35

Re: GPUs stuck on READY

Post by everyman »

Napoleon wrote:You might want to consider a 750Ti, I'd think it provides more "bang for the buck" than a 740. There are 750Ti cards which do not require external PCIE power, in case that is an issue.


I have one of the 750Ti cards that doesn't have 6 or 8 pin PSU connectors. It gets all of its power from the PCIe slot, which means it uses BOTH the 3.3v and the 12v rails from the PSU, and its max wattage is capped at 75.9W. (See http://en.wikipedia.org/wiki/PCI_Express#Power and slide 19 of https://www.pcisig.com/developers/main/ ... 469f57e5f1 for more info.)
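The 75.9W figure above follows directly from the PCIe spec's per-rail current limits for a 75W-class x16 slot; a quick sanity check:

```python
# PCIe x16 slot, 75 W card class: the spec allows up to 3.0 A on the
# 3.3 V rail and 5.5 A on the 12 V rail.
rail_3v3 = 3.3 * 3.0           # 9.9 W available from the 3.3 V rail
rail_12v = 12.0 * 5.5          # 66.0 W available from the 12 V rail
slot_cap = round(rail_3v3 + rail_12v, 1)
print(slot_cap)                # 75.9 -- the max wattage quoted above
```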

I bring this up just in case you have a flaky PSU/MB combo like my m-ITX socket AM1 system. Since I have no way to add extra power to my MB this system becomes unstable with the card installed. This is what I get for buying cheap parts. :oops:

For now I have it running in my main system as a secondary card and it does OK, but would do better in a dedicated folding box. As for PPD/W and PPD/$$ I think it's a great card, but not as good as a GTX 970, which is my main card now.


E
"In Theory there is no difference between Theory and Practice. In Practice there is."
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: GPUs stuck on READY

Post by bruce »

In theory, ;) two 750 Ti would be a better choice than a 970. In practice, that might not be true. :D

On your MB, forget it, but in general FAH doesn't get very close to the maximum power rating for these GPUs, even though it works the shaders pretty hard.
everyman
Posts: 27
Joined: Fri Aug 08, 2008 4:15 am
Hardware configuration: Toshiba X205-SLi1 using nVidia CUDA drivers version 177.35

Re: GPUs stuck on READY

Post by everyman »

bruce wrote:In theory, ;) two 750 Ti would be a better choice than a 970. In practice, that might not be true. :D

^^LOL

I can't say for sure because I have yet to let either card fold 24/7. They are in my gaming/art box.

From the numbers I have seen in Linux on Project 9411 (core 17), the 970 has an average TPF of 4m 08s and the 750 averages about 10m 35s. Combined with 4 CPU threads, the max PPD gets up to ~380k according to Advanced Control.
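As a rough illustration of how TPF maps to throughput (ignoring the quick-return bonus and assuming the usual 100 frames per WU — both simplifications):

```python
def wus_per_day(tpf_seconds, frames=100):
    """Estimated WUs completed per day at a given time-per-frame,
    assuming `frames` frames per WU and 24/7 folding (no QRB modelled)."""
    return 86400 / (tpf_seconds * frames)

# TPFs quoted above for Project 9411:
gtx970 = wus_per_day(4 * 60 + 8)     # ~3.48 WUs/day at 4m 08s
gtx750 = wus_per_day(10 * 60 + 35)   # ~1.36 WUs/day at 10m 35s
```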

Edited for accuracy. Thanks Bruce!

E
Last edited by everyman on Tue Apr 21, 2015 2:59 am, edited 1 time in total.
"In Theory there is no difference between Theory and Practice. In Practice there is."
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: GPUs stuck on READY

Post by bruce »

TPF is only meaningful if you specify which project you're running.

The total PPD is supposed to be more or less constant, except when a new project has just started, at which point FAH doesn't have enough information for an accurate projection. Nevertheless, each project has its own characteristics, with especially large differences in TPF.
everyman
Posts: 27
Joined: Fri Aug 08, 2008 4:15 am
Hardware configuration: Toshiba X205-SLi1 using nVidia CUDA drivers version 177.35

Re: GPUs stuck on READY

Post by everyman »

Those TPF numbers come from Project 9411. Because this box only folds part time, I expect the PPD to fluctuate quite a bit. I normally don't even bother to put in my passkey for the QRB. Then, just after Easter, I became very curious about my potential numbers and entered it on my new build. I have added about 1.1 million points to my name, dropped my overall ranking from 69,000-some to 33,000-some, and nearly doubled my number of completed WUs. I can't wait to see where it's at after a month.
E
"In Theory there is no difference between Theory and Practice. In Practice there is."
maj28
Posts: 29
Joined: Mon Apr 20, 2015 3:13 pm

Re: GPUs stuck on READY

Post by maj28 »

Great ideas, everyone. I was wondering whether my PSU could handle the 740 or 750; I'm running a 380W Antec 80 Plus, low wattage but a quality item.

Regarding the board, I'm on the Asus M5A97 with the Athlon II x4 640 OC'd to 3626 MHz (currently with the 9500 and 9400; the 9400 is coming out).

Hoping to break the 750,000 PPM threshold for some EVGA Bucks.

(Figured out why it says a 38-day ETA: projects 7621/7624 are described as very large and will test the limits of FAH.)
Napoleon
Posts: 887
Joined: Wed May 26, 2010 2:31 pm
Hardware configuration: Atom330 (overclocked):
Windows 7 Ultimate 64bit
Intel Atom330 dualcore (4 HyperThreads)
NVidia GT430, core_15 work
2x2GB Kingston KVR1333D3N9K2/4G 1333MHz memory kit
Asus AT3IONT-I Deluxe motherboard
Location: Finland

Re: GPUs stuck on READY

Post by Napoleon »

If the PSU is specifically an EA-380, it could certainly handle a 750Ti (each of the two 12V rails maxes at 12V x 17A = 204W, 324W combined). You actually might want a version which does have the external 6pin PCIE power connector; that gives better headroom for overclocking, if you're so inclined. Note that the vanilla 750 and the 750Ti are very much different beasts: 750Ti means the newer 1st gen Maxwell chip, while the 750 is based on Kepler.

Anyway, it could probably handle a 960 (120W at stock), which requires a single 6pin PCIE connector, plus some 750Ti model without an external PCIE power connector. I wouldn't bother overclocking the 750Ti; just slap it into the slower PCIE slot. Going from 50W+50W (9400GT + 9500GT) to 120W+60W at stock isn't *that* big a leap. Sure, you're approaching the limits of the PSU, but if the PSU's +12V rail configuration isn't entirely FUBAR you should be OK. I *guess* the 1st rail is for the mobo + CPU + 2nd GPU without external PCIE power, and the 2nd rail is exclusive to the 6pin PCIE power. With these assumptions about your setup:

12V1:
95W + 65W == 160W (44W reserved for mobo, memory, HDD, ODD etc)
12V2:
120W (84W headroom for overclocking)

Comes to about 280W at stock clocks, and that's probably erring on the side of caution when folding. Be that as it may, you'd probably want to revert your CPU to stock clocks; I'd think it capable enough of feeding the two GPUs at stock 3.0GHz. There's not much point in overclocking the CPU then, since the GPUs would clearly be the big guns, PPD-wise, by a wide margin. Methinks exceeding 750e3 PPM would be a given, even if you ran into a string of non-QRB (core_15) WUs for an entire month.
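The rail budget above, sketched out (the per-rail split and the load figures are assumptions, not measurements):

```python
# Assumed EA-380 limits: 17 A per 12 V rail (204 W each), 324 W combined.
rail_limit = 12 * 17      # 204 W per rail

# 12V1: CPU (95 W TDP) + slot-powered 750Ti (~65 W)
rail1_load = 95 + 65      # 160 W -> 44 W left for mobo, RAM, drives
# 12V2: GTX 960 on its 6-pin PCIE connector (120 W at stock)
rail2_load = 120          # 84 W of overclocking headroom

total = rail1_load + rail2_load   # ~280 W at stock clocks
```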

Given their low power consumption and efficiency, those Maxwell thingies can turn obsolescent FAH setups into surprisingly lively senior citizens. :mrgreen: