This may not be the WU fault- 66xx's [NVidia-8400GS]
Moderators: Site Moderators, FAHC Science Team
-
- Posts: 1024
- Joined: Sun Dec 02, 2007 12:43 pm
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
I don't think the CUDA level is the same as the CUDA Compute Level. Don't we see many upgrades to CUDA which do not change the Compute Level?
-
- Posts: 188
- Joined: Fri Jan 04, 2008 11:02 pm
- Hardware configuration: Hewlett-Packard 1494 Win10 Build 1836
GeForce [MSI] GTX 950
Runs F@H Ver7.6.21
[As of Jan 2021] - Location: England
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
I haven't got the logs of failed 66xx runs now, though I could easily get a few- guaranteed! [Clear my firewall]
I can run <some> Core 11 ok [on GPU2 ] -but never managed any type on GPU3.
Is Compute level 1.1 showing on all CUDA scans then- and is it independent of the installed CUDA version?
Panthers' link says that Species=0 [which I get- on working units] means CUDA lower than 2.1, which seems an obvious candidate for incompatibilities, given the lack of more info from PG on this.
At present I'm getting 'runners' after around an hour of waiting @ unit completion, which is acceptable- but still an unknown overhead on production. It has been over a day!
I'm buying the bits to run a GT240 GPU rig now, as that will x10 my work again- I've already done that increase once, just by moving to the slowest CUDA-capable GPU, the 8400GS [albeit o/c'd].
Even with all the 'big 'n fast' folding farms around, it looks like the numbers still need every cruncher!
[Looking at clone numbers in their hundreds]
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
From what I can glean from the Internet, the NVIDIA 8800, 9600, 9800, and 250 cards also only go up to CUDA Compute Level 1.1, so this would not be the source of the issue with the 8400. According to NVIDIA, no hardware is even available with a Compute Level higher than 2.0. This would confirm that CUDA Compute Capability is not the same as CUDA Version.
http://www.geeks3d.com/20100606/gpu-com ... ive-table/
http://www.nvidia.com/object/cuda_gpus.html
The species=0 mystery still remains.
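To make the distinction concrete, compute capability can be sketched as a per-card hardware property, independent of whichever CUDA toolkit version is installed. The table below is a rough illustration for the cards discussed in this thread (circa 2010), and the helper function name is my own invention, not anything from FAH or NVIDIA:

```python
# Rough sketch: compute capability is a property of the silicon, not of the
# CUDA toolkit version installed. Values are for cards mentioned in this
# thread; the helper name is invented for illustration.
COMPUTE_CAPABILITY = {
    "GeForce 8400 GS": (1, 1),
    "GeForce 8800 GT": (1, 1),
    "GeForce 9800 GT": (1, 1),
    "GeForce GTS 250": (1, 1),
    "GeForce GTX 260": (1, 3),  # first generation with double precision
    "GeForce GTX 480": (2, 0),  # Fermi, top capability at the time
}

def supports_double_precision(card: str) -> bool:
    # Double precision arrived with compute capability 1.3.
    return COMPUTE_CAPABILITY[card] >= (1, 3)
```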
new08, I think maybe you are confusing the clients with the cores. You are currently running the GPU3 client according to your logs, but it is pulling Core11 WUs (as it should). CUDA version 2.2 is only required for Core15 (or above).
-
- Posts: 188
- Joined: Fri Jan 04, 2008 11:02 pm
- Hardware configuration: Hewlett-Packard 1494 Win10 Build 1836
GeForce [MSI] GTX 950
Runs F@H Ver7.6.21
[As of Jan 2021] - Location: England
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
Ahh- so the earlier run showing GPU core ver 1.31 had problems? [dd Sept 3]
You are currently running the GPU3 client according to your logs
That was run as the GPU3 Console client- I was really under the impression that GPU2 was running by default!
I see from my latest working log that Folding@Home Client Version 6.30r2 is doing the biz- and as you say, that is GPU3-
I must have 'slid across' to using it during many trials..
http://www.overclock.net/overclock-net- ... -if-i.html
-
- Posts: 188
- Joined: Fri Jan 04, 2008 11:02 pm
- Hardware configuration: Hewlett-Packard 1494 Win10 Build 1836
GeForce [MSI] GTX 950
Runs F@H Ver7.6.21
[As of Jan 2021] - Location: England
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
In the ongoing discussion about flaky units that just don't want to run on these lower-end cards, my eye caught the lack of double-precision capability on my 8400GS. This could well affect how calculations need to be done on various projects- can someone from Pande Group comment on this?
Or maybe someone with a good eye can pick out the math requirements from some project files...?
From CUDA-Z output:
Code: Select all
CUDA-Z Report
=============
Version: 0.5.95
http://cuda-z.sourceforge.net/
OS Version: Windows x86 5.1.2600 Service Pack 3
Core Information
----------------
Name: GeForce 8400 GS
Compute Capability: 1.1
Clock Rate: 1852 MHz
Multiprocessors: 1
Warp Size: 32
Regs Per Block: 8192
Threads Per Block: 512
Watchdog Enabled: Yes
Threads Dimentions: 512 x 512 x 64
Grid Dimentions: 65535 x 65535 x 1
Memory Information
------------------
Total Global: 511.688 MB
Shared Per Block: 16 KB
Pitch: 2.09715e+06 KB
Total Constant: 64 KB
Texture Alignment: 256
GPU Overlap: No
Performance Information
-----------------------
Memory Copy
Host Pinned to Device: 45.5527 MB/s
Host Pageable to Device: 41.0889 MB/s
Device to Host Pinned: 44.9834 MB/s
Device to Host Pageable: 41.0703 MB/s
Device to Device: 2737.36 MB/s
GPU Core Performance
Single-precision Float: 28967.7 Mflop/s
****** Double-precision Float: Not Supported *********
32-bit Integer: 5845.58 Miop/s
24-bit Integer: 29009.9 Miop/s
Generated: Thu Oct 21 13:30:50 2010
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
I don't think the issue is actually Double Precision, but you're on the right track. To the best of my knowledge, FAH-GPU does not use Double Precision.
NVidia classifies their various GPUs and drivers using a numeric value called CUDA compute capability 1.x. Software, like the FahCore being used to fold a particular protein, is written to use a specific compute capability. That means when the client on your machine reports what hardware you have, the Assignment Server figures out what x is and directs you to a server that has projects which can run with that set of hardware/driver features.
Stanford has been having trouble properly matching up the projects with the hardware and they're actively working to correct that problem as we speak.
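As a purely hypothetical sketch of that matching step (the real Assignment Server logic is not public, and the project requirements below are invented for illustration):

```python
# Invented example: each project's FahCore needs a minimum compute
# capability; assignment filters projects by the card's reported value.
# Project 6800 and all requirement numbers are made up for contrast.
PROJECT_REQUIREMENTS = {
    6602: (1, 1),    # Core_11 work, as seen in this thread
    10513: (1, 1),
    6800: (2, 0),    # hypothetical Core_15-class project
}

def eligible_projects(device_capability):
    # Return the projects this card could be assigned, lowest ID first.
    return sorted(p for p, needed in PROJECT_REQUIREMENTS.items()
                  if device_capability >= needed)
```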
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 188
- Joined: Fri Jan 04, 2008 11:02 pm
- Hardware configuration: Hewlett-Packard 1494 Win10 Build 1836
GeForce [MSI] GTX 950
Runs F@H Ver7.6.21
[As of Jan 2021] - Location: England
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
Thanks Bruce- I applaud Stanford looking into the work unit/hardware mismatch issue.
Edit- From WIKI this snippet amongst some CUDA exceptions:
"For double precision (only supported in newer GPUs like GTX 260[12]) there are some deviations from the IEEE 754 standard: round-to-nearest-even is the only supported rounding mode for reciprocal, division, and square root. In single precision, denormals and signalling *NaNs are not supported" ..
[* I've noticed NaNs cropping up in error reports- not sure what they are though ]
Certainly, we live in an age of increasing efficiency awareness, which is a good turnaround from decades previous. [Maybe I'm just getting old ]
On the reportage issue- at least some hard feedback from donors with various 'disparate rigs' may highlight a few gems... will we ever be told?
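On the NaN question: NaN is IEEE 754's "Not a Number", the value a floating-point operation produces when its result is undefined. NaNs in a folding error report usually mean the simulation's numbers diverged. A minimal Python illustration:

```python
import math

# 0.0/0.0 or math.sqrt(-1.0) would raise exceptions in Python,
# so build a NaN value directly instead.
nan = float("nan")

print(math.isnan(nan))   # True
print(nan == nan)        # False: NaN never compares equal, even to itself
```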
Last edited by new08 on Wed Oct 27, 2010 4:52 pm, edited 4 times in total.
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
It's not Stanford's policy to report the status of bugs, fixed or otherwise. Most of the time things just start working correctly. I'm sure there's a lot that goes on behind the scenes that we never hear about.
Sometimes a "simple" change turns out to have system-wide implications. Sometimes there are already improvements planned that will incorporate that simple change or negate the need for it. ... and sometimes it might truly be a simple change.
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 188
- Joined: Fri Jan 04, 2008 11:02 pm
- Hardware configuration: Hewlett-Packard 1494 Win10 Build 1836
GeForce [MSI] GTX 950
Runs F@H Ver7.6.21
[As of Jan 2021] - Location: England
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
Well Bruce, it looks like some changes have been made with GPU ver 6.40r1 now out- which I shall try soon..
as you posted elsewhere..viewtopic.php?f=59&t=16471
-
- Posts: 188
- Joined: Fri Jan 04, 2008 11:02 pm
- Hardware configuration: Hewlett-Packard 1494 Win10 Build 1836
GeForce [MSI] GTX 950
Runs F@H Ver7.6.21
[As of Jan 2021] - Location: England
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
Now trying the new client 6.40r1 and it seems to have improved matters.
As it's still not obvious whether this is a client or a work-unit fault, I'll post results here as well as in the other thread on 8400GS problems.
Code: Select all
[10:12:25] Gpu type=2 species=11.
[10:12:25] - Connecting to assignment server
[10:12:26] - Successful: assigned to (171.64.65.61).
[10:12:26] + News From Folding@Home: Welcome to Folding@Home
[10:12:26] Loaded queue successfully.
[10:12:26] Gpu type=2 species=11.
[10:12:33] + Closed connections
[10:12:33]
[10:12:33] + Processing work unit
[10:12:33] Core required: FahCore_11.exe
[10:12:33] Core found.
[10:12:33] Working on queue slot 04 [October 29 10:12:33 UTC]
[10:12:33] + Working ...
[10:12:33]
[10:12:33] *------------------------------*
[10:12:33] Folding@Home GPU Core
[10:12:33] Version 1.31 (Tue Sep 15 10:57:42 PDT 2009)
[10:12:33]
[10:12:33] Compiler : Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762 for 80x86
[10:12:33] Build host: amoeba
[10:12:33] Board Type: Nvidia
[10:12:33] Core :
[10:12:33] Preparing to commence simulation
[10:12:33] - Looking at optimizations...
[10:12:33] DeleteFrameFiles: successfully deleted file=work/wudata_04.ckp
[10:12:33] - Created dyn
[10:12:33] - Files status OK
[10:12:33] - Expanded 73831 -> 383588 (decompressed 519.5 percent)
[10:12:33] Called DecompressByteArray: compressed_data_size=73831 data_size=383588, decompressed_data_size=383588 diff=0
[10:12:33] - Digital signature verified
[10:12:33]
[10:12:33] Project: 6602 (Run 3, Clone 580, Gen 419)
[10:12:33]
[10:12:33] Assembly optimizations on if available.
[10:12:33] Entering M.D.
[10:12:39] Tpr hash work/wudata_04.tpr: 4088196835 1200456741 1296165516 1169439291 1399839376
[10:12:39]
[10:12:39] Calling fah_main args: 14 usage=100
[10:12:39]
[10:12:43] Working on Protein
[10:12:44] mdrun_gpu returned
[10:12:44] Going to send back what have done -- stepsTotalG=0
[10:12:44] Work fraction=0.0000 steps=0.
[10:12:47] logfile size=9159 infoLength=9159 edr=0 trr=25
[10:12:47] + Opened results file
[10:12:47] - Writing 9697 bytes of core data to disk...
[10:12:48] Done: 9185 -> 3339 (compressed to 36.3 percent)
[10:12:48] ... Done.
[10:12:48] DeleteFrameFiles: successfully deleted file=work/wudata_04.ckp
[10:12:50]
[10:12:50] Folding@home Core Shutdown: UNSTABLE_MACHINE
[10:12:53] CoreStatus = 7A (122)
[10:12:53] Sending work to server
[10:12:53] Project: 6602 (Run 3, Clone 580, Gen 419)
[10:12:53] - Read packet limit of 540015616... Set to 524286976.
[10:12:53] + Attempting to send results [October 29 10:12:53 UTC]
[10:12:53] Gpu type=2 species=11.
[10:12:54] + Results successfully sent
[10:12:54] Thank you for your contribution to Folding@Home.
[10:12:58] - Preparing to get new work unit...
[10:12:58] Cleaning up work directory
[10:12:58] + Attempting to get work packet
[10:12:58] Gpu type=2 species=11.
[10:12:58] - Connecting to assignment server
[10:12:59] - Successful: assigned to (171.64.65.61).
[10:12:59] + News From Folding@Home: Welcome to Folding@Home
[10:13:00] Loaded queue successfully.
[10:13:00] Gpu type=2 species=11.
[10:13:01] + Closed connections
[10:13:06]
[10:13:06] + Processing work unit
[10:13:06] Core required: FahCore_11.exe
[10:13:06] Core found.
[10:13:06] Working on queue slot 05 [October 29 10:13:06 UTC]
[10:13:06] + Working ...
[10:13:07]
[10:13:07] *------------------------------*
[10:13:07] Folding@Home GPU Core
[10:13:07] Version 1.31 (Tue Sep 15 10:57:42 PDT 2009)
[10:13:07]
[10:13:07] Compiler : Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762 for 80x86
[10:13:07] Build host: amoeba
[10:13:07] Board Type: Nvidia
[10:13:07] Core :
[10:13:07] Preparing to commence simulation
[10:13:07] - Looking at optimizations...
[10:13:07] DeleteFrameFiles: successfully deleted file=work/wudata_05.ckp
[10:13:07] - Created dyn
[10:13:07] - Files status OK
[10:13:07] - Expanded 63010 -> 337940 (decompressed 536.3 percent)
[10:13:07] Called DecompressByteArray: compressed_data_size=63010 data_size=337940, decompressed_data_size=337940 diff=0
[10:13:07] - Digital signature verified
[10:13:07]
[10:13:07] Project: 10513 (Run 2, Clone 672, Gen 90)
[10:13:07]
[10:13:07] Assembly optimizations on if available.
[10:13:07] Entering M.D.
[10:13:13] Tpr hash work/wudata_05.tpr: 3449600631 2897179719 2214020697 481515886 179679849
[10:13:13]
[10:13:13] Calling fah_main args: 14 usage=100
[10:13:13]
[10:13:16] Working on Protein
[10:13:24] Client config found, loading data.
[10:13:24] Starting GUI Server
[10:29:06] Completed 1%
[10:45:03] Completed 2%
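For anyone wading through logs like the one above, a small scanning helper (my own sketch, not part of any FAH tooling) can pair each "Project:" header with a following UNSTABLE_MACHINE shutdown, listing which units died early:

```python
import re

def find_failed_units(log_text):
    """Return the Project headers whose run ended in UNSTABLE_MACHINE."""
    failures = []
    current = None
    for line in log_text.splitlines():
        m = re.search(r"Project: \d+ \(Run \d+, Clone \d+, Gen \d+\)", line)
        if m:
            current = m.group(0)
        elif "UNSTABLE_MACHINE" in line and current:
            failures.append(current)
            current = None  # don't count the same unit twice
    return failures
```

Run against the log above, this would report the P6602 unit as failed while leaving the still-running P10513 off the list.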
-
- Site Admin
- Posts: 3110
- Joined: Fri Nov 30, 2007 8:06 pm
- Location: Team Helix
- Contact:
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
Project: 6602 (Run 3, Clone 580, Gen 419) has been completed by others successfully.
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
Hi new08,
I'm confused by your post. What did the new client improve, aside from changing the species from 0 to 11? I see a crash on P6602 and a running P10513 in your log, which are the same results as before.
Also, I was told in a different thread that the new client would not change the assigned WUs until the servers were reprogrammed, so no change should be expected at this stage.
Thanks,
Brett
-
- Posts: 188
- Joined: Fri Jan 04, 2008 11:02 pm
- Hardware configuration: Hewlett-Packard 1494 Win10 Build 1836
GeForce [MSI] GTX 950
Runs F@H Ver7.6.21
[As of Jan 2021] - Location: England
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
I know these 66xx units are completing ok- just not on my 8400GS rig.
Last edited by new08 on Fri Oct 29, 2010 6:50 pm, edited 1 time in total.
-
- Posts: 188
- Joined: Fri Jan 04, 2008 11:02 pm
- Hardware configuration: Hewlett-Packard 1494 Win10 Build 1836
GeForce [MSI] GTX 950
Runs F@H Ver7.6.21
[As of Jan 2021] - Location: England
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
Brett- Fair comment: the reason I think there's a change is that instead of hitting the 66xx server for sometimes 30+ hrs, a 105xx was picked up on the next attempt~ once I had freed the F/Wall.
I don't have a record of the actual AS back then- but I thought it was a different server between these two WUs before. [Though not this time]
Maybe it was wishful thinking, but I put that down to the change in client- even though that was published as being concerned with Compute level issues.
Time will tell -with ongoing performance returns.
Re: This may not be the WU fault- 66xx's [NVidia-8400GS]
OK, now I understand what you're thinking.
The 171.64.65.61 Work Server has been serving up 105xx as well as 66xx WUs since before I started blocking IPs, though. I have them on a little notepad on my desk from a couple of months ago. Sorry you got your hopes up.
I really don't think the changes PG has put in place with this new client will help us, since they are only identifying CUDA Compute Capability, not individual hardware. We share the same CUDA Compute Capability with lots of other cards that have no problem with these WUs. But, hey, let's hope I'm wrong.