R9 290 dramatically low PPD
Hello there,
I am running 12 machines, each with 6x R9 290 GPUs.
When I first installed the F@H client on the test machine I was able to get over 600k PPD smoothly.
A couple of weeks later I installed the client (the new 7.4.4 version was out by then) on all my machines, but something seems to be terribly wrong: on the same setup where I was getting over 600k PPD, I'm now getting less than 10k.
I tried pausing CPU folding (Celeron CPUs), folding with a single GPU, using a hexa-core i7 CPU, reformatting, changing the passkey and username, different releases of the Catalyst drivers, and also previous releases of the client software. No luck: I'm still stuck at less than 10k PPD (roughly what my old notebook gets).
Thank you for your time
PS: I tried to attach logs and screenshots, but as a new user my posts are held for moderation.
Re: R9 290 dramatically low PPD
Log from one of the failing machines:
*********************** Log Started 2014-04-04T15:47:10Z ***********************
15:47:10:************************* Folding@home Client *************************
15:47:10: Website: http://folding.stanford.edu/
15:47:10: Copyright: (c) 2009-2014 Stanford University
15:47:10: Author: Joseph Coffland <joseph@cauldrondevelopment.com>
15:47:10: Args: --client-type=advanced
15:47:10: Config: <none>
15:47:10:******************************** Build ********************************
15:47:10: Version: 7.4.4
15:47:10: Date: Mar 4 2014
15:47:10: Time: 20:26:54
15:47:10: SVN Rev: 4130
15:47:10: Branch: fah/trunk/client
15:47:10: Compiler: Intel(R) C++ MSVC 1500 mode 1200
15:47:10: Options: /TP /nologo /EHa /Qdiag-disable:4297,4103,1786,279 /Ox -arch:SSE
15:47:10: /QaxSSE2,SSE3,SSSE3,SSE4.1,SSE4.2 /Qopenmp /Qrestrict /MT /Qmkl
15:47:10: Platform: win32 XP
15:47:10: Bits: 32
15:47:10: Mode: Release
15:47:10:******************************* System ********************************
15:47:10: CPU: Intel(R) Celeron(R) CPU G1620 @ 2.70GHz
15:47:10: CPU ID: GenuineIntel Family 6 Model 58 Stepping 9
15:47:10: CPUs: 2
15:47:10: Memory: 3.96GiB
15:47:10: Free Memory: 2.69GiB
15:47:10: Threads: WINDOWS_THREADS
15:47:10: OS Version: 6.2
15:47:10: Has Battery: false
15:47:10: On Battery: false
15:47:10: UTC Offset: 2
15:47:10: PID: 1568
15:47:10: CWD: C:/Users/dr3/AppData/Roaming/FAHClient
15:47:10: OS: Windows 8.1 Pro
15:47:10: OS Arch: AMD64
15:47:10: GPUs: 6
15:47:10: GPU 0: ATI:5 Hawaii [Radeon R9 200 Series]
15:47:10: GPU 1: ATI:5 Hawaii [Radeon R9 200 Series]
15:47:10: GPU 2: ATI:5 Hawaii [Radeon R9 200 Series]
15:47:10: GPU 3: ATI:5 Hawaii [Radeon R9 200 Series]
15:47:10: GPU 4: ATI:5 Hawaii [Radeon R9 200 Series]
15:47:10: GPU 5: ATI:5 Hawaii [Radeon R9 200 Series]
15:47:10: CUDA: Not detected
15:47:10:Win32 Service: false
15:47:10:***********************************************************************
15:47:10:<config>
15:47:10: <!-- Folding Slots -->
15:47:10:</config>
15:47:10:Connecting to assign-GPU.stanford.edu:80
15:47:12:Updated GPUs.txt
15:47:12:Read GPUs.txt
15:47:12:Trying to access database...
15:47:12:Successfully acquired database lock
15:47:12:Enabled folding slot 00: PAUSED cpu:1 (not configured)
15:47:12:Enabled folding slot 01: PAUSED gpu:0:Hawaii [Radeon R9 200 Series] (not configured)
15:47:12:Enabled folding slot 02: PAUSED gpu:1:Hawaii [Radeon R9 200 Series] (not configured)
15:47:12:Enabled folding slot 03: PAUSED gpu:2:Hawaii [Radeon R9 200 Series] (not configured)
15:47:12:Enabled folding slot 04: PAUSED gpu:3:Hawaii [Radeon R9 200 Series] (not configured)
15:47:12:Enabled folding slot 05: PAUSED gpu:4:Hawaii [Radeon R9 200 Series] (not configured)
15:47:12:Enabled folding slot 06: PAUSED gpu:5:Hawaii [Radeon R9 200 Series] (not configured)
15:47:56:Saving configuration to config.xml
15:47:56:<config>
15:47:56: <!-- Network -->
15:47:56: <proxy v=':8080'/>
15:47:56:
15:47:56: <!-- User Information -->
15:47:56: <passkey v='********************************'/>
15:47:56: <team v='224497'/>
15:47:56: <user v='raghathol'/>
15:47:56:
15:47:56: <!-- Folding Slots -->
15:47:56: <slot id='0' type='CPU'/>
15:47:56: <slot id='1' type='GPU'/>
15:47:56: <slot id='2' type='GPU'/>
15:47:56: <slot id='3' type='GPU'/>
15:47:56: <slot id='4' type='GPU'/>
15:47:56: <slot id='5' type='GPU'/>
15:47:56: <slot id='6' type='GPU'/>
15:47:56:</config>
15:47:56:Set client configured
15:47:57:WU00:FS00:Connecting to 171.67.108.200:8080
15:47:57:WU01:FS01:Connecting to 171.67.108.200:8080
15:47:57:WU02:FS02:Connecting to 171.67.108.200:8080
15:47:57:WU03:FS03:Connecting to 171.67.108.200:8080
15:47:57:WU04:FS04:Connecting to 171.67.108.200:8080
15:47:58:WU00:FS00:Connecting to 171.67.108.200:8080
15:47:58:WU01:FS01:Connecting to 171.67.108.201:80
15:47:58:WU02:FS02:Connecting to 171.67.108.201:80
15:47:58:WU05:FS05:Connecting to 171.67.108.201:80
15:47:58:WU03:FS03:Connecting to 171.67.108.201:80
15:47:58:WU04:FS04:Connecting to 171.67.108.201:80
15:47:58:WU00:FS00:Assigned to work server 171.64.65.124
15:47:58:WU00:FS00:Requesting new work unit for slot 00: READY cpu:1 from 171.64.65.124
15:47:58:WU00:FS00:Connecting to 171.64.65.124:8080
15:47:59:WU01:FS01:Assigned to work server 140.163.4.231
15:47:59:WU01:FS01:Requesting new work unit for slot 01: READY gpu:0:Hawaii [Radeon R9 200 Series] from 140.163.4.231
15:47:59:WU02:FS02:Assigned to work server 140.163.4.231
15:47:59:WU01:FS01:Connecting to 140.163.4.231:8080
15:47:59:WU02:FS02:Requesting new work unit for slot 02: READY gpu:1:Hawaii [Radeon R9 200 Series] from 140.163.4.231
15:47:59:WU02:FS02:Connecting to 140.163.4.231:8080
15:47:59:WU05:FS05:Assigned to work server 140.163.4.231
15:47:59:WU05:FS05:Requesting new work unit for slot 05: READY gpu:4:Hawaii [Radeon R9 200 Series] from 140.163.4.231
15:47:59:WU03:FS03:Assigned to work server 140.163.4.231
15:47:59:WU05:FS05:Connecting to 140.163.4.231:8080
15:47:59:WU03:FS03:Requesting new work unit for slot 03: READY gpu:2:Hawaii [Radeon R9 200 Series] from 140.163.4.231
15:47:59:WU03:FS03:Connecting to 140.163.4.231:8080
15:47:59:WU04:FS04:Assigned to work server 140.163.4.231
15:47:59:WU04:FS04:Requesting new work unit for slot 04: READY gpu:3:Hawaii [Radeon R9 200 Series] from 140.163.4.231
15:47:59:WU04:FS04:Connecting to 140.163.4.231:8080
15:47:59:WU01:FS01:Downloading 4.84MiB
15:47:59:WU02:FS02:Downloading 4.84MiB
15:47:59:WU05:FS05:Downloading 4.83MiB
15:47:59:WU03:FS03:Downloading 4.83MiB
15:47:59:WU04:FS04:Downloading 4.84MiB
15:48:00:WU00:FS00:Downloading 855.03KiB
15:48:05:WU02:FS02:Download 15.50%
15:48:05:WU05:FS05:Download 16.81%
15:48:05:WU01:FS01:Download 21.97%
15:48:05:WU03:FS03:Download 14.22%
15:48:05:WU04:FS04:Download 18.10%
15:48:07:WU00:FS00:Download 22.46%
15:48:11:WU04:FS04:Download 36.19%
15:48:11:WU03:FS03:Download 40.07%
15:48:11:WU01:FS01:Download 41.35%
15:48:11:WU05:FS05:Download 33.61%
15:48:11:WU02:FS02:Download 33.58%
15:48:13:WU00:FS00:Download 37.43%
15:48:17:WU01:FS01:Download 62.03%
15:48:17:WU05:FS05:Download 46.54%
15:48:17:WU02:FS02:Download 47.79%
15:48:17:WU04:FS04:Download 53.00%
15:48:17:WU03:FS03:Download 69.81%
15:48:21:WU00:FS00:Download 59.88%
15:48:23:WU03:FS03:Download 96.95%
15:48:23:WU01:FS01:Download 78.83%
15:48:23:WU05:FS05:Download 67.22%
15:48:23:WU02:FS02:Download 61.99%
15:48:23:WU04:FS04:Download 73.68%
15:48:23:WU03:FS03:Download complete
15:48:23:WU03:FS03:Received Unit: id:03 state:DOWNLOAD error:NO_ERROR project:13000 run:2025 clone:2 gen:2 core:0x17 unit:0x0000000b538b3db75311d8734094b841
15:48:23:WU03:FS03:Downloading core from http://www.stanford.edu/~pande/Win32/AMD64/ATI/R600/Core_17.fah
15:48:23:WU03:FS03:Connecting to http://www.stanford.edu:80
15:48:24:WU03:FS03:FahCore 17: Downloading 2.55MiB
15:48:27:WU00:FS00:Download 82.34%
15:48:29:WU04:FS04:Download 96.95%
15:48:29:WU05:FS05:Download 93.08%
15:48:29:WU02:FS02:Download 80.07%
15:48:29:WU04:FS04:Download complete
15:48:29:WU04:FS04:Received Unit: id:04 state:DOWNLOAD error:NO_ERROR project:13001 run:470 clone:6 gen:3 core:0x17 unit:0x00000006538b3db7532c8179eb136e67
15:48:29:WU06:FS06:Connecting to 171.67.108.201:80
15:48:29:WU01:FS01:Download 100.00%
15:48:29:WU01:FS01:Download complete
15:48:29:WU01:FS01:Received Unit: id:01 state:DOWNLOAD error:NO_ERROR project:13001 run:91 clone:5 gen:7 core:0x17 unit:0x0000000f538b3db75328698e195fb65a
15:48:29:Saving configuration to config.xml
15:48:29:<config>
15:48:29: <!-- Network -->
15:48:29: <proxy v=':8080'/>
15:48:29:
15:48:29: <!-- Slot Control -->
15:48:29: <power v='full'/>
15:48:29:
15:48:29: <!-- User Information -->
15:48:29: <passkey v='********************************'/>
15:48:29: <team v='224497'/>
15:48:29: <user v='raghathol'/>
15:48:29:
15:48:29: <!-- Folding Slots -->
15:48:29: <slot id='0' type='CPU'/>
15:48:29: <slot id='1' type='GPU'/>
15:48:29: <slot id='2' type='GPU'/>
15:48:29: <slot id='3' type='GPU'/>
15:48:29: <slot id='4' type='GPU'/>
15:48:29: <slot id='5' type='GPU'/>
15:48:29: <slot id='6' type='GPU'/>
15:48:29:</config>
15:48:30:WU00:FS00:Download complete
15:48:30:WU03:FS03:FahCore 17: 17.15%
15:48:30:WU00:FS00:Received Unit: id:00 state:DOWNLOAD error:NO_ERROR project:9006 run:1684 clone:2 gen:2 core:0xa4 unit:0x00000002664f2de4533b30ed6b9c3a48
15:48:30:WU00:FS00:Downloading core from http://www.stanford.edu/~pande/Win32/AMD64/Core_a4.fah
15:48:30:WU00:FS00:Connecting to http://www.stanford.edu:80
15:48:30:WU05:FS05:Download complete
15:48:30:WU00:FS00:FahCore a4: Downloading 2.89MiB
15:48:30:WU05:FS05:Received Unit: id:05 state:DOWNLOAD error:NO_ERROR project:13001 run:311 clone:0 gen:5 core:0x17 unit:0x0000000a538b3db75328a7d3fb5f69c1
15:48:31:WU06:FS06:Assigned to work server 140.163.4.231
15:48:31:WU06:FS06:Requesting new work unit for slot 06: READY gpu:5:Hawaii [Radeon R9 200 Series] from 140.163.4.231
15:48:31:WU06:FS06:Connecting to 140.163.4.231:8080
15:48:31:WU06:FS06:Downloading 4.84MiB
15:48:34:WU02:FS02:Download complete
15:48:34:WU02:FS02:Received Unit: id:02 state:DOWNLOAD error:NO_ERROR project:13000 run:393 clone:0 gen:5 core:0x17 unit:0x0000000a538b3db753100ac17e779536
15:48:36:WU00:FS00:FahCore a4: 15.15%
15:48:36:WU03:FS03:FahCore 17: 51.46%
15:48:37:WU06:FS06:Download 46.52%
15:48:41:WU06:FS06:Download complete
15:48:41:WU06:FS06:Received Unit: id:06 state:DOWNLOAD error:NO_ERROR project:13000 run:2117 clone:2 gen:2 core:0x17 unit:0x00000006538b3db75311f2872e68188e
15:48:42:WU00:FS00:FahCore a4: 38.96%
15:48:42:WU03:FS03:FahCore 17: 73.52%
15:48:46:WU03:FS03:FahCore 17: Download complete
15:48:46:WU03:FS03:Valid core signature
15:48:46:WU03:FS03:Unpacked 8.60MiB to cores/www.stanford.edu/~pande/Win32/AMD64/ATI/R600/Core_17.fah/FahCore_17.exe
15:48:46:WU03:FS03:Starting
15:48:46:WU03:FS03:Running FahCore: "C:\Program Files (x86)\FAHClient/FAHCoreWrapper.exe" C:/Users/dr3/AppData/Roaming/FAHClient/cores/www.stanford.edu/~pande/Win32/AMD64/ATI/R600/Core_17.fah/FahCore_17.exe -dir 03 -suffix 01 -version 704 -lifeline 1568 -checkpoint 15 -gpu 2 -gpu-vendor ati
15:48:46:WU03:FS03:Started FahCore on PID 3356
15:48:47:WU03:FS03:Core PID:2504
15:48:47:WU03:FS03:FahCore 0x17 started
15:48:47:WU04:FS04:Starting
15:48:47:WU04:FS04:Running FahCore: "C:\Program Files (x86)\FAHClient/FAHCoreWrapper.exe" C:/Users/dr3/AppData/Roaming/FAHClient/cores/www.stanford.edu/~pande/Win32/AMD64/ATI/R600/Core_17.fah/FahCore_17.exe -dir 04 -suffix 01 -version 704 -lifeline 1568 -checkpoint 15 -gpu 3 -gpu-vendor ati
15:48:47:WU04:FS04:Started FahCore on PID 3828
15:48:47:WU04:FS04:Core PID:184
15:48:47:WU04:FS04:FahCore 0x17 started
15:48:47:WU05:FS05:Starting
15:48:47:WU05:FS05:Running FahCore: "C:\Program Files (x86)\FAHClient/FAHCoreWrapper.exe" C:/Users/dr3/AppData/Roaming/FAHClient/cores/www.stanford.edu/~pande/Win32/AMD64/ATI/R600/Core_17.fah/FahCore_17.exe -dir 05 -suffix 01 -version 704 -lifeline 1568 -checkpoint 15 -gpu 4 -gpu-vendor ati
15:48:47:WU05:FS05:Started FahCore on PID 1436
15:48:47:WU05:FS05:Core PID:2684
15:48:47:WU05:FS05:FahCore 0x17 started
15:48:47:WU06:FS06:Starting
15:48:47:WU06:FS06:Running FahCore: "C:\Program Files (x86)\FAHClient/FAHCoreWrapper.exe" C:/Users/dr3/AppData/Roaming/FAHClient/cores/www.stanford.edu/~pande/Win32/AMD64/ATI/R600/Core_17.fah/FahCore_17.exe -dir 06 -suffix 01 -version 704 -lifeline 1568 -checkpoint 15 -gpu 5 -gpu-vendor ati
15:48:47:WU06:FS06:Started FahCore on PID 3164
15:48:47:WU06:FS06:Core PID:3496
15:48:47:WU06:FS06:FahCore 0x17 started
15:48:47:WU01:FS01:Starting
15:48:47:WU01:FS01:Running FahCore: "C:\Program Files (x86)\FAHClient/FAHCoreWrapper.exe" C:/Users/dr3/AppData/Roaming/FAHClient/cores/www.stanford.edu/~pande/Win32/AMD64/ATI/R600/Core_17.fah/FahCore_17.exe -dir 01 -suffix 01 -version 704 -lifeline 1568 -checkpoint 15 -gpu 0 -gpu-vendor ati
15:48:47:WU01:FS01:Started FahCore on PID 1608
15:48:47:WU01:FS01:Core PID:4028
15:48:47:WU01:FS01:FahCore 0x17 started
15:48:47:WU02:FS02:Starting
15:48:47:WU02:FS02:Running FahCore: "C:\Program Files (x86)\FAHClient/FAHCoreWrapper.exe" C:/Users/dr3/AppData/Roaming/FAHClient/cores/www.stanford.edu/~pande/Win32/AMD64/ATI/R600/Core_17.fah/FahCore_17.exe -dir 02 -suffix 01 -version 704 -lifeline 1568 -checkpoint 15 -gpu 1 -gpu-vendor ati
15:48:47:WU02:FS02:Started FahCore on PID 1000
15:48:47:WU03:FS03:0x17:*********************** Log Started 2014-04-04T15:48:47Z ***********************
15:48:47:WU03:FS03:0x17:Project: 13000 (Run 2025, Clone 2, Gen 2)
15:48:47:WU03:FS03:0x17:Unit: 0x0000000b538b3db75311d8734094b841
15:48:47:WU03:FS03:0x17:CPU: 0x00000000000000000000000000000000
15:48:47:WU03:FS03:0x17:Machine: 3
15:48:47:WU03:FS03:0x17:Reading tar file state.xml
15:48:47:WU02:FS02:Core PID:2924
15:48:47:WU02:FS02:FahCore 0x17 started
15:48:47:WU05:FS05:0x17:*********************** Log Started 2014-04-04T15:48:47Z ***********************
15:48:47:WU05:FS05:0x17:Project: 13001 (Run 311, Clone 0, Gen 5)
15:48:47:WU05:FS05:0x17:Unit: 0x0000000a538b3db75328a7d3fb5f69c1
15:48:47:WU05:FS05:0x17:CPU: 0x00000000000000000000000000000000
15:48:47:WU05:FS05:0x17:Machine: 5
15:48:47:WU05:FS05:0x17:Reading tar file state.xml
15:48:47:WU06:FS06:0x17:*********************** Log Started 2014-04-04T15:48:47Z ***********************
15:48:47:WU06:FS06:0x17:Project: 13000 (Run 2117, Clone 2, Gen 2)
15:48:47:WU06:FS06:0x17:Unit: 0x00000006538b3db75311f2872e68188e
15:48:47:WU06:FS06:0x17:CPU: 0x00000000000000000000000000000000
15:48:47:WU06:FS06:0x17:Machine: 6
15:48:47:WU06:FS06:0x17:Reading tar file state.xml
15:48:47:WU04:FS04:0x17:*********************** Log Started 2014-04-04T15:48:47Z ***********************
15:48:47:WU04:FS04:0x17:Project: 13001 (Run 470, Clone 6, Gen 3)
15:48:47:WU04:FS04:0x17:Unit: 0x00000006538b3db7532c8179eb136e67
15:48:47:WU04:FS04:0x17:CPU: 0x00000000000000000000000000000000
15:48:47:WU04:FS04:0x17:Machine: 4
15:48:47:WU04:FS04:0x17:Reading tar file state.xml
15:48:48:WU01:FS01:0x17:*********************** Log Started 2014-04-04T15:48:47Z ***********************
15:48:48:WU00:FS00:FahCore a4: 71.42%
15:48:48:WU01:FS01:0x17:Project: 13001 (Run 91, Clone 5, Gen 7)
15:48:48:WU01:FS01:0x17:Unit: 0x0000000f538b3db75328698e195fb65a
15:48:48:WU01:FS01:0x17:CPU: 0x00000000000000000000000000000000
15:48:48:WU01:FS01:0x17:Machine: 1
15:48:48:WU01:FS01:0x17:Reading tar file state.xml
15:48:48:WU02:FS02:0x17:*********************** Log Started 2014-04-04T15:48:48Z ***********************
15:48:48:WU02:FS02:0x17:Project: 13000 (Run 393, Clone 0, Gen 5)
15:48:48:WU02:FS02:0x17:Unit: 0x0000000a538b3db753100ac17e779536
15:48:48:WU02:FS02:0x17:CPU: 0x00000000000000000000000000000000
15:48:48:WU02:FS02:0x17:Machine: 2
15:48:48:WU02:FS02:0x17:Reading tar file state.xml
15:48:49:FS00:Paused
15:48:50:WU06:FS06:0x17:Reading tar file system.xml
15:48:50:WU03:FS03:0x17:Reading tar file system.xml
15:48:51:WU04:FS04:0x17:Reading tar file system.xml
15:48:51:WU05:FS05:0x17:Reading tar file system.xml
15:48:52:WU01:FS01:0x17:Reading tar file system.xml
15:48:53:WU06:FS06:0x17:Reading tar file integrator.xml
15:48:53:WU02:FS02:0x17:Reading tar file system.xml
15:48:53:WU03:FS03:0x17:Reading tar file integrator.xml
15:48:53:WU06:FS06:0x17:Reading tar file core.xml
15:48:53:WU03:FS03:0x17:Reading tar file core.xml
15:48:53:WU06:FS06:0x17:Digital signatures verified
15:48:53:WU06:FS06:0x17:Folding@home GPU core17
15:48:53:WU06:FS06:0x17:Version 0.0.52
15:48:53:WU04:FS04:0x17:Reading tar file integrator.xml
15:48:53:WU00:FS00:FahCore a4: Download complete
15:48:53:WU03:FS03:0x17:Digital signatures verified
15:48:53:WU03:FS03:0x17:Folding@home GPU core17
15:48:53:WU03:FS03:0x17:Version 0.0.52
15:48:54:WU00:FS00:Valid core signature
15:48:54:WU00:FS00:Unpacked 9.59MiB to cores/www.stanford.edu/~pande/Win32/AMD64/Core_a4.fah/FahCore_a4.exe
15:48:54:WU04:FS04:0x17:Reading tar file core.xml
15:48:54:WU04:FS04:0x17:Digital signatures verified
15:48:54:WU04:FS04:0x17:Folding@home GPU core17
15:48:54:WU04:FS04:0x17:Version 0.0.52
15:48:54:WU05:FS05:0x17:Reading tar file integrator.xml
15:48:54:WU05:FS05:0x17:Reading tar file core.xml
15:48:54:WU05:FS05:0x17:Digital signatures verified
15:48:54:WU05:FS05:0x17:Folding@home GPU core17
15:48:54:WU05:FS05:0x17:Version 0.0.52
15:48:55:WU01:FS01:0x17:Reading tar file integrator.xml
15:48:55:WU01:FS01:0x17:Reading tar file core.xml
15:48:55:WU02:FS02:0x17:Reading tar file integrator.xml
15:48:55:WU02:FS02:0x17:Reading tar file core.xml
15:48:55:WU01:FS01:0x17:Digital signatures verified
15:48:55:WU01:FS01:0x17:Folding@home GPU core17
15:48:55:WU01:FS01:0x17:Version 0.0.52
15:48:55:WU02:FS02:0x17:Digital signatures verified
15:48:55:WU02:FS02:0x17:Folding@home GPU core17
15:48:55:WU02:FS02:0x17:Version 0.0.52
15:49:30:Saving configuration to config.xml
15:49:30:<config>
15:49:30: <!-- Network -->
15:49:30: <proxy v=':8080'/>
15:49:30:
15:49:30: <!-- Slot Control -->
15:49:30: <power v='full'/>
15:49:30:
15:49:30: <!-- User Information -->
15:49:30: <passkey v='********************************'/>
15:49:30: <team v='224497'/>
15:49:30: <user v='raghathol'/>
15:49:30:
15:49:30: <!-- Folding Slots -->
15:49:30: <slot id='0' type='CPU'>
15:49:30: <paused v='true'/>
15:49:30: </slot>
15:49:30: <slot id='1' type='GPU'/>
15:49:30: <slot id='2' type='GPU'/>
15:49:30: <slot id='3' type='GPU'/>
15:49:30: <slot id='4' type='GPU'/>
15:49:30: <slot id='5' type='GPU'/>
15:49:30: <slot id='6' type='GPU'/>
15:49:30:</config>
Re: R9 290 dramatically low PPD
Welcome to the F@H Forum raghathol,
Could you please tell us what driver version you are using? Also, make sure that the username/passkey are correctly configured.
There has been a change between v7.3.6 and v7.4.4 regarding how the PPD is calculated. v7.3.6 gave inaccurate PPD, while v7.4.4 gives a more realistic estimate. However, for v7.4.4 to produce that estimate, it needs to fold the WU for a few percent before it can calculate the PPD from the TPF. The log you posted shows that you were successfully assigned FahCore_17 WUs and that your GPUs are correctly detected, but it doesn't show the progress of the WUs, so I suggest letting it run for a few hours and seeing what FAHControl reports.
If there isn't any progress reported overnight, then I suggest pausing all the GPU slots, starting just one GPU, and monitoring it with GPU-Z (http://www.techpowerup.com/downloads/SysInfo/GPU-Z/) to see whether it folds successfully. If it does, start up the next GPU and carry on until you find the limit of your system.
Having 6 GPUs folding in a single system is rather rare, so I hope you have a powerful enough PSU to handle the power requirements. Moreover, if you also fold on the CPU, you may slow down the checkpointing of FahCore_17, which uses the CPU. I am not sure by how much, so you can experiment with it if you want.
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time
Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
Re: R9 290 dramatically low PPD
My advice: look at the TPF and check the bonus calculator yourself.
Or use HFM; v7.4.4's estimates are no good with AMD so far.
http://www.linuxforge.net/bonuscalc2.php
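For reference, the TPF-to-PPD arithmetic that such a calculator performs can be sketched in a few lines. This uses the published Quick Return Bonus formula, credit = base × sqrt(k × timeout / WU time); the base credit, k-factor, and timeout values below are illustrative placeholders, not actual project constants:

```python
import math

def estimated_ppd(base_credit, k_factor, timeout_days, tpf_seconds, frames=100):
    """Rough PPD estimate from time-per-frame (TPF), using the Quick Return
    Bonus formula. Project constants here are illustrative, not real."""
    wu_days = tpf_seconds * frames / 86400.0               # days to finish one WU
    bonus = max(1.0, math.sqrt(k_factor * timeout_days / wu_days))
    credit = base_credit * bonus                           # points per WU
    return credit / wu_days                                # points/WU * WUs/day

# Example: hypothetical Core_17-style project, 2:30 TPF
print(round(estimated_ppd(base_credit=4880, k_factor=2.1,
                          timeout_days=3.0, tpf_seconds=150)))
```

If the client's estimate after a few hours is far below what this hand calculation gives for the observed TPF, the estimate is the problem; if the TPF itself is long, the hardware is.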
Re: R9 290 dramatically low PPD
For testing, I would first remove at least two GPUs, better three. Since PCIe lanes are limited, we might have a shortage of resources; the disk controller and other components may also use PCIe lanes.
OP, did you physically remove the GPUs, or just pause the slots?
As for AMD drivers I can't help; I'm an Nvidia guy.
Last edited by ChristianVirtual on Sun Apr 06, 2014 1:23 pm, edited 1 time in total.
Please contribute your logs to http://ppd.fahmm.net
Re: R9 290 dramatically low PPD
You are folding on both the CPU and the GPUs. Kill the CPU slot. It is a bit optimistic to run 6 GPUs on two cores, but the 100k PPD you got before was not alarmingly low. A few more cores would boost it.
Re: R9 290 dramatically low PPD
ChristianVirtual wrote:I would remove first at least two GPUs, better three. For testing. As PCIe lanes are limited we might have a shortage on resources? Also disk controller and other components might use PCIe lanes.
OP, did you physically removed GPU or paused?
As for AMD drivers I can't help, I'm an nvidia guy

Nope. Follow the advice above and start with only 1 GPU and confirm it is working correctly. Basic troubleshooting. Then add a second card, and watch. Work your way up from there.
Next, the client version has no effect on performance. All V7 client versions use the same FahCores, which do all the work. But as noted above, 7.4.4 has much more accurate PPD estimates.
Cat 14.1 drivers and above are known to perform better than previous versions. With that said, knowing what PSU powers all those cards is important.
And as noted above, stop folding on the CPU to get more performance from that many cards. Finish the current CPU WU and then remove that slot (not while testing).
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Re: R9 290 dramatically low PPD
Nice PPD. I have two new LCS R9 290x cards I will be folding on soon.
PS: your image site is spamming users with a Windows driver install prompt.
US Army Retired | Folding@EVGA The Number One Team in the Folding@Home Community.
Re: R9 290 dramatically low PPD
Dedicate one CPU per GPU, run 14.4 driver, use a passkey, get the right projects.
What are your low PPD numbers?
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Re: R9 290 dramatically low PPD
7im wrote:Dedicate one CPU per GPU, run 14.4 driver, use a passkey, get the right projects.
What are your low PPD numbers?

I use an AMD 4-core for my folding machine with 2x R9 290. With Windows 8.1 installed I got about 130k PPD per card, which is not great but acceptable.
Now I have wiped everything and installed Win7 because I can't use Win8.1 properly. The whole time I used the 14.4 driver and my passkey with my team number.
I now get 30k PPD per card, which is obviously wrong. I don't remember which projects I got on Win8.1, but now the client always fetches Projects 9406 and 9408. I have deleted everything and reinstalled twice; it is always the same. GPU usage fluctuates from 0 to 100%, but most of the time it stays at 100%.
I'm pretty clueless now. Could you tell me how to get the right projects? I thought the client fetched projects automatically?
Thanks in advance.
Re: R9 290 dramatically low PPD
That is better than my 17123 on each of my 2x LCS R9 290x, running now at 1% on P1300.
14.4 driver as well and one CPU for each GPU.
@ 2.64% PPD jumped to 76000 11 Hours 50 Min to End Game or ETA
@ 6.04% PPD jumped to 83800 9 Hours 20 Min to End Game or ETA
7im wrote:Dedicate one CPU per GPU, run 14.4 driver, use a passkey, get the right projects.
What are your low PPD numbers?

And what are the right projects?
Last edited by bcavnaugh on Wed May 21, 2014 4:12 pm, edited 1 time in total.
US Army Retired | Folding@EVGA The Number One Team in the Folding@Home Community.
Re: R9 290 dramatically low PPD
Is that 30k PPD after the WU has run for several hours and completed several percent? Or is it the instantaneous PPD reported by the client after the WU has been running for only a few minutes? The client needs some processing history before it can give accurate estimates. Version 7.4.4 is better at the estimates than earlier versions, but work on improving them is still needed.
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
Re: R9 290 dramatically low PPD
There are no right and wrong projects as far as the donor is concerned. Stanford determines which work units are assigned, so they are all the "right" projects at the time.
However, regarding the right projects for points, the newer FahCore_17 work units are in high demand because they include the newly added Quick Return Bonus for GPUs. FahCore_15 is going EOL and still uses the older points system with no bonus.
Note that my GT430 gets nearly double the PPD on core_15 versus core_17. This is because my GPU is very close to the original GTX 460 benchmark. Newer cards scale up the exponential bonus-points curve much faster with the newer core; points are linear with speed on the older core.
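The contrast between the two points systems can be shown with a toy comparison. Under the old linear scheme, doubling speed doubles PPD; under the Quick Return Bonus, PPD grows roughly with speed^1.5, which is why fast cards pull ahead on core_17. The credit, k-factor, and timeout constants below are made up for illustration:

```python
import math

def linear_ppd(base_credit, wu_days):
    # Old (Core_15-style) scheme: fixed credit per WU, so PPD is linear in speed.
    return base_credit / wu_days

def qrb_ppd(base_credit, k, timeout_days, wu_days):
    # Quick Return Bonus (Core_17-style): credit is multiplied by
    # sqrt(k * timeout / WU time), so PPD scales roughly with speed^1.5.
    bonus = max(1.0, math.sqrt(k * timeout_days / wu_days))
    return base_credit * bonus / wu_days

fast, slow = 0.1, 0.2  # days per WU: the "fast" card is twice as quick
print(linear_ppd(1000, fast) / linear_ppd(1000, slow))                # 2.0
print(qrb_ppd(1000, 2.0, 3.0, fast) / qrb_ppd(1000, 2.0, 3.0, slow))  # ~2.83 (= 2^1.5)
```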
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.