Uniprocessor client or SMP?
Moderators: Site Moderators, FAHC Science Team
Uniprocessor client or SMP?
Hi!
I have an Intel Q6600 overclocked to 3.2 GHz running Vista64, and I have four FAH 5.04 clients running so that all four cores are used.
My question is: now that I'm updating to 6.20, which is more effective to run, four single-processor clients or the beta SMP client?
- Posts: 357
- Joined: Mon Dec 03, 2007 4:36 pm
- Hardware configuration: Q9450 OC @ 3.2GHz (Win7 Home Premium) - SMP2
  E7500 OC @ 3.66GHz (Windows Home Server) - SMP2
  i5-3750k @ 3.8GHz (Win7 Pro) - SMP2
- Location: University of Birmingham, UK
Re: Uniprocessor client or SMP?
Welcome to the forums!
The four single-processor clients would be easier to run and would use more of your CPU than the current Windows SMP core, so for effectiveness the uniprocessor clients win. However, the SMP client will get you more points. If you want a challenge, run the Windows SMP client.
Folding whatever I'm sent since March 2006. Beta testing since October 2006. www.FAH-Addict.net Administrator since August 2009.
Re: Uniprocessor client or SMP?
Thank you for the answer!
I will try out the SMP client; I'm always up for a challenge.
Re: Uniprocessor client or SMP?
I've found that the opposite can be true. As long as I'm not getting too many Ambers, 4x single clients are usually much more productive, points-wise, than one SMP.
On a QX9650 I'm pushing 4500 PPD if all four cores get nice WUs, and the best I've seen from the SMP client is about 3200 PPD. Typically, though, the SMP client gets about 2200-2300 PPD.
Now, the Amber & SimT WUs kill the points for sure, but averaging things out I'd be surprised if the single-processor clients didn't win.
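To put rough numbers on that "averaging out", here is a minimal Python sketch; the Amber share and the Amber PPD figure are made up for illustration, only the 4500 PPD comes from my own numbers above:
Code:
# Blended PPD for four uniprocessor cores with a hypothetical mix of
# good WUs and low-paying Amber WUs. Amber figures are assumptions.
good_ppd  = 4500 / 4   # per-core PPD when all four cores have nice WUs
amber_ppd = 1800 / 4   # assumed per-core PPD while stuck on an Amber WU
amber_share = 0.25     # assumed fraction of time spent on Amber WUs

per_core = (1 - amber_share) * good_ppd + amber_share * amber_ppd
print(f"4x uniprocessor, blended: {4 * per_core:.0f} PPD")  # ~3825 PPD
# Compare with the ~2200-3200 PPD range quoted above for the SMP client.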
- Posts: 357
- Joined: Mon Dec 03, 2007 4:36 pm
- Hardware configuration: Q9450 OC @ 3.2GHz (Win7 Home Premium) - SMP2
  E7500 OC @ 3.66GHz (Windows Home Server) - SMP2
  i5-3750k @ 3.8GHz (Win7 Pro) - SMP2
- Location: University of Birmingham, UK
Re: Uniprocessor client or SMP?
lol... if that's the case then maybe uniprocs do win :S My quad pulled 4050 PPD as a one-off, but checking current production it's averaging 2600...
Folding whatever I'm sent since March 2006. Beta testing since October 2006. www.FAH-Addict.net Administrator since August 2009.
- Posts: 43
- Joined: Sun Dec 02, 2007 5:28 am
- Location: Vegas Baby! Yeah!
Re: Uniprocessor client or SMP?
Which client(s) produce more points seems to be a bit of a crapshoot.
In my experience the SMP client usually produces more, but after factoring in upload/download times of up to an hour (during which no processing is being done) and the points lost to SMP hangs and crashes, in the long term the steady uniprocessor clients can sometimes end up successfully posting more points.
Uniprocessor Points Per Day is easy to calculate only on a single-core system. When there is more than one core, each working on its own WU, the PPD of any given WU can be influenced by which WUs the other core(s) are working on. We all like getting the higher-PPD WUs, but they can sometimes slow the progress of the existing WUs, causing a counterbalancing PPD reduction.
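For reference, the basic arithmetic behind any PPD estimate is simple. A minimal Python sketch; the WU point values and durations below are made up for illustration, not taken from real projects:
Code:
# PPD = points per work unit, divided by the time per work unit in days.
SECONDS_PER_DAY = 86400

def ppd(points_per_wu, seconds_per_wu):
    return points_per_wu * SECONDS_PER_DAY / seconds_per_wu

# Hypothetical 250-point uniprocessor WU taking 8 hours per core, on 4 cores:
print(f"4x uniprocessor: {4 * ppd(250, 8 * 3600):.0f} PPD")   # 3000 PPD
# Hypothetical 1760-point SMP WU taking 18 hours on all cores:
print(f"1x SMP:          {ppd(1760, 18 * 3600):.0f} PPD")     # ~2347 PPD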
Re: Uniprocessor client or SMP?
Use the SMP client with DEINO. After you download the "r2" beta package, you may also want to download the "r3" executable that was announced not too long ago here on the forum.
If you use the SMP client you will push its development, and everyone really wants to use the SMP client over the uniprocessor client. Choosing the uniprocessor client only because of differences in PPD kind of defeats the idea. The up- & download time of the SMP client is not as problematic as it seems. The break during up- & download is just more visible with the SMP client than with multiple uniprocessor clients: with the SMP client the break occurs at the same time on all cores, whereas multiple uniprocessor clients distribute it and make it less visible. However, the break still exists, and for all cores.
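To put numbers on that argument, a minimal Python sketch (the per-WU transfer time is a made-up figure):
Code:
# With an equal per-WU transfer time, total idle core-minutes are the same;
# only their visibility differs. The transfer time below is hypothetical.
cores = 4
transfer_minutes = 5  # assumed up- + download time per work unit

uni_idle = cores * transfer_minutes  # staggered: one core pauses at a time
smp_idle = cores * transfer_minutes  # simultaneous: all cores pause together

print(uni_idle == smp_idle)  # True: same total break, distributed differently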
- Posts: 43
- Joined: Sun Dec 02, 2007 5:28 am
- Location: Vegas Baby! Yeah!
Re: Uniprocessor client or SMP?
sdack wrote: Choosing the uniprocessor client only because of differences in PPD kind of defeats the idea.
Really? Why? Isn't the purpose of having the various clients scored differently in part to influence contributors' choices of which client they run? If not, then why not just have all the clients give the same PPD?
I don't contribute just to gain points, but the feedback from Stanford in the form of points helps to tell me how much I have contributed. More points = a greater contribution.
sdack wrote: The up- & download time of the SMP client is not as problematic as it seems. The break during up- & download is just more visible with the SMP client than with multiple uniprocessor clients: with the SMP client the break occurs at the same time on all cores, whereas multiple uniprocessor clients distribute it and make it less visible. However, the break still exists, and for all cores.
Sorry, but I must disagree. I am running multiple SMP clients, and the huge uploads and downloads take at least ten minutes at best and often up to a full hour to complete. The uniprocessor client rarely takes more than a single minute. For SMP all the cores sit idle, and for a much longer time, while the uniprocessor client has only one core sitting idle for a brief moment.
Samples (SMP client, current from my home laptop):
59 minutes:
Code:
[09:10:41] Writing local files
[09:10:41] Completed 250000 out of 250000 steps (100 percent)
[09:10:41] Writing final coordinates.
[09:10:42] Past main M.D. loop
[09:10:42] Will end MPI now
[09:11:42]
[09:11:42] Finished Work Unit:
[09:11:42] - Reading up to 21421872 from "work/wudata_02.arc": Read 21421872
[09:11:42] - Reading up to 592312 from "work/wudata_02.xtc": Read 592312
[09:11:42] goefile size: 0
[09:11:42] logfile size: 212421
[09:11:42] Leaving Run
[09:11:43] - Writing 22232977 bytes of core data to disk...
[09:11:44] ... Done.
[09:11:44] - Failed to delete work/wudata_02.sas
[09:11:44] - Failed to delete work/wudata_02.goe
[09:11:44] Warning: check for stray files
[09:11:44] - Shutting down core
[09:13:44]
[09:13:44] Folding@home Core Shutdown: FINISHED_UNIT
[09:13:44]
[09:13:44] Folding@home Core Shutdown: FINISHED_UNIT
[09:13:48] CoreStatus = 64 (100)
[09:13:48] Sending work to server
[09:13:48] + Attempting to send results
[10:02:56] + Results successfully sent
[10:02:56] Thank you for your contribution to Folding@Home.
[10:02:56] + Number of Units Completed: 45
[10:04:59] - Preparing to get new work unit...
[10:04:59] + Attempting to get work packet
[10:04:59] - Connecting to assignment server
[10:04:59] - Successful: assigned to (171.64.65.64).
[10:04:59] + News From Folding@Home: Welcome to Folding@Home
[10:04:59] Loaded queue successfully.
[10:08:57] + Closed connections
[10:08:57]
[10:08:57] + Processing work unit
[10:08:57] Core required: FahCore_a1.exe
[10:08:57] Core found.
[10:08:57] Working on Unit 03 [September 21 10:08:57]
[10:08:57] + Working ...
[10:08:58]
[10:08:58] *------------------------------*
[10:08:58] Folding@Home Gromacs SMP Core
[10:08:58] Version 1.74 (March 10, 2007)
[10:08:58]
[10:08:58] Preparing to commence simulation
[10:08:58] - Looking at optimizations...
[10:08:58] - Previous termination of core was improper.
[10:08:58] - Going to use standard loops.
[10:08:58] - Files status OK
[10:08:58] - Going to use standard loops.
[10:08:58] - Files status OK
[10:09:31] (decompressed 518.1 percent)
[10:09:32] itial work packet
[10:09:32]
[10:09:32] Project: 2665 (Run 0, Clone 937, Gen 50)
[10:09:32]
[10:09:32] acket
[10:09:32]
[10:09:32] Project: 2665 (Run 0, Clone 937, Gen 50)
[10:09:32]
[10:09:34] Entering M.D.
[10:09:40] Rejecting checkpoint
[10:09:42] Protein: HGG in water
[10:09:42] Writing local files
[10:09:54] Extra SSE boost OK.
[10:09:55] Writing local files
[10:09:55] Completed 0 out of 250000 steps (0 percent)
47 minutes:
Code:
[22:13:21] Writing local files
[22:13:22] Completed 250000 out of 250000 steps (100 percent)
[22:13:22] Writing final coordinates.
[22:13:23] Past main M.D. loop
[22:13:23] Will end MPI now
[22:14:23]
[22:14:23] Finished Work Unit:
[22:14:23] - Reading up to 21310704 from "work/wudata_03.arc": Read 21310704
[22:14:23] - Reading up to 555928 from "work/wudata_03.xtc": Read 555928
[22:14:23] goefile size: 0
[22:14:23] logfile size: 212427
[22:14:24] Leaving Run
[22:14:26] - Writing 22085431 bytes of core data to disk...
[22:14:26] ... Done.
[22:14:26] - Failed to delete work/wudata_03.sas
[22:14:26] - Failed to delete work/wudata_03.goe
[22:14:26] Warning: check for stray files
[22:14:26] - Shutting down core
[22:16:26]
[22:16:26] Folding@home Core Shutdown: FINISHED_UNIT
[22:16:26]
[22:16:26] Folding@home Core Shutdown: FINISHED_UNIT
[22:16:31] CoreStatus = 64 (100)
[22:16:31] Sending work to server
[22:16:31] + Attempting to send results
[22:55:02] + Results successfully sent
[22:55:02] Thank you for your contribution to Folding@Home.
[22:55:02] + Number of Units Completed: 46
[22:57:05] - Preparing to get new work unit...
[22:57:05] + Attempting to get work packet
[22:57:05] - Connecting to assignment server
[22:57:06] - Successful: assigned to (171.64.65.64).
[22:57:06] + News From Folding@Home: Welcome to Folding@Home
[22:57:06] Loaded queue successfully.
[22:59:54] + Closed connections
[22:59:54]
[22:59:54] + Processing work unit
[22:59:54] Core required: FahCore_a1.exe
[22:59:54] Core found.
[22:59:54] Working on Unit 04 [September 22 22:59:54]
[22:59:54] + Working ...
[22:59:54]
[22:59:54] *------------------------------*
[22:59:54] Folding@Home Gromacs SMP Core
[22:59:54] Version 1.74 (March 10, 2007)
[22:59:54]
[22:59:54] Preparing to commence simulation
[22:59:54] - Looking at optimizations...
[22:59:54] - Created dyn
[22:59:54] - Files status OK
[23:00:02] - Expanded 4767137 -> 24426905 (decompressed 512.4 percent)
[23:00:02] - Starting from initial work packet
[23:00:02]
[23:00:02] Project: 2665 (Run 3, Clone 919, Gen 41)
[23:00:02]
[23:00:03] Assembly optimizations on if available.
[23:00:03] Entering M.D.
[23:00:34] al work packet
[23:00:34]
[23:00:34] Project: 2665 (Run 3, Clone 919, Gen 41)
[23:00:34]
[23:00:36] Entering M.D.
[23:00:37] ne 919, Gen 41)
[23:00:37]
[23:00:37] Entering M.D.
[23:00:45] GG in water
[23:00:45] Writing local files
[23:00:45] cal files
[23:00:47] Extra SSE boost OK.
[23:00:58] cal files
[23:00:58] Completed 0 out of 250000 steps (0 percent)
50 minutes:
Code:
[20:47:05] Writing local files
[20:47:05] Completed 250000 out of 250000 steps (100 percent)
[20:47:05] Writing final coordinates.
[20:47:06] Past main M.D. loop
[20:47:06] Will end MPI now
[20:48:06]
[20:48:06] Finished Work Unit:
[20:48:06] - Reading up to 21310704 from "work/wudata_05.arc": Read 21310704
[20:48:07] - Reading up to 648224 from "work/wudata_05.xtc": Read 648224
[20:48:07] goefile size: 0
[20:48:07] logfile size: 212724
[20:48:07] Leaving Run
[20:48:09] - Writing 22178872 bytes of core data to disk...
[20:48:10] ... Done.
[20:48:10] - Failed to delete work/wudata_05.sas
[20:48:10] - Failed to delete work/wudata_05.goe
[20:48:10] Warning: check for stray files
[20:48:10] - Shutting down core
[20:50:10]
[20:50:10] Folding@home Core Shutdown: FINISHED_UNIT
[20:50:10]
[20:50:10] Folding@home Core Shutdown: FINISHED_UNIT
[20:50:13] CoreStatus = 64 (100)
[20:50:13] Sending work to server
[20:50:13] + Attempting to send results
[21:29:07] + Results successfully sent
[21:29:07] Thank you for your contribution to Folding@Home.
[21:29:07] + Number of Units Completed: 47
[21:31:10] - Preparing to get new work unit...
[21:31:10] + Attempting to get work packet
[21:31:10] - Connecting to assignment server
[21:31:11] - Successful: assigned to (171.64.65.64).
[21:31:11] + News From Folding@Home: Welcome to Folding@Home
[21:31:11] Loaded queue successfully.
[21:35:56] + Closed connections
[21:35:56]
[21:35:56] + Processing work unit
[21:35:56] Core required: FahCore_a1.exe
[21:35:56] Core found.
[21:35:56] Working on Unit 06 [September 25 21:35:56]
[21:35:56] + Working ...
[21:35:56]
[21:35:56] *------------------------------*
[21:35:56] Folding@Home Gromacs SMP Core
[21:35:56] Version 1.74 (March 10, 2007)
[21:35:56]
[21:35:56] Preparing to commence simulation
[21:35:56] - Looking at optimizations...
[21:35:56] - Created dyn
[21:35:56] - Files status OK
[21:36:04] - Expanded 4729673 -> 24426905 (decompressed 516.4 percent)
[21:36:04] - Starting from initial work packet
[21:36:04]
[21:36:04] Project: 2665 (Run 3, Clone 254, Gen 52)
[21:36:04]
[21:36:05] Assembly optimizations on if available.
[21:36:05] Entering M.D.
[21:36:34] percent)
[21:36:35] - Starting from initial work packet
[21:36:35]
[21:36:35] Project: 2665 (Run 3, Clone 254, Gen 52)
[21:36:35]
[21:36:40] Entering M.D.
[21:36:45] Rejecting checkpoint
[21:36:47] Protein: HGG in water
[21:36:47] Writing local files
[21:37:00] Extra SSE boost OK.
[21:37:00] Writing local files
[21:37:01] Completed 0 out of 250000 steps (0 percent)
63 minutes:
Code:
[09:45:19] Writing local files
[09:45:19] Completed 250000 out of 250000 steps (100 percent)
[09:45:19] Writing final coordinates.
[09:45:20] Past main M.D. loop
[09:45:20] Will end MPI now
[09:46:21]
[09:46:21] Finished Work Unit:
[09:46:21] - Reading up to 21310704 from "work/wudata_06.arc": Read 21310704
[09:46:21] - Reading up to 555472 from "work/wudata_06.xtc": Read 555472
[09:46:21] goefile size: 0
[09:46:21] logfile size: 212443
[09:46:21] Leaving Run
[09:46:22] - Writing 22084991 bytes of core data to disk...
[09:46:22] ... Done.
[09:46:22] - Failed to delete work/wudata_06.sas
[09:46:22] - Failed to delete work/wudata_06.goe
[09:46:22] Warning: check for stray files
[09:46:22] - Shutting down core
[09:48:22]
[09:48:22] Folding@home Core Shutdown: FINISHED_UNIT
[09:48:22]
[09:48:22] Folding@home Core Shutdown: FINISHED_UNIT
[09:48:27] CoreStatus = 64 (100)
[09:48:27] Sending work to server
[09:48:27] + Attempting to send results
[10:41:41] + Results successfully sent
[10:41:41] Thank you for your contribution to Folding@Home.
[10:41:41] + Number of Units Completed: 48
[10:43:44] - Preparing to get new work unit...
[10:43:44] + Attempting to get work packet
[10:43:44] - Connecting to assignment server
[10:43:44] - Successful: assigned to (171.64.65.64).
[10:43:44] + News From Folding@Home: Welcome to Folding@Home
[10:43:45] Loaded queue successfully.
[10:47:07] + Closed connections
[10:47:07]
[10:47:07] + Processing work unit
[10:47:07] Core required: FahCore_a1.exe
[10:47:07] Core found.
[10:47:07] Working on Unit 07 [September 27 10:47:07]
[10:47:07] + Working ...
[10:47:07]
[10:47:07] *------------------------------*
[10:47:07] Folding@Home Gromacs SMP Core
[10:47:07] Version 1.74 (March 10, 2007)
[10:47:07]
[10:47:07] Preparing to commence simulation
[10:47:07] - Ensuring status. Please wait.
[10:47:15] - Starting from initial work packet
[10:47:16]
[10:47:16] Project: 2665 (Run 2, Clone 567, Gen 53)
[10:47:16]
[10:47:16] Assembly optimizations on if available.
[10:47:16] Entering M.D.
[10:47:45] al work packet
[10:47:45]
[10:47:45] Project: 2665 (Run 2, Clone 567, Gen 53)
[10:47:45]
[10:47:49] Entering M.D.
[10:47:50] ne 567, Gen 53)
[10:47:50]
[10:47:51] Entering M.D.
[10:47:59] s
[10:47:59] Writing local files
[10:47:59] osylations
[10:47:59] Writing local files
[10:48:02] Extra SSE boost OK.
[10:48:14] cal files
[10:48:14] Completed 0 out of 250000 steps (0 percent)
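For anyone who wants to measure these gaps in their own logs, here is a minimal Python sketch. It assumes the exact "Sending work to server" and "+ Results successfully sent" wording shown above and plain HH:MM:SS timestamps:
Code:
# Measure minutes between "Sending work to server" and "+ Results
# successfully sent" in a FAH log. Crude: assumes the lines come in pairs.
import re
from datetime import datetime, timedelta

def upload_minutes(log_text):
    stamps = re.findall(
        r"\[(\d{2}:\d{2}:\d{2})\] (?:Sending work to server"
        r"|\+ Results successfully sent)", log_text)
    out = []
    for start, end in zip(stamps[0::2], stamps[1::2]):
        d = datetime.strptime(end, "%H:%M:%S") - datetime.strptime(start, "%H:%M:%S")
        if d < timedelta(0):      # rough guard for transfers spanning midnight
            d += timedelta(days=1)
        out.append(d.total_seconds() / 60)
    return out

sample = """[09:13:48] Sending work to server
[10:02:56] + Results successfully sent"""
print(upload_minutes(sample))  # [49.13...] -> about 49 minutes uploading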
And it has been a long time since I had a uniprocessor client EUE or crash.
SMP "long 1-4 interactions" crash at 91% complete:
Code:
[07:07:00] Completed 222500 out of 250000 steps (89 percent)
[07:28:34] Writing local files
[07:28:34] Completed 225000 out of 250000 steps (90 percent)
[07:50:08] Writing local files
[07:50:08] Completed 227500 out of 250000 steps (91 percent)
[08:10:17] Warning: long 1-4 interactions
[08:10:17] Gromacs cannot continue further.
[08:10:17] Going to send back what have done.
[08:10:17] logfile size: 189972
[08:10:17] - Writing 190508 bytes of core data to disk...
[08:10:17] ... Done.
[08:10:17] - Failed to delete work/wudata_04.sas
[08:10:17] - Failed to delete work/wudata_04.goe
[08:10:17] Warning: check for stray files
[08:12:18]
[08:12:18] Folding@home Core Shutdown: EARLY_UNIT_END
[08:12:18]
[08:12:18] Folding@home Core Shutdown: EARLY_UNIT_END
[08:12:21] CoreStatus = 7B (123)
[08:12:21] Client-core communications error: ERROR 0x7b
[08:12:21] Deleting current work unit & continuing...
[08:14:23] - Preparing to get new work unit...
[08:14:23] + Attempting to get work packet
[08:14:23] - Connecting to assignment server
This is not meant as a criticism of the SMP client, but it is still beta and sometimes the uniprocessor tortoises working together can cross the finish line before the SMP hare.
Re: Uniprocessor client or SMP?
It is entirely up to each of us whether to push the development of the SMP client or to sit back and wait for it to mature. With a uniprocessor machine I cannot support this development; with my quad core I can. This is why I choose the SMP client over the uniprocessor client. The SMP client then gets different work assigned, and I also choose to receive the largest work units possible, but even then the up- & download takes no more than 30 minutes (which suggests your hour-long transfers are a problem on the server side rather than in the client).
The points we get only help us decide whether a specific machine is doing more or less work than before. From what I have read on the forum so far, GPU clients do a lot more work but get fewer points for it. I would not be surprised if an overclocked Intel Core 2 Quad could beat a low-end GPU in terms of points even when it does no more scientific work than the GPU.
I have to ask, can one actually compare the SMP core and its work units with the uniprocessor core just in the way you did? My SMP client is currently getting work for project 2665, which studies the influenza virus, and I am pretty happy to see that it is doing something I can relate to.
And, btw, you are using an older version of the SMP Gromacs core. Mine shows a "Version 1.76 (February 23, 2008)".
- Posts: 43
- Joined: Sun Dec 02, 2007 5:28 am
- Location: Vegas Baby! Yeah!
Re: Uniprocessor client or SMP?
sdack wrote: I have to ask, can one actually compare the SMP core and its work units with the uniprocessor core just in the way you did?
Why not? One reason we assign a value to anything is so we can then compare it to other things. Stanford sets the points based on something. If they wanted more of their contributors to run client x, all they would have to do is change client x to be a higher point producer and watch us flock to it.
sdack wrote: And, btw, you are using an older version of the SMP Gromacs core. Mine shows a "Version 1.76 (February 23, 2008)".
I am running six CPU SMP machines right now. The samples came from an HP laptop: Core2Duo T9300, 4GB, Vista Home Premium, "Version 1.74 (March 10, 2007)".
Re: Uniprocessor client or SMP?
Sahkuhnder wrote: Why not?
It was not a rhetorical question. You do need to ask whether the points of different clients can be compared before you compare them, as well as how to interpret the result of your comparison. If you do not understand what this means, then do not worry. It is not that important. Just ask yourself why they develop an SMP client when it is obvious [to you] that multiple uniprocessor clients do more scientific work.
- Posts: 43
- Joined: Sun Dec 02, 2007 5:28 am
- Location: Vegas Baby! Yeah!
Re: Uniprocessor client or SMP?
sdack wrote:
Sahkuhnder wrote: Why not?
It was not a rhetorical question. You do need to ask whether the points of different clients can be compared before you compare them...
Stanford made the points the same for all of them. There are no "SMP points" or "uniprocessor points". Stanford set up the system so that the output of all the various clients can be graded and compared on a standardized scale. That scale is the points we receive. If Stanford did not want us to compare the output of client x vs. client y, then why rate the output of both clients by the same ruler?
sdack wrote: Just ask yourself why they develop an SMP client when it is obvious [to you] that multiple uniprocessor clients do more scientific work.
I never said it was obvious that multiple uniprocessor clients did more scientific work. If you feel otherwise, please provide a link to my statement.
I also do not believe this to be true, or I would not be running SMP clients. The OP's question was about which client to choose. All I pointed out was that the value of the scientific output, as judged by Stanford in the form of points, is, in the real world of download times and EUEs, IMHO frequently not that much different.
As I understand it, for F@H different clients can do, and excel at doing, different types of work. There is a reason each client has WUs specifically meant for processing only on that client. There is no universal list of Work Units sent out to all contributors regardless of what hardware they use (as some Distributed Computing systems do). Thus it is inevitable that the value of the processed work of the various clients gets compared. Stanford set up a cross-client points system that, to me, seems to place just such a value. Hence my earlier statement: "More points = a greater contribution."
Re: Uniprocessor client or SMP?
Sahkuhnder wrote: The OP's question was about which client to choose. All I pointed out was that the value of the scientific output, as judged by Stanford in the form of points, is, in the real world of download times and EUEs, IMHO frequently not that much different.
But how is this going to help the OP make a decision between the uniprocessor client and the SMP client? I think the initial question already shows some confusion, and the more you point out how equal both clients are, the more you will find yourself without an answer to the OP's question. If the SMP client truly had been designed for SMP systems, then no one would ever consider using uniprocessor clients on an SMP machine. However, there is a belief that multiple uniprocessor clients can do a better job than the SMP client. As already mentioned, this defeats the purpose of the SMP client (including all the development work done by the Pande Group on the client and the server side).
Sahkuhnder wrote: As I understand it, for F@H different clients can do, and excel at doing, different types of work. There is a reason each client has WUs specifically meant for processing only on that client. There is no universal list of Work Units sent out to all contributors regardless of what hardware they use (as some Distributed Computing systems do). Thus it is inevitable that the value of the processed work of the various clients gets compared. Stanford set up a cross-client points system that, to me, seems to place just such a value. Hence my earlier statement: "More points = a greater contribution."
You actually do know that the SMP client is treated a bit differently from the uniprocessor client. SMP clients get larger work units, as those machines often have more memory available and are also the machines with longer uptimes. The last point (longer uptimes) is getting weaker with more dual- and quad-core CPUs showing up on the desktop, but it was one of the initial reasons why SMP clients get larger work units. If you then take a look at the high-score tables, you will see that people with similar scores can have great differences in the number of work units done. You cannot deduce the number of work units completed from a donor's score, nor can you do the opposite calculation. Further, knowing the score and the number of work units, you still cannot say what type of hardware was used, nor the days spent in total, nor the type of client software used. The truth is that all that can be said is: more points = greater contribution.
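For example, two donors can post the same score from very different numbers of work units. A minimal sketch, with invented point values:
Code:
# Equal scores, very different work unit counts: you cannot deduce one
# from the other. Both point values here are invented.
donor_a = 100 * 100   # 100 small uniprocessor WUs at 100 points each
donor_b = 6 * 1667    # 6 large SMP WUs at 1667 points each
print(donor_a, donor_b)  # ~10000 points either way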
To help the OP, and to help the software development of the clients, I suggest using the SMP client. SMP systems will dominate in the future and need the support.
From memory, I recall that uniprocessor clients get a few more points relative to their work than SMP clients do, so that people with only a uniprocessor machine do not fall too quickly behind those with SMP machines in terms of points. The same downsizing has been done for users of the GPU client: the GPUs also do more work than they receive points for. This is why some people use uniprocessor clients on their SMP machines - to get more points rather than to do more scientific work. They are all in it for The Big Race. God bless them.