
PPD Question and o\c

Posted: Thu Apr 07, 2011 9:06 pm
by datahelp
I am building a new PC at the moment and will be running it 24/7.

2x GTX 560 Ti Twin Frozr
CPU: i7-960 @ 3.2 GHz

What points per day will I get with this setup?

Would I be able to run bigadv units and finish them in time to receive the bonus, without overclocking my CPU?

Thanks

Datahelp

Re: PPD Question and o\c

Posted: Fri Apr 08, 2011 2:28 am
by patonb
Yeah, you should be able to. Unless the 960 is really cheap, don't get it... get a 2600K instead.

You'll probably do about 33k on bigadv, and I think 12k on the 560.

Re: PPD Question and o\c

Posted: Fri Apr 08, 2011 6:46 pm
by RoomateoYo
I have a 2600K and a GTX 570. I'd suggest dropping the GPU folding idea and spending your folding budget on the fastest bigadv-folding CPU you can get.

Re: PPD Question and o\c

Posted: Fri Apr 08, 2011 8:16 pm
by GreyWhiskers
@datahelp:

As I sit right now with my Sandy Bridge system (see sig), the i7-2600K (3.90 GHz) SMP working on a P6900 -bigadv WU is predicted at 31,586 ppd, and the GTX 560 Ti (950 MHz) is predicted for 14,203 ppd, or 45,790 between them. The entire system is pulling a steady 288 watts as measured by the CyberPower UPS, less the monitor, which is on a different UPS.

So the SMP client gets more than twice the PPD of the GPU. Is the extra 14K ppd worth it? That's up to you.

Another tidbit. There is a bit of psychology here too. Since the -bigadv WUs are so huge, you don't see any results for 2.5 days, then a big impulse. Prior to getting the Sandy Bridge system, my "big folder" was an old HP unicore desktop with an ATI HD4670 AGP GPU card. That gave me ~1,200 ppd, and completed the only kind of WU that was served, 511 points, in about 10 hours. With the GTX560, it completes a 1,348 point WU in less than 2.5 HOURS. When I look at my stats in Kakao or EOC, there is progress every update.
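
As a quick sanity check on those GPU numbers, here is a rough bit of arithmetic (as I understand it, GPU WUs in the v6 client earn flat credit, with no quick-return bonus):

Code: Select all

# 1,348-point GPU WUs, completed back to back around the clock.
credit = 1348.0
for hours_per_wu in (2.5, 2.27):   # "less than 2.5 hours"; ~2.27 h is the steady rate noted later in the thread
    print("%.2f h/WU -> %5.0f PPD" % (hours_per_wu, credit * 24.0 / hours_per_wu))

At ~2.27 hours per WU that works out to roughly 14,000 ppd, which lines up with the prediction above.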

I haven't cranked up the CPU clock beyond what Digital Storm did in the factory, so there is lots of headroom for faster SMP performance. I have a little headroom for the GTX560.

I've been playing with the SMP configuration. When I had it at -SMP 8, all cores/threads, I would get some interference with other things running, and saw 29,700 ppd for my latest -bigadv WU. I have it set now at -SMP 7, with the Core a5 process (using the Windows 7 task manager) locked to 7 specific cores/threads, and the priority of the a5 core set to High. That seems to pretty well lock the a5 into those threads, leaving the last for servicing the GPU (minimal), and whatever else is going on. I did see the SMP PPD prediction in HFM go up to what I see here as a result.
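
If you'd rather script those steps than repeat them in Task Manager for every WU, here is a rough sketch in Python. It assumes Windows, the third-party psutil package, and that the "core a5" process shows up as FahCore_a5.exe; it only reproduces the manual affinity/priority settings described above, so the same caveats about running at High priority apply.

Code: Select all

import psutil  # third-party package; not part of a stock Python install

# Pin the bigadv core to logical CPUs 0-6 and raise its priority, mirroring
# the Task Manager steps described above. "FahCore_a5.exe" is an assumption
# about how the core a5 process is named on this system; HIGH_PRIORITY_CLASS
# is Windows-only.
for proc in psutil.process_iter(["name"]):
    name = (proc.info["name"] or "").lower()
    if name.startswith("fahcore_a5"):
        proc.cpu_affinity([0, 1, 2, 3, 4, 5, 6])   # leave CPU 7 for the GPU client and everything else
        proc.nice(psutil.HIGH_PRIORITY_CLASS)      # Windows "High" priority class
        print("Pinned PID %d to CPUs 0-6 at High priority" % proc.pid)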

This doesn't seem set and forget, though. If I leave an SMP client at -SMP 7, and if one of the P101xx WUs gets assigned, it will EUE and dump the WU, not good for the program. Since the P6900 type WUs take about 2.5 days to complete, I have tried setting the -SMP 7 when it starts, and then I will set back to -SMP 8 before it terminates. That may give me a little more productivity. I will, I'm sure, grow tired of that kind of management, though.

Re: PPD Question and o\c

Posted: Fri Apr 08, 2011 10:00 pm
by datahelp
Thank you for taking the time to answer my question; it has been a great help.

Can I get the i7-2600K onto a 1366 motherboard? I cannot seem to find one anywhere. Or will I have to wait?

Many thanks

Datahelp

Re: PPD Question and o\c

Posted: Fri Apr 08, 2011 10:22 pm
by kiore
datahelp wrote:Thank you for taking the time to answer my question; it has been a great help.

Can I get the i7-2600K onto a 1366 motherboard? I cannot seem to find one anywhere. Or will I have to wait?

Many thanks

Datahelp

No, wrong socket for an i7-2600, sorry. The other issue that no one has mentioned is your internet connectivity: bigadv units need a good connection, and that is why I stopped doing them, as they took so long to upload/download, while the GPUs don't have this issue. I use a 3G connection (the best available here), and any interruptions in the connection caused massive delays for bigadv units. If you have a fast connection, go for it; for those with erratic connections, the GPU route is best. Your 1366 mobo should fit the new Intel i7s that are due out next quarter (probably).

Re: PPD Question and o\c

Posted: Fri Apr 08, 2011 11:25 pm
by Grandpa_01
If you are building for PPD and nothing else, I would do as RoomateoYo suggested and get the fastest, most powerful CPU: get a 970 for around $600.00 US, OC it to 4.3 GHz, and get around 65,000 PPD. You are not going to do that on a Sandy Bridge or anything else for the price. :ewink:

Re: PPD Question and o\c

Posted: Sat Apr 09, 2011 12:18 am
by bruce
GreyWhiskers wrote:I've been playing with the SMP configuration. When I had it at -SMP 8, all cores/threads, I would get some interference with other things running, and saw 29,700 ppd for my latest -bigadv WU. I have it set now at -SMP 7, with the Core a5 process (using the Windows 7 task manager) locked to 7 specific cores/threads, and the priority of the a5 core set to High. That seems to pretty well lock the a5 into those threads, leaving the last for servicing the GPU (minimal), and whatever else is going on. I did see the SMP PPD prediction in HFM go up to what I see here as a result.

This doesn't seem set and forget, though. If I leave an SMP client at -SMP 7, and if one of the P101xx WUs gets assigned, it will EUE and dump the WU, not good for the program. Since the P6900 type WUs take about 2.5 days to complete, I have tried setting the -SMP 7 when it starts, and then I will set back to -SMP 8 before it terminates. That may give me a little more productivity. I will, I'm sure, grow tired of that kind of management, though.
You don't mention priority=High in the same sentence as -smp 8 but it's not clear to me whether you were doing both at the same time. If you did, you would certainly see interference with other things running.

FAH is designed to run in the background with a priority of either IDLE or LOW. How about giving us a comparison of what you see when you use FAH the way it was designed to run: With -smp (8) and with it at either IDLE or LOW priority. That would not be the same as either running -smp 7 or running at HIGH, and it might just be better than the choices that you're advocating. Without an objective test, I can only guess.

By moving the SMP cores from IDLE/LOW to HIGH, you're intentionally delaying other applications in favor of FAH. Which do you want: The ability of your browser to interrupt FAH so it is responsive, or a browser which is set up to be delayed until FAH decides to yield some resources? [Replace "browser" with whatever else you were running that was seeing the interference.]

Re: PPD Question and o\c

Posted: Sat Apr 09, 2011 1:00 am
by GreyWhiskers
@bruce. Good comments. I appreciate them greatly.

I have not moved the cores to HIGH within the client. I have the two clients (v6) running at SMP: Idle, GPU: Low, as recommended.

Since the most recent GPU clients for the Fermi class Nvidia GPUs, the actual CPU usage by the GPU client is minuscule. The "Low" ensures that when the GPU needs cpu cycles, it will get them. This contrasts with my use of an ATI HD4670, which routinely uses 20-30% of a CPU core/thread.

My comments related to working within Windows Task Manager and the Windows priorities.

I ran the system through three -bigadv WUs with -SMP 8, and noted that the TPF was higher than on the first two -bigadv WUs, where I had set -SMP 7.

With the SMP client set to -SMP 7, and not locking the cores in Windows, one could see that none of the cores were maxed out. There was usually one lower than the others, but all 8 cores/threads on the i7 were "busy".

When I locked the core a5 process to seven specific cores and made one core unavailable, I saw the use of those 7 cores go to 100%, and minimal activity on the eighth. I am not writing this post on the Sandy system, but I did other posts earlier while the -SMP 7 was in effect, and the browser didn't seem to lock up. It seemed one core/thread was still available.

As I said earlier, I will probably revert to -SMP 8 and remove any Windows priority or core locks as being too burdensome to continually manage. I was looking at the TPF as the main measure for two reasons: first, a faster turnaround time gets the completed science back into the hands of the researchers sooner, and second, faster return == bigger bonus.

The difference is noticeable, but not worth the effort to manage. The fastest whole-WU TPF on a P6900/6901 I saw with -SMP 7 was 33:33; the slowest TPF with -SMP 8 was 35:18. It isn't rock-solid consistent the way I see with the GPU WUs: e.g., when the virus checker decides it's time to do its thing, the SMP takes a hit, not the GPU. I just glanced at HFM, and the instantaneous SMP TPF is 34:11, with -SMP 7 and the cores locked in Windows.
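
For anyone wondering how those TPF differences turn into the PPD swings quoted above, here is a rough sketch of the bonus arithmetic. The 8,955 base credit is the P6901 figure quoted further down in this thread; the k-factor and deadline used here are illustrative assumptions, not official project values.

Code: Select all

import math

def bigadv_ppd(tpf_minutes, base_credit=8955.0, k=26.4, deadline_days=6.0):
    # Quick-return bonus: credit scales with sqrt(k * deadline / days_taken),
    # and PPD is that credit divided by the days the WU takes (100 frames/WU).
    days = tpf_minutes * 100.0 / (60.0 * 24.0)
    credit = base_credit * max(1.0, math.sqrt(k * deadline_days / days))
    return credit / days

print(round(bigadv_ppd(33 + 33 / 60.0)))   # fastest -SMP 7 TPF, 33:33
print(round(bigadv_ppd(35 + 18 / 60.0)))   # slowest -SMP 8 TPF, 35:18

Because the bonus grows with the square root of how far inside the deadline the WU comes back, a couple of minutes of TPF is worth a couple of thousand PPD, which is roughly the spread described above.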

Re: PPD Question and o\c

Posted: Sat Apr 09, 2011 3:03 am
by ChasR
GreyWhiskers wrote:@datahelp:

As I sit right now with my Sandy Bridge system (see sig), the i7-2600K (3.90 GHz) SMP working on a P6900 -bigadv WU is predicted at 31,586 ppd, and the GTX 560 Ti (950 MHz) is predicted for 14,203 ppd, or 45,790 between them. The entire system is pulling a steady 288 watts as measured by the CyberPower UPS, less the monitor, which is on a different UPS.

So the SMP client gets more than twice the PPD of the GPU. Is the extra 14K ppd worth it? That's up to you.
What does the 2600K make without the gpu running? Are you really making 14,000 ppd extra by running the gpu?

In my testing, running the gpu client slowed the smp client dramatically. Running -smp 7, I did make a net gain of about 5000 ppd with a GTX295, the GPUs making about 17,000 ppd, but reducing SMP production by 12,000 ppd. 5000 ppd at a cost of 300 watts just wasn't worth it to me, so I moved the 295 to another machine.

Here's a sample of what the 2600K is capable of without a gpu:

Code: Select all

 Project ID: 6901
 Core: GRO-A5
 Credit: 8955
 Frames: 100



 Name: HTPC 10.10 (native)
 Path: \\HTPC\fah\
 Number of Frames Observed: 60

 Min. Time / Frame : 00:22:01 - 59,616 PPD
 Avg. Time / Frame : 00:22:02 - 59,548 PPD


 Name: HT-PC SMP (windows client)
 Path: \\Tmp-pc\fah\SMP\
 Number of Frames Observed: 300

 Min. Time / Frame : 00:24:27 - 50,941 PPD
 Avg. Time / Frame : 00:24:44 - 50,068 PPD


 Name: HTPC VM (10.10 guest)
 Path: \\HTPC-UBUNTU\fah\
 Number of Frames Observed: 300

 Min. Time / Frame : 00:21:49 - 60,437 PPD
 Avg. Time / Frame : 00:22:56 - 56,077 PPD
In my opinion, the money is better spent on the 2600K and the motherboard to run it than on the GPU.

Re: PPD Question and o\c

Posted: Sun Apr 10, 2011 11:54 pm
by GreyWhiskers
@ChasR: Thanks for your insights. I experimented with my system, still at the CPU factory O/C of 3.90 GHz. I plan to run the O/C up to about 4.5 or 4.7 as I get time. I believe that my Test 2 condition below would come close to what you show with a 4.7 GHz clock.

I returned to the base configuration: -SMP 7 with GPU. That gives a larger total PPD, equating to more science returned, at the expense of some 135 watts to run the GPU.

Base: All running v6 clients.
SMP P6901 -bigadv -SMP 7 w/bonus - TPF: 33:29 ppd: 32,736 [will scale up with more CPU O/C]
GPU TPF: P6801 - TPF: 1:21 ppd: 14,378
Total production: 47,114 ppd
CPU Load ~~87/88%
Power: 288 watts system without monitor

Test 1 - turn off GPU folding (stay at -SMP 7)
SMP P6901 -bigadv w/bonus - TPF: 32:05 ppd: 33,739
Total production: 33,739 ppd
CPU Load ~ 87/88%
Power: 153 watts system without monitor [--> GTX 560 Ti at 950 MHz core clock consumes ~135 watts while folding]

Test 2 - turn off GPU folding; run at -SMP 8
SMP P6901 -bigadv w/bonus - TPF: 30:19 ppd: 36,438 :!: [will scale up with more CPU O/C]
Total production: 36,438 ppd
CPU Load - 100%
Power: 153-162 watts system without monitor

Test 3 - Turn on GPU folding, leaving SMP -8
SMP P6901 -bigadv w/bonus - TPF: 38:18 ppd: 26,514 :( [will scale up with more CPU O/C]
GPU TPF: P6801 - TPF: 1:21 ppd: 14,378
Total production: 40,892 ppd
CPU Load 100%
Power: 288 watts system without monitor
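
As a rough efficiency comparison using the numbers above (taking the low end of the Test 2 power reading):

Code: Select all

# PPD per watt for the four measured configurations.
tests = [
    ("Base:   -SMP 7 + GPU", 47114, 288),
    ("Test 1: -SMP 7, no GPU", 33739, 153),
    ("Test 2: -SMP 8, no GPU", 36438, 153),
    ("Test 3: -SMP 8 + GPU", 40892, 288),
]
for name, ppd, watts in tests:
    print("%-22s %3.0f PPD/watt" % (name, ppd / float(watts)))

The CPU-only configurations are clearly more efficient per watt; whether the extra absolute PPD from the GPU is worth its ~135 watts is the trade-off described above.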

GPU client notes: V6 systray, set and forget. It plunks out a finished WU every ~2.27 hours. I did NOT set the "slightly higher" flag; the GPU doesn't need the advantage.