
Fair distribution for ppd BigAdv / Gpu

Posted: Thu Jun 30, 2011 1:02 pm
by Zarck
The computing power of the GPU is higher, so why is the number of points lower?

@+
*_*

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Thu Jun 30, 2011 1:29 pm
by k1wi
Because GPUs are very limited in what computations they can do. They can do very simple simulations very, very fast.

Also, GPUs haven't had the QRB (Quick Return Bonus) system rolled out to them yet.
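For reference, the CPU-side QRB multiplies a WU's base credit by a square-root speed bonus, as I understand the published formula. A minimal Python sketch (the point value, k factor, and deadline below are hypothetical examples, not real project parameters):

[code]
import math

def qrb_points(base_points, k_factor, deadline_days, elapsed_days):
    # Published SMP bonus: credit scales with the square root of
    # how far ahead of the final deadline the WU is returned.
    bonus = math.sqrt(k_factor * deadline_days / elapsed_days)
    return base_points * max(1.0, bonus)

# Hypothetical WU: 1000 base points, k = 2, 6-day deadline, returned in 1.5 days
print(qrb_points(1000, 2, 6, 1.5))  # ~2828 points
[/code]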

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Thu Jun 30, 2011 9:26 pm
by khgsw
I have 6 Nvidia GPUs folding, so should I switch over to CPU and BigAdv instead of using GPUs?
I wonder because that is the message I get when looking at the PPD produced by different systems.
The energy consumption will also be lower if I stop using GPUs.

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Thu Jun 30, 2011 10:48 pm
by bruce
Welcome to the foldingforum, khgsw.

A lot depends on your hardware. The newer GPUs do produce nice PPD, but if you have hardware that is capable of running bigadv WUs (8 or more CPU cores, as reported by your OS, plus plenty of RAM), that choice is certainly recommended. As far as "switching over", that also depends on your hardware. Many people find that they can successfully run both for a total PPD exceeding either one alone. It does take some experimentation to find the balance that is optimal for your system(s).

You can save power by reducing your GPU folding, but you can also reduce power somewhat by removing your overclocking settings.

Without detailed information about your system(s) I can only speak in generalities. If you describe your hardware, you'll probably get more specific responses from people with similar systems.
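If it helps, the core-count half of that check is trivial to script. A minimal sketch (the 8 GB RAM floor is my own assumption, since "plenty of RAM" isn't an official number):

[code]
import multiprocessing

MIN_CORES = 8    # the stated bigadv threshold: 8+ cores as reported by the OS
MIN_RAM_GB = 8   # assumed "plenty of RAM"; not an official figure

def looks_bigadv_capable(ram_gb):
    cores = multiprocessing.cpu_count()  # logical cores, as the OS reports them
    return cores >= MIN_CORES and ram_gb >= MIN_RAM_GB

print(looks_bigadv_capable(ram_gb=12))
[/code]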

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Fri Jul 01, 2011 2:05 am
by Grandpa_01
I think it still holds true that bigadv alone will produce more PPD than bigadv plus GPU folding if you follow Stanford's guidelines of not using -smp 7. If you follow the guidelines, then to fold bigadv and GPU together wouldn't you need to run -smp 6, or are there some newer nVidia cards that will not affect -smp 8 bigadv adversely?

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Fri Jul 01, 2011 4:02 am
by GreyWhiskers
I tried several configurations back in April to see what the combined effects would be. This set of trials didn't include -SMP 6.

Conclusions:

The best overall production was with one GPU slot and -bigadv SMP at -SMP 7; that combination hasn't been an issue for the last couple of months. When we needed to switch off of -bigadv a few weeks ago, while the servers were down, I did switch to -SMP 8 to ensure that none of the problem WUs would be EUEed.

The highest SMP only config was (no surprise here) -SMP 8.

The -SMP 8 along with the GPU gave lower total PPD than -SMP 7 with the GPU, but it was STILL considerably more PPD than SMP/bigadv alone.

BTW, after having the Sandy Bridge system online for three months, I finally decided to try out the next level of CPU overclock. I'm going to try taking the factory 3.9 GHz to 4.6 GHz and see how that affects the overall production. I'll retry these configs over the next couple of weeks to see what those numbers come out to be.
GreyWhiskers wrote: my Sandy Bridge system (see sig) has an i7 2600k (3.90 GHz) and the GTX 560 Ti (950 MHz). The entire system is pulling a steady 288 watts as measured by the CyberPower UPS, less the monitor, which is on a different UPS.

I returned to the base: -SMP 7 with GPU. That gives a larger total PPD, equating to more science returned, at the expense of some 135 watts to run the GPU. This doesn't seem set-and-forget, though. If I leave an SMP client at -SMP 7 and one of the P101xx WUs gets assigned, it will EUE and dump the WU, which is not good for the program. Since the P6900-type WUs take about 2.5 days to complete, one could set -SMP 7 when a WU starts and switch back to -SMP 8 before it terminates. That may give a little more productivity, while making sure we don't EUE any of the other SMP projects that may be sent.

Base: All running v6 clients.
SMP P6901 -bigadv -SMP 7 w/bonus - TPF: 33:29 ppd: 32,736 [will scale up with more CPU O/C]
GPU TPF: P6801 - TPF: 1:21 ppd: 14,378
Total production: 47,114 ppd
CPU Load ~~87/88%
Power: 288 watts system without monitor

Test 1 - turn off GPU folding (stay at -SMP 7)
SMP P6901 -bigadv w/bonus - TPF: 32:05 ppd: 33,739
Total production: 33,739 ppd
CPU Load ~ 87/88%
Power: 153 watts system without monitor [--> GTX 560 Ti at 950 MHz core clock consumes ~135 watts while folding]

Test 2 - turn off GPU folding; run at -SMP 8
SMP P6901 -bigadv w/bonus - TPF: 30:19 ppd: 36,438 :!: [will scale up with more CPU O/C]
Total production: 36,438 ppd
CPU Load - 100%
Power: 153-162 watts system without monitor

Test 3 - Turn on GPU folding, leaving SMP -8
SMP P6901 -bigadv w/bonus - TPF: 38:18 ppd: 26,514 :( [will scale up with more CPU O/C]
GPU TPF: P6801 - TPF: 1:21 ppd: 14,378
Total production: 40,892 ppd
CPU Load 100%
Power: 288 watts system without monitor

GPU client notes: v6 systray client, set and forget. Plunks out a finished WU every ~2.27 hours. Did NOT set the "slightly higher" flag - the GPU doesn't need the advantage.
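For anyone who wants to check the arithmetic: assuming the usual 100 frames per WU, PPD follows directly from TPF and per-WU credit. A small sketch that inverts the base-config numbers above to get the implied bonus-included credit per WU:

[code]
FRAMES = 100            # standard frame count per WU
SECONDS_PER_DAY = 86400

def tpf_seconds(tpf):
    m, s = tpf.split(":")
    return int(m) * 60 + int(s)

def implied_credit(tpf, ppd):
    # WUs/day = 86400 / (TPF_seconds * frames); credit = PPD / (WUs/day)
    wu_seconds = tpf_seconds(tpf) * FRAMES
    wus_per_day = SECONDS_PER_DAY / wu_seconds
    return ppd / wus_per_day

# P6901 at -SMP 7 (base config above): TPF 33:29 at 32,736 PPD
print(round(implied_credit("33:29", 32736)))  # ~76,000 bonus-included points/WU
[/code]

The same arithmetic checks out for the GPU client: TPF 1:21 times 100 frames is 2.25 hours per WU, matching the ~2.27 hours noted above.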

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Sat Jul 02, 2011 4:39 pm
by Jester
Running a GPU alongside bigadv SMP gets progressively worse as the host CPU gets faster or its core count increases,
mainly due to the bigadv bonus multiplier.....
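To put rough numbers on that: assuming the published square-root QRB, credit scales as sqrt(1/T) and throughput as 1/T, so PPD scales as T^-1.5. A toy calculation (the 10% CPU-time cost of feeding a GPU is purely illustrative):

[code]
def relative_ppd(wu_days):
    # With the QRB, credit ~ sqrt(1/T) and WUs/day ~ 1/T, so PPD ~ T**-1.5
    return wu_days ** -1.5

for label, days in [("slow host", 3.0), ("fast host", 1.5)]:
    base = relative_ppd(days)
    slowed = relative_ppd(days * 1.10)  # assume feeding the GPU costs ~10% CPU time
    print(f"{label}: {base - slowed:.3f} of {base:.3f} relative PPD lost "
          f"({1 - slowed / base:.1%})")
[/code]

The fractional loss is the same (~13%) on both hosts, but the absolute PPD given up is several times larger on the fast bigadv rig, while the GPU's own contribution stays fixed - hence the worsening trade-off.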

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Sat Jul 02, 2011 5:06 pm
by bruce
Right. The GPU client needs some CPU resources, and how much varies quite a bit depending on which GPU you have. The bottom line is whether it's better to devote those CPU resources to supplying data to the GPU or to helping with the SMP WU. Books could be written on that subject, but in the final analysis it depends on your hardware, so the only "best" answer is whatever you find works best on your system.

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Sat Jul 02, 2011 5:16 pm
by Jester
Or, on the other hand, if you're really all out to return the bigadv WUs as fast as possible from the science viewpoint,
why slow them down by running a GPU as well...... :ewink:

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Sat Jul 02, 2011 5:45 pm
by bruce
There's always a trade-off. Some GPUs use very, very little CPU, especially when it's a HyperThreaded virtual CPU. The heavy FP work is done by the GPU, so it doesn't compete for the FPU, which is busy doing SMP work. If the amount it slows down SMP is insignificant, then getting more work done might be the right answer. YMMV.

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Sat Jul 02, 2011 11:08 pm
by VijayPande
We have been considering QRB for GPUs, which should finish the rebalancing we've had in mind. There are some issues to work out, though, which is why we haven't made that change yet.

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Sun Jul 03, 2011 1:49 am
by Leonardo
khgsw wrote: I have 6 Nvidia GPUs folding, so should I switch over to CPU and BigAdv instead of using GPUs? ... The energy consumption will also be lower if I stop using GPUs.
It's really a tough call, and believe me, I've been there. For a while, I had a small farm of 9800GX2s, then I upgraded to a farm of GTX 295s. One had to be stoic and keep a stiff upper lip when receiving the monthly power bill. :shock: I sold off the GTX 295s while the used market was still strong for them, bought some lower-powered Fermi cards, and upgraded CPUs to i7 Lynnfields. This was right at the time that 8-thread and higher CPU architectures became so effective with the newer (at that time) bigadv work units. I can't claim that I timed the march of science and technology correctly, as it was mainly luck that I made my system configuration changes then. (Maybe in a few months the high-powered GPUs will again be formidable, and I no longer have any. Maybe I should....nah, my wife no longer has a minor stroke when she sees the power bill. :biggrin: )

Every time I reconfigure a system, I have to remind myself that today's technology may not be an optimum solution even just a few months in the future. It's just the nature of the game. There's no way around it. :e)

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Sun Jul 03, 2011 2:18 am
by Jester
Sounds like we have followed very similar paths in the past, Leonardo. :ewink:
I went through the QMD disaster and managed to sell off more than a few ATi cards after GPU1 was pulled.
It's a never-ending "work in progress" keeping up with what produces the best results (both science and PPD) for a given self-imposed power budget. Due to my subtropical location it's "watercool everything" for me here, and sadly I've still got a watercooled 295 and a couple of 275's that "missed the boat" on the for-sale forums.
I currently have 2 x 480's and 2 x 470's (all watercooled :roll: ) sitting idle, as my power budget made it more prudent to run an extra 970 rig, but with the new changes I'll have to get out the calculator once again and see if it's better to sell off one 970 rig and fire up the GPUs again....
For the quoted question:
The only way to know for sure which configuration is best is to run all the combinations available and measure their power usage. That way at least you'll have some numbers for all your hardware, and they can be more than handy for comparing prospective upgrades when the time comes (see the sketch below).....
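If you do run all the combinations, a scratch script keeps the comparison mechanical. As an example I've plugged in GreyWhiskers' figures from earlier in the thread (taking ~157 W as the midpoint of his Test 2 power range); substitute your own measurements:

[code]
# PPD and measured wall power per configuration, ranked by PPD per watt
configs = {
    "-SMP 7 + GPU": (47114, 288),
    "-SMP 7 only":  (33739, 153),
    "-SMP 8 only":  (36438, 157),
    "-SMP 8 + GPU": (40892, 288),
}

for name, (ppd, watts) in sorted(configs.items(),
                                 key=lambda kv: kv[1][0] / kv[1][1],
                                 reverse=True):
    print(f"{name:14s} {ppd:6d} PPD / {watts:3d} W = {ppd / watts:6.1f} PPD/W")
[/code]

On those particular numbers, -SMP 8 alone wins on PPD per watt even though -SMP 7 plus GPU wins on raw PPD, which is exactly the kind of trade-off a power budget forces you to weigh.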

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Sun Jul 03, 2011 3:28 am
by Leonardo
I had 8 GTX 295s total folding for a few months. One day it hit me: "STOP THE MADNESS." Hey, I'm in Alaska, and even here I had to open the windows to keep the office cool enough.

Re: Fair distribution for ppd BigAdv / Gpu

Posted: Sun Jul 03, 2011 3:52 am
by Jester
Leonardo wrote: I had 8 GTX 295s total folding for a few months. One day it hit me: "STOP THE MADNESS." Hey, I'm in Alaska, and even here I had to open the windows to keep the office cool enough.
LOL,
Imagine me a while back, running 2 x 9800GX2's, 2 x 295's, and various single 9- and 200-series cards that took the GPU count to 14.....
all in the one small room.
The watercooling did its job, as the cards never missed a beat, even highly overclocked and with ambient temps in the high 30s C...
When you actually walked into the room it was another story;
never mind "perspire", it was more like slowly melting..... :lol: