
How much CPU to drive GPU

Posted: Sun Apr 05, 2020 2:40 pm
by ccgllc
Hi all -

Just converted one of my GPU based cybercurrency rigs over to FoldingAtHome.

It has (7) GPUs installed. For cybercurrency, CPU usage is minimal - I could mine Ethereum and have the 2-core system run at 10% busy.

I removed the CPU slot from config.xml and was surprised to see this system's CPU maxed out - apparently burning a core per GPU? It currently has a dual-core Pentium (3.3 GHz). What would I need to upgrade to in order to satisfy the CPU requirements for driving the GPUs?

Re: How much CPU to drive GPU

Posted: Sun Apr 05, 2020 3:17 pm
by JimboPalmer
Welcome to Folding@Home!

You do not mention what GPUs you have so I will give two answers.

For Nvidia GPUs: Nvidia implemented OpenCL via polled I/O. Each thread (F@H calls them CPUs) supports one GPU, running at 100%. Yes, it is wasteful and poor programming, but we have to live with it.
https://en.wikipedia.org/wiki/Polling_( ... r_science)

For AMD GPUs: Each GPU is served by one CPU thread as above, but because interrupts are used, there is CPU time for other tasks.
https://en.wikipedia.org/wiki/Interrupt ... chitecture)

So AMD's drivers use the CPU much less wastefully, but Nvidia does work, just less impressively.

In either case, F@H reserves one thread per GPU.
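To make the polling-vs-interrupt difference concrete, here is a minimal, hypothetical C sketch (not F@H or driver code): built with -DPOLLING it waits the way Nvidia's OpenCL runtime reportedly does, spinning on a flag at 100% CPU; by default it blocks on a condition variable the way an interrupt-driven wait behaves, at ~0% CPU.

Code: Select all
/* Polling vs. blocking wait - hypothetical illustration, not F@H code.
 * Build: gcc -pthread wait_demo.c [-DPOLLING] */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int work_done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

/* Stand-in for the GPU finishing a chunk of work. */
static void *gpu(void *arg) {
    sleep(1);                        /* pretend the kernel runs for 1 s */
    pthread_mutex_lock(&lock);
    atomic_store(&work_done, 1);
    pthread_cond_signal(&cond);      /* the "interrupt": wake the waiter */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, gpu, NULL);
#ifdef POLLING
    /* Nvidia-style wait: spin on the flag, one CPU core at 100%. */
    while (!atomic_load(&work_done))
        ;
#else
    /* Interrupt-style wait: sleep until signalled, ~0% CPU. */
    pthread_mutex_lock(&lock);
    while (!atomic_load(&work_done))
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
#endif
    pthread_join(t, NULL);
    puts("work complete");
    return 0;
}

Watch it in top: the -DPOLLING build pegs a core for the whole second; the default build sits nearly idle.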

Re: How much CPU to drive GPU

Posted: Sun Apr 05, 2020 3:26 pm
by Joe_H
How much CPU depends a bit on whether you are using nVidia or AMD cards. Both use the CPU to prepare and move data to and from the GPUs; the usual recommendation is one CPU core per GPU.

The AMD drivers are interrupt-driven, so they use the CPU only when called for and can share a CPU core across more than one GPU a bit better. nVidia has the CPU in a spin wait, so it is always active, looking for the next call to move data over the PCIe bus to or from the GPU. People have tested this; the nVidia drivers can share a core across more than one GPU, but at some performance loss.

At each checkpoint, the AMD and nVidia folding cores both will use a CPU core fully for a short period of time, performing a sanity check on the data and creating the checkpoint file. A few newer projects are using a slightly different test; those will use more than one CPU core during the sanity check.

Re: How much CPU to drive GPU

Posted: Sun Apr 05, 2020 3:49 pm
by ccgllc
Thanks, all. The system has (6) Nvidia 1070 Ti's and (1) older Radeon R9 270X.

The motherboard (ASRock H110 Pro BTC+) was designed for cybercurrency use and only supports CPUs up to 91 W - so I'm guessing an i5-9400F is going to be my best choice (65 W, 6-core, 2.9 GHz base / 4.1 GHz boost). Obviously an i7 would be nicer, but they come in at twice (or more) the price.

Thanks!

P.S. Quite familiar with polling vs. interrupt-driven I/O. It's why OS/2 was SO much better than Windows 3.1 back in the day. Just finding it hard to believe anything still uses polling...

Re: How much CPU to drive GPU

Posted: Sun Apr 05, 2020 4:13 pm
by Joe_H
The polling may just be an artifact of how nVidia implements OpenCL; from some reports, software using CUDA doesn't behave the same way. But having a single codebase for the GPU folding core, using OpenCL for both vendors, currently outweighs the possible performance gains of a separate CUDA core for nVidia.
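From the host side the difference is invisible in the code itself, which may be why it persists. Here is a minimal, hypothetical OpenCL host program (nothing like the real folding core) just to show where the wait happens; how clFinish() waits is entirely the vendor runtime's choice.

Code: Select all
/* Minimal OpenCL host sketch - hypothetical, not FahCore_22.
 * Build: gcc ocl_wait.c -lOpenCL */
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void bump(__global int *x) { x[0] += 1; }";

int main(void) {
    cl_platform_id plat; cl_device_id dev; cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, "", NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "bump", &err);

    int zero = 0;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof zero, &zero, &err);
    clSetKernelArg(k, 0, sizeof buf, &buf);

    size_t gws = 1;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &gws, NULL, 0, NULL, NULL);

    /* The wait. The API gives the runtime no hint about how to wait;
     * reportedly Nvidia's driver busy-polls here (a CPU core at 100%)
     * while AMD's sleeps until the GPU raises an interrupt. */
    clFinish(q);

    clReleaseMemObject(buf); clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}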

Re: How much CPU to drive GPU

Posted: Sun Apr 05, 2020 4:51 pm
by HaloJones
you may struggle with PCIe bandwidth to the cards. I believe Ethereum doesn't use much, but FAH tends to. I assume each of those cards is getting 1x? I'd be interested to hear what PPD those 1070 Ti's are getting

Re: How much CPU to drive GPU

Posted: Sun Apr 05, 2020 4:56 pm
by semaphore
This is an interesting read, and a quick web search shows me a lot of things Nvidia has done to solve legacy issues; I think this interrupt business is one of them (dating back to poor PCI Express chipsets around 2003, though many say those problems were solved by 2006+ and there is no longer any need for active polling).

It's been a really long while since I wrote code, and clearly I am in over my head (technically speaking) so many years later, but something I always tried to avoid was polling. Heck, even my nickname here is taken from "Linux semaphores", which let me have code waiting at 0% CPU usage until, for instance, a network packet came in that needed acting on.
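For anyone who hasn't met them, here is a hypothetical C sketch of that pattern with a POSIX semaphore; the sem_wait() call sleeps at ~0% CPU until another thread posts.

Code: Select all
/* Semaphore wait sketch - hypothetical illustration.
 * Build: gcc -pthread sem_demo.c */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t packet_ready;

/* Stand-in for a driver that signals when a network packet arrives. */
static void *network(void *arg) {
    sleep(2);                  /* pretend a packet shows up after 2 s */
    sem_post(&packet_ready);
    return NULL;
}

int main(void) {
    pthread_t t;
    sem_init(&packet_ready, 0, 0);   /* thread-shared, initially 0 */
    pthread_create(&t, NULL, network, NULL);

    sem_wait(&packet_ready);   /* blocks at ~0% CPU until sem_post */
    puts("packet received, acting on it");

    pthread_join(t, NULL);
    sem_destroy(&packet_ready);
    return 0;
}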

So this all raises the question of WHY Nvidia seems to have chosen not to support a more modern approach (like enabling Message Signaled Interrupts which, if I understand this correctly, removes the need to flush CPU caches periodically, letting the code just wait...).
Is Nvidia's path about stability?


I totally agree that having a single codebase is better for FAH, but this is all mind-boggling when I think about all those Nvidia rigs that could give a couple of percent more... which on this scale of distributed computing would result in a lot of work done.

Re: How much CPU to drive GPU

Posted: Sun Apr 05, 2020 5:08 pm
by JimboPalmer
semaphore wrote:So this all raises the question of WHY Nvidia seems to have chosen not to support a more modern approach (like enabling Message Signaled Interrupts which, if I understand this correctly, removes the need to flush CPU caches periodically, letting the code just wait...).
Is Nvidia's path about stability?

I totally agree that having a single codebase is better for FAH, but this is all mind-boggling when I think about all those Nvidia rigs that could give a couple of percent more... which on this scale of distributed computing would result in a lot of work done.
This is my cynical answer: Nvidia has a GPU math API called CUDA; if Nvidia can convince you to use CUDA, you are locked into Nvidia's hardware.
Nvidia also supports an open-standards API for GPU math, OpenCL. Since it does not lock you into Nvidia hardware, there is no incentive to make it as powerful as CUDA. Polled I/O is one way to make CUDA look better to researchers.

Or maybe they just assigned poor programmers to OpenCL.

https://en.wikipedia.org/wiki/CUDA
https://en.wikipedia.org/wiki/OpenCL
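
As a hedged footnote to my own cynicism: CUDA does expose the choice. A hypothetical host-side sketch (plain C against the CUDA runtime API; I know of no equivalent knob in Nvidia's OpenCL stack):

Code: Select all
/* CUDA blocking-sync sketch - hypothetical, not F@H code.
 * Build: gcc cuda_wait.c -lcudart (with the CUDA toolkit installed) */
#include <cuda_runtime_api.h>
#include <stdio.h>

int main(void) {
    /* Must be set before the context exists: ask the runtime to
     * sleep, not spin, whenever the host waits on the GPU. */
    cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);

    int *p;
    cudaMalloc((void **)&p, sizeof(int));
    cudaMemset(p, 0, sizeof(int));

    /* With the flag above this wait blocks at ~0% CPU; with the
     * default (cudaDeviceScheduleAuto) it may spin. */
    cudaDeviceSynchronize();

    cudaFree(p);
    puts("done");
    return 0;
}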

Re: How much CPU to drive GPU

Posted: Sun Apr 05, 2020 11:34 pm
by ccgllc
HaloJones wrote:you may struggle with PCIe bandwidth to the cards. I believe Ethereum doesn't use much, but FAH tends to. I assume each of those cards is getting 1x? I'd be interested to hear what PPD those 1070 Ti's are getting
So, (6) 1070 Ti GPUs, each connected to the motherboard by a 1x riser, and a dual-core Pentium (the FahCore_22 processes are sharing the CPU):

Drawing about 650 W for the entire system at the wall - lower than expected.
After running for 20 minutes, it's reporting 481K PPD. Thought I was supposed to get something like 600K PPD per card?!? The 481K figure is from the web interface.

Re: How much CPU to drive GPU

Posted: Sun Apr 05, 2020 11:47 pm
by toTOW
Do you have a passkey set? Check the FAQ here: https://foldingathome.org/support/faq/points/passkey/

Re: How much CPU to drive GPU

Posted: Sun Apr 05, 2020 11:49 pm
by Rel25917
Do all 6 cards actually have a WU?

Re: How much CPU to drive GPU

Posted: Sun Apr 05, 2020 11:56 pm
by ccgllc
toTOW wrote:Do you have a passkey set? Check the FAQ here: https://foldingathome.org/support/faq/points/passkey/
Yes, passkey is set.

Re: How much CPU to drive GPU

Posted: Mon Apr 06, 2020 12:00 am
by ccgllc
Rel25917 wrote:Do all 6 cards actually have a WU?
Yes.

Re: How much CPU to drive GPU

Posted: Mon Apr 06, 2020 2:35 am
by PantherX
I would check the following:
CPU utilization
GPU utilization
GPU temperature

I have a feeling that you're hitting the limitation of PCIe 1x, which is causing your performance to be throttled. If your system isn't well ventilated, your GPUs might be reducing their clock speed to get the temperature to an acceptable level.
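
If you want numbers rather than a feeling, here is a hypothetical C sketch using Nvidia's NVML library (the same counters nvidia-smi reads); it assumes the NVML header and libnvidia-ml are installed, and it will only see the six Nvidia cards, not the Radeon.

Code: Select all
/* GPU utilization/temperature check via NVML - hypothetical sketch.
 * Build: gcc gpu_check.c -lnvidia-ml */
#include <nvml.h>
#include <stdio.h>

int main(void) {
    unsigned int count, i;
    if (nvmlInit() != NVML_SUCCESS) return 1;
    nvmlDeviceGetCount(&count);

    for (i = 0; i < count; i++) {
        nvmlDevice_t dev;
        nvmlUtilization_t util;
        unsigned int temp;

        nvmlDeviceGetHandleByIndex(i, &dev);
        nvmlDeviceGetUtilizationRates(dev, &util);
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);

        /* Low utilization here points at the PCIe 1x bottleneck;
         * high temperature points at thermal throttling. */
        printf("GPU %u: %u%% busy, %u C\n", i, util.gpu, temp);
    }

    nvmlShutdown();
    return 0;
}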

Re: How much CPU to drive GPU

Posted: Mon Apr 06, 2020 3:45 am
by ccgllc
PantherX wrote:I would check the following:
CPU utilization
GPU utilization
GPU temperature

I have a feeling that you're hitting the limitation of PCIe 1x, which is causing your performance to be throttled. If your system isn't well ventilated, your GPUs might be reducing their clock speed to get the temperature to an acceptable level.
CPU utilization is maxed out, as expected with a dual-core CPU driving 6 GPU cards via a cruddy polled OpenCL software interface. Load average is slightly over 6, as expected.

The whole system is open-cased (like most GPU rigs) and currently in a room that is about 40 °F. Each GPU is also water-cooled - so heat is not a problem.

Based on power consumption, GPU utilization is low (well under 100 W per GPU).