How much CPU to drive GPU
How much CPU to drive GPU
Hi all -
Just converted one of my GPU-based cybercurrency rigs over to Folding@Home.
It has (7) GPUs installed. For cybercurrency, CPU usage is minimal - I could mine Ethereum and have the two-core system running at about 10% busy.
I removed the CPU slot from config.xml and was surprised to see this system's CPU maxed out - with it apparently burning a core per GPU? It currently has a dual-core Pentium (3.3 GHz) CPU. What would I need to upgrade to in order to satisfy the CPU requirements for driving the GPUs?
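For reference, my config.xml now looks roughly like this - a trimmed sketch with placeholder values, not my exact file:

Code:

<config>
  <user value='your_username'/>
  <team value='0'/>
  <passkey value='your_passkey_here'/>
  <!-- one GPU slot per card; note there is no CPU slot -->
  <slot id='0' type='GPU'/>
  <slot id='1' type='GPU'/>
  <!-- ...and so on for the remaining cards... -->
</config>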
-
- Posts: 2522
- Joined: Mon Feb 16, 2009 4:12 am
- Location: Greenwood MS USA
Re: How much CPU to drive GPU
Welcome to Folding@Home!
You do not mention which GPUs you have, so I will give two answers.
For Nvidia GPUs: Nvidia implemented OpenCL via polled I/O. Each thread (F@H calls them CPUs) supports one GPU, running at 100%. Yes, it is wasteful and poor programming, but we have to live with it.
https://en.wikipedia.org/wiki/Polling_( ... r_science)
For AMD GPUs: Each GPU is served by one CPU thread as above, but because interrupts are used, there is CPU time left over for other tasks.
https://en.wikipedia.org/wiki/Interrupt ... chitecture)
So AMD GPUs are driven much less wastefully; Nvidia does work, just less efficiently.
In either case, F@H reserves one thread per GPU.
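To picture the difference, here is a toy sketch in C of the two waiting styles (illustration only, not actual driver code):

Code:

#include <pthread.h>
#include <stdbool.h>

volatile bool gpu_done = false;          /* set when the GPU finishes */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  done = PTHREAD_COND_INITIALIZER;

/* Polled I/O: the thread spins at 100% CPU re-checking a flag. */
void wait_by_polling(void) {
    while (!gpu_done)
        ;                                /* burns a whole core doing nothing */
}

/* Interrupt-driven: the thread sleeps at ~0% CPU until signalled. */
void wait_by_blocking(void) {
    pthread_mutex_lock(&lock);
    while (!gpu_done)
        pthread_cond_wait(&done, &lock); /* kernel wakes us when ready */
    pthread_mutex_unlock(&lock);
}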
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
-
- Site Admin
- Posts: 7937
- Joined: Tue Apr 21, 2009 4:41 pm
- Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
- Location: W. MA
Re: How much CPU to drive GPU
How much CPU depends a bit on whether you are using nVidia or AMD cards. Both use the CPU to prepare and move data to and from the GPUs; the usual recommendation is one CPU core per GPU.
The AMD drivers are interrupt driven, so they use the CPU only when called for and can share a CPU core across more than one GPU a bit better. nVidia has the CPU in a spin wait, so it is always active looking for the next call to move data over the PCIe bus to or from the GPU. People have tested this; the nVidia drivers can share a core across more than one GPU, but at some performance loss.
At each checkpoint, both the AMD and nVidia folding cores will use a CPU core fully for a short period of time, performing a sanity check on the data and creating the checkpoint file. A few newer projects use a slightly different test; those will use more than one CPU core during the sanity check.
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
Re: How much CPU to drive GPU
Thanks, all. The system has (6) Nvidia 1070 Ti's and (1) older Radeon R9 270X.
The motherboard (ASRock H110 Pro BTC+) was designed for cybercurrency use and only supports CPUs up to 91 W - so I'm guessing an i5-9400F is going to be my best choice (65 W, 6 cores, 2.9 GHz base to 4.1 GHz boost). Obviously an i7 would be nicer, but they come in at twice (or more) the price.
Thanks!
p.s. Quite familiar with polling vs. interrupt-driven I/O. It's why OS/2 was SO much better than Windows 3.1 back in the day. Just finding it hard to believe anything still uses polling...
-
- Site Admin
- Posts: 7937
- Joined: Tue Apr 21, 2009 4:41 pm
- Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
- Location: W. MA
Re: How much CPU to drive GPU
The polling may just be an artifact of how nVidia implements OpenCL; from some reports, software that uses CUDA doesn't do the same. But having a single codebase for the GPU folding core, using OpenCL for both, currently outweighs the possible performance gains of having a separate CUDA core for nVidia.
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
Re: How much CPU to drive GPU
you may struggle with PCIe bandwidth to the cards. I believe Ethereum doesn't use much but FAH tends to. I assume each of those cards is getting 1x? I'd be interested to hear what PPD those 1070 Ti's are getting
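If you want to check what link each card is actually getting, recent Nvidia drivers can report it (field names may vary by driver version; see nvidia-smi --help-query-gpu):

Code:

nvidia-smi --query-gpu=index,name,pcie.link.gen.current,pcie.link.width.current --format=csv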
single 1070
Re: How much CPU to drive GPU
This is an interesting read, and a quick web search shows me a lot of things Nvidia has done to solve legacy issues; I think this interrupt thing is one of them (dating back to poor PCI Express chipsets around 2003, though many say those problems were solved by 2006 or so and there is no longer any need for active polling).
It's been a really long while since I wrote code, and clearly I am in over my head (technically speaking) so many years later, but something I always tried to avoid was polling. Heck, even my nickname here is taken from Linux semaphores, which gave me the ability to have code wait at 0% CPU usage until, for instance, a network packet came in that needed to be acted on.
So this all raises the question of WHY Nvidia seems to have chosen not to support a more modern approach (like enabling Message Signalled Interrupts which, if I understand this correctly, removes the need to flush CPU caches periodically instead of just waiting...).
Is Nvidia's path about stability?
I totally agree that having a single codebase is better for FAH, but this is all mind-boggling when I think about all those Nvidia rigs that could give a couple of percent more... which at this scale of distributed computing would amount to a lot of extra work done.
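For anyone unfamiliar with the pattern I mean, a rough C sketch with POSIX semaphores (names made up for illustration; assume sem_init(&data_ready, 0, 0) ran during setup):

Code:

#include <semaphore.h>

sem_t data_ready;

void wait_for_data(void) {
    sem_wait(&data_ready);    /* thread sleeps at 0% CPU until posted */
    /* ...act on the packet/result here... */
}

void on_data_ready(void) {    /* e.g. called from an interrupt/event path */
    sem_post(&data_ready);    /* wakes the waiting thread */
}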
-
- Posts: 2522
- Joined: Mon Feb 16, 2009 4:12 am
- Location: Greenwood MS USA
Re: How much CPU to drive GPU
semaphore wrote:
So this all raises the question of WHY Nvidia seems to have chosen not to support a more modern approach (like enabling Message Signalled Interrupts which, if I understand this correctly, removes the need to flush CPU caches periodically instead of just waiting...).
Is Nvidia's path about stability?
I totally agree that having a single codebase is better for FAH, but this is all mind-boggling when I think about all those Nvidia rigs that could give a couple of percent more... which at this scale of distributed computing would amount to a lot of extra work done.

This is my cynical answer: Nvidia has a GPU math API called CUDA; if Nvidia can convince you to use CUDA, you are locked into Nvidia's hardware.
Nvidia also supports an open-standards API for GPU math, OpenCL. Since it does not lock you into Nvidia hardware, there is no incentive to make it as powerful as CUDA. Polled I/O is one way to make CUDA look better to researchers.
Or maybe they just assigned poor programmers to OpenCL.
https://en.wikipedia.org/wiki/CUDA
https://en.wikipedia.org/wiki/OpenCL
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
Re: How much CPU to drive GPU
HaloJones wrote:
you may struggle with PCIe bandwidth to the cards. I believe Ethereum doesn't use much but FAH tends to. I assume each of those cards is getting 1x? I'd be interested to hear what PPD those 1070 Ti's are getting

So: (6) 1070 Ti GPUs, each connected to the motherboard by a 1x riser, with a dual-core Pentium (the FahCore_22 processes are sharing the CPU).
Drawing about 650 W for the entire system at the wall - lower than expected.
After running for 20 minutes, it's reporting 481K PPD. I thought I was supposed to get something like 600K PPD per card?!? The 481K figure is from the web interface.
-
- Site Moderator
- Posts: 6359
- Joined: Sun Dec 02, 2007 10:38 am
- Location: Bordeaux, France
- Contact:
Re: How much CPU to drive GPU
Do you have a passkey set? Check the FAQ here: https://foldingathome.org/support/faq/points/passkey/
Re: How much CPU to drive GPU
Do all 6 cards actually have a wu?
Re: How much CPU to drive GPU
toTOW wrote:
Do you have a passkey set? Check the FAQ here: https://foldingathome.org/support/faq/points/passkey/

Yes, passkey is set.
Re: How much CPU to drive GPU
Rel25917 wrote:
Do all 6 cards actually have a wu?

Yes.
-
- Site Moderator
- Posts: 6986
- Joined: Wed Dec 23, 2009 9:33 am
- Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB
Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
- Location: Land Of The Long White Cloud
- Contact:
Re: How much CPU to drive GPU
I would check the following:
CPU utilization
GPU utilization
GPU temperature
I have a feeling that you're hitting the limitation of PCIe 1x, which is causing your performance to be throttled. If your system isn't well ventilated, your GPUs might also be reducing their clock speeds to keep the temperature at an acceptable level.
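On Linux, the GPU side of that list can be checked in one go with nvidia-smi (field names as of recent drivers; CPU utilization is visible in top/htop):

Code:

nvidia-smi --query-gpu=index,utilization.gpu,temperature.gpu,clocks.sm --format=csv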
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time
Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
Re: How much CPU to drive GPU
PantherX wrote:
I would check the following:
CPU utilization
GPU utilization
GPU temperature
I have a feeling that you're hitting the limitation of PCIe 1x, which is causing your performance to be throttled. If your system isn't well ventilated, your GPUs might also be reducing their clock speeds to keep the temperature at an acceptable level.

CPU utilization is maxed out, as expected with a dual-core CPU driving 6 GPU cards via a cruddy OpenCL polled software interface. Load average is slightly over 6, as expected.
The whole system is open-cased (like most GPU rigs) and currently sits in a room that is about 40°F. Each GPU is also water-cooled, so heat is not a problem.
Based on power consumption, GPU utilization is low (well under 100 W per GPU).