GPU vs CPU WUs
Posted: Sat Mar 27, 2021 3:41 pm
by iero
I was looking around the forum and the web, trying to understand a few things. The question I arrived at is the following:
Regarding any new WUs that get produced, could they all be written in such a way that they run exclusively on GPUs, or does some
of the research absolutely have to call upon instruction sets that only CPUs can execute?

Re: GPU vs CPU WUs
Posted: Sat Mar 27, 2021 4:59 pm
by bruce
Yes, every project runs EITHER on the CPU (x86-64) or on supported GPUs. Each client automatically configures a slot to run CPU projects and another slot associated with each supported GPU. They'll run assignments independently of each other, and you can manage the slots with FAHControl.
GPUs require OpenCL support and/or CUDA support. In most cases, we recommend drivers from the vendor. [We can help you if you have an unusual case.]
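To illustrate the slot idea, a FAHClient config.xml with one CPU slot and one GPU slot looks roughly like the sketch below. The client normally generates this for you, so treat it as an illustration rather than a recommended configuration.

<config>
  <slot id='0' type='CPU'/>
  <slot id='1' type='GPU'/>
</config>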
Re: GPU vs CPU WUs
Posted: Sat Mar 27, 2021 5:03 pm
by JimboPalmer
Historically, there were WUs that had to be done on CPUs. Until 'recently', GPUs could not solve problems with explicit solvation while CPUs could. This has been solved.*
https://en.wikipedia.org/wiki/Solvent_model
Today, they choose between CPU and GPU work by the complexity of the molecule. Smaller molecules would 'waste' large GPU cards, since small molecules do not have enough bonds to occupy all the shaders. Those that are small enough occupy the CPUs, which are so abundant. As GPUs and CPUs grow more capable (there are dual-socket EPYC systems with 128 cores/256 threads), the size of a 'small' molecule is also increasing.
*Part of the answer to "Why doesn't my ancient GPU work any more? I used to fold on it in 2018" is that the researchers had to code for features old GPUs do not have.
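For anyone curious what explicit solvation looks like in code, below is a minimal sketch using OpenMM, the toolkit F@h's modern GPU cores are built on. The input file, force field, and settings are illustrative assumptions rather than an actual F@h project setup; note that the same system can be pointed at a CPU, OpenCL, or CUDA platform.

from openmm import LangevinMiddleIntegrator, Platform, unit
from openmm.app import PDBFile, ForceField, Modeller, Simulation, PME, HBonds

pdb = PDBFile('protein.pdb')  # hypothetical input structure
forcefield = ForceField('amber14-all.xml', 'amber14/tip3pfb.xml')

# Explicit solvation: surround the solute with a box of water molecules.
modeller = Modeller(pdb.topology, pdb.positions)
modeller.addSolvent(forcefield, padding=1.0*unit.nanometer)

system = forcefield.createSystem(modeller.topology, nonbondedMethod=PME,
                                 constraints=HBonds)
integrator = LangevinMiddleIntegrator(300*unit.kelvin, 1/unit.picosecond,
                                      0.002*unit.picoseconds)

# The same system can run on different back ends: 'CPU', 'OpenCL', or 'CUDA'.
platform = Platform.getPlatformByName('OpenCL')
simulation = Simulation(modeller.topology, system, integrator, platform)
simulation.context.setPositions(modeller.positions)
simulation.step(1000)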
Re: GPU vs CPU WUs
Posted: Sun Mar 28, 2021 8:46 am
by iero
JimboPalmer wrote:Historically, there were WUs that had to be done on CPUs. Until 'recently', GPUs could not solve problems with explicit solvation while CPUs could. This has been solved.*
https://en.wikipedia.org/wiki/Solvent_model
Today, they choose between CPU and GPU work by the complexity of the molecule. Smaller molecules would 'waste' large GPU cards, since small molecules do not have enough bonds to occupy all the shaders. Those that are small enough occupy the CPUs, which are so abundant. As GPUs and CPUs grow more capable (there are dual-socket EPYC systems with 128 cores/256 threads), the size of a 'small' molecule is also increasing.
*Part of the answer to "Why doesn't my ancient GPU work any more? I used to fold on it in 2018" is that the researchers had to code for features old GPUs do not have.
My only gripe would be this: wouldn't said underutilized GPU (if it were used for low-atom-count WUs instead of a CPU)
still be more energy efficient doing the same work? If that is true, does the need to still use CPUs arise from their abundance compared to available GPUs (as far as F@h is concerned)?
Re: GPU vs CPU WUs
Posted: Sun Mar 28, 2021 9:12 am
by gunnarre
Note that an idle GPU can clock down, while a GPU working on a low-atom-count WU still has to raise its clocks and voltages, leading to more power usage, so at some point I suspect the GPU and CPU efficiency curves intersect.
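As a rough illustration of that crossover, here is a back-of-the-envelope sketch; every number in it is a made-up placeholder rather than a measurement.

def ns_per_day_per_watt(ns_per_day, watts):
    """Throughput per watt: higher means more science per joule."""
    return ns_per_day / watts

# Hypothetical small WU: the GPU finishes faster but must hold boosted clocks
# and voltages for the whole run; the CPU is slower but draws far less power.
gpu_eff = ns_per_day_per_watt(ns_per_day=120.0, watts=220.0)
cpu_eff = ns_per_day_per_watt(ns_per_day=40.0, watts=65.0)

print(f"GPU: {gpu_eff:.2f} ns/day per watt")   # ~0.55
print(f"CPU: {cpu_eff:.2f} ns/day per watt")   # ~0.62

With numbers like these, the CPU comes out ahead on efficiency for the small job even though the GPU is faster in absolute terms.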
Re: GPU vs CPU WUs
Posted: Sun Mar 28, 2021 2:08 pm
by JimboPalmer
If one had a GPU that was structurally sound (it did 64-bit floating-point math and supported OpenCL 1.2) but was extremely slow, all you would need is a researcher who did not care when his/her project got done and a species (a GPU category in F@h's whitelist) for 'sound but dead slow'. I do not think you are going to find the former, but I sincerely believe F@H is trying to achieve the latter.
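If you want to check whether an old card clears that bar, a small pyopencl probe along the lines below can report each GPU's OpenCL version and double-precision support. It is only an illustrative check, not the exact test F@h performs.

import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        if not (dev.type & cl.device_type.GPU):
            continue
        # NVIDIA/Intel report cl_khr_fp64; some AMD drivers report cl_amd_fp64.
        fp64 = ('cl_khr_fp64' in dev.extensions) or ('cl_amd_fp64' in dev.extensions)
        print(f"{dev.name}: {dev.version}, double precision: {fp64}")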