FoldingFodder wrote: Secondly, is there a way to limit the usage of GPU cores just like you can with CPU threads?
Not really.
While we have been designing CPUs to support multitasking since the 1950s, GPUs mostly lack these features.
The OS has no mechanism to preempt or multitask the GPU, and vendor drivers are still not very multitask-friendly: gamers want every ounce of speed, not multitasking overhead.
Nvidia added hardware support for multitasking in the GTX 10x0 (Pascal) cards and newer in 2016, but F@H is not a target audience with as much sway as gamers and miners.
"Dynamic load balancing scheduling system. This allows the scheduler to dynamically adjust the amount of the GPU assigned to multiple tasks, ensuring that the GPU remains saturated with work except when there is no more work that can safely be distributed. Nvidia therefore has safely enabled asynchronous compute in Pascal's driver.
Instruction-level and thread-level preemption. In graphics tasks, the driver restricts this to pixel-level preemption because pixel tasks typically finish quickly and the overhead costs of doing pixel-level preemption are much lower than performing instruction-level preemption. Compute tasks get thread-level or instruction-level preemption. Instruction-level preemption is useful because compute tasks can take a long time to finish and there are no guarantees on when a compute task finishes, so the driver enables the very expensive instruction-level preemption for these tasks." -
https://en.wikipedia.org/wiki/Pascal_(m ... e)#Details
Notice that Wikipedia stresses that preemption will slow results.
If F@H wrote an Nvidia-only, GTX 10x0-and-newer core, they could take advantage of this, but it would not be popular among folders. (Why do they get this cool feature and my card doesn't?) Nor would it produce more science, so the researchers are not clamoring for it. (And they pay the programmer.)
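That said, while you can't pin F@H to a subset of GPU cores, you can cap a card's overall throughput (and heat/power) indirectly with nvidia-smi. A rough sketch below, assuming an Nvidia card with a recent driver; the numbers are placeholders, the valid power-limit range is card-specific, the commands need root, and clock locking (-lgc) is only supported on newer GPU generations:

```shell
# Show the supported power-limit range for GPU 0 (values vary by card)
nvidia-smi -i 0 -q -d POWER

# Cap the board power limit, e.g. to 120 W (needs root; stay within the range above)
sudo nvidia-smi -i 0 -pl 120

# Or lock the graphics clock to a lower range (newer GPUs/drivers only)
sudo nvidia-smi -i 0 -lgc 300,1200

# Undo the clock lock later
sudo nvidia-smi -i 0 -rgc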