Has anyone tried to use Nvidia MPS (https://docs.nvidia.com/deploy/mps/index.html) to make better use of high-capacity GPUs?
If two WUs run on the same GPU without MPS, there is no net speed-up even when they are small. Imagine a WU that is too small to fully utilize the GPU and only occupies 40% of the CUDA cores. Running two such WUs at once, the GPU time-slices between them: it schedules WU A (40% of the cores), then WU B (40%), then A again, and so on; the two kernels never actually execute at the same time. To the user both appear to be running simultaneously, but each at half speed. With MPS enabled, the kernels are truly scheduled in parallel, so together the two WUs use 80% of the CUDA cores.
OpenMM, the software behind the GPU cores, has an open feature request for running simultaneous simulations, but using MPS for this was rejected for FAH because MPS only supports Linux (https://github.com/openmm/openmm/issues ... -713723978). Linux users, however, should be able to use it easily, without modifying the core or making any changes to the client. And unlike vGPU, which splits one GPU into several virtual ones, MPS is available on consumer cards.
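For anyone who wants to try it, a minimal sketch of enabling MPS on Linux follows. The daemon and control commands (nvidia-cuda-mps-control, the CUDA_MPS_* environment variables) are from NVIDIA's MPS documentation; the /tmp directory paths and the GPU index 0 are just illustrative choices, and EXCLUSIVE_PROCESS mode is optional:

```shell
# Pick the GPU to share and where MPS keeps its IPC pipes and logs.
# These paths are illustrative; any writable directories work.
export CUDA_VISIBLE_DEVICES=0
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-log

# Optional: EXCLUSIVE_PROCESS compute mode forces all CUDA work on this
# GPU to go through the MPS server (requires root).
sudo nvidia-smi -i 0 -c EXCLUSIVE_PROCESS

# Start the MPS control daemon in the background.
nvidia-cuda-mps-control -d

# Any FAH client started with the same CUDA_MPS_* environment variables
# will now have its kernels scheduled in parallel via MPS.

# To shut the daemon down again:
echo quit | nvidia-cuda-mps-control
```

Clients have to inherit the same CUDA_MPS_PIPE_DIRECTORY, otherwise they bypass the MPS server and fall back to ordinary time-slicing.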
If no one has tried this with FAH before, I will try it out and write a guide for it.
Multiple projects on one GPU with MPS
Re: Multiple projects on one GPU with MPS
Nvidia does not support 2 workloads on a single consumer GPU.
Developer resources are scarce as is, especially for corner cases like this
Re: Multiple projects on one GPU with MPS
Anything with a CUDA Compute Capability of 3.5 or higher supports it. Please see here: https://developer.nvidia.com/cuda-gpus
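Besides looking a card up in that list, reasonably recent drivers can report the compute capability directly. A sketch using nvidia-smi (the compute_cap query field is only present in newer driver versions, so it may not be available everywhere):

```shell
# List each NVIDIA GPU with its compute capability.
# MPS needs compute capability 3.5 or higher.
nvidia-smi --query-gpu=name,compute_cap --format=csv
```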