CUDA
Posted: Thu Jun 02, 2016 2:50 am
by WMCheerman
I was going over the different software in OpenMM and noticed it no longer mentions CUDA support. Was this dropped in the later software after GPU2, in favor of OpenCL?
Re: CUDA
Posted: Thu Jun 02, 2016 5:33 am
by mmonnin
Both AMD and NVIDIA GPUs can run OpenCL, so it's easier to create one core with one set of WUs that runs on both brands of card than to maintain separate sets for each manufacturer.
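As a quick illustration of that vendor neutrality, any machine with working OpenCL drivers reports its AMD and/or NVIDIA devices through the same API. A small sketch using the pyopencl bindings (pyopencl is just a convenient way to show this from Python; it is not something the folding core itself uses):

Code:
import pyopencl as cl   # pip install pyopencl; requires vendor OpenCL drivers

# Enumerate every OpenCL platform (AMD, NVIDIA, Intel, ...) and its devices.
for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        print("  Device:", device.name)

One body of code written against OpenCL can then run on whichever of those devices is selected, which is what lets a single folding core cover both brands of card.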
Re: CUDA
Posted: Thu Jun 02, 2016 1:17 pm
by WMCheerman
I thought CUDA offered better performance with NVIDIA cards, which is why I was surprised. Is that correct, or is OpenCL the same?
Re: CUDA
Posted: Thu Jun 02, 2016 1:54 pm
by Joe_H
You are raising two different questions, and part of that appears to be based on a misunderstanding of how OpenMM is used. First, as I understand it, OpenMM itself does support CUDA. However, the GPU folding cores that PG has programmed based on OpenMM, starting with Core_17, have not used CUDA, for a couple of reasons.
One is that maintaining and programming the folding cores in two different GPU codes would take additional resources, and the moderate performance advantage of CUDA over OpenCL was not enough to justify it. A second issue was that the CUDA support did not include JIT (just-in-time) compilation support. That lack of JIT would have required even more resources in programmer time and elsewhere.
PG members have stated in the past that if the performance increase from using CUDA, once JIT was supported, turned out to be large enough, they might consider a CUDA version of the GPU folding core. However, so far nothing further has been released in that direction.
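For anyone curious what that choice looks like in OpenMM itself, the CUDA/OpenCL decision is just a platform selection in the API. A minimal sketch using OpenMM's Python layer (the file name, force field files, and integrator settings are only illustrative, and recent releases use the plain openmm module name rather than the older simtk.openmm prefix); this is not how PG builds the folding cores, just the underlying library:

Code:
from openmm import app, unit   # recent OpenMM; older releases used simtk.openmm
import openmm as mm

# Build a small, generic MD system (file and force field names are placeholders).
pdb = app.PDBFile('input.pdb')
forcefield = app.ForceField('amber99sb.xml', 'tip3p.xml')
system = forcefield.createSystem(pdb.topology,
                                 nonbondedMethod=app.PME,
                                 nonbondedCutoff=1.0 * unit.nanometer)
integrator = mm.LangevinIntegrator(300 * unit.kelvin,
                                   1.0 / unit.picosecond,
                                   0.002 * unit.picoseconds)

# The only line that differs between the two GPU back ends:
platform = mm.Platform.getPlatformByName('OpenCL')   # or 'CUDA'

simulation = app.Simulation(pdb.topology, system, integrator, platform)
simulation.context.setPositions(pdb.positions)
simulation.step(1000)   # run 1000 MD steps on the selected platform

In other words, the science code is the same either way; the extra cost Joe_H describes is in maintaining and validating a second back end inside the folding core, not in the OpenMM call itself.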
Re: CUDA
Posted: Thu Jun 02, 2016 6:31 pm
by WMCheerman
I wonder if CUDA efficiency is why I see GPUGRID run at about 75% GPU load on my 980 Ti, while programs that don't use CUDA, like Folding@home, run in the mid 90s, i.e. they have to use more brute force to get the same performance.
Re: CUDA
Posted: Thu Jun 02, 2016 7:25 pm
by davidcoton
How do you know that the two different programs, doing different calculations, are getting the same performance?
Your figures actually suggest that the FAH system is making more use of the GPU. But as to which system uses the GPU most efficiently, it is very difficult to tell.
Re: CUDA
Posted: Thu Jun 02, 2016 7:54 pm
by 7im
I would agree. It sounds more like the CUDA code has a performance bottleneck on that other project, though that's not really an issue for this forum.
Re: CUDA
Posted: Thu Jun 02, 2016 10:16 pm
by mmonnin
I've tried a few BOINC projects and they don't load up my 970 unless they are tweaked or multiple units are run. Collatz, though, loads it right up. No such configuring is needed for FAH, as it uses as much of the GPU as bandwidth and OS/driver efficiency allow.
Re: CUDA
Posted: Fri Jun 03, 2016 12:33 pm
by toTOW
BOINC applications for GPU are usually far less optimized than FAH ... if you want the most accurate comparison, GPUGRID is the best candidate, since they are also doing MD, but with different tools (they use ACEMD for their simulations).
Collatz computations are very simple to parallelize, and they use simple operations that can be executed by almost all of the SPs available in a GPU.
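To illustrate that point, the per-number work in a Collatz search is just a handful of integer operations in a loop, and each starting value is independent of every other one. A rough Python sketch of the idea (not the Collatz project's actual code):

Code:
def collatz_steps(n):
    """Count Collatz iterations until n reaches 1; only simple integer ops."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Every starting value can be handled by its own GPU thread, with no
# communication between threads, unlike the coupled force calculations in MD.
print(collatz_steps(27))   # 27 famously takes 111 steps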
Re: CUDA
Posted: Fri Jun 03, 2016 7:05 pm
by bruce
I understand that CUDA performance varies depending both on which model of GPU you have and which version of NVIDIA drivers you have installed, and the changes are not consistently improvements. This would lead to continual re-optimizing of OpenMM and creating new versions of CUDA FahCores, which would produce mixed results unless all Donors used the same model of GPU. Such inconsistencies are present in OpenCL, but to a lesser degree. This might please donors who like to tweak their systems and report "I increased my performance by 1% by ..." or "My performance decreased by 1% when I ...", but the net change to FAH's overall performance wouldn't be significant.
The only significant change would be an increase in the programming backlog at OpenMM.