CUDA
Moderator: Site Moderators
Forum rules
Please read the forum rules before posting.
-
- Posts: 9
- Joined: Sun May 08, 2016 4:43 am
CUDA
I was going over the different software in OpenMM and noticed it no longer mentions CUDA support. Was this dropped in the later software after GPU2, in favor of OpenCL?
-
- Posts: 9
- Joined: Sun May 08, 2016 4:43 am
Re: CUDA
I thought CUDA offered better performance with NVIDIA cards, which is why I was surprised. Is that correct, or is OpenCL the same?
-
- Site Admin
- Posts: 7937
- Joined: Tue Apr 21, 2009 4:41 pm
- Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
- Location: W. MA
Re: CUDA
You are raising two different questions, and part of that appears to be based on a misunderstanding of how OpenMM is used. First, as I understand it, OpenMM itself does support CUDA. However, the GPU folding cores that PG has programmed based on OpenMM, starting with Core_17, have not used CUDA, for a couple of reasons.
One is that maintaining and programming the folding cores in two different GPU codes would take additional resources, and the moderate performance increase of CUDA over OpenCL was not enough to justify it. A second issue was that the CUDA support did not include JIT programming support. That lack of JIT would have required even more resources in programmer time and elsewhere.
PG members have stated in the past that if the performance gain from CUDA, once JIT was supported, proved large enough, they might consider a CUDA version of the GPU folding core. However, so far nothing further has been released in that direction.
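To illustrate the first point, here is a minimal sketch (not the folding core's code) of how OpenMM's own Python API exposes both backends; it assumes a recent OpenMM build where the module is imported as openmm (older releases used simtk.openmm):

from openmm import Platform

# List the compute platforms this OpenMM build was compiled with.
for i in range(Platform.getNumPlatforms()):
    print(Platform.getPlatform(i).getName())  # e.g. Reference, CPU, OpenCL, CUDA

# A specific backend can then be requested when setting up a simulation:
# platform = Platform.getPlatformByName('OpenCL')  # what the FAH cores use
# platform = Platform.getPlatformByName('CUDA')    # also supported by OpenMM itself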
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
-
- Posts: 9
- Joined: Sun May 08, 2016 4:43 am
Re: CUDA
I wonder if CUDA efficiency is why I see GPUGrid run at about 75% GPU load on my 980 Ti, while programs that don't use CUDA, like Folding@home, run in the mid-90s, i.e. they have to use more brute force to get the same performance.
-
- Posts: 1094
- Joined: Wed Nov 05, 2008 3:19 pm
- Location: Cambridge, UK
Re: CUDA
How do you know that the two different programs, doing different calculations, are getting the same performance?
Your figures actually suggest that the FAH system is making more use of the GPU. But as to which system uses the GPU most efficiently, it is very difficult to tell.
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: CUDA
I would agree. Sounds more like the CUDA code has a bottleneck in performance for that other project, though not really an issue for this forum.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Re: CUDA
I've tried a few BOINC projects and they don't load up my 970 unless tweaked or multiple units are run. Collatz, though, loads it right up. There's no configuring needed for FAH; it uses as much of the GPU as bandwidth and OS/driver efficiency allow.
-
- Site Moderator
- Posts: 6359
- Joined: Sun Dec 02, 2007 10:38 am
- Location: Bordeaux, France
- Contact:
Re: CUDA
BOINC applications for GPUs are usually far less optimized than FAH ... if you want the most accurate comparison, GPUGRID is the best candidate, since they are also doing MD but with different tools (they use ACEMD for their simulations).
Collatz computations are very simple to parallelize, and they use simple operations that can be executed by almost all of the SPs available in a GPU.
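As a minimal illustration (plain Python for clarity, not the Collatz project's actual GPU kernel), each starting value is completely independent and the inner step is just a branch, a multiply/add, and a divide, which is why the work maps onto nearly every SP with no communication between threads:

def collatz_steps(n):
    # Count iterations until n reaches 1 (hypothetical helper, for illustration only).
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Each element below could be handed to a separate GPU thread, since no value
# depends on any other -- embarrassingly parallel work.
results = [collatz_steps(n) for n in range(1, 1025)]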
Re: CUDA
I understand that CUDA performance varies depending both on which model of GPU you have and on which version of the NVIDIA drivers you have installed -- and not in a consistently increasing way. This would lead to continually re-optimizing OpenMM and creating new versions of CUDA FahCores, which would produce mixed results unless all donors used the same model of GPU. Such inconsistencies are present in OpenCL, but to a lesser degree. This might please donors who like to tweak their systems and like to report "I increased my performance by 1% by ..." or "My performance decreased by 1% when I ...", but the net changes to FAH's overall performance wouldn't be significant.
The only significant change would be an increase in the programming backlog at OpenMM.
Posting FAH's log:
How to provide enough info to get helpful support.