Why are projects 9415 and 9414 such low PPD?
Moderators: Site Moderators, FAHC Science Team
-
- Posts: 2522
- Joined: Mon Feb 16, 2009 4:12 am
- Location: Greenwood MS USA
Re: Why are projects 9415 and 9414 such low PPD?
I want points to scale with the science done.
If proteins with fewer atoms do not scale well onto larger graphics cards, then that represents the science done.
If proteins with more atoms do scale well onto larger graphics cards, then that also represents the science done.
It is not 'points inflation' for a7 to get higher points than a4; it is doing more and better science.
As an old-timer, it has taken me 140,000 WUs to get 225 million points, since I was running on older CPUs and GPUs back in 2009. They did less science, and I am OK with that. It is not points inflation that a modern i5 with a GTX 1050 Ti gets more points per day than I used to get in a month.
I hope that FAH is getting better and better at assigning WUs to the donors who can best complete the work, but what is, is what is. It takes hints in GPU.txt so the client can request the 'right' WU from the server, and it takes new server code to pay attention to the hints from GPU.txt passed by the client. The researchers need to understand (and adjust) which GPUs are suited to their work.
It is possible that '9415 and 9414 have low PPD' really means a biophysicist did not listen to the instructions and is assigning their projects to cards ill suited to them. Or maybe we are years from that.
Last edited by JimboPalmer on Mon Jan 22, 2018 12:29 am, edited 1 time in total.
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
Re: Why are projects 9415 and 9414 such low PPD?
For those of you with high-end GPUs who happen to be assigned these projects, what happens to the GPU utilization numbers? Divide the number of atoms by the number of shaders, and tell us how many atoms are available for each shader to process before more data must be transferred to/from main RAM.
Do the same for some projects with more pleasing PPDs. Post your data.
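bruce's suggested calculation can be sketched in a few lines of Python. The shader counts below are the cards' published specs; the 62,000-atom protein size is just an assumed example, not data from any actual project:

```python
# Rough atoms-per-shader estimate for a few example cards.
# The 62,000-atom figure is an illustrative assumption, not project data.
def atoms_per_shader(num_atoms, num_shaders):
    """Average number of atoms available for each shader to process."""
    return num_atoms / num_shaders

gpus = {"GTX 750 Ti": 640, "GTX 1060": 1280, "GTX 1080 Ti": 3584}
for name, shaders in gpus.items():
    print(f"{name}: {atoms_per_shader(62_000, shaders):.1f} atoms/shader")
```

The point of the exercise: the same protein spreads much thinner across a 1080 Ti's 3584 shaders than across a 750 Ti's 640, which is one candidate explanation for the utilization gap.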
-
- Site Admin
- Posts: 7937
- Joined: Tue Apr 21, 2009 4:41 pm
- Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2 - Location: W. MA
Re: Why are projects 9415 and 9414 such low PPD?
JimboPalmer wrote:It is possible that '9415 and 9414 have low PPD' is really a biophysicist who did not listen to the instructions and is assigning their projects to cards ill suited to them. Or maybe we are years from that.
Or it is possible that those are the projects with WUs available to be assigned at the moment, and the other projects are offline, being checked, waiting for more WUs to be created, or a host of other reasons we may never hear about.
An example is the projects currently offline while the RAID for their work server is being rebuilt. When that is back online in a couple of days, there should be WUs that are not 941n available for some folders. It depends on how many are ready to assign, how long they take to process, and how quickly more can be created.
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
-
- Posts: 12
- Joined: Mon Jan 08, 2018 9:02 pm
Re: Why are projects 9415 and 9414 such low PPD?
I just installed a 1080 Ti FTW3 in a new machine, and immediately, before I did any driver updates, it was doing 1.3M PPD (don't know the project #). Then I updated to the newest NVIDIA drivers, which crashed the WU and started a new one, a 94** WU. Immediately the PPD dropped to around 900K. I thought it was a driver issue, so I reverted, but that made no difference. Now all the WUs I get are 94** and all are doing less than 1M PPD. From reading here, maybe it's just the WUs.
The weird thing to me is this: on the first WU my GPU was being utilized at 92-96%. Now it sits around 85-86%. I don't see why the GPU wouldn't be attacking all the WUs at 100%. My CPU usage is not high; I'm only folding with the GPU, running a G4560, 16 GB DDR4-2400, MSI Z270 SLI board.
In my opinion, if the GPU utilization were back at 92-96% like it was, the PPD would be the same between the WUs. Is there something in the WU that could keep the GPU from throttling up all the way?
UPDATE
So I set client-type to advanced, and when it started a new WU it gave me project 11431; PPD went to 1.15M already and GPU utilization is back up to 92-95%. So clearly there's something in the WUs that keeps the GPU from maxing out...
Mod Edit: This is a duplication of another post in another topic. I've removed the other post.
Duplicate postings are prohibited by our forum rules.
-
- Posts: 80
- Joined: Tue Dec 19, 2017 12:19 pm
Re: Why are projects 9415 and 9414 such low PPD?
The 94XX work units only use about 85% of the 1080 Ti; the bigger work units like 11431 take the 1080 Ti to about 96% usage and massive PPD. My 1080 Ti FTW3 and 1080 Ti Founders Edition in my main machine are doing almost 1.4M PPD each at the moment on the larger work units. The real killer is that this client and these units are still built around a Fermi-era GPU design; we have not seen a Maxwell- or Pascal-designed work unit. I am hoping we will get something along that line soon, as I plan on retiring a 1070 this year for a Volta GPU when they come out for the consumer.
Also, in the new rig I built, I moved to a 7900X-based system to give full PCI-E lane bandwidth to the GPUs and to have cores running at 4.6 GHz for the folding core app. I had these GPUs in an X79-chipset system with a 3960X CPU beforehand. The system upgrade alone gave me about 400,000 more PPD across the two cards.
Re: Why are projects 9415 and 9414 such low PPD?
scott@bjorn3d wrote:The real killer is this client and units are still on a Fermi design of GPU. We have not seen a Maxwell or Pascal designed work unit.
I don't understand the basis of your statement. There is no "killer" involved ... and you're not going to see projects that are "designed" for a particular platform.
Projects are designed based on the protein that is being studied. The physical characteristics of a protein don't change to accommodate a particular GPU. A project can be processed on ANY supported GPU. No alterations are possible that would customize anything based on which GPU it can or cannot run on.
The FAHCore translates the physical characteristics of the protein into calls to OpenCL. The intermediate OpenMM code that does that translation doesn't do anything different for Kepler/Fermi/Maxwell/Pascal or for the various Radeon-series GPUs.
The DRIVERS figure out how your specific GPU handles the incoming data passed to it via the OpenCL API. If they don't make use of the GPU's features, either the specific features of that GPU can't be used effectively or they're not good drivers.
-
- Site Moderator
- Posts: 6359
- Joined: Sun Dec 02, 2007 10:38 am
- Location: Bordeaux, France
- Contact:
Re: Why are projects 9415 and 9414 such low PPD?
scott@bjorn3d wrote:The real killer is this client and units are still on a Fermi design of GPU. We have not seen a Maxwell or Pascal designed work unit.
You're wrong: with OpenCL, the NVIDIA (and AMD) drivers compile and optimize the code on the fly depending on the GPU and the driver version you have.
Your statement was true in the days when we used CUDA-based cores, which needed to be compiled with the right options to support new GPUs or new versions of CUDA.
Re: Why are projects 9415 and 9414 such low PPD?
bruce wrote:Actually, for those who run mid-range GPUs, these projects scale quite well. Thus it's reasonable to ask why projects OTHER THAN 9414 and 9415 scale higher than they should. It does make sense for FAH to hold the line against points inflation (thereby degrading the value of points earned earlier) so reducing the points on higher scaling projects does make sense. Is that what you want?
A high-end GPU (1080 Ti, Titan, etc.) does not fully utilize its available resources with WUs such as 9414/9415. My 1080 Ti hovers around 83-85% GPU usage (MSI Afterburner), so relative to the big WUs it has about an additional 10 percentage points available and is thus not folding to its potential. To be honest, I think this can somewhat annoy donors who got 1080 Tis (or Titans) for the purpose of folding when the cards are not achieving what they can. In one way it can be claimed that by folding these WUs my GPU foregoes PPD. In other words, a 1080 Ti is less desirable to fold on.
I do not get the issue of points inflation. If the next line of NVIDIA GPUs is 25% faster at the currently available projects, is it not reasonable to claim that the new cards should get 25% higher PPD? Further, it is not like there is hyperinflation in the PPD. Deflating the PPD would also be a disincentive to those who care about the stats aspect of FAH donations.
As for reducing the credit for bigger projects: why would projects that utilize the full potential of a GPU be penalised over a project that only uses 85% of the available resources? This makes no sense whatsoever.
bruce wrote:For those of you with high-end GPUs who happen to be assigned these projects, what happens to the GPU utilization numbers? Divide the number of atoms by the number of shaders, and tell us how many atoms are available for each shader to process before more data must be transferred to/from main RAM. Do the same for some projects with more pleasing PPDs. Post your data.
The 1080 Ti features 3584 shading units, 224 texture mapping units and 88 ROPs. The PPD figures are from memory and will vary because the PC is in use at the same time as folding; hence the ± in PPD.
https://puu.sh/zblv5/82e536d150.png
Re: Why are projects 9415 and 9414 such low PPD?
I don't think you understand.
Suppose project A gets A1 PPD on a 750 Ti and A2 PPD on a 1080 Ti. Certainly A2 > A1 because the 1080 Ti is faster. The factor (A2/A1) would be the 25% that you mentioned, or some other number, based on the speed difference between whatever two GPUs are being compared.
I certainly would not call that points inflation. (We're assuming that project A uses ~100% of the resources on both GPUs.)
Now suppose project B gets B1 points on a 750 Ti using ~100% of the resources, but when you run project B on the 1080 Ti, it only uses 85% of the resources, so instead of earning B1*(A2/A1) points it gets about 85% of that number. (NOTE: Nobody has explained why project B uses 100% of the resources on the 750 Ti and only 85% on the 1080 Ti, and that's a critical question.) Nevertheless, it doesn't make sense to recalibrate the points to make up for that inefficiency.
If the points WERE to be recalibrated, they would be increased by ~18% on BOTH GPUs even though no more science would be completed. That's points inflation, because now the guys with the 750 Ti will be earning ~18% more on project B than they were on project A.
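The arithmetic in the argument above can be worked through with made-up numbers (every value here is hypothetical, chosen only to show why the recalibration factor comes out near 18%):

```python
# Hypothetical numbers illustrating the points-inflation argument;
# these are not real benchmark results.
A1 = 100_000           # project A PPD on a 750 Ti (~100% utilization)
speedup = 10.0         # assumed raw speed ratio of a 1080 Ti over a 750 Ti
A2 = A1 * speedup      # project A PPD on the 1080 Ti

B1 = 100_000           # project B PPD on the 750 Ti (still ~100% utilization)
utilization = 0.85     # project B only keeps the 1080 Ti ~85% busy
B2 = B1 * speedup * utilization   # what the 1080 Ti actually earns on B

# "Recalibrating" project B so the 1080 Ti matches A2 scales BOTH cards:
recalibration = A2 / B2           # 1/0.85, i.e. roughly +18%
print(f"750 Ti on recalibrated project B: {B1 * recalibration:.0f} PPD")
```

The key step is the last line: the correction factor applies to every GPU running project B, so the 750 Ti earns ~18% more for the same science, which is exactly the inflation being objected to.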
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 2040
- Joined: Sat Dec 01, 2012 3:43 pm
- Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441
Re: Why are projects 9415 and 9414 such low PPD?
The graph shows that the more atoms per shader, the more PPD you get, and that correlates with the GPU usage percentage. If that is true, then it makes sense that a GTX 750 Ti, with its few shaders, gets enough atoms per shader to reach nearly 100% usage on a 94xx work unit, while a GTX 1080 Ti, with its many shaders, does not get enough atoms per shader and so only reaches 85% GPU usage. If that can be verified, then it would make sense to preferentially assign low-atom work units to low-shader GPUs and high-atom work units to high-shader GPUs, to get maximum GPU usage and thus the most science done in total. But that is just a theory; maybe FAHBench can be used to benchmark slow and fast GPUs with low- and high-atom-count work units to verify the effect on GPU usage.
Re: Why are projects 9415 and 9414 such low PPD?
foldy wrote:The graphics shows the more atoms per shader the more PPD you get and that will correlate with GPU % usage you get. [...]
I have the same theory about atoms per shader, and I asked that same question of the experts last week. At this point, it's unverified.
The problem with your solution is that the assignment servers don't know how many shaders your GPU has. While that's a reasonable idea for some future revision of the logic in the AS, it joins a long list of suggested enhancements which probably won't happen soon.
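The assignment heuristic being discussed could be sketched as follows. To be clear, this is purely illustrative: as noted above, the real assignment server does not receive shader counts, and the project/atom numbers are example values, not server logic:

```python
# Hypothetical sketch of shader-aware WU assignment (NOT how the real
# FAH assignment server works -- it never sees shader counts).
def pick_project(shader_count, projects, big_gpu_threshold=2000):
    """Prefer large-atom projects for GPUs with many shaders."""
    big_card = shader_count >= big_gpu_threshold
    # big cards: sort descending by atoms; small cards: ascending
    ranked = sorted(projects, key=lambda p: p["atoms"], reverse=big_card)
    return ranked[0]["id"]

projects = [{"id": 9414, "atoms": 62_000}, {"id": 11431, "atoms": 250_000}]
print(pick_project(3584, projects))  # many shaders -> bigger WU
print(pick_project(640, projects))   # few shaders  -> smaller WU
```

The `big_gpu_threshold` cutoff is an arbitrary assumption; a real implementation would presumably need the client to report shader counts in the first place, which is exactly the missing piece bruce identifies.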
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 2040
- Joined: Sat Dec 01, 2012 3:43 pm
- Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441
Re: Why are projects 9415 and 9414 such low PPD?
Maybe users could use the option max-packet-size = small, normal, big to suggest to the assignment server how many atoms the next work unit should preferably have. A GTX 1080 Ti user would choose big, a GTX 1070 user normal, and a GTX 1050 user small. Even with a preferred max-packet-size set, the assignment server may still deploy other sizes if needed. Users who do not choose get whatever the assignment server wants. The assignment server only needs to link projects to these three classes: e.g. 94xx work units would be "normal", 11xxx work units "big", and maybe some even smaller work units "small". Alternatively, the next FAHClient could introduce a new flag: preferred-workunit-size = small, normal, big.
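For reference, the existing max-packet-size option lives in the client's config.xml. A minimal fragment might look like the sketch below (the option name and small/normal/big values are the real ones discussed in this thread; the hypothetical size-preference idea above would presumably extend this same mechanism):

```xml
<config>
  <!-- existing option: caps the size of WU data the client will accept -->
  <max-packet-size v='big'/>
</config>
```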
Re: Why are projects 9415 and 9414 such low PPD?
foldy wrote:Maybe users could use the option max-packet-size = small, normal, big to suggest for the assignment server how much atoms the next work unit should have preferred. [...]
I vote for a fourth option: max-packet-size = humongous
No reason why a Ti-class or Titan level card cannot get tailored work units to wring the performance out of them.
Re: Why are projects 9415 and 9414 such low PPD?
Luscious wrote:I vote for a fourth option: max-packet-size = humongous
No reason why a Ti-class or Titan-level card cannot get tailored work units to wring the performance out of them.
max-packet-size='big' is, in fact, unlimited. Unfortunately, packet-size is based on the amount of data that once had to pass through a dial-up modem, and it's really no longer applicable since it's unlikely that anybody is still using a dial-up internet connection. Unfortunately, it's the only setting that MIGHT help.
As I said earlier, the design of a project is not based on your processing capabilities; it's based on the needs of the scientific research.
I have submitted a question to the experts to look for whatever limitations might be applicable to these two projects in OpenMM. They may find something that can be optimized in the code and they may not.
Posting FAH's log:
How to provide enough info to get helpful support.
Re: Why are projects 9415 and 9414 such low PPD?
The issue I have with this is that the server SHOULD know what card you have from the GPU.txt file. It polls the card to see what you have, and bingo. Or does it only poll to the point of asking whether you have an NVIDIA or AMD card? But as for your card not getting used to its full potential, blame Windows and its horrible driver foundation. Once I moved things over to Linux, the GPU usage % was no longer an issue. Windows sucks for folding; stop using it.