@JimboPalmer: I understand that organizing and running a project currently delivering nearly 100 x86 TFLOPS is complex machinery, though I perhaps do not fully appreciate the complexities. As with many things in life.
bruce wrote:Keeping track of individual donors' history is another developmental enhancement that's unlikely for sufficient development resources to be allocated. If N new Donors are added, total performance would go up, and the value of N doesn't need to be very big to exceed the enhancements you're asking for.
Thank you for an informative reply.
Since there is, publicly at least, no information about the distribution of hardware tiers within a hardware category, it is impossible to quantify whether N+x new donors would be a better approach than optimising for the existing folders. The project is currently running at around 98 x86 TFLOPS (http://folding.stanford.edu/stats/os); by way of comparison, matching a 10 percent efficiency optimisation would probably require a significant amount of recruiting.
Sticking with GPUs, and my case is possibly an outlier in the grand scheme of things: I see up to 300k PPD of variance, and a hypothetical maximum PPD that is 500k higher than my lower 24h threshold. For 1080 Ti's this means that, in the best case, optimising three cards to run at their capacity is worth roughly one new folding card. I run between the low 900k range (24h) and 1.2m; in the momentary PPD estimates I have been as high as 1.4m, but I have never received enough of those WUs to reach that level of production.
For every 1,000 1080 Ti's with the same PPD characteristics and the same variation, that could amount to 300-500m extra PPD per 24h cycle. If the 1080s are in the same situation, the magnitude increases further. I am cherry-picking the numbers, and the 1080 Ti could be the only card in this situation, but there is room for improvement. An allocation mechanism seems like a possible solution.
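As a back-of-envelope illustration of that extrapolation (the per-card figures are simply my own numbers from above, not measured fleet data, and the 1,000-card fleet is hypothetical):

[code]
# Rough headroom estimate from my own 1080 Ti numbers; not project data.
low_24h_ppd = 900_000          # my observed 24h low
high_24h_ppd = 1_200_000       # my observed 24h high
hypothetical_ppd = 1_400_000   # highest momentary estimate I have seen

per_card_gain_low = high_24h_ppd - low_24h_ppd       # 300k (observed variance)
per_card_gain_high = hypothetical_ppd - low_24h_ppd  # 500k (best case)

cards = 1_000                  # hypothetical fleet of similar cards
print(cards * per_card_gain_low, "to", cards * per_card_gain_high,
      "extra PPD per 24h cycle")
# -> 300000000 to 500000000
[/code]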
I have tried the stats page on the FAH site and struggle with it. The extremeoverclocking.com stats page is good but not perfect. There are many motivations for folding; some people even enjoy watching their stats, or fold because of them. It is unfortunate that FAH does not provide its users with good statistical information. Further, imagine something similar to Steam's hardware survey but tailored for folding. How often has somebody asked, "if I get X CPU/GPU, how good will it be?" How many hardware-related questions could be answered by pointing to a stats page? It would be useful for the folders, if perhaps not for the team behind FAH.
For instance, if I could look up a table showing that the 1080 has a 24h PPD of, say, 800k ± 50k, and that the 1080 Ti sits at 1m with a low of 900k and a high of 1.2m (with a hypothetical PPD even higher), then my spending decision would be informed. Today there are some doubtful numbers floating around the internet rather than anything statistically significant. Would it not be in the interest of the FAH project to have such information readily available to its end users, the donors?
bruce wrote:The variables that you're not taking into account are (A) Available development resources are extremely limited. (B) The FAHClient doesn't gather the data the servers would need. (C) Useful contributions by slow GPUs are scientifically valuable too, even if they earn fewer points.
A) I did not go into the issue of resources and their prioritisation in my last post, since it is hard to comment on the resource situation without knowing anything about it, nor about what a desired or proposed feature would cost, etc. That is not to say that features the end users might appreciate should not be raised.
B) That the FAHClient doesn't currently gather the GPU id alongside whether it is AMD or NVIDIA is not the same as it not being possible, is it? I assume the information is available to the FAHClient, since it can be found in the FAHClient log, and hence it should be possible to send it to the server side.
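To illustrate that the information is already sitting in the log: a minimal sketch of pulling the card name out of a FAHClient log. The exact line format is an assumption based on what my own log shows (a "GPU 0:" line ending with the card name in square brackets), so treat the pattern as illustrative only:

[code]
# Minimal sketch: extract GPU names from a FAHClient log (Python).
# Assumed line format, based on my own log (illustrative only):
#   15:36:36:  GPU 0: Bus:1 Slot:0 Func:0 NVIDIA:7 GP102 [GeForce GTX 1080 Ti]
import re

GPU_LINE = re.compile(r"GPU\s+\d+:.*\[(?P<name>[^\]]+)\]")

def gpu_names(log_path):
    """Return the bracketed GPU descriptions found in the log file."""
    names = []
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = GPU_LINE.search(line)
            if match:
                names.append(match.group("name"))
    return names

print(gpu_names("log.txt"))   # e.g. ['GeForce GTX 1080 Ti']
[/code]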
C) While it might appear that I am only thinking of my own sick mother (a 1080 Ti), that is simply because it is the GPU I have. A point is a point, and equal in scientific value whether it comes from a low-end or a high-end piece of hardware; I have no issue with this. What I question is why there is no focus on delivering the most suitable work unit to each specific class of GPU. The answer I think I have received relates to available resources, and that is a fair position.
bruce wrote:In the situation a few days ago, ASSUMING the owners of projects like 9414/9415 had excluded class of GPU including your system, and then the servers with larger projects went off-line for a few days to rebuild the RAID, assignments to that class of GPUs would probably have lapsed, leaving a lot of idle resources for several days.
With a layered set of WU distribution rules this could be avoided. Hypothetically, would it not be possible to have a tiered set of rules that take effect when the conditions behind the primary rules are compromised? The hypothetical project X (or group of projects) accepts only 1080 Ti's or better cards as long as those collectively provide a minimum of 100m PPD (24h). If that tier (1080 Ti or better) does not deliver enough PPD, then the next tier of cards gains access to the project, and so forth. If project X is down, the resources that free up are temporarily redirected to a less demanding set of projects (a rough sketch follows below).
With such a mechanism the loss in computing power would, as far as I can see, be limited to the time between the server going down and the client side (or another server) asking whether it is up and ready for work. After that the contingency rules kick in, and so forth. It is obviously not necessarily an easy thing to implement.
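To make the idea a bit more concrete, here is a rough sketch of the kind of tiered rule I have in mind. Everything in it (the tier names, the 100m PPD threshold, the fallback project) is hypothetical, and it is of course not how the assignment servers actually work:

[code]
# Hypothetical tiered WU assignment rule; a sketch, not actual FAH server logic.
# A project prefers its top hardware tier as long as that tier's rolling 24h PPD
# stays above a threshold; otherwise the next tier gains access, and if the
# project's server is unreachable the work is redirected to a fallback project.

TIERS = ["1080ti_or_better", "mid_range", "entry_level"]   # hypothetical classes

PROJECT_X = {
    "tier_min_ppd": {"1080ti_or_better": 100_000_000},     # 100m PPD (24h) gate
    "fallback_project": "less_demanding_projects",
}

def eligible_tier(project, tier_24h_ppd, server_up):
    """Return which tier (or fallback project) should receive the WUs."""
    if not server_up:
        return project["fallback_project"]
    for tier in TIERS:
        required = project["tier_min_ppd"].get(tier)
        # A tier without a threshold, or one meeting its threshold, gets the work;
        # the loop only reaches lower tiers if the ones above fell short.
        if required is None or tier_24h_ppd.get(tier, 0) >= required:
            return tier
    return project["fallback_project"]

# Example: the top tier only delivered 80m PPD over the last 24h, so the next
# tier down would be allowed to pick up project X's work units.
print(eligible_tier(PROJECT_X, {"1080ti_or_better": 80_000_000}, server_up=True))
# -> mid_range
[/code]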
One last thing: I was looking on the FAH website for white papers (or similar material) that cover the technical aspects of the project and detailed information in general. I could not find any white papers. I found plenty of information on setting up my system and so forth, but little or nothing on the backend.