
Re: Project (WU) allocation question

Posted: Fri Jan 26, 2018 6:53 pm
by bruce
Assignments are currently made based on the answer to a T/F question: "Is the GPU capable of running FAHCore_21?" You want it to be based on a much more difficult question: "Of all of the GPUs that can run FAHCore_21, which are the most beneficial FOR THIS PARTICULAR PROJECT?" The AS would require qualitative information that differentiates between all of the Fermi-or-better NVIDIA GPUs and all of the ATI GPUs that can run Core_21 (and potentially all of the Intel GPUs, if they're ever considered), plus information about each project indicating which GPUs are most suited to it.
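To make the contrast concrete, here is a rough sketch (the model names, project IDs, and scores are my own invention, not the actual AS code):

    # Today's check is effectively one boolean per GPU model.
    SUPPORTS_CORE21 = {"GTX 1060": True, "GT 520": False}  # made-up entries

    def can_assign(model):
        return SUPPORTS_CORE21.get(model, False)

    # The proposal instead needs a full GPU-model x project matrix of
    # measured suitability scores, every cell of which someone must
    # populate and keep current.
    SUITABILITY = {("GTX 1060", "p9431"): 0.9, ("R9 280X", "p9431"): 0.4}

    def best_project(model, open_projects):
        return max(open_projects,
                   key=lambda p: SUITABILITY.get((model, p), 0.0))

    print(can_assign("GTX 1060"))               # True
    print(best_project("GTX 1060", ["p9431"]))  # p9431

Every cell of that matrix is a measurement somebody has to make and keep current.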

Gathering accurate information for the entire matrix of performance data would be a costly undertaking, especially since nobody (including the Lab) has samples of every possible GPU. It would also be an ongoing process, since new GPUs and new projects frequently become available.

Each assignment is already based on a list of projects which are currently offering assignments. You're suggesting that the assignment process also needs to be based on a new factor: how suitable the GPU that's requesting an assignment is for each of those projects. There's no queue of waiting GPUs that can be compared, and there's no way to predict which GPU MIGHT request an assignment soon. I think we can predict that some projects would quickly run out of assignments, and adjustments would have to be made to keep those projects from being starved of resources no matter which GPU happened to come along next. Under conditions like that, the assignment process would effectively become a FIFO system, like it is now.
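To illustrate the fallback problem (again purely a sketch, with invented projects, stock levels, and scores):

    # Projects with remaining WU counts, and a made-up suitability
    # score per (GPU model, project) pair.
    wus_left = {"p1": 0, "p2": 120}
    scores = {("GTX 1060", "p1"): 0.9, ("GTX 1060", "p2"): 0.3}

    def assign(model, projects):
        # Each request must be answered immediately; there is no queue
        # of GPUs to optimise over. Rank by suitability, but hand out
        # whatever is actually in stock; once the best-matched project
        # runs dry this degenerates to "first WU available", i.e. FIFO.
        for p in sorted(projects, key=lambda p: scores.get((model, p), 0.0),
                        reverse=True):
            if wus_left[p] > 0:
                wus_left[p] -= 1
                return p
        return None  # nothing to hand out right now

    print(assign("GTX 1060", ["p1", "p2"]))  # p1 preferred but empty -> p2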

How else would you design a system that achieves your objectives? (And remember the KISS principle of project design: come up with something simpler.)

Re: Project (WU) allocation question

Posted: Sat Jan 27, 2018 6:30 pm
by ikek
Oddly, when in the full editor there is an additional post in the thread.
bollix47 wrote: Are these the papers you're looking for?
http://folding.stanford.edu/category/papers/
No, I was not looking for the research papers resulting from the FAH project. I was looking for more technical details explaining the project as a whole.
bruce wrote: Assignments are currently made based on the answer to a T/F question: "Is the GPU capable of running FAHCore_21?" You want it to be based on a much more difficult question: "Of all of the GPUs that can run FAHCore_21, which are the most beneficial FOR THIS PARTICULAR PROJECT?" The AS would require qualitative information that differentiates between all of the Fermi-or-better NVIDIA GPUs and all of the ATI GPUs that can run Core_21 (and potentially all of the Intel GPUs, if they're ever considered), plus information about each project indicating which GPUs are most suited to it.

Gathering accurate information for the entire matrix of performance data would be a costly undertaking, especially since nobody (including the Lab) has samples of every possible GPU. It would also be an ongoing process, since new GPUs and new projects frequently become available.
There are probably more than a few ways around such issues. There is the "gather all the hardware and test it" approach, which is resource-intensive and unrealistic. Alternatively, all hardware that is supported by the core could initially be allowed to run on it. In that initial stage, statistics are collected per hardware ID group (e.g. NVIDIA 1060), and each hardware group tests light, medium, and heavy work units. When sufficient iterations have been run, a statistical analysis can confirm whether the particular hardware is suitable for a given set of workloads. If hardware is listed hierarchically, it could then be inferred that if, e.g., the 1060 is unsuitable, every hardware ID of the same generation ranked beneath it is also denied access. Certain IDs could then be prioritised for certain projects, in line with the crude approach proposed in my last post. It might be more complex than a first-in-first-out approach, but it could also be more efficient, and isn't there some desire to see the donated computing power used in the best manner possible?
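A rough sketch of the hierarchical idea (the ranking, names, and cutoff logic are my own assumptions, not anything FAH implements):

    # Models within one generation, ranked from strongest to weakest.
    PASCAL_RANKED = ["1080 Ti", "1080", "1070", "1060", "1050 Ti", "1050"]

    def allowed_models(ranked_models, weakest_passing):
        # If the 1060 is the weakest model statistically confirmed for
        # "heavy" WUs, everything ranked beneath it is denied automatically.
        cutoff = ranked_models.index(weakest_passing)
        return ranked_models[:cutoff + 1]

    print(allowed_models(PASCAL_RANKED, "1060"))
    # -> ['1080 Ti', '1080', '1070', '1060']

The benefit is that only the boundary models need real measurements; the rest inherit the verdict.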

How are deadlines set for WUs?
bruce wrote: Each assignment is already based on a list of projects which are currently offering assignments. You're suggesting that the assignment process also needs to be based on a new factor: how suitable the GPU that's requesting an assignment is for each of those projects. There's no queue of waiting GPUs that can be compared, and there's no way to predict which GPU MIGHT request an assignment soon.
That a GPU is served a WU as soon as it is ready for one is essential. I will, however, disagree that there is no way to predict when a GPU will ask for a new WU. I think it comes down to knowing a set of details about the donors' donations and their hardware; in essence, donation behaviour. Then run an analysis on this using the appropriate statistical methods. With a certain degree of reliability, it should be possible to predict the distribution of requests. More complex and resource-intensive, sure; impossible, doubtful.
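As a toy illustration of what I mean (invented numbers, and a deliberately naive model):

    from statistics import mean

    def requests_per_hour(intervals):
        # intervals: observed gaps (hours) between a donor's successive
        # WU requests; the reciprocal of the mean gap is a crude rate.
        return 1.0 / mean(intervals)

    history = {"donor_a": [2.1, 1.9, 2.0], "donor_b": [6.2, 5.8]}
    load = sum(requests_per_hour(iv) for iv in history.values())
    print(round(load, 2), "expected requests/hour")  # -> 0.67

Summing such per-donor rates gives an expected request load per hardware class, which is what an allocation plan would need.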
bruce wrote: I think we can predict that some projects would quickly run out of assignments, and adjustments would have to be made to keep those projects from being starved of resources no matter which GPU happened to come along next. Under conditions like that, the assignment process would effectively become a FIFO system, like it is now.
Without knowing the distribution of hardware used, and how much each class contributes to the total, it is impossible to make a qualified comment. However, looking at the stats at extremeoverclocking, it appears (a guesstimate, based on the top 100 teams) that team Curecoin is larger in 24-hour donations than all other teams combined. I would presume the majority of those donors are donating 24/7, and that they are using mid- to high-end hardware. In that case, the allocation process I have proposed would skew a significant amount of the available resources towards "heavy" work units, which could be an issue.

I guess I am contradicting myself here. The Curecoin team, under the set of presumptions above, would skew the allocation approach by its sheer volume of donations. For my part, I think we can let the issue(s) rest here. I have gotten my views across, and while I may not necessarily agree with the current system, I understand why it is the way it is. I would like to thank those who took part in the thread, bruce in particular, for a fruitful conversation.
bruce wrote: How else would you design a system that achieves your objectives? (And remember the KISS principle of project design: come up with something simpler.)
By way of ending, I would ask: if the project infrastructure were designed from scratch today, would it be significantly different from what it is now?