bruce wrote: True, and that makes sense to me. In fact, I suspect that's the way it normally works.

Well, I don’t know what is normal, or whether atom count is a deciding factor when distributing work, since I see everything from 4K to 500K atoms in my GPU logs. But as far as I can tell from those same logs, the ability to assign projects to only the CPU or only the GPU already exists; a typical example is the 148xx projects, which are CPU-only at 55K+ atoms:
https://stats.foldingathome.org/project?p=14800
How often the atom count is actually taken into consideration when assigning projects to CPU, GPU, or both would need a deeper look into the backend statistics. But as a rule of thumb, the benchmark statistics from my own systems all say the same thing: higher atom counts should go to the GPU, and lower atom counts to the CPU.
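Just to make the rule of thumb concrete, here is a minimal sketch of what such an assignment rule could look like. The threshold value, the Project type, and the CPU/GPU pin flags are my own illustrative assumptions, not anything from the actual assignment server:

# Hypothetical sketch of an atom-count-based assignment rule.
# ATOM_THRESHOLD, Project, and the pin flags are illustrative
# assumptions, not the real Folding@home assignment-server logic.

from dataclasses import dataclass

ATOM_THRESHOLD = 50_000  # suggested split point, discussed below

@dataclass
class Project:
    number: int
    atoms: int
    cpu_only: bool = False  # e.g. the 148xx projects are pinned to CPU
    gpu_only: bool = False

def assign_resource(project: Project) -> str:
    """Pick CPU or GPU for a project, honoring explicit pins first."""
    if project.cpu_only:
        return "CPU"
    if project.gpu_only:
        return "GPU"
    # Fall back to the atom-count rule of thumb: large systems
    # keep a GPU busy, small ones are better left to the CPU.
    return "GPU" if project.atoms >= ATOM_THRESHOLD else "CPU"

print(assign_resource(Project(14800, 55_000, cpu_only=True)))  # CPU (pinned)
print(assign_resource(Project(99999, 120_000)))                # GPU
print(assign_resource(Project(99998, 4_000)))                  # CPU

The point of the explicit pins is that a project-level override (like the 148xx case) always wins over the general atom-count split.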
There might of course be occasional exceptions to this, depending on which types of calculation a project spends most of its time in; some calculation types could still run decently on a GPU even at lower atom counts. But in the benchmark statistics I have available for the different projects at this time, I see no sign of that variation.
The amount of CPU versus GPU resources available in the network at any given time will fluctuate with completion rates, and that will constrain how projects can be distributed and still get done. But looking at my own statistics, I would assume that overall network utilization would benefit from a general split based on atom count as a starting point, and if my logs are any representative measure, that initial split should be at no less than 50K atoms.
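To show how such a split point could be derived from benchmark data rather than picked by eye, here is a small sketch. The CSV layout (project, atoms, cpu_ppd, gpu_ppd) is an invented format for one's own per-project benchmark notes, not a real F@H export:

# Sketch: derive an atom-count split from per-project benchmark data.
# The CSV columns (project, atoms, cpu_ppd, gpu_ppd) are an invented
# format for personal benchmark notes, not a real F@H export.

import csv

def best_threshold(rows):
    """Try each observed atom count as the split and keep the one that
    misclassifies the fewest projects. A project counts as GPU-favoring
    when its GPU points-per-day beats its CPU points-per-day."""
    data = [(int(r["atoms"]), float(r["gpu_ppd"]) > float(r["cpu_ppd"]))
            for r in rows]
    candidates = sorted({atoms for atoms, _ in data})
    def errors(t):
        return sum((atoms >= t) != gpu_wins for atoms, gpu_wins in data)
    return min(candidates, key=errors)

with open("benchmarks.csv") as f:
    print("suggested split:", best_threshold(csv.DictReader(f)))

On my own numbers this kind of check is what points at roughly the 50K mark, but it would obviously need to be run against the backend statistics to say anything about the network as a whole.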