What exactly is a core? Are they OpenMM's low-level API?
Posted: Sun Jan 22, 2012 5:17 am
Fahwiki.net defines a core as "the program that performs the calculations [on the Work Unit]. The same core is used by various versions of the client and is automatically updated whenever necessary. This provides an easy way for the scientific calculations to be improved without requiring you to install a new version of the client." This is confirmed by Paper #62. That definition is fine for most people, but I'd like to know more; I've yet to run across a precise definition of them.

I was reading Paper #82 ("OpenMM: A Hardware Abstraction Layer for Molecular Simulations" by Peter Eastman and Vijay Pande), and it describes three layers of API and how they work together. As I understand it, the "OpenMM Public API (Public Interface)" provides functions for general biochemical calculations without forcing application developers to design their algorithms around particular hardware. Below that, the "OpenMM low-level API (Platform abstraction layer)" acts as a plug-in architecture: "Each implementation (or platform) is distributed as a dynamic library and installed simply by placing it in a particular directory. At runtime, all libraries in that directory are loaded and made available to the program." That sounds like a core to me. The paper then says, "OpenMM implements the public API through calls to a lower-level API that serves as an interface between the platform-independent problem description and the platform-dependent computational kernels." That last part most likely refers to the lowest layer: OpenCL, CUDA, MPI, and so on. It also turns out that a program using the Public API can ask for the particular platform (and thus kernels) it wants, or let the library choose one automatically; again, suggestive of cores. But the paper never seems to explicitly label any of these layers a "core".
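For concreteness, here's roughly what a program built on the public API looks like, as far as I can tell from the OpenMM documentation (a minimal sketch; the two-particle system and the toy parameters are just my illustration, not anything from the paper):

```cpp
// Minimal sketch of an application built on the OpenMM public API.
// The two-particle "system" is a toy example, not a real simulation.
#include "OpenMM.h"
#include <vector>

int main() {
    using namespace OpenMM;

    // Load every platform plug-in (Reference, CUDA, OpenCL, ...) found in
    // the default plug-in directory: the runtime loading the paper describes.
    Platform::loadPluginsFromDirectory(Platform::getDefaultPluginsDirectory());

    // Describe the problem in hardware-independent terms:
    // two particles connected by a harmonic bond.
    System system;
    system.addParticle(1.0);   // mass in amu
    system.addParticle(1.0);
    HarmonicBondForce* bond = new HarmonicBondForce();
    bond->addBond(0, 1, 0.15, 100000.0);  // length (nm), stiffness (kJ/mol/nm^2)
    system.addForce(bond);                // the System takes ownership

    VerletIntegrator integrator(0.002);   // 2 fs time step

    // Either name a platform explicitly...
    Platform& platform = Platform::getPlatformByName("Reference");
    Context context(system, integrator, platform);
    // ...or construct the Context without a platform and let the library
    // choose the fastest available:  Context context(system, integrator);

    std::vector<Vec3> positions(2);
    positions[0] = Vec3(0.0, 0.0, 0.0);
    positions[1] = Vec3(0.15, 0.0, 0.0);
    context.setPositions(positions);

    integrator.step(100);  // platform-specific kernels do the actual work
    return 0;
}
```

Nothing in that code mentions CUDA or OpenCL directly; the computational kernels live entirely behind the platform plug-ins, which is exactly what made me wonder whether a core is just one of those platform libraries.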
The main problem I see with cores being OpenMM's low-level API is that F@h cores come from a variety of sources: GROMACS, AMBER, TINKER, CPMD, SHARPEN, ProtoMol, BrookGPU, and Desmond have all been used. Desmond, for example, was developed by D. E. Shaw Research, the group that runs the Anton molecular dynamics supercomputer, the second-fastest protein-folding computing system. Why would they adopt OpenMM instead of doing their own thing? What reason do I have to believe that all of these different groups would build on F@h's OpenMM architecture? From the paper, OpenMM seems really impressive, but that doesn't necessarily mean everyone will jump on it. (As an analogy: we're all still using the QWERTY keyboard.)
So, what exactly is a core, and is it the low-level API that the paper talks about?