bruce wrote: I think you missed my point about O(N) vs O(N^2). With CFD, you can create a mesh so that the forces on any node depend only on the nearby nodes associated with the mesh elements. With MD that's not true. Every atom is a unique body, and the forces on it depend on the distance/charge/bond-type/etc. from every other atom in the protein. You have to add up N forces on each atom. There is no "mesh" in the CFD sense.
In macroscopic terms, what are the forces on the earth from the moon, the sun, mars, venus, jupiter, etc. and how do they influence the motion of earth in free space? Maybe you need to include the solar wind, too, since we're not in a pure vacuum, but only then would you consider a CFD type mesh or some other method of describing the solvent.
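The quadratic cost bruce describes is easy to see in code. Here's a minimal toy sketch (not GROMACS code, and the 1/r^2 repulsion is just an illustrative stand-in for real force fields) showing that an all-pairs force loop touches N*(N-1)/2 pairs:

```python
# Toy all-pairs force loop: every atom accumulates a contribution from
# every other atom, so the work grows as N*(N-1)/2 pairs, i.e. O(N^2).
# Illustrative only -- a made-up inverse-square repulsion, not a real MD force field.
import math

def pairwise_forces(positions):
    """Return per-atom force vectors and the number of pairs evaluated."""
    n = len(positions)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    pair_count = 0
    for i in range(n):
        for j in range(i + 1, n):          # visit each unique pair once
            dx = [positions[i][k] - positions[j][k] for k in range(3)]
            r2 = sum(d * d for d in dx)
            r = math.sqrt(r2)
            mag = 1.0 / r2                 # toy force magnitude ~ 1/r^2
            for k in range(3):
                f = mag * dx[k] / r        # unit vector times magnitude
                forces[i][k] += f          # Newton's third law: equal and
                forces[j][k] -= f          # opposite force on the partner
            pair_count += 1
    return forces, pair_count

# 10 atoms spread along a line: N*(N-1)/2 = 45 pairs get evaluated.
positions = [[float(i), 0.0, 0.0] for i in range(10)]
forces, pairs = pairwise_forces(positions)
print(pairs)  # 45
```

Double the atom count and the pair count roughly quadruples, which is exactly the scaling problem a CFD-style local mesh avoids.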
I haven't gotten that far or that deep into how GROMACS actually treats the grid, whether it's a grid in the traditional sense or just some kind of discretized space (structured or not).
I would presume there is some kind of discretization, because otherwise I don't think parallelization would have been possible. My guess (and this really is a guess) is that they're able to decompose the protein they're working on into multiple subsections; data is passed between partitions, but otherwise the calculations performed within a partition more or less stay within it.
I don't know.
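To make the guess above concrete: one common way to parallelize MD is spatial (domain) decomposition, where the box is split into subdomains and only the thin "halo" of atoms near a subdomain boundary needs to be communicated to neighbors. This is a toy 1D sketch of that idea under my own assumptions (slab widths, cutoff, and function names are all made up for illustration), not a description of what GROMACS actually does:

```python
# Sketch of spatial (domain) decomposition: split the box into slabs along x,
# keep each slab's atoms local, and flag only the atoms within one cutoff
# of a slab edge -- those are the ones a neighboring slab would need.
# All constants and names here are illustrative, not from GROMACS.

CUTOFF = 1.0
BOX_X = 8.0
N_DOMAINS = 4

def decompose(xs):
    """Assign each atom (by x coordinate) to a slab; return per-slab index lists."""
    width = BOX_X / N_DOMAINS
    slabs = [[] for _ in range(N_DOMAINS)]
    for i, x in enumerate(xs):
        d = min(int(x / width), N_DOMAINS - 1)
        slabs[d].append(i)
    return slabs, width

def halo_atoms(xs, slabs, width):
    """Atoms within CUTOFF of a slab edge must be exchanged with the neighbor slab."""
    halo = set()
    for d, members in enumerate(slabs):
        lo, hi = d * width, (d + 1) * width
        for i in members:
            if xs[i] - lo < CUTOFF or hi - xs[i] < CUTOFF:
                halo.add(i)
    return halo

xs = [0.5, 1.9, 2.1, 3.0, 5.5, 7.8]
slabs, width = decompose(xs)
halo = halo_atoms(xs, slabs, width)
print(slabs)          # [[0, 1], [2, 3], [4], [5]]
print(sorted(halo))   # [0, 1, 2, 4, 5] -- only boundary atoms need communicating
```

The interior atoms (like atom 3 here) never leave their partition, which matches the intuition that "calculations performed within the partitions more or less stay within."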
I also think that hand-tuned ANYTHING (regardless of language) will always be faster than non-hand-tuned code. With that said, whether the tests were done with hand-tuned FORTRAN or not, I don't know. But consider that all the other GROMACS platforms use FORTRAN; in some regards it strikes me as odd that they would do that.
I wouldn't argue with anybody who says that x86/x64 (Linux or otherwise, but mostly Linux) constitutes the great majority of users. But considering that there's documentation about running GROMACS on BlueGene and on Alpha, and both of those use FORTRAN for the innermost loops, I am in some ways surprised that they would use assembly for x86/x64.
I just don't see why they would want to keep two versions of the program. For speed, and to cater to the bulk of commodity-hardware users, sure, OK. But I wouldn't be surprised if they end up phasing out some of the other platforms, given the limited resources for maintaining both a FORTRAN core and an assembly core. *shrug* I don't know.
I think that if it were up to me, I'd keep it either all Fortran or all assembly, rather than a mix. Especially since Fortran is available for all platforms, that would probably win my vote. But again, I've never looked deeply into the GROMACS code and core, and I'm certainly not a molecular biologist or programmer by any stretch of the imagination.
Very interesting...
Here's something that I just found:
"Instead, GROMACS computes forces upon each atom from other atoms within a certain cut-off radius. Long-range forces are calculated using particle-mesh methods."
Source:
http://www.cs.unc.edu/~olivier/pdsec07.pdf
So it looks like there is some sort of spatial discretization after all.
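The cut-off trick in that quote is what gets the short-range part of the force calculation away from O(N^2): bin atoms into cells at least one cutoff wide, and each atom only has to test atoms in its own and adjacent cells. Here's a toy 1D sketch of that idea (my own illustrative code, not GROMACS's actual neighbor search, and the long-range particle-mesh part is not shown):

```python
# Toy 1D cell list: bin atoms into cells of side >= CUTOFF, then find all
# pairs within the cutoff by checking only each cell and its right neighbor.
# This bounds the work per atom, bringing short-range forces toward O(N).
# Illustrative sketch only -- not the GROMACS neighbor-search implementation.

CUTOFF = 1.0

def neighbors_within_cutoff(xs):
    """Return index pairs (i, j) with |xs[i] - xs[j]| < CUTOFF via a cell list."""
    cells = {}
    for i, x in enumerate(xs):
        cells.setdefault(int(x // CUTOFF), []).append(i)
    pairs = []
    for c, members in cells.items():
        for nc in (c, c + 1):              # own cell plus one neighbor cell
            for i in members:
                for j in cells.get(nc, []):
                    # avoid double-counting within the same cell (j > i)
                    if (nc > c or j > i) and abs(xs[i] - xs[j]) < CUTOFF:
                        pairs.append((i, j))
    return pairs

xs = [0.1, 0.4, 2.0, 2.3, 9.0]
print(sorted(neighbors_within_cutoff(xs)))  # [(0, 1), (2, 3)]
```

The isolated atom at x = 9.0 never gets compared against the others at all, which is exactly the saving a naive all-pairs loop can't get.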