True MPI version?
I realize this is a long shot, but figured I'd ask.
Since it seems that at least some versions of the client or cores use MPI for their multi-core parallelism, and since there seems to be an emphasis on returning individual WUs faster these days, is there any possibility of a true MPI client? That is, one that would use MPI to spread a work unit across several machines on a network to run faster? Maybe on GigE this would be too inefficient, but better networks are not quite as rare as they used to be.
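(For context, here is a minimal sketch, not Folding@home or GROMACS code, of the programming model the question refers to: an MPI program is written once, and the launcher decides whether its ranks share one box or span several machines. The hostfile name and mpirun flags in the comments are assumptions and differ between MPI implementations.)
Code: Select all
/* Illustrative only -- not Folding@home code. A minimal MPI program in C
 * showing the model the question describes: one job split into ranks that
 * the MPI launcher can spread across several machines. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char node_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start the MPI runtime  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks  */
    MPI_Get_processor_name(node_name, &name_len);

    /* With e.g. "mpirun -np 8 --hostfile hosts ./a.out" the ranks land on
     * whichever machines the hostfile lists; the code itself is the same
     * whether the ranks share one box or span a network. */
    printf("rank %d of %d running on %s\n", rank, size, node_name);

    MPI_Finalize();
    return 0;
}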
- Pande Group Member
- Posts: 2058
- Joined: Fri Nov 30, 2007 6:25 am
- Location: Stanford
Re: True MPI version?
The original SMP core was true MPI, but as you mention above, the latency of GigE networks is too high to allow for efficient use of it, so we limited it to multi-core/multi-proc within a box.
Prof. Vijay Pande, PhD
Departments of Chemistry, Structural Biology, and Computer Science
Chair, Biophysics
Director, Folding@home Distributed Computing Project
Stanford University
Re: True MPI version?
Totally understandable! Speaking hypothetically, how does GROMACS fare with InfiniBand latency?
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
Re: True MPI version?
The original question has been asked and answered several times (forum search is your friend). These types of clusters are too few and far between to be worth the development effort, and most systems of any size are built on older technology, in the processors, the interconnects, or both.
However, their pat answer has always been: if you have a 100-node system of modern processors with modern interconnects, give us a call.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Re: True MPI version?
7im, you caught me; I only read the first 5 pages (of 32+) of hits returned by a search for MPI. Sorry to clutter up the board.
Anyway, my cluster is only 18 nodes, but it is connected by DDR IB, so I was curious about the theoretical performance. I couldn't find much recent literature on the performance of GROMACS over IB networks, and, of course, I have only a vague notion of the operational parameters of the bigadv and bigbeta cores. It seems the big national computing centers wouldn't sully themselves with anything as pedestrian as IB these days anyway.
Re: True MPI version?
DDR IB can work reasonably well, depending of course on the # of cores per node.
A good recent benchmark report on truly high-performance interconnects can be found here: www.cse.scitech.ac.uk/cbg/benchmarks/Report_II.pdf
As mentioned above, large InfiniBand clusters are currently rare enough that we don't do custom setups for them. To drive a custom setup, we'd probably need a bunch of people with 18-node-sized clusters, or someone with hundreds of nodes / thousands of cores. Big clusters are great, but customizing the setup takes a chunk of (scarce) developer resources.
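(For readers curious how the interconnect latency discussed above is usually measured, here is a textbook MPI ping-pong sketch, assuming two ranks placed on different nodes. It is illustrative only and not taken from GROMACS or the report linked above; per-step latency matters to MD codes because neighboring domains exchange data every timestep.)
Code: Select all
/* Illustrative only: a textbook MPI ping-pong between rank 0 and rank 1,
 * the usual way to measure point-to-point interconnect latency.
 * Run with two ranks placed on different nodes, e.g.
 * "mpirun -np 2 --hostfile two_nodes ./pingpong" (launcher flags vary). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int reps = 10000;
    char byte = 0;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);        /* start both ranks together */
    double t0 = MPI_Wtime();

    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)   /* each loop iteration is one round trip = 2 one-way hops */
        printf("one-way latency ~ %.2f us\n", (t1 - t0) / (2.0 * reps) * 1e6);

    MPI_Finalize();
    return 0;
}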
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
Re: True MPI version?
"MPI" is probably too generic for a good search. "cluster" may get better results.
Also, gromacs.org has a good listserv archive, for example... http://lists.gromacs.org/pipermail/gmx- ... 37005.html
Edit, eh, Kasson posted a good link there! Thanks...
Last edited by 7im on Wed Jun 08, 2011 10:42 pm, edited 1 time in total.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Re: True MPI version?
To be clear... as of Dr. Pande's first response, I am not "requesting" an MPI version; I am now just curious about how one would perform. Obviously there are too few MPI clusters looking to contribute to make a true custom version worthwhile.
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
Re: True MPI version?
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.