measuring/monitoring F@H

Re: measuring/monitoring F@H

Post by alpha754293 »

There are some points about it that I agree with, and yet at the same time disagree with.

IF (BIG IF) all the data does indeed go through the loopback (and I don't know for sure, because like I said, I still haven't made any headway with SystemTap or DTrace, and I'm not even sure there IS a DTrace port for Linux yet) -- but IF all the data goes through the loopback, then the data I have so far shows it trickling through at only 8 MB/s.
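
For what it's worth, you don't necessarily need SystemTap or DTrace just to get a rough number for total loopback traffic on Linux. Here's a minimal sketch (my own quick hack, nothing to do with the F@H client itself) that samples the kernel's byte counters for the lo interface; it assumes those counters live under /sys/class/net/lo/statistics/, which they do on any reasonably modern kernel:

[code]
#!/usr/bin/env python
# Rough loopback-throughput sampler -- a crude stand-in for SystemTap/DTrace.
# Assumes a Linux box where the loopback interface is named "lo" and exposes
# byte counters under /sys/class/net/lo/statistics/.
import time

def read_counter(name):
    # name is "rx_bytes" or "tx_bytes"
    with open("/sys/class/net/lo/statistics/" + name) as f:
        return int(f.read())

INTERVAL = 5  # seconds between samples
prev_rx, prev_tx = read_counter("rx_bytes"), read_counter("tx_bytes")
while True:
    time.sleep(INTERVAL)
    rx, tx = read_counter("rx_bytes"), read_counter("tx_bytes")
    print("loopback: rx %.1f MB/s, tx %.1f MB/s" % (
        (rx - prev_rx) / float(INTERVAL) / 1e6,
        (tx - prev_tx) / float(INTERVAL) / 1e6))
    prev_rx, prev_tx = rx, tx
[/code]

The catch is that it only tells you total loopback traffic, not which process it belongs to, so if that 8 MB/s figure has other stuff mixed in, you'd still need SystemTap or the like to pin it on the SMP client specifically.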

On a GbE switch, there should be no problem running distributed parallel if that's the case.

I'm not a programmer (somewhat sadly), so I can't do any code development of my own. :(

Clusters used to be really expensive and complex to build, but with how prevalent they've become (somewhat courtesy of the HPC world), there's been more of a push toward PS3 clusters and Beowulf clusters using GbE as the system interconnect. On the Top500 list, over half of the systems use GbE as the interconnect (56.4% actually, from the 11/08 list), so you kinda gotta figure that if it's good enough for the Top500, why can't it be good enough for us? That, to me, is something that just doesn't make sense.

So while a lot of the points that were brought up USED to be valid (with regard to using GbE as the interconnect), the very fact that more than half of the Top500 uses it would tend to suggest that perhaps it is worth looking at.

If it works, great. If it doesn't, some may think it was a complete and utter waste of time, but I would tend to see it as having gained a LOT of useful information about it.

The other advantage is that cluster Linux distributions are somewhat readily available, although the easiest to use BY FAR was still clusterKNOPPIX, which, sadly, is now a defunct project along with openMOSIX. :( And it would enable people who have multiple single- and dual-core systems to string them together and run the SMP client.

(Or at least that's the idea, anyway.)

On somewhat of a personal side note, I've just revisited the idea of converting my entire network from GbE to InfiniBand. Originally I was just going to go with 4x DDR IB, BUT in pricing the hardware, I've found that the cost of 4x DDR and 4x QDR is about the same, so I think 4x QDR is what I'm going to be aiming for instead. So the plan now is a 40 Gbps interconnect (I believe the 36-port switch has a total aggregate bandwidth of 2.88 Tbps or something like that), of which 32 Gbps will be data per the IB standard/spec. That should give me about 4 GB/s, which is actually slightly faster than DDR-400.
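
Just to show where the 32 Gbps / 4 GB/s figures come from, here's my own back-of-the-envelope math, assuming the standard 8b/10b encoding that SDR/DDR/QDR IB links use and DDR-400's theoretical peak of 3.2 GB/s:

[code]
# Back-of-the-envelope check on the 4x QDR numbers -- my own arithmetic,
# assuming 8b/10b encoding on the IB link and a 3.2 GB/s theoretical peak
# for DDR-400 (400 MT/s x 8 bytes per transfer).
signal_gbps = 4 * 10                  # 4x QDR: 4 lanes at 10 Gbps each = 40 Gbps
data_gbps = signal_gbps * 8.0 / 10    # 8b/10b encoding leaves 32 Gbps of data
ib_gbytes = data_gbps / 8             # 32 Gbps / 8 bits per byte = 4 GB/s
ddr400_gbytes = 400e6 * 8 / 1e9       # DDR-400 peak: 3.2 GB/s

print("4x QDR IB: ~%.1f GB/s, DDR-400: ~%.1f GB/s" % (ib_gbytes, ddr400_gbytes))
[/code]

The 2.88 Tbps figure for the 36-port switch also lines up if it's counted bidirectionally: 36 ports x 40 Gbps x 2 directions = 2.88 Tbps.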

Also keep in mind that this system isn't going to be a dedicated F@H box, but when it's not doing other stuff for me, it will effectively be one as a fringe benefit/perk.

(Some of the data files that I am currently working with are 350 MB apiece, for a simulation that I believe ran for approximately 5 days straight, representing I think almost 20 hours, just to give you guys an idea as to WHY I would want something like IB.)