measuring/monitoring F@H
Moderators: Site Moderators, FAHC Science Team
-
- Posts: 383
- Joined: Sun Jan 18, 2009 1:13 am
measuring/monitoring F@H
Is there a way for me to measure/monitor the amount of data that gets passed core-to-core (individual permutations or total aggregate) for any of the SMP clients?
How often is data passed between the two and how much or how big per transfer?
How often is data passed between the two and how much or how big per transfer?
-
- Site Moderator
- Posts: 6397
- Joined: Sun Dec 02, 2007 10:38 am
- Location: Bordeaux, France
- Contact:
Re: measuring/monitoring F@H
If I remember correctly, it will be displayed by ifconfig ... look at the amount of data transferred over the loopback adapter; that's FAH data.
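In case it helps, a minimal self-contained sketch of where those ifconfig numbers come from: the per-interface byte counters in /proc/net/dev. The sample line below is made up so the parsing can be shown on its own; on a live system, replace the echo with `cat /proc/net/dev`.

```shell
# /proc/net/dev format: iface, then 8 rx fields (bytes first), then 8 tx fields.
# Sample line (hypothetical values); on a real box: cat /proc/net/dev
line='lo: 50341714330 48000 0 0 0 0 0 0 50341714330 48000 0 0 0 0 0 0'
echo "$line" | awk '$1 == "lo:" { print "lo rx_bytes=" $2 " tx_bytes=" $10 }'
# prints: lo rx_bytes=50341714330 tx_bytes=50341714330
```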
-
- Posts: 383
- Joined: Sun Jan 18, 2009 1:13 am
Re: measuring/monitoring F@H
toTOW wrote: If I remember well, it will be displayed by ifconfig ... look at the amount of data transferred over the loopback adapter, that's FAH data.
I am assuming that I can also run netstat on lo for throughput on the interface itself.
And I am also guessing that, in order to get good, accurate data from doing that, I would need to establish a system baseline of the throughput on the loopback interface?
Do all core-to-core transfers go through the loopback?
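One way to sketch that baseline comparison (all counter values below are made-up numbers for illustration, not measurements): sample the loopback rx-byte counter over the same interval once with FAH stopped and once with it running, then take the difference of the two rates.

```shell
# Hypothetical rx-byte counter samples on lo, 60 s apart:
# first pair with FAH stopped (baseline), second pair with FAH running.
baseline_rate=$(( (1000000 - 400000) / 60 ))         # non-FAH loopback traffic
running_rate=$(( (1480600000 - 1000600000) / 60 ))   # FAH plus background
echo "FAH-only rate: $(( running_rate - baseline_rate )) bytes/s"
# prints: FAH-only rate: 7990000 bytes/s
```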
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: measuring/monitoring F@H
Not necessarily, but it's a good idea for comparison.
For the SMP client, and fahcore to fahcore, yes.
It is a constant flow of data, passing GBs of data...
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
-
- Posts: 383
- Joined: Sun Jan 18, 2009 1:13 am
Re: measuring/monitoring F@H
Hmm... I still have to try/test that to see what we get. I don't suppose anybody has any idea (in terms of order of magnitude) how much data gets passed through?
(I'm prepping early estimates of what would happen if I were to run the SMP client distributed. While I agree that with GbE it's probably not really worth it, I'm not sure what type of data-transfer pattern I should be expecting: short bursts of a lot of data, a constant stream that keeps the network only partially loaded, or a constant stream where the system interconnect (a.k.a. the network) will be the bottleneck.)
I haven't been able to find any data on that yet, so it would be interesting to see what happens.
Re: measuring/monitoring F@H
alpha754293 wrote: (I'm prepping for the early estimates in terms of what would happen if I was to run the SMP client as distributed. While I agree that with GbE, it's probably not really worth it (although I'm not sure what time of data transfer pattern I should be expecting -- whether it's going to be short bursts of a lot of data, a constant stream (keeping the network only partially loaded), or a constant stream where the system interconnect (a.k.a. network) will be the bottleneck).
I haven't been able to find any data on that yet, so it would be interesting to see what happens.
Nobody has mentioned the pattern of the data stream, but from what I know about the mathematics, I think that you'll find the data is interchanged after every step. If your project has, say, 500,000 steps, then the data will be interchanged 500,000 times, so you can probably call it a constant stream.
You should be aware that much of the data is transferred core to core, so running it over a network is probably not possible unless you know something that nobody else has figured out. Moreover, the network data transfers will probably delay the result far longer than you can gain by off-loading some processing (depending on your network speed, of course). FAH-SMP is not designed to be run distributed.
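To put a rough number on why per-exchange latency matters: assuming, purely hypothetically, ~50 GB interchanged over 500,000 steps as in the example above, each step moves only a small chunk, so a real network pays its round-trip latency half a million times.

```shell
# Hypothetical figures: 50 GB total over 500,000 steps means many small,
# frequent exchanges, so per-message latency (not bandwidth) dominates.
awk 'BEGIN { printf "%.0f KB per step\n", 50e9 / 500000 / 1e3 }'
# prints: 100 KB per step
```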
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 383
- Joined: Sun Jan 18, 2009 1:13 am
Re: measuring/monitoring F@H
Well, yes, I realize that.
But this might lay (perhaps) a bit of the groundwork so that, if more deadlineless units become available, people who have multiple single-processor or dual-core systems can link them together to work on one SMP unit.
I would imagine that doing that would be faster than trying to run an SMP client on a dual-core system (where each core needs to do twice the work it physically can).
If you have 5 minutes per frame, though, you might only see data being transferred every 5 minutes plus the transfer time.
I would imagine it ought to be pretty simple testing that people should be able to run.
I think I found a CentOS 5 (I forget the exact version) x86_64 live CD, so that I could try clustering it. I know that the official live CD is only x86 (not x86_64).
Worst-case scenario, I install CentOS on two of my systems, set up the cluster, and then see if I can run F@H in distributed parallel and report back the results.
(The two systems that are scheduled to "receive" it are both quad-core machines in and of themselves, but that might allow me to run the max allowable with the latest Linux SMP client.)
*edit*
What I also find rather interesting is that a lot of people say "no, you can't really do that; the network would be too slow, too much of a hindrance, or there'd be too much data to pass between the cores and it'll bottleneck the network/system interconnect." And yet there are no numbers behind those claims. Very interesting.
How slow is it going to be? Is it just speculative or is that real data?
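As a sanity check on the 5-minutes-per-frame idea above (the 100 MB per frame figure is a guess for illustration, not a measured value): if data only moved once per frame, the burst itself would be short relative to the frame time.

```shell
# Hypothetical: a 100 MB burst once per 5-minute frame, at a GbE payload
# rate of ~119.2 MiB/s, finishes in well under a second per frame.
awk 'BEGIN { printf "%.2f s per burst\n", 100e6 / (119.2 * 1048576) }'
# prints: 0.80 s per burst
```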
Re: measuring/monitoring F@H
alpha754293 wrote: How slow is it going to be? Is it just speculative or is that real data?
When the SMP client came out, I did some calculations. If you get it to work, you'll be the first to confirm my math, though.
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 383
- Joined: Sun Jan 18, 2009 1:13 am
Re: measuring/monitoring F@H
I could try. Worst thing that can happen is that it doesn't work. But if it does, we might be able to find out how well (or how poorly) it does or doesn't work.
But it's the whole "you won't know until you try". I don't think that any harm can come out of trying.
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: measuring/monitoring F@H
I have seen the numbers in several posts. Paranoid people, like yourself, have monitored the loopback network traffic and mistakenly thought the client was stealing data and streaming it out of the computer back to Stanford. Neat (and impossible) trick for Stanford to 3rd Man hack a non-routable address like 127.x.x.x that is the loopback address on everyone's Windows computer.
The amount of data is in the GBs, and the stream is near enough to constant to be called that. If you won't take the word of an Admin and a former Mod who both read every post on this forum, then go find those posts for yourself instead of hinting at another conspiracy to hide the data. As I mentioned to you before, a bit of skepticism is healthy, but eventually you either have to accept what we say and stop questioning us about it, or stop questioning us about it and go test it for yourself. Note the commonality...
Download a copy of EtherPeek and get to it.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
-
- Posts: 383
- Joined: Sun Jan 18, 2009 1:13 am
Re: measuring/monitoring F@H
7im wrote: I have seen the numbers in several posts. Paranoid people, like yourself, have monitored the loopback network traffic and mistakenly thought the client was stealing data and streaming it out of the computer back to Stanford. Neat (and impossible) trick for Stanford to 3rd Man hack a non-routable address like 127.x.x.x that is the loopback address on everyone's Windows computer.
The amount of data is in the GBs, and the stream is near enough to constant to be called that. If you won't take the word of an Admin and a former Mod who both read every post on this forum, then go find those posts for yourself instead of hinting at another conspiracy to hide the data. As I mentioned to you before, a bit of skepticism is healthy, but eventually you either have to accept what we say and stop questioning us about it, or stop questioning us about it and go test it for yourself. Note the commonality...
Download a copy of etherpeek and get to it.
Very interesting. Yet another subjective rather than objective defense.
Call it skepticism or paranoia (I don't really care which); I just prefer to look at the actual data and let the data speak for itself.
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: measuring/monitoring F@H
Like I said, either take our word for it, or go find the posts I mentioned, or go do the tests yourself as toTOW suggested and I repeated. Not accepting our answers, ignoring our points in the right direction, and blathering on about the rest isn't getting you any closer to a solution. GO FIND THAT HARD DATA if that's what you want, but you won't find it by posting here again and again.
Any number of freely downloadable network monitoring tools will show you the data stream between the fahcores of the SMP client. Get to work.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
-
- Posts: 383
- Joined: Sun Jan 18, 2009 1:13 am
Re: measuring/monitoring F@H
My system finally picked up a WU from the assignment servers, and I'm doing the measurement now.
I did an earlier trial run and wow, it jumps up fast. So far, on the current WU (and I presume there will be some variation between WUs), the first frame has already transferred 6 GiB of data.
I've set the system to process only the one WU so that I can establish a final rate.
Current rate is about 11.9 MiB/s (based on the loopback interface traffic only).
Standard GbE is about 119.2 MiB/s, so based on that, running F@H distributed actually ought to work (minus whatever the transmission latencies are).
I'll have to look more into the transmission latencies to find out what effect they will have on the final, actual transmission speed. But it's actually a LOT less data than I would have suspected. I was expecting more along the lines of about 16 Gbps (about 2 GB/s), running at near full memory bandwidth. It doesn't seem to be that way based on the current data I'm getting.
@7im
Care to dispute that?
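A quick utilization check with the two rates above (both taken from this post; the GbE payload figure is the usual approximation, not a measurement):

```shell
observed=11.9   # MiB/s, measured on the loopback above
gbe=119.2       # MiB/s, approximate GbE payload capacity
awk -v o="$observed" -v g="$gbe" 'BEGIN { printf "GbE utilization: %.0f%%\n", 100 * o / g }'
# prints: GbE utilization: 10%
```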
-
- Posts: 383
- Joined: Sun Jan 18, 2009 1:13 am
Re: measuring/monitoring F@H
Update:
Current statistics on loopback interface after 1 h 39 m 8 s:
50341714330 bytes
Average rate: 8463637.244 bytes/s (~8.07 MiB/s)
I still haven't been able to find the latency numbers, BUT I'm currently thinking the latency has to be low enough to support the actual GbE transfer rate (however the math works out).
Latency is the only thing I can think of that would prevent a distributed parallel deployment of F@H SMP, because clearly even a GbE network interface ought to have sufficient bandwidth to carry the data.
Currently testing with Project: 2671 (Run 56, Clone 39, Gen 0) GROMACS CVS on SMP 4 threads.
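Re-deriving the quoted average from the totals above, as a check on the arithmetic:

```shell
# 1 h 39 m 8 s = 5948 s; 50341714330 bytes moved over that interval.
bytes=50341714330
secs=$(( 1 * 3600 + 39 * 60 + 8 ))
awk -v b="$bytes" -v s="$secs" \
  'BEGIN { r = b / s; printf "%.3f bytes/s (%.2f MiB/s)\n", r, r / 1048576 }'
# prints: 8463637.244 bytes/s (8.07 MiB/s)
```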
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: measuring/monitoring F@H
Looks about right. Many GBs of data per WU; I didn't recall the actual per-second speed, and wasn't going to go looking either.
On what kind of processor were you getting those speeds?
Waiting for latencies to be added in...
Good work so far.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.