129.74.85.15 Observations - Projects P7000-7028, 10009-10090

GreyWhiskers
Posts: 660
Joined: Mon Oct 25, 2010 5:57 am
Hardware configuration: a) Main unit
Sandybridge in HAF 922 w/ 200 mm side fan
--i7 2600K @ 4.2 GHz
--ASUS P8P67 Deluxe B3
--4 GB ADATA 1600 RAM
--750 W Corsair PS
--2x Seagate Hybrid 750 & 500 GB; WD Caviar Black 1 TB
--EVGA 660GTX-Ti FTW Signature 2 GPU @ 1241 Boost
--MSI GTX 560 Ti @ 900 MHz
--Win7 Home 64; FAH v7.3.2; 327.23 drivers

b) 2004 HP a475c desktop: 1-core Pentium 4 HT @ 3.2 GHz; 2 GB RAM; 160 GB HDD; Zotac GT430 PCI @ 900 MHz
WinXP SP3 32-bit; FAH v7.3.6; 301.42 drivers - GPU slot only

c) 2005 Toshiba M45-S551 laptop: 2 GB RAM; 160 GB HDD; Pentium M 740 CPU @ 1.73 GHz
WinXP SP3 32-bit; FAH v7.3.6 [receiving Core A4 work units]
d) 2011 laptop: 15.6" 1920x1080; i7-2860QM @ 2.5 GHz; IC Diamond thermal compound; GTX 560M 1,536 MB u/c @ 700; 16 GB 1333 MHz RAM; 500 GB hybrid HDD w/ 4 GB SSD; Win7 Home Premium 64; 320.18 drivers; FAH 7.4.2ß
Location: Saratoga, California USA


Post by GreyWhiskers »

[Image: chart of Available and Returned WUs for server 129.74.85.15, with 41-period moving average]

A quick follow-on to my previous post on GPU server stats.

This is the server that handles Projects P7000-7028 and 10009-10090 - the projects managed by Dr. Izaguirre from Notre Dame.

Not so much analysis or speculation here - just that there is an enormous initial pool of WUs being assigned fairly quickly, with several hundred WUs returned every 35 minutes. Unlike in my prior post on GPU WUs, the Available and Returned WU axes use vastly different scales. The "41 period moving average" spans about 24 hours (41 x 35 minutes).
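For anyone curious about that window arithmetic, here is a minimal sketch of why 41 samples taken every 35 minutes cover roughly a day, plus a plain trailing moving average like the one plotted. The WU counts below are made-up placeholder data, not actual server stats.

```python
# Window size and sample interval are from the post; everything else is illustrative.
SAMPLE_MINUTES = 35
WINDOW = 41

# 41 x 35 min = 1435 min, i.e. just under 24 hours.
span_hours = SAMPLE_MINUTES * WINDOW / 60
print(f"window span: {span_hours:.1f} hours")

def moving_average(values, window=WINDOW):
    """Trailing moving average; one output per full window of samples."""
    return [sum(values[i - window:i]) / window
            for i in range(window, len(values) + 1)]

# Placeholder series: 50 samples of "WUs returned per 35-minute period".
returns = [300 + (i % 7) * 10 for i in range(50)]
smoothed = moving_average(returns)
print(len(smoothed), round(smoothed[0], 1))
```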

Again, thanks to PG for keeping things interesting.
izaguirr wrote: Projects 10012-10085 -> accelerate F@H 100 times?
@7im: yes, you've summarized our approach well.

For those of you who are curious, by the end of summer 2011 we hope to have characterized the performance/accuracy tradeoff of this methodology. Things are looking good. We then hope to deploy it for select projects on an OpenMM GPU core by the end of 2011, which means the methodology will at first be restricted to the types of simulations that can run on GPUs.

A second stage would extend the methodology to general simulations, which could then be incorporated into core A4 (or its successor). We plan to do this in 2012.

Even at its most successful, though, different scientific projects require different levels of modeling fidelity and accuracy. So while we hope this methodology will enable projects spanning much longer scientific timescales and eventually much larger systems, don't expect the standard CPU and GPU cores ever to go away.