What 's up with the bigadv server(s) ?
-
- Posts: 270
- Joined: Sun Dec 02, 2007 2:26 pm
- Hardware configuration: Folders: Intel C2D E6550 @ 3.150 GHz + GPU XFX 9800GTX+ @ 765 MHZ w. WinXP-GPU
AMD A2X64 3800+ @ stock + GPU XFX 9800GTX+ @ 775 MHZ w. WinXP-GPU
Main rig: an old Athlon Barton 2500+ @2.25 GHz & 2* 512 MB RAM Apacer, Radeon 9800Pro, WinXP SP3+
- Location: Belgium, near the International Sea-Port of Antwerp
What 's up with the bigadv server(s) ?
cfr. "no work to do" shit for at least two hours ...
report from a friend (who needed to hit the road) / so, I report it for him.
I see a few servers with 90+% CPU load !
- stopped Linux SMP w. HT on i7-860@3.5 GHz
....................................
Folded since 10-06-04 till 09-2010
Re: What 's up with the bigadv server(s) ?
The CPU load is not relevant in this case (and those are not the bigadv servers anyway). There is currently less bigadv work available than there is demand for it. If you are having difficulty obtaining work, we would suggest switching to standard -smp for a while. Additional bigadv projects are currently in testing.
-
- Posts: 270
- Joined: Sun Dec 02, 2007 2:26 pm
- Hardware configuration: Folders: Intel C2D E6550 @ 3.150 GHz + GPU XFX 9800GTX+ @ 765 MHZ w. WinXP-GPU
AMD A2X64 3800+ @ stock + GPU XFX 9800GTX+ @ 775 MHZ w. WinXP-GPU
Main rig: an old Athlon Barton 2500+ @2.25 GHz & 2* 512 MB RAM Apacer, Radeon 9800Pro, WinXP SP3+
- Location: Belgium, near the International Sea-Port of Antwerp
Re: What 's up with the bigadv server(s) ?
Thanks for that swift reply; I've forwarded the message to him ...
- stopped Linux SMP w. HT on i7-860@3.5 GHz
....................................
Folded since 10-06-04 till 09-2010
Re: What 's up with the bigadv server(s) ?
It seems to me as though the simple law of supply and demand doesn't apply here, and neither do the people who spend many thousands of dollars to support the cause. It is shortsightedness and ineffective planning on Stanford's part, plain and simple.
-
- Site Moderator
- Posts: 6986
- Joined: Wed Dec 23, 2009 9:33 am
- Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB
Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
- Location: Land Of The Long White Cloud
- Contact:
Re: What 's up with the bigadv server(s) ?
Welcome to the F@H Forum, CSM580,
It seems that you aren't aware of the "full picture" so here are some bits that you may have missed:
1) Server Software being upgraded (Post)
2) Additional bigadv Projects entering Beta Testing soon (Thread)
I am sure that once you have the bigger picture, you can understand that this is a vital step to ensure that bigadv continues without any hiccups.
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time
Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
-
- Posts: 460
- Joined: Sun Dec 02, 2007 10:15 pm
- Location: Michigan
Re: What 's up with the bigadv server(s) ?
...or people with less than ideal hardware taking so long to finish a WU that someone has to wait a bit for the next one.
Proud to crash my machines as a Beta Tester!
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760W Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: What 's up with the bigadv server(s) ?
...Or so many people with less than ideal hardware taking so many WUs that someone has to wait a bit for the next one.
With such a high bonus in -bigadv, a lot more people are pulling the trigger on 2600s instead of the cheaper i7s.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
-
- Posts: 270
- Joined: Sun Dec 02, 2007 2:26 pm
- Hardware configuration: Folders: Intel C2D E6550 @ 3.150 GHz + GPU XFX 9800GTX+ @ 765 MHZ w. WinXP-GPU
AMD A2X64 3800+ @ stock + GPU XFX 9800GTX+ @ 775 MHZ w. WinXP-GPU
Main rig: an old Athlon Barton 2500+ @2.25 GHz & 2* 512 MB RAM Apacer, Radeon 9800Pro, WinXP SP3+
- Location: Belgium, near the International Sea-Port of Antwerp
Re: What 's up with the bigadv server(s) ?
Systems (even farms) running idle isn't a plus for the Project, and not everyone has the time or inclination to alter switches in all of their clients to keep their hardware working.
With today's energy prices, this should be taken into account.
The Project should be thinking 'green' and try to avoid unnecessary idling of anyone's hardware.
Since the switches are there to be used, many of those whose hardware is able to run 'bigadv' units choose to do so, and they expect to have work for their expensive setups.
- stopped Linux SMP w. HT on i7-860@3.5 GHz
....................................
Folded since 10-06-04 till 09-2010
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760W Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: What 's up with the bigadv server(s) ?
You know the "green" argument is very ironic, right? Which is greener? 10 Systems running at 100%, or 10 systems idling at 10%?
That ironic argument aside, I do agree with you. -bigadv should fall back to regular -smp automatically. No one should have to change any switches. But until that happens... there is a shared responsibility when one chooses to run high end hardware and software. Otherwise you should stop running SMP and run CPU work units.
The expense of the hardware statement is also ironic. One does expect Stanford to try to keep ALL systems churning WUs, and Stanford has good incentive to do so. No one wants to see any systems sitting idle. But if I had expensive hardware, I would also be doing my best to keep it churning WUs. And that means if I have to change switches, then I do that myself while waiting for Stanford to fix it. Again, it's a shared responsibility; otherwise you're the one letting those machines go to waste by not switching to -smp manually.
Remember, -bigadv is a trial program, not unlike software called beta. There will be some bumps in the road during the trial. And like the standard Stanford beta warning says, "DO NOT run a beta client if you or your machines cannot tolerate even the slightest instability or problems. Beta clients' and servers' performance may vary significantly from standard FAH clients during the development process, including but not limited to work unit shortages, server downtime for upgrades, short notice of client upgrades, and Points Per Day that differs a little or a lot from the developmental benchmark level."
I'm on your side, I'm just not sure that anything can be done about it right now. Stanford is in the middle of software updates all over the place.
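As a purely illustrative aside, the automatic bigadv-to-SMP fallback suggested above could look something like the sketch below. The function and queue names are invented for illustration only; this is not the actual F@H assignment-server code.

Code:

# Hypothetical sketch of a bigadv -> standard SMP fallback (invented names).
def assign_work(wants_bigadv, bigadv_queue, smp_queue):
    # Hand out a bigadv WU if one is available, otherwise fall back to standard SMP.
    if wants_bigadv and bigadv_queue:
        return bigadv_queue.pop(0)   # preferred: a bigadv work unit
    if smp_queue:
        return smp_queue.pop(0)      # fallback: a regular -smp work unit
    return None                      # nothing to assign right now

# Example: the bigadv pool is empty, so the client still gets work.
print(assign_work(True, [], ["standard_smp_wu"]))   # -> 'standard_smp_wu'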
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Re: What 's up with the bigadv server(s) ?
As with any project, be it constructing a new building or rolling out a new type of bigadv project (-smp 12+), there can be some inconveniences during development.
I'm confident that things will be up and running again quickly; failing that, users can adjust their clients to make sure they're folding to their 'potential'. After all, -bigadv is an 'advanced' flag, so users may have to babysit their machines from time to time. A couple of days of folding regular SMP work units can't be too harmful, can it? Especially if everyone else is doing regular -smp work units as well.
I know that any downtime affects PG just as much as it affects folders who have invested money in hardware to fold - after all, any downtime delays their research!
In the meantime, I think it's about time I did some maintenance on my bigadv machine.
Re: What 's up with the bigadv server(s) ?
Welcome to Foldingforum.org, CSM580.
CSM580 wrote: It seems to me as though the simple law of supply and demand doesn't apply here, and neither do the people who spend many thousands of dollars to support the cause. It is shortsightedness and ineffective planning on Stanford's part, plain and simple.
You seem to think that Stanford's only purpose is to provide you with whatever you want. Sorry, but that's not the way it works. Stanford is running a series of research projects into how proteins fold. When a project has enough data, that project ends. When a new research project is started, new WUs are available. Those two types of events don't happen to suit your goals, they happen because of the needs of science. Stanford does not create unscientific work assignments just to keep your machine busy.
Shortages of -bigadv have happened before and will continue to happen from time to time. There's always an uproar until either new projects are released or everybody decides that no new projects are on the horizon.
noorman wrote: Systems (even farms) running idle isn't a plus for the Project, and not everyone has the time or inclination to alter switches in all of their clients to keep their hardware working.
With today's energy prices, this should be taken into account.
The Project should be thinking 'green' and try to avoid unnecessary idling of anyone's hardware.
Since the switches are there to be used, many of those whose hardware is able to run 'bigadv' units choose to do so, and they expect to have work for their expensive setups.
Last time this happened, a few suggestions were made for dealing with the situation when a machine can't get a bigadv WU. As far as I know, none have been implemented, but it still doesn't hurt to discuss them and see what the Donors want to be done.
1) What do you think about randomly assigning standard SMP projects to people who have asked for -bigadv when there's a shortage of the preferred projects?
2) What do you think about dynamically reducing the bigadv bonus to reduce demand and make standard SMP more equitable?
3) What do you think about dynamically increasing the requirements for -bigadv (giving increasing priority to machines with higher numbers of cores, larger RAM, or machines that return WUs by increasingly wide margins ahead of deadlines)? A rough sketch of what such a scoring rule might look like follows below.
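To make option 3 concrete, here is a rough, hypothetical scoring rule. The thresholds and weights are invented purely for illustration and do not describe how the assignment servers actually decide anything.

Code:

# Hypothetical bigadv priority score for option 3 -- all numbers invented.
def bigadv_priority(cores, ram_gb, deadline_fraction_used):
    # deadline_fraction_used: average fraction of the deadline a machine needs
    # to return a WU (0.5 = returns work with half the deadline to spare).
    # Higher score = served first when bigadv work is scarce; 0 = not eligible.
    if cores < 8 or ram_gb < 6:      # illustrative minimum requirements
        return 0.0
    speed_margin = max(0.0, 1.0 - deadline_fraction_used)
    return cores + 0.5 * ram_gb + 20.0 * speed_margin

# A 12-core box that uses 55% of the deadline outranks an 8-core box
# that barely makes it.
print(bigadv_priority(12, 24, 0.55) > bigadv_priority(8, 12, 0.95))   # True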
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 41
- Joined: Mon Feb 25, 2008 5:04 pm
- Location: TeAm Anandtech
Re: What 's up with the bigadv server(s) ?
Then something must have changed in how WUs are assigned, because I did get regular SMP occasionally, even as little as 4 days ago.
7im wrote: -bigadv should fall back to regular -smp automatically. No one should have to change any switches. But until that happens...
-
- Posts: 227
- Joined: Sun Dec 02, 2007 4:01 am
- Location: Willis, Texas
Re: What 's up with the bigadv server(s) ?
I see not much has changed around here. Do like I did: save your money, don't buy any more hardware, and when what you have is out of date (6 months from now), turn it off or sell it and find a new, cheaper hobby...
Re: What 's up with the bigadv server(s) ?
RR, lol
I can feel your pain. I have 2 SR-2s and neither of them can get bigadv.
I'll probably need to shut them down to save electricity while the AS is out of commission.
-
- Posts: 660
- Joined: Mon Oct 25, 2010 5:57 am
- Hardware configuration: a) Main unit
Sandybridge in HAF922 w/200 mm side fan
--i7 2600K@4.2 GHz
--ASUS P8P67 DeluxeB3
--4GB ADATA 1600 RAM
--750W Corsair PS
--2Seagate Hyb 750&500 GB--WD Caviar Black 1TB
--EVGA 660GTX-Ti FTW - Signature 2 GPU@ 1241 Boost
--MSI GTX560Ti @900MHz
--Win7Home64; FAH V7.3.2; 327.23 drivers
b) 2004 HP a475c desktop, 1 core Pent 4 HT@3.2 GHz; Mem 2GB;HDD 160 GB;Zotac GT430PCI@900 MHz
WinXP SP3-32 FAH v7.3.6 301.42 drivers - GPU slot only
c) 2005 Toshiba M45-S551 laptop w/2 GB mem, 160GB HDD;Pent M 740 CPU @ 1.73 GHz
WinXP SP3-32 FAH v7.3.6 [Receiving Core A4 work units]
d) 2011 lappy-15.6"-1920x1080;i7-2860QM,2.5;IC Diamond Thermal Compound;GTX 560M 1,536MB u/c@700;16GB-1333MHz RAM;HDD:500GBHyb w/ 4GB SSD;Win7HomePrem64;320.18 drivers FAH 7.4.2ß
- Location: Saratoga, California USA
Re: What 's up with the bigadv server(s) ?
Good discussion. A couple of questions/comments:
1. re: kasson's announcement. I took that as saying the one server listed would be down. For the -bigadv WUs that I have processed, there are three or more different bigadv servers.
a. 130.237.232.141 for p6900
b. 130.237.232.237 for p6901
c. 171.67.108.22 for p2684/2685
d ??? for others
Are all of these servers to be out of service, or just 130.237.232.141, as noted in the announcement below? That will determine whether all of the bigadv folders need to change their flag NOW to get regular SMP WUs next, or whether the mix will just change, e.g., giving all of us 2684s to chew on for the next few weeks, if there are that many WUs queued up.
Re: Updates thread
by kasson » Tue May 31, 2011 9:10 am
bigadv server 130.237.232.141 is going into accept-only mode for a few days. We're getting ready to upgrade the server software there, and we can't have any outstanding jobs when we do that.
2. re: Bruce's post.
DISCLAIMER: I'm folding bigadvs on my new Sandy Bridge i7 2600K system. This system folds a p6900/6901 bigadv, with only a moderate overclock, in 2.2 days vs the Stanford deadline of 4 days. The p2685s take a few hours longer, and the p2684s a lot longer, but still well within the established deadlines. I'm more than willing to abide by whatever decision is made to best further the science, and I hope that objective value functions are used to make these decisions.
My view of FAH is that there is a finite but large set of things to be studied - driven by the capability and capacity of the researchers' front-end preparation and back-end processing of results and issuing of next-gen WUs. There may be some problems that become "solved", and we should celebrate just that. (HINT: it would be very good for PL to let the donors know when one series of projects has come to an end - that the set of trajectories has taken a particular problem set to a logical conclusion.)
I think there are definitely things to be said for the options. I myself think that the PROJECT would be better served by implementing something like #3. I know there was a recent thread asking about the assignment server being able to make decisions based on the Performance Fraction (to use a v6 term) - to preference assignment to those best able to do it.
But it is an interesting queuing problem: the PROJECT wants the best and fastest completion of its study trajectories.
(Disclaimer: I'm just pulling all of these numbers out of the air for talking purposes. I have no idea how many there really are. The conclusions will differ depending on the actual populations.)
If there are 10,000 WUs, should they all be reserved for the, say, 100 users who can complete one of the 2684s in less than a day, thus taking ~ 100 wall-clock days to complete the 10,000 WUs, or should the, say, 500 users who can process one of the 2684s in 2.5 days (which still beats the deadline by 1.5 days) be also folding? There is a serialization of the trajectories to consider too - the results of one generation needed to seed the subsequent generations.
a. 100 fastest only complete 10,000 WUs in 100 days
b. 100 fastest plus 500 i7s complete 10,000 WUs in about 33 days? (I apologize for what may be fuzzy math; a quick check follows below.)
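For what it's worth, here is the quick arithmetic behind those two figures, using the same invented population numbers (nothing here reflects the real donor base):

Code:

# Back-of-the-envelope throughput check using the invented numbers above.
total_wus = 10_000

fast_rate = 100 / 1.0     # 100 machines at 1 WU per day     -> 100 WU/day
i7_rate   = 500 / 2.5     # 500 machines at 1 WU per 2.5 days -> 200 WU/day

print(total_wus / fast_rate)                # fastest only:     100.0 days
print(total_wus / (fast_rate + i7_rate))    # fastest plus i7s: ~33.3 days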
What exactly, then, is Stanford's value proposition for the completion?
I threw out somewhat random numbers - but those who know the real numbers can use an analysis something like this to help determine what the value function is.
Along the way, if we do tweak the QRB and the deadlines (say 3 days vs 4 days), we want to be able to weed out the incapable folders up front. The idea of preferencing, or even filtering, the users who can complete WUs with the best performance fraction is good - but we don't want to cut the pool of folders so low that the net result is delaying the science. That's why Stanford needs to articulate their REAL capacity to generate new projects and process the incremental returns.
It's not rocket science to come up with preferencing formulas; that's a good classroom assignment for undergrads. But the system does need to articulate its value function, i.e., what's the value of finishing an arc of 10,000 bigadv WUs in 33 days vs 100 days under the assumptions I've made? What's the value function for the REAL population of folders we are seeing? A "folder in the street" like me sees none of those stats, nor do I think I really want to. But someone knows them.
Re: What 's up with the bigadv server(s) ?
by bruce » Tue May 31, 2011 4:54 pm
...
1) What do you think about randomly assigning standard SMP projects to people who have asked for -bigadv when there's a shortage of the preferred projects?
2) What do you think about dynamically reducing the bigadv bonus to reduce demand and make standard SMP more equitable?
3) What do you think about dynamically increasing the requirements for -bigadv (giving increasing priority to machines with higher numbers of cores, larger RAM, or machines that return WUs by increasingly wide margins ahead of deadlines)?