Please don't let this go to waste
Moderators: Site Moderators, FAHC Science Team
Re: Please don't let this go to waste
No one expected this sudden surge in donors. You can always donate financially. They can use those funds to upgrade their infrastructure.
Also, if you are bored and want to help, install Foldit. Through this 'game' you can fold proteins yourself. They are currently also dedicated to fighting COVID-19!
https://fold.it/portal/
-
- Posts: 9
- Joined: Tue Mar 17, 2020 8:28 pm
Re: Please don't let this go to waste
You can get more people to be here "for a long haul".
Make them feel important.
Right now your answers are "yeah, we are here for a long haul, we don't care blah blah intel nvidia blah massive numbers blah results will be open publicized blah blah 10 years".
Do better for the people who came in this unprecedented wave, or lose them in 2 weeks.
-
- Posts: 9
- Joined: Tue Mar 17, 2020 8:28 pm
Re: Please don't let this go to waste
Dude, I tried. People are lazy. I would rather "donate" all my devices to do something than spend 10 minutes learning this "game". I have a daughter who will be at home for at least 3 weeks, and I need to keep her busy.
JonazzDJ wrote: No one expected this sudden surge in donors. You can always donate financially. They can use those funds to upgrade their infrastructure.
Also, if you are bored and want to help, install Foldit. Through this 'game' you can fold proteins yourself. They are currently also dedicated to fighting COVID-19!
https://fold.it/portal/
Donate financially? Where? Did you make an announcement of what you need? Did you reach out to hosting providers? After all, we all have computers, but spare money in this time? I doubt it.
Just get it - you have the unprecedented publicity, people willing to donate a lot of computational resources, storage, and you are being snobs saying "long term blah blah".
Communicate, educate newcomers, make it easier, make an app for phones to fold, reach out and make your needs public so others can donate.
Use, LORD FORGIVE ME for this word, blockchain to distribute not only WUs but results as well.
Make the results public not only in scientific papers: explain them and publish them in ELI5 form for everyone who participates.
Keep the hype and grow or go back to where you all were 2 weeks ago.
Re: Please don't let this go to waste
Read this: https://foldingathome.org/2020/03/15/co ... ple-terms/
They can't make an app overnight, you know. Funding is limited.
-
- Posts: 20
- Joined: Thu Mar 12, 2020 10:35 pm
Re: Please don't let this go to waste
While I may disagree with the coarseness of the OP's remarks, I do agree with the substance.
(Being in tech) I can understand the difficulty associated with scaling up, especially if existing systems are on-premise and, as I've read, working with huge amounts of data. I imagine that the expertise at FAH may be weighted more towards biochemistry than towards specialized technologists or other skill sets. I can also appreciate the overhead of onboarding new volunteers securely. That said, surely the teams could spare an hour or two for a conference call with people willing to volunteer their own personal/professional time to provide advice and potential support in areas where the FAH team's skill sets may be short.
I think it's important (and timely) that FAH take up volunteers' current offers to help in these areas; as the OP said, these offers aren't indefinite. I offered to help out with getting the Stats API functioning more reliably almost a week ago, which fell on deaf ears: viewtopic.php?f=61&t=32320 To date, I've only successfully got data out of the Stats API twice; the rest of the time it timed out. Given the data is only updated hourly, it should be static and distributed via a CDN, so it's hard to understand why it's overloaded.
From what I've read so far, it sounds like FAH's infrastructure (WU servers, Stats, etc...) is all centralized, owned hardware, and likely not cloud enabled.
If FAH is willing to leverage the expertise that volunteers have been offering as of late, I'm sure the components that need to scale could be offloaded to a cloud provider (who would probably be willing to donate the resources). Stats could easily be served as static files, refreshed hourly, behind a CDN able to scale up considerably. It's 2020, and if people are spending time bringing physical servers online one by one for such a massively distributed computing network, there must be a skills shortage at play; that time would be better spent engaging individuals who can take better advantage of the latest tech available to organizations today.
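The "static stats behind a CDN" idea above could work roughly like this: a small job renders the hourly stats into a static JSON file that object storage plus a CDN could then serve from cache. This is only a sketch of the approach the poster describes; the function name, file layout, and sample data are all made up for illustration.

```python
import json
import os
import tempfile
from datetime import datetime, timezone

def render_stats_snapshot(stats, out_dir):
    """Render hourly stats into a static JSON file.

    A cron job could run this once per hour and upload the result to
    object storage fronted by a CDN, so clients hit cached copies
    instead of a live database. All names here are illustrative.
    """
    snapshot = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "donors": stats,
    }
    path = os.path.join(out_dir, "stats.json")
    # Write to a temp file and rename, so the CDN origin never
    # serves a half-written snapshot.
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(snapshot, f)
    os.replace(tmp, path)
    return path

# Example: one hourly snapshot with made-up donor figures.
out = render_stats_snapshot({"alice": 1200, "bob": 900}, tempfile.mkdtemp())
```

Because the file only changes hourly, a CDN cache TTL just under an hour would offload nearly all read traffic from the origin.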
I'm one of (I'm sure many) who are actually paying hourly to provide optimized cloud CPU and GPU instances to this project; the fact that the servers I've dedicated to the project have been idle for much of the last 24 hours is making me seriously reconsider continuing to pay for the resources.
Hope this perspective helps.
Aaron
Re: Please don't let this go to waste
Why not donate (part of) that money to the project directly? There is currently an abundance of computing power, but a lack of funds.
aaronbrighton wrote: I'm one of (I'm sure many) who are actually paying hourly to provide optimized cloud CPU and GPU instances to this project, the fact that the servers I've dedicated to the project have been idle for much of the last 24 hours is making me seriously reconsider continuing to pay for the resources.
Aaron
-
- Posts: 20
- Joined: Thu Mar 12, 2020 10:35 pm
Re: Please don't let this go to waste
Definitely willing to donate funds to help scale up the core tech stack supporting the distributed network. However, based on my external perspective from reading the blog posts, social media posts, and forum posts here, it's not clear to me that the funds would be used in the most effective way to this end. It would be best to get the tech stack positioned correctly first, so that the funds could be used most effectively.
JonazzDJ wrote: Why not donate (part of) that money to the project directly? There is currently an abundance of computing power, but a lack of funds.
aaronbrighton wrote: I'm one of (I'm sure many) who are actually paying hourly to provide optimized cloud CPU and GPU instances to this project, the fact that the servers I've dedicated to the project have been idle for much of the last 24 hours is making me seriously reconsider continuing to pay for the resources.
Aaron
Are there any whitepapers or info on how the core infrastructure (Stats/WU servers) is set up, how it scales, etc.? And what the money would be used for?
-
- Posts: 137
- Joined: Fri Oct 21, 2011 3:24 am
- Hardware configuration: Rig1 (Dedicated SMP): AMD Phenom II X6 1100T, Gigabyte GA-880GMA-USB3 board, 8 GB Kingston 1333 DDR3 Ram, Seasonic S12 II 380 Watt PSU, Noctua CPU Cooler
Rig2 (Part-Time GPU): Intel Q6600, Gigabyte 965P-S3 Board, EVGA 460 GTX Graphics, 8 GB Kingston 800 DDR2 Ram, Seasonic Gold X-650 PSU, Arctic Cooling Freezer 7 CPU Cooler - Location: United States
Re: Please don't let this go to waste
Here's the link with the info for how to donate financially.
https://foldingathome.org/about/donate/
Direct donation link:
https://gifts.wustl.edu/med/index.html? ... 1=71&sc=NG
-
- Posts: 20
- Joined: Thu Mar 12, 2020 10:35 pm
Re: Please don't let this go to waste
Thanks for the links. I was only able to find this very brief description about how the funds are used: https://foldingathome.org/support/faq/donation/
Paragon wrote: Here's the link with the info for how to donate financially.
https://foldingathome.org/about/donate/
Direct donation link:
https://gifts.wustl.edu/med/index.html? ... 1=71&sc=NG
Has the project ever approached one of the cloud providers to see if they'd be willing to donate resources in support of this mission? Sounds like the project is spending the funds on physical hardware to be housed on-prem?
For instance, AWS Glacier storage at commercial rates for 500TB of data could be as low as $1,250/mo. That sounds like a lot, but when you add up the costs of tapes and disks (multiples due to RAID), costs are close to on-prem if not less, even before considering whether AWS or another cloud provider would be willing to donate the resources. The economics carry over to the compute and distribution sides as well. Let the team focus on their specialties, the projects they're trying to compute, instead of standing up servers and juggling storage.
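The $1,250/mo figure above is consistent with an archival rate of roughly $0.0025 per GB-month; that rate is an assumption here (real archival pricing varies by tier, region, and year), but the arithmetic itself is easy to check:

```python
# Back-of-envelope check of the $1,250/mo figure above.
# The per-GB rate is an assumption, not quoted pricing.
rate_per_gb_month = 0.0025        # USD per GB-month (illustrative)
storage_tb = 500
storage_gb = storage_tb * 1000    # decimal TB, as cloud pricing uses
monthly_cost = storage_gb * rate_per_gb_month
print(monthly_cost)               # 1250.0
```

Retrieval and data-transfer-out fees are extra and, as noted later in the thread, are usually the dominant cost for a bandwidth-heavy workload like this one.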
Anyhow, it would be nice to get an idea of the long-term strategy for the platform: whether it's to go P2P and take advantage of the fleet to distribute the load amongst everyone, to go cloud, or to remain on-prem, and if so why.
-
- Site Moderator
- Posts: 2850
- Joined: Mon Jul 18, 2011 4:44 am
- Hardware configuration: OS: Windows 10, Kubuntu 19.04
CPU: i7-6700k
GPU: GTX 970, GTX 1080 TI
RAM: 24 GB DDR4 - Location: Western Washington
Re: Please don't let this go to waste
The trick is that you need fast storage to organize and serve out the workunits. AWS Glacier is by design very, very slow and long-term.
aaronbrighton wrote: Has the project ever approached one of the cloud providers to see if they'd be willing to donate resources in support of this mission? Sounds like the project is spending the funds on physical hardware to be housed on-prem?
For instance AWS Glacier storage at the commercial rates for 500TB of data could be as low as $1,250/mo -- sounds like a lot, but when you add up the costs of tapes, disks (multiples due to raid), costs are close to on-prem if not less, not even considering whether AWS or another cloud provider would be willing to donate the resources. The economics parallel into the compute side and distribution side as well. Allow the team to focus on what their specialties are, the projects they're trying to compute, instead of standing up servers and juggling storage.
Folding@home, like nearly every distributed computing project, is client-server, with the server handling the workunits for clients to process. Many years ago there was a project called Storage@home that shared encrypted data between peers, but it never really got off the ground. A similar system would work through torrents and magnet links. I don't think such a distributed system is fast enough and reliable enough to handle the F@h data, which is extremely computationally difficult to rebuild if portions of it are lost. I think the project will continue its long pattern of centralizing the workunits and distributing them that way.
aaronbrighton wrote: Anyhow, would be nice to get an idea of long term strategy for the platform -- whether it's to go P2P and take advantage of the fleet to distribute the load amongst everyone, or whether it's to go cloud, or whether it's to remain on-prem, and if so why.
F@h is now the top computing platform on the planet and nothing unites people like a dedicated fight against a common enemy. This virus affects all of us. Lets end it together.
-
- Posts: 522
- Joined: Mon Dec 03, 2007 4:33 am
- Location: Australia
Re: Please don't let this go to waste
I have to agree as well. I joined Folding@home in 2004 and it has been great to be a part of this project. I started folding when I read an article in some online magazine. If the software had crashed or FAH had been out of WUs that particular day, it's highly unlikely that I would have given this a second thought. I came for the pretty visualization but I stayed for the work being done and this community. Over the years, I've hopped on to other projects but have always come back to FAH for some reason.
aaronbrighton wrote: While I may disagree with coarseness of the OP's remarks, I do agree with the substance.
...
Aaron
With the exposure that FAH is getting, I see a vast potential for a number of people who are in it for the long haul. Ensuring these volunteers have a good first experience is crucial to the long term success of the project.
I feel the project needs to embrace the community in a better fashion for software development and infrastructure related areas. I don't deny that the folks at FAH labs have their hands full but there are people in the folding community with expertise in areas that have traditionally not been the project's strong points who are willing to step up and contribute. We need a mechanism to engage these volunteers better. Isn't this official support forum, created by uncle_fungus and managed via the inputs from hundreds of "power" users, a testament to the power of what volunteers can do?
Re: Please don't let this go to waste
Not long ago, I watched a massive rapid influx to Mastodon (decentralized social media) from Twitter. They were tired of censorship, and the Mastodon pioneers were eager to have them. It could have been the beginning of Mastodon becoming the new Fakebook, and Twitter becoming the new MySpace. Particularly since newcomers were highly motivated to help improve things, including scaling up.
Instead, even what seemed like good ideas repeatedly got terse shutdowns from a sizable contingent who felt like their special role/society was being usurped by invaders. Over and over, some variant on this played out:
"That's bad, stop."
"Why? There's no logic to it."
"That's just how we do things."
"I'm listening. Why can't this be improved?"
"If you'd been here as long as me, you'd understand."
"So basically we should eff off, then?"
*meaningful silence*
Which is why you've probably never heard of Mastodon.
If you want the barbarian invaders to go away and let F@H resume being a niche project for a Special Few, that's easy to do. Just keep meeting their offers to help scale up with meaningful silence.
It's true that 90% of everything - including unsolicited advice from noobs - is crap. Often for reasons that they really would understand, if they had been here for as long as you have. But no system is perfect, and rejecting all input out of hand (because you "know" you're already doing it the best way) means rejecting any chance to make something good into something great.
-
- Posts: 20
- Joined: Thu Mar 12, 2020 10:35 pm
Re: Please don't let this go to waste
Definitely, which is why it wouldn't be as simple as "just" Glacier. Historical results, often untouched (I imagine 95% or more of that 500TB), would sit in Glacier; the rest would be in more readily available S3 storage, or on-prem under a minimal-overhead object storage platform.
Jesse_V wrote: The trick is that you need fast storage to organize and serve out the workunits. AWS Glacier is by design very very slow and long-term.
Without sponsorship from a cloud platform (though Google is named as a sponsor on the website, so it's worth talking to them), it's the out-to-internet transfer costs that really add up -- hence a hybrid approach, where compute and historical data are handled in-cloud and actively bandwidth-heavy objects are handled on-prem.
Anyhow, it's not storage that's the main problem; it's the compute, and it's not terribly expensive to scale up the compute using EC2 or even cloud-native services like SQS. For the API, same deal: have a job dump the latest stats to static files in S3 and serve them up over CloudFront/API Gateway -- again, not terribly expensive for small text data.
What would have to change is the data-reduction component: that has to happen in the cloud if the results from volunteer computers are being pushed there, as pulling the data back out is expensive. So analysis of the result data would likely have to happen on computers in the cloud before being extracted locally for scientific research papers.
Thanks for the context and the background. Centralized is perfectly fine, if there is the right architecture and funding to support it.
Jesse_V wrote: Folding@home, like nearly every distributed computing project, is client-server with the server handling the workunits for clients to process. Many many years ago there was a project called Storage@home that shared encrypted data between peers, but it never really got off the ground. A similar system would be through torrents and magnet links. I don't think such a distributed system is fast enough and reliable enough to handle the F@h data that is extremely computationally difficult to rebuild if portions of it are lost. I think though that the project will continue its long pattern of centralizing the workunits and distributing them that way.
The reason I suggested a P2P (or hybrid P2P) model was the struggle to keep the centralized infrastructure performing adequately. Something along the lines of: F@H maintains a centralized storage system so it can retain the results and give researchers access to the output of their compute jobs, but the WU servers could easily be turned into roles that a number of volunteer-donated systems take on, helping distribute jobs throughout the network instead of overloading the centralized WU servers. Much like how DNS, Tor, Skype, etc. work.
-
- Posts: 37
- Joined: Wed Mar 18, 2020 2:55 pm
- Hardware configuration: HP Z600 (5) HP Z800 (3) HP Z440 (3)
ASUS Turbo GTX 1060, 1070, 1080, RTX 2060 (3)
Dell GTX 1080 - Location: Sydney Australia
Re: Please don't let this go to waste
I saw F@h yesterday via a Techspot article. About 24 hours later I have 8 old HP Z-series workstations, an old desktop, and a bunch of old GPUs working on bits and pieces, and when I last looked they had racked up 37,907 points between them. They get pretty warm when all cores are running at 100% (75°C or so) for hours, so a break between assignments does no harm. And I've told them all to go to sleep if nothing happens for an hour. So what was the problem again?
Re: Please don't let this go to waste
I agree with Aaron.
On the point that you need fast storage and Glacier is not that: accurate, but it overlooks an element that would be put in place.
You could easily use an EFS mount point for fast, scaling storage, and put a lifecycle policy in place (30 days or something) that moves any file not touched from this fast storage over to Glacier to save on costs. You don't need storage that is both "fast" and "large long-term". The fast part is needed only for the generation of work, not once data is collected from users' client apps. Collected work would still be saved to the faster storage until the 30-day policy moves it to Glacier in this example.
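For S3-resident data, the tiering policy described above maps onto a standard S3 lifecycle rule. A caveat to the "not touched for 30 days" wording: S3 lifecycle transitions go by object age (days since creation), not last access; access-based movement would need S3 Intelligent-Tiering instead. Below is a sketch of such a rule; the bucket and prefix names are made up for illustration.

```python
# Hedged sketch: an S3 lifecycle rule tiering aged work-unit results
# down to Glacier after 30 days. Note that S3 lifecycle transitions
# are based on object age, not last access. Names are hypothetical.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-collected-results",
            "Status": "Enabled",
            # Only objects under this (hypothetical) prefix are tiered.
            "Filter": {"Prefix": "results/"},
            # After 30 days, move objects to the Glacier storage class.
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER"}
            ],
        }
    ]
}

# With boto3, this dict could be applied to a bucket via:
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="fah-results", LifecycleConfiguration=lifecycle_config)
```

The rule runs server-side on AWS's schedule, so no cron job or mover process is needed once it is attached to the bucket.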