JonazzDJ wrote:
No one expected this sudden surge in donors. You can always donate financially. They can use those funds to upgrade their infrastructure.
Also, if you are bored and want to help, install Foldit. Through this 'game' you can fold proteins yourself. They are currently also dedicated to fighting COVID-19!
https://fold.it/portal/

Dude, I tried. People are lazy. I would rather "donate" all my devices to do something than spend 10 minutes learning this "game". I have a daughter who is now at home for at least 3 weeks, and I need to keep her busy.
aaronbrighton wrote:
I'm one of (I'm sure many) who are actually paying hourly to provide optimized cloud CPU and GPU instances to this project. The fact that the servers I've dedicated to the project have been idle for much of the last 24 hours is making me seriously reconsider continuing to pay for the resources.
Aaron

Why not donate (part of) that money to the project directly? There is currently an abundance of computing power, but a lack of funds.
JonazzDJ wrote:
Why not donate (part of) that money to the project directly? There is currently an abundance of computing power, but a lack of funds.

I'm definitely willing to donate funds to help scale up the core tech stack supporting the distributed network. However, based on my external perspective from reading the blog posts, social media posts, and forum posts here, it's not clear to me that the funds would be used in the most effective way toward that end. It would be best to get the tech stack positioned correctly first, so that the funds could be used most effectively.
Aaron
Paragon wrote:
Here's the link with the info for how to donate financially.
https://foldingathome.org/about/donate/
Direct donation link:
https://gifts.wustl.edu/med/index.html? ... 1=71&sc=NG

Thanks for the links. I was only able to find this very brief description of how the funds are used: https://foldingathome.org/support/faq/donation/
aaronbrighton wrote:
Has the project ever approached one of the cloud providers to see if they'd be willing to donate resources in support of this mission? It sounds like the project is spending the funds on physical hardware to be housed on-prem?
For instance, AWS Glacier storage at commercial rates for 500 TB of data could be as low as $1,250/mo. That sounds like a lot, but once you add up the costs of tapes and disks (multiples due to RAID), on-prem costs are comparable if not higher, and that's before considering whether AWS or another cloud provider would be willing to donate the resources. The economics are parallel on the compute and distribution sides as well. Let the team focus on their specialties, the projects they're trying to compute, instead of standing up servers and juggling storage.

The trick is that you need fast storage to organize and serve out the workunits. AWS Glacier is by design very slow and long-term only.
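For a rough sense of the arithmetic, here's a back-of-the-envelope sketch in Python. The per-GB rates are my assumptions based on published AWS list prices, not figures from the project, and retrieval/request fees are extra:

[code]
# Rough monthly storage cost for ~500 TB at assumed AWS list prices
# (USD per GB-month; rates vary by region and over time).

TOTAL_TB = 500
GB_PER_TB = 1000  # decimal terabytes, as cloud providers bill

rates = {
    "S3 Standard": 0.023,             # hot data, served directly
    "S3 Glacier": 0.004,              # cold archive, slow retrieval
    "Glacier Deep Archive": 0.00099,  # coldest tier, hours to restore
}

for tier, per_gb in rates.items():
    monthly = TOTAL_TB * GB_PER_TB * per_gb
    print(f"{tier:>22}: ${monthly:>8,.0f}/mo")
[/code]

The cold tiers are cheap per GB, but their retrieval fees and, more importantly, retrieval latency are exactly why fast storage is still needed for the workunits being served out, as noted above.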
aaronbrighton wrote:
Anyhow, it would be nice to get an idea of the long-term strategy for the platform: whether it's to go P2P and take advantage of the fleet to distribute the load amongst everyone, to go cloud, or to remain on-prem, and if so, why.

Folding@home, like nearly every distributed-computing project, is client-server, with the server handling the workunits for clients to process. Many years ago there was a project called Storage@home that shared encrypted data between peers, but it never really got off the ground. A similar system could be built on torrents and magnet links. I don't think such a distributed system is fast and reliable enough to handle F@h data, which is extremely expensive to recompute if portions of it are lost. I expect the project will continue its long-standing pattern of centralizing the workunits and distributing them from there.
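To make the client-server pattern concrete, here is a minimal, hypothetical sketch of the loop such a client runs. The server address, endpoints, and field names are invented for illustration; this is not the real F@h protocol:

[code]
import time
import requests  # third-party HTTP client: pip install requests

SERVER = "https://workserver.example.org"  # hypothetical, not a real F@h server

def compute(payload: bytes) -> str:
    """Stand-in for the actual molecular-dynamics run."""
    return f"processed {len(payload)} bytes"

def client_loop():
    """Illustrative assign -> compute -> return cycle of a
    client-server distributed-computing project."""
    while True:
        resp = requests.get(f"{SERVER}/assign")  # ask the server for a workunit
        if resp.status_code != 200:              # no work available right now
            time.sleep(600)                      # back off, then ask again
            continue
        wu = resp.json()                         # e.g. {"id": ..., "payload": ...}
        result = compute(wu["payload"].encode())
        requests.post(f"{SERVER}/return/{wu['id']}", json={"result": result})
[/code]

The server stays the single source of truth for which workunits are outstanding and which results have come back; that bookkeeping is exactly what is hard to preserve in a torrent-style P2P design.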
aaronbrighton wrote:
While I may disagree with the coarseness of the OP's remarks, I do agree with the substance.
...
Aaron

I have to agree as well. I joined Folding@home in 2004, and it has been great to be a part of this project. I started folding when I read an article in some online magazine. If the software had crashed or FAH had been out of WUs that particular day, it's highly unlikely I would have given it a second thought. I came for the pretty visualization, but I stayed for the work being done and this community. Over the years, I've hopped on to other projects but have always come back to FAH for some reason.
Jesse_V wrote:
The trick is that you need fast storage to organize and serve out the workunits. AWS Glacier is by design very slow and long-term only.

Definitely, which is why it wouldn't be as simple as "just" Glacier. Historical results that are rarely touched, which I imagine make up 95% or more of that 500 TB, would sit in Glacier; the rest would live in more readily available S3 storage, or on-prem under a minimal-overhead object-storage platform.
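In AWS terms, that split can be expressed as an S3 lifecycle rule that demotes old objects automatically. A minimal boto3 sketch, assuming a hypothetical bucket name and treating anything older than 90 days as "historical":

[code]
import boto3  # AWS SDK for Python: pip install boto3

s3 = boto3.client("s3")

# Bucket name, prefix, and the 90-day cutoff are all hypothetical.
s3.put_bucket_lifecycle_configuration(
    Bucket="fah-results-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-historical-results",
                "Filter": {"Prefix": "results/"},
                "Status": "Enabled",
                "Transitions": [
                    # After 90 days, objects move from S3 Standard to
                    # Glacier; recent results stay hot for serving.
                    {"Days": 90, "StorageClass": "GLACIER"}
                ],
            }
        ]
    },
)
[/code]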
Jesse_V wrote:
Folding@home, like nearly every distributed-computing project, is client-server, with the server handling the workunits for clients to process. Many years ago there was a project called Storage@home that shared encrypted data between peers, but it never really got off the ground. A similar system could be built on torrents and magnet links. I don't think such a distributed system is fast and reliable enough to handle F@h data, which is extremely expensive to recompute if portions of it are lost. I expect the project will continue its long-standing pattern of centralizing the workunits and distributing them from there.

Thanks for the context and the background. Centralized is perfectly fine, as long as the right architecture and funding are there to support it.