
Re: Own hardware vs. cloud computing

Posted: Fri Apr 10, 2020 3:21 pm
by nivedita
iceman1992 wrote:So you didn't use docker? Okay, I guess I'll try it out after they figure out the overload. No use renting machines if they'll just idle
It does use docker. But you can start from the base image nvidia/opencl:... which is already set up as a normal Linux system with NVIDIA drivers, and just SSH in to install FAH and run it. If you have docker-fu, you can set up your own docker container to automate it and avoid having to do the SSH bit.
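
If you want to go the docker-fu route, a rough sketch of such a container might look like the one below. It is untested; it reuses the nvidia/opencl image with the devel-ubuntu18.04 tag and the FAHClient 7.5.1 .deb from download.foldingathome.org, and it expects you to supply your own config.xml next to the Dockerfile:

Code: Select all

# Sketch of a self-contained FAH container - not an official image
FROM nvidia/opencl:devel-ubuntu18.04

RUN apt-get update && apt-get install -y wget && \
    wget https://download.foldingathome.org/releases/public/release/fahclient/debian-stable-64bit/v7.5/fahclient_7.5.1_amd64.deb && \
    (DEBIAN_FRONTEND=noninteractive dpkg -i fahclient_7.5.1_amd64.deb || true)

# Your own config.xml with user, team, passkey and one slot per GPU
COPY config.xml /etc/fahclient/config.xml

# FAHClient picks up config.xml from its working directory by default
WORKDIR /etc/fahclient
CMD ["FAHClient"]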

Vast.ai slim tutorial

Posted: Fri Apr 10, 2020 3:22 pm
by Jorgeminator
These quick steps should get you started with folding on the Vast.ai platform.
    1. Create an account on Vast.ai and add some funds. There should be a $1 free bonus when you add a credit card for the first time, even without adding any funds.

    2. Setting up SSH

    - Download PuTTY from here: https://www.chiark.greenend.org.uk/~sgt ... atest.html. Install.
    - Open PuTTYgen and click 'Generate'.
    - Copy everything in the 'Public key...' box. Go to https://vast.ai/console/account/ and paste the public key under 'Change SSH Key', click 'Set SSH Key'.
    - In PuTTYgen, click 'Save private key' and save it in a convenient location. This will be needed when connecting.

    3. Renting

    - Go to https://vast.ai/console/create/. Click 'Edit image & config...'. Scroll down and select 'nvidia/opencl'. In the dropdown list, select 'devel-ubuntu18.04'.
    - Choose how much disk space you want to allocate. 2.00 GB was enough for me. Click 'Select' at the bottom.
    - Find yourself a machine you want to rent and choose 'RENT'. From now on you will be billed as long as the instance exists.

    4. Connecting

    - You will find your newly created instance in https://vast.ai/console/instances/.
    - Wait for the instance to be provisioned; this may take a few minutes.
    - In the top part of the instance information window you will find the 'address', for example 'ssh3.vast.ai', and the 'port number', for example '22424'.
    - In PuTTY's 'Host Name' box you should enter the address in the following format: root@address; the example above would become root@ssh3.vast.ai
    - Paste the port number into the 'Port' box.
    - In the Category list to the left, go to Connection --> SSH --> Auth. In the 'Private key file...' box, browse for the private key you saved in step 2.
    - In the Category list, go back to Session and make sure the address and port are still the same. Select 'Open' at the bottom.
    - You should receive a popup asking to accept the key. Click Yes.
    - You should now be connected and greeted with the command line.
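    - Alternatively, if you're connecting from Linux or macOS you can skip PuTTY and use the OpenSSH client instead (a quick sketch using the example address and port above; point -i at whichever private key matches the public key you registered on vast.ai):

    Code: Select all

    ssh -p 22424 -i ~/.ssh/id_rsa root@ssh3.vast.ai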

    5. Installing and configuring FAH

    - Update and upgrade the instance:

    Code: Select all

    apt update
    apt upgrade -y
    apt install -y wget nano
    - Install FAH:

    Code: Select all

    wget https://download.foldingathome.org/releases/public/release/fahclient/debian-stable-64bit/v7.5/fahclient_7.5.1_amd64.deb
    dpkg -i fahclient_7.5.1_amd64.deb
    - Enter your username, team number and passkey. Choose whether you want FAH to start at boot. The installer might fail to perform the post-install steps; you can ignore that.
    - Edit the config file with your own data if it hasn't been populated already, and remember every GPU needs its own slot (ask me how I know :lol: ):

    Code: Select all

    nano /etc/fahclient/config.xml

    Code: Select all

    <config>
      <!-- User Information -->
      <passkey v='1234567890xxxxxxxxx'/>
      <team v='123456'/>
      <user v='ItsMe'/>

      <!-- Folding Slots -->
      <slot id='0' type='GPU'/>
    </config>
    - To save the config file and exit nano: press Ctrl+X, then Y, then Enter.

    6. Start FAH

    Code: Select all

    FAHClient
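
    Running FAHClient like that keeps it in the foreground, so it stops when you close the SSH session. Not part of the original steps, but one simple way to keep it going (a sketch; FAHClient reads config.xml from its current directory by default) is to detach it with nohup and watch the log:

    Code: Select all

    cd /etc/fahclient
    nohup FAHClient > fah.log 2>&1 &
    tail -f fah.log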

Re: Own hardware vs. cloud computing

Posted: Fri Apr 10, 2020 6:32 pm
by excelblue
A note on the economics:

I have a box that does ~6M PPD: 2x RTX 2080 Ti + AMD Ryzen 3960X. It uses approximately 850 W. Even with the sky-high electricity rates where I live ($0.30/kWh), that works out to ~$6/day.

Yeah, the hardware cost me just under $5k, but you can imagine the economics of this if you actually ran it somewhere where electricity is cheap.

The takeaway here is: the cloud providers actually make a lot of profit with their GPUs. If you don't need to amortize the cost of the hardware (i.e. someone already bought the GPU for gaming anyway), it's only a fraction of the price to operate.
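
For anyone who wants to sanity-check that, here's the arithmetic in a quick awk snippet (using only the figures quoted above - 850 W, $0.30/kWh, ~6M PPD):

Code: Select all

awk 'BEGIN {
  watts = 850; usd_per_kwh = 0.30; ppd = 6000000
  kwh_per_day = watts / 1000 * 24             # ~20.4 kWh/day
  usd_per_day = kwh_per_day * usd_per_kwh     # ~6.12 USD/day
  printf "%.1f kWh/day, %.2f USD/day, %.2f USD per 1M points\n",
         kwh_per_day, usd_per_day, usd_per_day / (ppd / 1000000)
}'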

Re: Own hardware vs. cloud computing

Posted: Fri Apr 10, 2020 10:27 pm
by PantherX
Appreciate the detailed instructions, Jorgeminator :)

Re: Own hardware vs. cloud computing

Posted: Sun Apr 12, 2020 1:18 am
by v00d00
Endgame124 wrote:
v00d00 wrote:It would be interesting to see a case study based on what could be run from an 8 kW solar system as part of a household as well. If the power was generated for free and you maximised production by using low-wattage cards like those mining 1060's @ 75 W, maybe into a low-power 12-24 V setup using a PicoPSU. How viable would that be as a long-term, 'fire and forget' style folding solution? Connect them directly to the battery bank and not via the inverter.

I have a 9 kW solar system on my home, and on average I'm producing a 6 kWh/day surplus while also accounting for all my home usage, folding with a 1080 Ti, and running Rosetta at home on 4 older systems (q9650, A10-5800k, A10-7870K, i3-370m).

I have an EVGA 1660 Super on the way to experiment with best PPD/watt - depending on what I find, I may step up to a 2060 Super.
Cheers for that. I'm not in a position to do it currently, but down the line the thought of running a solar or hybrid system to fold would interest me. I've already built several systems that run from a PicoPSU, for use in campervans, mostly on ITX boards running the lowest-power Intel CPUs. With the right setup it's easy to keep power usage very low, though not low enough that you could run it 24 hours a day; for a few hours a day it's possible. One of the systems had a 1050 in it and was capable of reasonable gaming while still only using about 130 W. Taking that principle and scaling up to something that could drive 4 low-power NVIDIA cards would interest me, while keeping total wattage to less than 400 W (but that would require about 5 kW of solar and a 500 Ah battery, based on 2 hours of solar per day, a 30% overprovision and a 24 V system voltage).

Re: Own hardware vs. cloud computing

Posted: Fri Apr 24, 2020 4:48 pm
by iceman1992
https://twitter.com/foldingathome/statu ... 1171490816
I wonder if this is applicable to vast.ai? Will this make setup easier?

Re: Own hardware vs. cloud computing

Posted: Tue Sep 29, 2020 1:15 pm
by gunnarre
If you just select the regular "nvidia/opencl" container on vast.ai and put in a deploy script, then I think that is less work than using the folding@home container. That is, unless vast.ai puts a folding container in as one of their standard selections. I guess they might do that if someone asks?
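
As a rough sketch only (untested; the user, team and passkey values are placeholders, and it assumes the same FAHClient .deb and plain nvidia/opencl image discussed earlier in the thread), such a deploy/on-start script could look something like this:

Code: Select all

#!/bin/bash
# Untested sketch of a vast.ai deploy script for the plain nvidia/opencl image
set -e
apt-get update && apt-get install -y wget
wget https://download.foldingathome.org/releases/public/release/fahclient/debian-stable-64bit/v7.5/fahclient_7.5.1_amd64.deb
DEBIAN_FRONTEND=noninteractive dpkg -i fahclient_7.5.1_amd64.deb || true   # postinst may complain; ignore it
mkdir -p /etc/fahclient
cat > /etc/fahclient/config.xml <<'EOF'
<config>
  <user v='YourName'/>                <!-- placeholder -->
  <team v='0'/>                       <!-- placeholder -->
  <passkey v='yourpasskeyhere'/>      <!-- placeholder -->
  <slot id='0' type='GPU'/>
</config>
EOF
cd /etc/fahclient && nohup FAHClient > fah.log 2>&1 &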

By the way, thanks to core 22 v 0.0.13 with CUDA support, the cost calculation changes a bit: Previously, under OpenCL folding, it was often best to rent a 2080Ti, but now a pair of 1080Ti's can give a better PPD per USD efficiency.

Re: Own hardware vs. cloud computing

Posted: Tue Sep 29, 2020 4:50 pm
by MeeLee
I've tried the free tiers from AWS, Google Cloud, and Azure, and they just throttle the crap out of any serious compute engines.
It costs around $100/yr to run a quad-core 2.3 GHz CPU on average.
Meanwhile my Atomic Pi units, which don't get throttled but only run 4 cores at 1.7 GHz, outdid the cloud services in performance.
Their initial cost was ~$40-50/unit (lower with more units, higher with fewer), and they run about $1.50 a month per unit.

I haven't tried the GPU services, but from what I read, the amount you pay for GPU compute is ridiculous!
You can quite literally buy a top-end GPU for the cost of one year of folding on a cloud service!
Especially with the RTX 3080, which is ridiculously low priced (same price as a 2070 Super, same performance as a 2080 Ti).

Re: Own hardware vs. cloud computing

Posted: Tue Sep 29, 2020 6:44 pm
by gunnarre
GPU folding on AWS, Google Cloud, Azure and IBM Cloud is prohibitively expensive compared to folding at home, unless you're benefiting from a free offer from them. Vast.ai is not quite as terrible:

A non-interruptible 1080 Ti instance on vast.ai now costs about 10 cents/hour for roughly 3M PPD with CUDA. That works out to about 876.60 USD per year. The 1080 Ti had an MSRP of $699 and now sells for about $400 used.

Power usage of the whole PC would be around 350 W, or about 3068 kWh/year. If you're paying California rates of about 15 cents/kWh, that's about $460/year in electricity. With Norwegian electricity prices of 4 cents/kWh, that's only 123 USD/year in electricity costs.
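
Worked through, for anyone who wants to check the numbers (same rates and wattage as above):

Code: Select all

awk 'BEGIN {
  hours = 24 * 365.25
  cloud_usd = 0.10 * hours        # ~876.6 USD/year on vast.ai
  kwh       = 0.350 * hours       # ~3068 kWh/year at 350 W
  printf "cloud: %.0f USD/yr | home electricity: CA %.0f USD/yr, NO %.0f USD/yr\n",
         cloud_usd, kwh * 0.15, kwh * 0.04
}'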

So if you're in California, buying a used 1080 Ti and folding with it yourself is a bit of a wash compared to folding on vast.ai. If you have to run an air conditioner because of folding, that pushes cloud folding into being better than folding at home with this particular card. Cloud folding is not economical in Norway, particularly because waste heat is less of an issue here - we can use it for heating.
MeeLee wrote:Especially with the RTX 3080, which is ridiculously low priced (same price as a 2070 Super, same performance as a 2080 Ti).
Indeed. Thanks to the CUDA update, less powerful cards like the GTX 1060, GTX 1660 Ti and the P106-100 mining card have gained a boost in folding power. And the RTX 3080 at the same MSRP as the 1080 Ti is folding at a much higher rate. So with these cards folding at home can be much better than folding on vast.ai, even if you live in California.

Re: Own hardware vs. cloud computing

Posted: Tue Sep 29, 2020 11:54 pm
by MeeLee
gunnarre wrote: Indeed. Thanks to the CUDA update, less powerful cards like the GTX 1060, GTX 1660 Ti and the P106-100 mining card have gained a boost in folding power. And the RTX 3080 at the same MSRP as the 1080 Ti is folding at a much higher rate. So with these cards folding at home can be much better than folding on vast.ai, even if you live in California.
How does CUDA aid in performance?
My understanding is that it only helps GPUs when you have more than one GPU and one of them is not fully utilized.

Re: Own hardware vs. cloud computing

Posted: Wed Sep 30, 2020 12:00 am
by PantherX
MeeLee wrote:...How does CUDA aid in performance?
My understanding is that it only helps GPUs when you have more than one GPU and one of them is not fully utilized.
Instead of using the OpenCL platform for the simulation, it will use the CUDA platform, which does provide a decent performance gain. Kepler or newer GPUs will use the CUDA functionality, and the speed-up can be from 15% to 100%: traditional Projects are in the 15% range, while the free-energy Projects (which Moonshot currently uses) will be near the 100% range. For some more details, you can read the blog post: https://foldingathome.org/2020/09/28/fo ... a-support/

Re: Own hardware vs. cloud computing

Posted: Wed Sep 30, 2020 7:11 pm
by bruce
OpenCL 1.2 is sufficient to process FAH assignments on either NVidia or AMD GPUs (or some others). There is little incentive to enhance this code, since being open and being universally accepted are its main objectives. On the other hand, CUDA is proprietary code that is a great competitive advantage for NVidia. It is regularly enhanced/optimized to support every feature in their latest hardware. After a number of years of upgrades, CUDA does work better than it did when OpenCL was originally distributed ... but you can't use it on your non-NVidia GPUs.

Re: Own hardware vs. cloud computing

Posted: Tue Nov 24, 2020 10:12 am
by gunnarre
Cloud folders in particular should be aware that there is a vulnerability in the FAHControl GUI before version 7.6.20 that could allow your cloud instances to execute code on your GUI machine: viewtopic.php?f=108&t=36471

Re: Own hardware vs. cloud computing

Posted: Tue Nov 24, 2020 7:02 pm
by foldy
This is more a theoretical issue, as you would not expect your own cloud instance to be evil and attack the PC running your FAHControl GUI. But if you update to FAHClient 7.6.21 or later, then you are also theoretically safe.

Re: Own hardware vs. cloud computing

Posted: Tue Nov 24, 2020 7:12 pm
by gunnarre
7.6.20 is also safe from the vulnerability. 7.6.13 has the vulnerability.

If you're running an instance on Azure or AWS, then an attack like that is even less likely than your PC coming with pre-installed malware, yes. But if you rent on places like vast.ai, which is an open marketplace where anyone can sell computing, then the chance is still slim but slightly higher.