
Re: Folding on vast.ai

Posted: Sat Jan 17, 2026 4:03 pm
by HandOverFist
muziqaz wrote: Sat Jan 17, 2026 3:54 pm
HandOverFist wrote: Sat Jan 17, 2026 3:42 pm Previously was 20M ppd...now 54M ppd on the same WU.

Edit: Just made over 213M ppd in less than 25 minutes with (4) 5090's. Lowest scoring gpu was 48M ppd.
magic ;)
Thanks
It's been my experience that LAR's ppd estimates for gpus are not far off. Currently all four gpus are producing 50-54M ppd. :D This makes my own rig with (2) 4080's look foolish lol. :lol:

Re: Folding on vast.ai

Posted: Sat Jan 17, 2026 5:16 pm
by Joe_H
Except when they are totally bogus. We have seen some really off numbers for a number of GPUs from the LARS ppd estimates.

Re: Folding on vast.ai

Posted: Sat Jan 17, 2026 5:20 pm
by HandOverFist
Joe_H wrote: Sat Jan 17, 2026 5:16 pm Except when they are totally bogus. We have seen some really off numbers for a number of GPUs from the LARS ppd estimates.
At least we know the 4080 and 5090 are pretty close. :wink:

Re: Folding on vast.ai

Posted: Sat Jan 17, 2026 8:09 pm
by calxalot
Which docker image did you choose?

Re: Folding on vast.ai

Posted: Mon Jan 19, 2026 7:00 pm
by HandOverFist
calxalot wrote: Sat Jan 17, 2026 8:09 pm Which docker image did you choose?
Either appears to work equally well, but using this one atm... https://cloud.vast.ai/?ref_id=260637&te ... 580760e7fd

Re: Folding on vast.ai

Posted: Sun Jan 25, 2026 3:27 pm
by toTOW
Be careful with the PPD display from the client:
- it is not accurate at the beginning of a WU; it settles after a while
- it is wildly wrong if you stop and resume FAH, until the WU completes and a new one starts.

And of course, it also varies from project to project on a given GPU.

Re: Folding on vast.ai

Posted: Thu Apr 16, 2026 4:01 pm
by Andreas
From my experience with vast.ai, results can vary a lot even with seemingly identical hardware (same GPU model, same CPU class, etc.). The underlying host matters more than it looks - things like PCIe bandwidth, CPU scheduling, thermal limits, and overall system load can make a noticeable difference.

I’ve seen cases where two RTX cards of the same model produce significantly different PPD depending on the host.

Also, a bit counterintuitive, but in some setups disabling CPU folding actually improves overall GPU performance. The CPU can become a bottleneck (or compete for resources), especially on shared or constrained systems.

A couple of things that helped in my case:
• prioritizing GPU slots over CPU
• checking PCIe configuration / bandwidth if available
• trying different hosts rather than assuming identical specs = identical performance
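One quick way to check the PCIe point above, assuming an NVIDIA card with a working driver and nvidia-smi on the PATH, is to query the link the GPU actually negotiated (a sketch, not vast.ai-specific):

```shell
# Hedged sketch: query the PCIe link each NVIDIA GPU actually negotiated.
# A card running at gen1/x4 instead of gen4/x16 can explain low PPD on an
# otherwise identical-looking host. Assumes nvidia-smi is installed.
if command -v nvidia-smi >/dev/null 2>&1; then
  # Current (negotiated) PCIe generation and lane width per GPU, as CSV
  nvidia-smi --query-gpu=name,pcie.link.gen.current,pcie.link.width.current \
             --format=csv
else
  echo "nvidia-smi not found: no NVIDIA driver on this host"
fi
```

Note that the "current" link can downshift at idle on some cards, so read it while the GPU is under load.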

Curious if others have observed the same variability on vast.ai or similar platforms.

Re: Folding on vast.ai

Posted: Sat Apr 18, 2026 2:38 pm
by toTOW
It is a known fact that CPU folding slows down GPU folding; it's not specific to Vast.ai. Whether to use the CPU while folding on the GPU is simply a matter of personal preference.

Knowing that, rule number one when choosing a Vast.ai offer is: make sure that no one else can use the CPU. That means avoiding offers that rent you a single GPU from a multi-GPU system.

And if GPU folding is still much slower than expected even though you're not folding on the CPU and you're the only user with access to it, log in to the instance over SSH and check that the load averages aren't abnormal: they should be around 1 per folding GPU.
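That check can be sketched like this (NUM_GPUS is an assumption for the sketch; set it to the number of GPUs your instance is folding on):

```shell
# Sanity check for a rented instance: the 1-minute load average should be
# roughly 1 per folding GPU. NUM_GPUS is an assumption for this sketch.
NUM_GPUS=4
load=$(cut -d' ' -f1 /proc/loadavg)   # 1-minute load average
echo "1-min load: $load (expected: about $NUM_GPUS)"
# Flag the instance if the load is more than double the GPU count, which
# would suggest other tenants or CPU contention on the host.
if awk -v l="$load" -v g="$NUM_GPUS" 'BEGIN { exit !(l <= 2 * g) }'; then
  echo "load looks normal"
else
  echo "load is abnormally high - investigate before folding"
fi
```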