Question
Posted: Sat Oct 15, 2016 7:25 pm
by Jimmyouyang
Does Folding@home have anything that can prevent donor cheating? I'm just curious
Re: Question
Posted: Sat Oct 15, 2016 8:01 pm
by Joe_H
Yes, there are a number of steps PG has taken to preclude various forms of cheating.
Also see the FAQs on the PG site. One FAQ on security, https://folding.stanford.edu/home/faq/#ntoc45, mentions some software and other measures that also make it difficult to cheat:
What about security issues?
We have worked very hard to maintain the best security possible with modern computer science methodology. Our software will upload and download data only from our data server here at Stanford. Also, we only interact with FAH files on your computer (we don’t read, write, or transmit any other files, as we don’t need to do so and doing so would violate our privacy policy). The Cores are also digitally signed (see below) to make sure that you’re getting the true Stanford cores and nothing else.
How is this possible? We take extensive measures to check all of the data entering your computer and the results we send back to Stanford with 2048 bit digital signatures. If the signatures don’t match (on either the input or the output) the client will throw away the data and start again. This ensures, using the best software security measures developed to date (digital signatures and PKI in version 3.0), that we are keeping the tightest possible security. Finally, the clients are available for download only from this web site (or in certain cases, also from our commercial partners such as Sony, NVIDIA, and ATI), so that we can guarantee the integrity of the software. We do not support Folding@home software obtained elsewhere and prohibit others from distributing the software.
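The actual verification code isn't published, but the general "verify or discard" pattern the FAQ describes looks roughly like the sketch below. The function, the file handling and the use of Python's 'cryptography' package are my own illustrative assumptions, not FAH's implementation.
Code:
# Illustrative sketch only - not Folding@home's actual code.
# It shows the general "verify or discard" pattern described above, using a
# 2048-bit RSA public key and the Python 'cryptography' package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def payload_is_authentic(payload: bytes, signature: bytes, pubkey_pem: bytes) -> bool:
    """Return True only if 'signature' was made over 'payload' by the holder
    of the private key matching 'pubkey_pem' (an assumed PEM-encoded key)."""
    public_key = serialization.load_pem_public_key(pubkey_pem)
    try:
        public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False


# A client following the rule above would throw away anything that fails the
# check and download it again:
#
#     if not payload_is_authentic(core_bytes, sig_bytes, PROJECT_PUBKEY):
#         discard_and_redownload()   # hypothetical handler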
Re: Question
Posted: Sat Oct 15, 2016 9:49 pm
by Nathan_P
Jimmyouyang wrote:Does Folding@home have anything that can prevent donor cheating? I'm just curious
I've been on the forum for 7 years now. Every time someone has found a way of cheating - be it misreporting the number of cores to the client, installing the client from a 3rd party, blocking server IPs to cherry-pick work units, etc. - it has been found out and acted upon. Back when you could spoof the old v6 client and pretend that a quad core was a 4c/8t part, they cut the deadlines on the projects and made the WUs bigger so that a spoofed quad or hex core could no longer make the deadlines. Individuals and whole teams have had their points cancelled out for a whole host of reasons. So yes, you can cheat - but it's not worth it: you will get caught and be zeroed out, and you can potentially ruin projects for donors whose hardware can actually do the job in hand. For more on this, read up on the old v6 bigadv projects and the shenanigans some donors got up to at the start of that project.
Re: Question
Posted: Sun Oct 16, 2016 12:17 am
by 7im
I would call those things client manipulation, not cheating in a strict sense. Unlike some other well known projects, where fake work units were awarded points, that kind of cheating has never happened with Fah.
Re: Question
Posted: Sun Oct 16, 2016 2:26 am
by Jimmyouyang
7im wrote:I would call those things client manipulation, not cheating in a strict sense. Unlike some other well known projects, where fake work units were awarded points, that kind of cheating has never happened with Fah.
So what do you mean by client manipulation - like changing the client setup?
Re: Question
Posted: Sun Oct 16, 2016 3:34 am
by JimboPalmer
Here is a harmless example:
By default, F@H preloads the next WU at 99% completion of the current one. This maximizes the science, as the next Work Unit is ready to start the moment the current one finishes.
You can disable this, and (trivially) reduce the science and (trivially) get more Points Per Day.
Several here do so; they value PPD more than the science.
Is this cheating? No, Folding@Home allows it and offers the chance to configure your client.
Re: Question
Posted: Sun Oct 16, 2016 7:24 am
by foldy
Sorry, but your example does not reduce the science - it increases it slightly. If the next work unit is downloaded at 99% and it takes another 10 min to finish the last 1%, then that next work unit idles for 10 min doing nothing for science. So for fast internet connections it is a good tweak to set next-unit-percentage to 100%, which means that after the current work unit finishes, the next work unit is downloaded in e.g. 10 sec and starts immediately. The lost time for science is then 10 sec, compared to 10 min with the default 99%. The slightly increased PPD reflects that: higher PPD means more value for the science. This is not a cheat but a good tweak.
Re: Question
Posted: Sun Oct 16, 2016 7:37 am
by JimboPalmer
I am sorry, but isn't a 10 second delay waiting for the next WU to download hurting the science? How does any part of that strategy increase the speed of returning the current or next WU?
All it does is (very slightly) increase your PPD. No more actual computing gets done anywhere in the process.
Re: Question
Posted: Sun Oct 16, 2016 8:52 am
by Joe_H
For a reasonable download speed, most WUs will be downloaded in the 10-20 seconds it takes from the WU reaching its last step through the end processing that packs it up for return. That is much shorter than, for instance, the 9 minutes a WU would have sat on one of my systems while waiting for the core to process from 99 to 100%. These examples are taken from one of my systems; my download connection is DSL at about 5.5-6 Mbps.
If the system is processing 3 or 4 WUs a day, that 9 minutes adds up to about half an hour of no processing. Even if a download takes a few seconds longer than the time between a WU reaching 100% and being completely packed up for upload, that is at most a minute or two a day that my system is not processing. More science gets done on my systems with the download set at 100%.
If you have a very slow network connection, then the 99% setting still makes sense. But for many that is not an issue.
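To put rough numbers on it, here is a quick back-of-the-envelope sketch. The 9 minutes, the 15 second download and the 3-4 WUs a day are just the example figures from my system above, not universal values.
Code:
# Back-of-the-envelope comparison of downloading the next WU at 99% vs 100%,
# using the example figures quoted in this thread (they differ per system).
MINUTES_99_TO_PACKED = 9.0   # time from 99% until the finished WU is packed up
DOWNLOAD_SECONDS = 15.0      # typical WU download on a DSL-class connection
WUS_PER_DAY = 3.5            # roughly 3 or 4 WUs per day on the system above

# At 99% the next WU is already on disk, but it has to wait while the current
# one finishes its last 1% and is packed up for upload.
wait_99 = MINUTES_99_TO_PACKED                # minutes per WU

# At 100% the download happens during (or just after) packing, so the next WU
# waits at most the download time itself.
wait_100 = DOWNLOAD_SECONDS / 60.0            # minutes per WU, worst case

for label, wait in (("99%", wait_99), ("100%", wait_100)):
    print(f"{label:>4}: next WU waits ~{wait:.2f} min per WU, "
          f"~{wait * WUS_PER_DAY:.1f} min of waiting per day")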
Re: Question
Posted: Sun Oct 16, 2016 9:47 am
by rwh202
The 99% / 100% issue is getting a bit off topic, but here's my tuppence:
The 'next-unit-percentage' setting was a feature too far - it was introduced as a 'bonus' when the original v6 problem of no simultaneous upload/download was solved in v7. Back in v6 times (unless you used 3rd-party proxies like langouste), you'd have an idle system for the 20 minutes of final checkpointing, the 50 minutes or so it took to upload a bigadv WU, and then the 10 minutes to download a new WU. Downloading the next unit at 100% was all that was required to solve that problem optimally.
With regard to the science produced: if you look at a client in isolation then 99% might seem better, but there are many thousands of clients, in which case things get more complicated depending on the supply/demand and priority of the assigned projects. Here 100% could make more sense, since a WU isn't sat idle for 10 minutes - it is downloaded and run by another client that's ready to start processing immediately.
Maybe Stanford have run the model and know the answer, but I somehow doubt that a universal 99% is correct. An intelligent client could perhaps set the optimal solution depending on client and network speed (maybe download when the ETA is 30 seconds remaining?).
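Something like the sketch below is what I have in mind - the function name, the 30 second margin and the example numbers are purely illustrative, nothing like this exists in the real client.
Code:
# Sketch of the "intelligent client" idea above: instead of a fixed 99%/100%
# setting, start the next download once the remaining ETA drops to roughly
# the expected download time plus a small safety margin.
# Every name and number here is illustrative - not part of the real FAHClient.

def should_start_download(eta_seconds: float,
                          wu_size_bytes: int,
                          download_bytes_per_sec: float,
                          margin_seconds: float = 30.0) -> bool:
    """Return True once the download would finish at about the same time as
    the work unit currently being processed."""
    expected_download = wu_size_bytes / max(download_bytes_per_sec, 1.0)
    return eta_seconds <= expected_download + margin_seconds


# Example: a 20 MB work unit on a ~6 Mbit/s DSL link (~0.75 MB/s).
if should_start_download(eta_seconds=45.0,
                         wu_size_bytes=20_000_000,
                         download_bytes_per_sec=750_000.0):
    print("Start fetching the next WU now")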
Re: Question
Posted: Sun Oct 16, 2016 10:28 am
by artoar_11
Years ago I saw a manipulation used for "cherry picking". In projects for single-core processors, an external program changed the number of cores (in "client.cfg"; v6.34).
Before downloading the WU it reduced the core count to 1 (as reported to the FAH servers). After downloading the WU, the number of cores was automatically changed back to the maximum for the processor, so the QRB points increased to 50-60k. Very cleverly invented. I asked a member of the then DAB to explain it to Dr. Vijay Pande.
After a few days the server settings were changed. I've forgotten the details.
Re: Question
Posted: Mon Oct 17, 2016 12:19 am
by 7im
In every complex system there arises a way to game the system and increase the points awarded. But those things were done within a grey area of the rules. It wasn't outright cheating, just taking advantage of an existing situation. Some saw it as poor sportsmanship, and the grey area was closed.