Question regarding the News post on July 8


Simplex0
Posts: 69
Joined: Sun Oct 06, 2013 10:35 am

Question regarding the News post on July 8

Post by Simplex0 »

In a post on the News page on July 8 (https://foldingathome.org/2020/07/08/ci ... -covid-19/) it says:

"An unprecedented 0.1 seconds of simulation of the viral proteome reveal how the spike complex uses conformational masking to evade an immune response"

Does that mean that a COVID-19 work unit, which commonly takes several hours to process, covers only 0.1 seconds in real life?
Hopfgeist
Posts: 70
Joined: Thu Jul 09, 2020 12:07 pm
Hardware configuration: Dell T420, 2x Xeon E5-2470 v2, NetBSD 10, SunFire X2270 M2, 2x Xeon X5675, NetBSD 9; various other Linux/NetBSD PCs, Macs and virtual servers.
Location: Germany

Re: Question regarding the News post on July 8

Post by Hopfgeist »

It is a lot worse than that. A single work unit is more on the order of a few nanoseconds. They are talking about millions of work units, in combination simulating a total of 100 milliseconds real-time.

Which is what makes this result unprecedented. Never before have there been atomic-scale protein simulations for such a long timespan. Typically they only simulate microseconds up to a few milliseconds.

Chemical reactions are unbelievably fast, because the constituents involved are unbelievably small.

For reference: measuring the timing of chemical reactions in the real world takes femtosecond resolution. Take a look at this presentation on how awesome femtosecond x-ray lasers are.

There are 100 trillion femtoseconds in 0.1 seconds. I think F@H may use a slightly coarser timescale, but probably not by much, and it takes an extraordinarily large number of steps to simulate 0.1 seconds.
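Just to sanity-check that figure, here is the unit conversion as a couple of lines of Python (plain arithmetic, nothing F@H-specific):

```python
# Back-of-envelope check of the "100 trillion femtoseconds" figure above.
# The 0.1 s target is the simulated time from the news post quoted earlier.

FS_PER_SECOND = 10**15          # femtoseconds in one second
total_fs = FS_PER_SECOND // 10  # 0.1 s expressed in femtoseconds

print(f"{total_fs:,} fs in 0.1 s")  # 100,000,000,000,000 -> 100 trillion
```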

Bernd
Dell PowerEdge T420: 2x Xeon E5-2470 v2
uyaem
Posts: 219
Joined: Sat Mar 21, 2020 7:35 pm
Location: Esslingen, Germany

Re: Question regarding the News post on July 8

Post by uyaem »

Simplex0 wrote:Does that mean that a COVID-19 work unit, which commonly takes several hours to process, covers only 0.1 seconds in real life?
I think it's actually much "worse" than that, it's the combination of several thousand hours of computing that resulted in those 0.1 seconds.
CPU: Ryzen 9 3900X (1x21 CPUs) ~ GPU: nVidia GeForce GTX 1660 Super (Asus)
Neil-B
Posts: 1996
Joined: Sun Mar 22, 2020 5:52 pm
Hardware configuration: 1: 2x Xeon E5-2697v3@2.60GHz, 512GB DDR4 LRDIMM, SSD Raid, Win10 Ent 20H2, Quadro K420 1GB, FAH 7.6.21
2: Xeon E3-1505Mv5@2.80GHz, 32GB DDR4, NVME, Win10 Pro 20H2, Quadro M1000M 2GB, FAH 7.6.21 (actually have two of these)
3: i7-960@3.20GHz, 12GB DDR3, SSD, Win10 Pro 20H2, GTX 750Ti 2GB, GTX 1080Ti 11GB, FAH 7.6.21
Location: UK

Re: Question regarding the News post on July 8

Post by Neil-B »

If I understand it correctly, "several" might be quite a large number tbh ... either that or thousands might be millions!! ... or I might have misunderstood.
2x Xeon E5-2697v3, 512GB DDR4 LRDIMM, SSD Raid, W10-Ent, Quadro K420
Xeon E3-1505Mv5, 32GB DDR4, NVME, W10-Pro, Quadro M1000M
i7-960, 12GB DDR3, SSD, W10-Pro, GTX1080Ti
i9-10850K, 64GB DDR4, NVME, W11-Pro, RTX3070

(Green/Bold = Active)
Joe_H
Site Admin
Posts: 7927
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
Location: W. MA

Re: Question regarding the News post on July 8

Post by Joe_H »

If I remember the timescale correctly, for most projects each "step" listed in the log for a WU is 2 femtoseconds. In some of the GPU projects they were testing the use of steps that were twice as long. But for the 2 femtosecond time step, if the WU ran for a total of 500,000 steps that was for the length of 1 nanosecond.
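A minimal sketch of that step arithmetic, using the values recalled above (2 fs per step, 500,000 steps per WU; these are recollections, not official project parameters):

```python
# Step arithmetic from this post: 500,000 steps of 2 fs each = 1 ns per WU.
step_fs = 2                # femtoseconds simulated per step (recalled value)
steps_per_wu = 500_000     # steps in one work unit (recalled value)

wu_length_fs = step_fs * steps_per_wu   # 1,000,000 fs
wu_length_ns = wu_length_fs / 10**6     # 1 ns = 1,000,000 fs

print(wu_length_ns, "ns per WU")  # 1.0 ns per WU
```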

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
Simplex0
Posts: 69
Joined: Sun Oct 06, 2013 10:35 am

Re: Question regarding the News post on July 8

Post by Simplex0 »

Thank you all for helping me understand this.
Neil-B
Posts: 1996
Joined: Sun Mar 22, 2020 5:52 pm
Hardware configuration: 1: 2x Xeon E5-2697v3@2.60GHz, 512GB DDR4 LRDIMM, SSD Raid, Win10 Ent 20H2, Quadro K420 1GB, FAH 7.6.21
2: Xeon E3-1505Mv5@2.80GHz, 32GB DDR4, NVME, Win10 Pro 20H2, Quadro M1000M 2GB, FAH 7.6.21 (actually have two of these)
3: i7-960@3.20GHz, 12GB DDR3, SSD, Win10 Pro 20H2, GTX 750Ti 2GB, GTX 1080Ti 11GB, FAH 7.6.21
Location: UK

Re: Question regarding the News post on July 8

Post by Neil-B »

Joe_H wrote:If I remember the timescale correctly, for most projects each "step" listed in the log for a WU is 2 femtoseconds. In some of the GPU projects they were testing the use of steps that were twice as long. But for the 2 femtosecond time step, if the WU ran for a total of 500,000 steps that was for the length of 1 nanosecond.
So scratches head to get grey matter working .. 0.1 secs would be about 100 million (or 50 million for twice as long steps) WUs based on what you recall .. grief !!
2x Xeon E5-2697v3, 512GB DDR4 LRDIMM, SSD Raid, W10-Ent, Quadro K420
Xeon E3-1505Mv5, 32GB DDR4, NVME, W10-Pro, Quadro M1000M
i7-960, 12GB DDR3, SSD, W10-Pro, GTX1080Ti
i9-10850K, 64GB DDR4, NVME, W11-Pro, RTX3070

(Green/Bold = Active)
Hopfgeist
Posts: 70
Joined: Thu Jul 09, 2020 12:07 pm
Hardware configuration: Dell T420, 2x Xeon E5-2470 v2, NetBSD 10, SunFire X2270 M2, 2x Xeon X5675, NetBSD 9; various other Linux/NetBSD PCs, Macs and virtual servers.
Location: Germany

Re: Question regarding the News post on July 8

Post by Hopfgeist »

Neil-B wrote:
Joe_H wrote:If I remember the timescale correctly, for most projects each "step" listed in the log for a WU is 2 femtoseconds. In some of the GPU projects they were testing the use of steps that were twice as long. But for the 2 femtosecond time step, if the WU ran for a total of 500,000 steps that was for the length of 1 nanosecond.
So scratches head to get grey matter working .. 0.1 secs would be about 100 million (or 50 million for twice as long steps) WUs based on what you recall .. grief !!
Yes, I did the math, too. But I didn't post it because I'm still a bit sceptical. According to the stats on extremeoverclocking.org there have only been 915 million WUs in total, so this project alone would have used between 5 and 10% of the whole effort. Given the total number of projects I'm not sure that is right, but I cannot find statistics on how many work units each project has finished.

And that also assumes roughly equally-sized work units, which isn't the case, but still. Mind blown.

Cheers,
HG.
Dell PowerEdge T420: 2x Xeon E5-2470 v2
JimF
Posts: 651
Joined: Thu Jan 21, 2010 2:03 pm

Re: Question regarding the News post on July 8

Post by JimF »

Hopfgeist wrote:According to the stats on extremeoverclocking.org there have only been 915 million WUs total, so this project alone would have used between 5 and 10% of the whole effort. Given the total number of projects I'm not sure that is right, but I cannot find statistics about which project has been finishing how many work units.
That seems reasonable enough to me. Folding is one of the oldest distributed computing projects around, and they had done a LOT of stuff long before COVID came around.
They are also one of the few projects that do GPU work, which attracts a lot of crunchers. And the work supply has been steady; they really don't run out. So it all adds up.
Simplex0
Posts: 69
Joined: Sun Oct 06, 2013 10:35 am

Re: Question regarding the News post on July 8

Post by Simplex0 »

Reading parts of the full text here https://www.biorxiv.org/content/10.1101 ... 430v1.full
I see that it says:

"simulating every protein that is relevant to SARS-CoV-2 for biologically relevant timescales would require compute resources on an unprecedented scale."

But I have not found any information on exactly how many proteins "every protein" relevant to this research amounts to. Does anyone have any information on this?

They also say:

"Using this resource, we constructed quantitative maps of the structural ensembles of over two dozen proteins and complexes that pertain to SARS-CoV-2."

Should I take this to mean that Folding@home has so far covered a little more than two dozen of all the proteins that need to be covered?
uyaem
Posts: 219
Joined: Sat Mar 21, 2020 7:35 pm
Location: Esslingen, Germany

Re: Question regarding the News post on July 8

Post by uyaem »

Okay, some clarification on the matter... I remembered this being answered indirectly on Discord a while ago, and I finally got the search function to work:
SlinkyDolphinClock wrote:@Grayfox @Uyaem nanoseconds may not seem like a lot, but atoms move around really quickly and the client computes/updates the forces/positions between atoms (i.e. a new "snapshot") either every 2 or 4 femtoseconds. Those snapshots are usually saved every 1-100 picoseconds, and all those frames/snapshots constitute the trajectory that is sent back, so a lot can actually happen in a couple nanoseconds of simulation
Link to Discord screenshot here: https://ibb.co/FxRyLmk.

So every snapshot is already 1+ picoseconds.
With a GPU project normally being 100 snapshots (assuming the snapshot is the same as a viewer snapshot), we have at least 100 ps/WU.

So that's 0.1 s = 100 ms = 100,000 µs = 100,000,000 ns = 100,000,000,000 ps, so 1 bn WUs.

Based on a comment from PantherX on Discord the day before:
PantherX wrote:[...]and I think it is few nanoseconds which allows the researchers to have a good "feel" for the project.
I would guess those 0.1s are the sum of trajectories and not a single one.
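Spelled out as a quick sketch (the 100 ps/WU figure is an assumption from the Discord quote above, ~100 snapshots at ~1 ps each, not a confirmed project setting):

```python
# Snapshot arithmetic: how many WUs at ~100 ps each cover 0.1 s?
PS_PER_SECOND = 10**12
total_ps = PS_PER_SECOND // 10   # 0.1 s in picoseconds: 100,000,000,000
ps_per_wu = 100                  # assumed: ~100 snapshots at ~1 ps each

print(f"{total_ps // ps_per_wu:,} WUs")  # 1,000,000,000 -> ~1 billion WUs
```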
CPU: Ryzen 9 3900X (1x21 CPUs) ~ GPU: nVidia GeForce GTX 1660 Super (Asus)
Brad_C
Posts: 8
Joined: Sat Apr 18, 2020 10:01 pm

Re: Question regarding the News post on July 8

Post by Brad_C »

By comparison, here's an article about using a supercomputer for 100 days in 2010 to set a new record simulating a protein for one millisecond.
https://www.nature.com/news/2010/101014 ... ews.2010.5
Hopfgeist
Posts: 70
Joined: Thu Jul 09, 2020 12:07 pm
Hardware configuration: Dell T420, 2x Xeon E5-2470 v2, NetBSD 10, SunFire X2270 M2, 2x Xeon X5675, NetBSD 9; various other Linux/NetBSD PCs, Macs and virtual servers.
Location: Germany

Re: Question regarding the News post on July 8

Post by Hopfgeist »

uyaem wrote:Okay, some clarification on the matter... I remembered this being answered indirectly on Discord a while ago, and I finally worked the search function correctly:
[...]
So every snapshot is already 1+ picoseconds.
With a GPU project being 100 snapshots normally (assuming the snapshot is the same as a viewer snapshot), we have at least 100ps/WU

So that's 0.1 s = 100 ms = 100,000 µs = 100,000,000 ns = 100,000,000,000 ps, so 1 bn WUs.
That doesn't work out, since the (preprint) paper explicitly says that the step size is 4 femtoseconds. So 100 ms would require 25,000,000,000,000 steps, and with a typical work unit consisting of 250,000 steps that would be "only" 100,000,000 work units, or 50 million WUs of 500,000 steps each.
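That arithmetic, spelled out (using the 4 fs step the preprint states, and the two WU sizes discussed in this thread as assumptions):

```python
# WU-count arithmetic: 0.1 s of simulated time at 4 fs per step.
total_fs = 10**14                   # 0.1 s in femtoseconds
step_fs = 4                         # step size stated in the preprint
total_steps = total_fs // step_fs   # 25,000,000,000,000 steps

for steps_per_wu in (250_000, 500_000):  # assumed typical WU sizes
    print(f"{steps_per_wu:,} steps/WU -> {total_steps // steps_per_wu:,} WUs")
# 250,000 steps/WU -> 100,000,000 WUs
# 500,000 steps/WU -> 50,000,000 WUs
```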
uyaem wrote:I would guess those 0.1s are the sum of trajectories and not a single one.
Yes, quite clearly, when one actually reads the paper.

Cheers,
HG
Dell PowerEdge T420: 2x Xeon E5-2470 v2
Joe_H
Site Admin
Posts: 7927
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
Location: W. MA

Re: Question regarding the News post on July 8

Post by Joe_H »

Hopfgeist wrote:... and with a typical work unit consisting of 250,000 steps...
250,000 steps is a typical size for a CPU project WU. GPU projects typically have 1,000,000 or more steps done in each WU.

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
uyaem
Posts: 219
Joined: Sat Mar 21, 2020 7:35 pm
Location: Esslingen, Germany

Re: Question regarding the News post on July 8

Post by uyaem »

Using the combined info from the postings of Hopfgeist and Joe_H, let's take an average of 500k steps (across the sum of CPU and GPU WUs):
That would mean 50 million WUs to get 100ms.
With an assignment rate of above 100k WUs/h at the time (seen on https://apps.foldingathome.org/serverstats), that would mean roughly 2.5 million WUs per day, so about 20 days - I think that could check out given the overall time frame :) (whilst still keeping in mind that some WUs were dumped, faulty, assigned to other projects, or expired ... plus server downtimes ...)
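As a quick sketch of that estimate (the 50 million WU total and the 100k WUs/h rate are rough figures from this thread, not official statistics):

```python
# Throughput estimate: how long would 50 million WUs take at ~100k WUs/h?
total_wus = 50_000_000     # estimate derived earlier in the thread
wus_per_hour = 100_000     # assumed assignment rate from serverstats

wus_per_day = wus_per_hour * 24            # 2,400,000 (~2.5 million) per day
print(round(total_wus / wus_per_day, 1), "days")  # ~20.8 days
```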
CPU: Ryzen 9 3900X (1x21 CPUs) ~ GPU: nVidia GeForce GTX 1660 Super (Asus)