Hi
Just wanted to bring to your attention that the 18125 WUs are reporting much lower numbers (estimated PPD) in the FAH client.
EVGA 3080 Ti FTW3 Ultra Gaming showing 3.5 mil PPD
EVGA 3080 FTW3 Ultra Gaming showing 2.5 mil PPD.
Both of them would normally average two to nearly three times that amount on other work units.
Strictly FYI
I am just one in a million and I felt I should let you know
- Posts: 100
- Joined: Wed Jan 05, 2022 1:06 am
- Hardware configuration: 4080 / 12700F, 3090Ti/12900KS, 3090/12900K, 3090/10940X, 3080Ti/12700K, 3080Ti/9900X, 3080Ti/9900X
Re: 18125 Low PPD
Yeah. I have reported the same, as have a couple of others. That project is just super low PPD compared to everything else… and has had a lot of WUs for the past couple of months.
- Site Admin
- Posts: 7922
- Joined: Tue Apr 21, 2009 4:41 pm
- Hardware configuration: Mac Pro 2.8 quad 12 GB smp4, MacBook Pro 2.9 i7 8 GB smp2
- Location: W. MA
Re: 18125 Low PPD
Project 18125 is a low atom count GPU project compared to others, so it doesn't scale up as well on the fast, wide GPUs like the 3080s and 3090s. Normally its WUs would get assigned to the lesser GPUs, where they scale better, but they may be in higher supply than the larger atom count projects.
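To put rough numbers on the idea, here is a toy sketch (not the actual F@H points system): it assumes each atom can keep roughly one shader busy, uses the approximate shader counts for those card classes, and picks an arbitrary clock, so treat it only as an illustration of why a small WU leaves a wide card partly idle.

```python
# Toy model only: assumes ~one shader kept busy per atom, and ignores the
# points/bonus system, memory bandwidth, driver overhead, etc.
def relative_throughput(atoms, shaders, clock_ghz):
    busy_shaders = min(shaders, atoms)     # "lanes" this WU can actually fill
    return busy_shaders * clock_ghz        # arbitrary work units per second

gpus = {
    "wide (3080 Ti class, ~10240 shaders)":    (10240, 1.8),
    "narrow (1660 Super class, ~1408 shaders)": (1408, 1.8),
}
workloads = {
    "small WU (~2,500 atoms)":   2_500,
    "large WU (~700,000 atoms)": 700_000,
}

for gpu_name, (shaders, clock) in gpus.items():
    for wu_name, atoms in workloads.items():
        t = relative_throughput(atoms, shaders, clock)
        print(f"{gpu_name}, {wu_name}: relative throughput {t:,.0f}")
```

With those made-up numbers the wide card comes out roughly 7x the narrow card on the big WU but less than 2x on the small one, which is about the pattern the PPD estimates above are showing.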
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
- Posts: 100
- Joined: Wed Jan 05, 2022 1:06 am
- Hardware configuration: 4080 / 12700F, 3090Ti/12900KS, 3090/12900K, 3090/10940X, 3080Ti/12700K, 3080Ti/9900X, 3080Ti/9900X
Re: 18125 Low PPD
Thanks, Joe_H. I think that finally clicked in my brain this time. Is this a (good) analogy: the 3080/3090 are like 128 lanes of roadway with every lane moving at 100 mph… 18125 only needs about 64 lanes, though, and its cars only go 65 mph… so on roadways (GPUs) that only have 64 lanes and run at 65 mph all the time, 18125 doesn't actually push more cars per day (PPD) through than other projects do, but it doesn't push fewer, either. Workloads with MORE atoms (more cars on the road at the same time) get those roads the same total cars through per day as anything else that maxes out their lanes and speed limit; the bigger work units just take longer to finish, since those roads have fewer and/or slower lanes.
Well, not sure if it was a GOOD analogy, but I think I understand it now.
Any way to put more projects (more groups of cars) onto the wider roadways when their lanes are sitting empty? Is that something the next version will do? Make sure all the lanes stay full?
No matter what, thanks again Joe_H!
- Posts: 513
- Joined: Fri Apr 03, 2020 2:22 pm
- Hardware configuration: ASRock X370M PRO4, Ryzen 2400G APU, 16 GB DDR4-3200, MSI GTX 1660 Super Gaming X
Re: 18125 Low PPD
Lazvon,
I'd say that is a close analogy.
The "lanes" are dictated by shaders, CUDA cores, compute units, etc depending on terminology.
"Speed" is dictated by GPU frequency, VRAM frequency, and appropriate bus speeds and throughput bottlenecks or lack of.
Atom counts dictate how many lanes can be filled at once.
I've seen GPU work units as small as 2,500 atoms and some over 700k. Considering the vast amount of hardware variability that F@H supports, it's just a matter of time before you either hit the lottery or go bankrupt. It will be interesting to see over time whether the atom counts keep up with new hardware, or whether it will reach a point where buying the newest and fastest won't have much, if any, advantage over current GPUs. And in the same line of thinking, will there always be really small atom count work units that the big, wide cards dislike but the older, narrower cards thrive on?
One thing I have found that might help in some cases is to look at the current projects and find out which ones fit your various equipment's "sweet spots" in terms of atom counts vs. PPD/time, etc. If the majority that fit your gear are, say, Cancer projects, select Cancer projects as your preference. It's not a guarantee, but it might help. I've used this method at times to avoid projects that get assigned to my gear but won't meet the timeout requirements. By selecting a project cause other than the one those work units belong to, I don't get them assigned as often, and so I don't have to dump them and slow the science down. In your case, with fast gear, it could help you get assignments that make better use of your resources.
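For anyone wanting to do that bookkeeping, a minimal sketch of the idea is below. Every project number, cause, and PPD figure in it is invented purely for illustration, so substitute whatever you actually observe on your own cards from your logs or stats tools.

```python
# Sketch: average the PPD you have actually observed per cause on one card,
# to decide which cause preference best matches that card's "sweet spot".
# All projects, causes, and PPD values below are hypothetical placeholders.
from collections import defaultdict

observed = [
    # (project, cause, atoms, average PPD seen on this card)
    (11111, "cancer",      40_000, 3_500_000),
    (22222, "cancer",     650_000, 9_200_000),
    (33333, "alzheimers", 420_000, 8_100_000),
    (44444, "parkinsons",   2_500, 1_300_000),
]

ppd_by_cause = defaultdict(list)
for _project, cause, _atoms, ppd in observed:
    ppd_by_cause[cause].append(ppd)

# Average PPD per cause, best first -- a hint for the cause preference.
ranking = sorted(
    ((sum(v) / len(v), cause) for cause, v in ppd_by_cause.items()),
    reverse=True,
)
for avg_ppd, cause in ranking:
    print(f"{cause:>12}: ~{avg_ppd:,.0f} PPD average on this card")
```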
Fold them if you get them!
- Posts: 100
- Joined: Wed Jan 05, 2022 1:06 am
- Hardware configuration: 4080 / 12700F, 3090Ti/12900KS, 3090/12900K, 3090/10940X, 3080Ti/12700K, 3080Ti/9900X, 3080Ti/9900X
Re: 18125 Low PPD
I don’t mind; it’s just that if I am dedicating this much hardware (thus $$) to folding, I’d like to keep it as busy as possible, is all.
If 18125 is the priority project, then that’s fine. Just wish I could run two 18125s at the same time to fill the lanes up.
- Site Admin
- Posts: 7922
- Joined: Tue Apr 21, 2009 4:41 pm
- Hardware configuration: Mac Pro 2.8 quad 12 GB smp4, MacBook Pro 2.9 i7 8 GB smp2
- Location: W. MA
Re: 18125 Low PPD
It is something they are thinking about, but I have no idea if they have figured out how to do it yet. On an older version of the client, which configured GPUs differently, some people hacked together ways to do this, but they ran into other problems. It may be more feasible on newer cards and drivers, as the makers have added some rudimentary scheduling features that might be usable to have two different WUs processed at the same time. They would still need to resolve the conflicts that would come up if the next WU downloaded could use the wider resources of the GPU.
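For what it's worth, the low-level building block is already there: a GPU can execute kernels from separate streams (or separate processes, via things like NVIDIA's MPS) side by side when no single workload saturates it. Below is a minimal CuPy sketch of that idea, purely as an illustration of concurrent scheduling and not anything the current client actually does.

```python
# Illustration only: two independent workloads ("WU 1" and "WU 2") queued on
# separate CUDA streams of one GPU. If neither workload fills the whole card,
# the driver can overlap them. The F@H client does not work this way today.
import cupy as cp

a = cp.random.rand(2048, 2048)
b = cp.random.rand(2048, 2048)

stream1 = cp.cuda.Stream(non_blocking=True)
stream2 = cp.cuda.Stream(non_blocking=True)

with stream1:          # pretend this is WU 1
    r1 = a @ a
with stream2:          # pretend this is WU 2
    r2 = b @ b

# Both streams can be in flight at once; wait for each to finish.
stream1.synchronize()
stream2.synchronize()
print(float(r1.sum()), float(r2.sum()))
```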
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
- Posts: 21
- Joined: Sun Nov 19, 2017 3:24 am
Re: 18125 Low PPD
I was looking at the project itself and I think all of them are important.
Just me thinking I should get more recognition...lol
Shame on me