3950x LOW PPD
GlueFactoryBJJ
- Posts: 21
- Joined: Sat Apr 25, 2020 8:22 pm
- Hardware configuration: AMD R9-3950X
ASUS Crosshair VIII X570 Motherboard
64GB DDR4 RAM
Gigabyte GTX 1080 Ti
Sabrent 2TB M.2 NVME Gen 4 SSD
1000W PSU
~30TB total storage attached (plus NAS) - Location: Memphis, TN
Re: 3950x LOW PPD
Hey!
Sorry, it's been a while and things have been busy, but I thought I'd drop in to respond to the recent replies.
As for the system itself, it's been perfectly stable for the last 2-3 months: no crashes or other symptoms that would indicate slowly failing hardware (CPU, GPU, or memory).
I have a 1000W PSU, so I don't think power is the problem. Again, no strange crashes.
The CPU has been running steadily at 4+ GHz on all cores. It never drops below 4 GHz, but does get up to about 4.2 GHz sometimes. This has never been a super fast CPU and I have never overclocked it. The temps never go over 82C and are usually around 73-78C.
The BIOS on my ASUS X570 Crosshair VIII Hero motherboard is the latest version (at least as of about a month ago).
@JimF - Wow! 903K PPD on your 5950x?!? That is about 3.5 times what I'm getting on my 3950x.
@JimF - How can you specify particular projects (e.g. P16959)? I can see specifying particular diseases, but not projects.
*** JUST TO REITERATE, my issue is the (relatively gradual) DROP in PPD performance I've seen over the last year or so. I've gone from an average of 2.5-3.0M PPD to 1.8-2.4M PPD, around a 25% drop during this period. This problem appears to be only with FAH. All of my other programs seem to be running at about the same speed as they always have been. CineBench R23 still shows my CPU performance within +/- 2% of what I would expect (~23500 multi-core). My games are still performing as expected.
I don't know what compiler the FAH team uses, but if it is an Intel compiler, that certainly isn't going to help, because of the built-in optimizations Intel uses that, big surprise, don't work as well on AMD CPUs. If the compiler hasn't been specifically optimized for AMD (i.e. it detects an AMD CPU and runs optimized code paths on it), then that is probably going to be a problem. As far as I know, ALL compilers (except for AMD's) have been at least mostly optimized for Intel.
Same for GPUs. I seem to remember that FAH uses FP32 operations. If that is the case, then AMD GPUs should work nearly as well as, if not better than, NVidia's GPUs. Their FP32 calcs seem to be about the same (+/- 10%), but NVidia has the major advantage in RT and some other operations.
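For a rough sanity check on the FP32 point: peak single-precision throughput is roughly 2 ops per FMA x shader count x clock. A quick sketch using approximate published shader counts and boost clocks (ballpark figures, not measurements):
Code: Select all
# Rough peak FP32 throughput: 2 ops/FMA * shaders * clock (GHz) -> GFLOPS.
# Shader counts and boost clocks below are approximate public spec-sheet values.
def peak_fp32_gflops(shaders: int, boost_ghz: float) -> float:
    return 2 * shaders * boost_ghz

cards = {
    "GTX 1080 Ti": (3584, 1.58),   # ~11.3 TFLOPS
    "RTX 3090":    (10496, 1.70),  # ~35.7 TFLOPS
    "RX 6900 XT":  (5120, 2.25),   # ~23.0 TFLOPS
}

for name, (shaders, clock) in cards.items():
    print(f"{name}: ~{peak_fp32_gflops(shaders, clock) / 1000:.1f} TFLOPS FP32")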
No, I'm not saying this is a conspiracy; I'm just saying that unless specific operations have been optimized for AMD products, they will suffer a bit by comparison in FAH.
Finally, I have to wonder whether the programming efficiency of a particular project affects its PPD. Frankly, I want to get better PPD, but I also want to help the most needy/worthy projects.
Anyway, thanks for the suggestions. I'm going to keep playing with this and see if I can get this to work better.
Scott
Lazvon
- Posts: 100
- Joined: Wed Jan 05, 2022 1:06 am
- Hardware configuration: 4080 / 12700F, 3090Ti/12900KS, 3090/12900K, 3090/10940X, 3080Ti/12700K, 3080Ti/9900X, 3080Ti/9900X
Re: 3950x LOW PPD
Oh yes, this thread.
While my PPD average started decreasing over the last few weeks, it certainly appears to vary greatly between WUs. I finish most WUs in 1-2 hours, IIRC. However, I seem to be getting a lot more WUs these days that end up at 2-3M PPD instead of what I used to see (and the daily results showed it): 7-9.5M PPD.
Is it just that there are lower-value workloads these days? I should probably search, but how are the workload base credits actually assigned? Does the researcher "pay" (with grant money, I assume) more to get their WU base credit higher? Is there a committee that votes based on comparisons of critical need?
I certainly do not mind and am not complaining at all; just curious. If this is already discussed somewhere I can read, I'm certainly happy to take a redirect to that info.
No matter what, just want to make sure I am donating as much as I can to these causes.
BobWilliams757
- Posts: 515
- Joined: Fri Apr 03, 2020 2:22 pm
- Hardware configuration: ASRock X370M PRO4
Ryzen 2400G APU
16 GB DDR4-3200
MSI GTX 1660 Super Gaming X
Re: 3950x LOW PPD
Lazvon wrote: ↑Sat Jul 30, 2022 2:10 pm Oh yes, this thread.
While my PPD average over time started decreasing last few weeks… it certainly appears to vary greatly for the WUs. I finish most WUs in 1-2 hours IIRC. However I seem to be getting a lot more WUs these days that end up with “2-3M PPD” instead of what used to (and the daily results showed it, “7-9.5M PPD”.
Is it just there are lower value workloads these days? I should probably search, but how the workload base credits actually assigned? Does the researcher “pay” (with grant money I assume) more to get their WU base credit higher? Committee that votes based on critical need comparisons?
I certainly do not mind, am not complaining at all… just curious. If already discussed somewhere I can read, certainly happy take a redirection to that info.
No matter what, just want to make sure I am donating as much as I can to these causes.
Do you monitor work units with HFM? If not, it might be a useful tool for you. Work unit variations happen, though in your case the spread seems very extreme. But as a person with a slow system, maybe that's normal for the much faster cards you are running. PPD and points returns are related to speed, and generally speaking the higher base credit work units have a tighter deadline, which rewards faster cards such as the ones you are running. Work units with longer deadlines will still be issued, since the science still needs to happen, but they often return fewer points.
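For reference, the quick-return bonus is why speed matters so much. As I understand it (this is the published formula, assuming it hasn't changed), credit is roughly base points times the square root of (k x timeout / time-to-return), so halving your return time raises the credit by about 1.4x and the PPD by about 2.8x. A rough sketch with made-up project numbers, not any real project's values:
Code: Select all
import math

# Sketch of the published quick-return bonus formula (passkey required):
#   credit = base_points * max(1, sqrt(k * timeout / elapsed))
# base_points, k and the timeout are set per project; elapsed is your return time.
def estimate_credit(base_points, k, timeout_days, elapsed_days):
    bonus = math.sqrt(k * timeout_days / elapsed_days)
    return base_points * max(1.0, bonus)

def estimate_ppd(base_points, k, timeout_days, tpf_minutes, frames=100):
    # Total elapsed time for the WU, assuming the usual 100 frames per WU.
    elapsed_days = tpf_minutes * frames / (60 * 24)
    credit = estimate_credit(base_points, k, timeout_days, elapsed_days)
    return credit / elapsed_days  # credit per WU times WUs per day

# Hypothetical project numbers for illustration only:
ppd = estimate_ppd(base_points=25000, k=0.75, timeout_days=2.0, tpf_minutes=3.0)
print(f"~{ppd:,.0f} PPD")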
Here and there certain cards just don't perform well with certain work units. If you just happen to get a lot of these, it can significantly lower your averages over time. By the same token, that same card might give an unusually high PPD return on other work units, so over time it averages out.
Just be glad you have quick stuff running. The work units that complete in 1-2 hours on my system amount to maybe a few dozen in total; usually it's more like a day or more per unit.....
Fold them if you get them!
Lazvon
- Posts: 100
- Joined: Wed Jan 05, 2022 1:06 am
- Hardware configuration: 4080 / 12700F, 3090Ti/12900KS, 3090/12900K, 3090/10940X, 3080Ti/12700K, 3080Ti/9900X, 3080Ti/9900X
Re: 3950x LOW PPD
Hi, yes, I use HFM. And it certainly seems project-based... P18125 is the lowest I regularly see; if I were running nothing but that, I'd be getting 2.5M PPD on a 3090. If I were running nothing but, say, P18500 (on the other 3090), 8.4M PPD. For many months I never really saw any low-PPD WUs, and I still can't explain why I suddenly went from 32-35M PPD to 22-25M PPD total. It just seems like a big drop for mere project variability.
The P18125 (2.5M PPD) just finished... next workload on the same 3090 is P18448 and it is 6.6M PPD.
On the 3090Ti ... P17604 - 4.1M ... versus I see some WUs at estimated 10M PPD on that card.
One of the 3080Tis - P18127 - 4.4M... other P18707 - 5.4M.
I still wonder if it is driver related; I'm running 516.59. Hmmm. Just ran the eVGA Precision X1 on the 3090Ti system... says I need to update firmware... trying that.
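If anyone else wants to quantify this: I believe HFM can export its work unit history, and once you have it as a CSV, averaging PPD per project makes the spread obvious. The column names below are assumptions, so adjust them to whatever your export actually uses:
Code: Select all
import csv
from collections import defaultdict

# Average PPD per project from a work-unit history CSV.
# "ProjectID" and "PPD" are assumed column names; rename to match your export.
totals = defaultdict(lambda: [0.0, 0])

with open("wu_history.csv", newline="") as f:
    for row in csv.DictReader(f):
        try:
            project = row["ProjectID"]
            ppd = float(row["PPD"])
        except (KeyError, ValueError):
            continue  # skip rows that don't parse
        totals[project][0] += ppd
        totals[project][1] += 1

for project, (ppd_sum, count) in sorted(totals.items(),
                                        key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"Project {project}: {ppd_sum / count:,.0f} avg PPD over {count} WUs")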
Lazvon
- Posts: 100
- Joined: Wed Jan 05, 2022 1:06 am
- Hardware configuration: 4080 / 12700F, 3090Ti/12900KS, 3090/12900K, 3090/10940X, 3080Ti/12700K, 3080Ti/9900X, 3080Ti/9900X
Re: 3950x LOW PPD
More on the difference. For the P17604 WU ... still 4.1M ... power consumption is at 50%. Folding power is set to FULL. It's like this workload doesn't even try to stress the GPU. I do limit it to 83 degrees, but it is running at 55 degrees right now... nowhere near the limit.
BobWilliams757
- Posts: 515
- Joined: Fri Apr 03, 2020 2:22 pm
- Hardware configuration: ASRock X370M PRO4
Ryzen 2400G APU
16 GB DDR4-3200
MSI GTX 1660 Super Gaming X
Re: 3950x LOW PPD
Lazvon wrote: ↑Wed Aug 03, 2022 1:19 am More on the difference. For the P17604 WU ... still 4.1M ... power consumption is at 50%. Folding power is set to FULL. It's like this workload doesn't even try to stress the GPU. I do limit it to 83 degrees, but it is running at 55 degrees right now... nowhere near the limit.
I find the same trends even on (much) slower hardware. The WUs that give higher PPD often generate more heat, and those that give lower PPD generate less. In my case the highest-PPD WUs are often the ones I am lucky to meet deadlines with, but that is a classification issue unique to AMD stuff.
In your case, there are so many Nvidia users running high-end gear that the word usually spreads quickly if there are driver or firmware issues that impact folding.
PPD variances are normal, and at times very large. From my understanding there are a number of reasons behind it, and they do the best they can. During peak COVID when the sprints were taking place some work units gave unreal PPD returns even on slow gear. It's just the luck of the draw what we get, and it impacts everyone. I wouldn't be concerned about your hardware or drivers unless you see others reporting the same issues and/or finding solutions.
Fold them if you get them!
Lazvon
- Posts: 100
- Joined: Wed Jan 05, 2022 1:06 am
- Hardware configuration: 4080 / 12700F, 3090Ti/12900KS, 3090/12900K, 3090/10940X, 3080Ti/12700K, 3080Ti/9900X, 3080Ti/9900X
Re: 3950x LOW PPD
Thanks. That is generally the concern… I went from months of high PPD to suddenly lower. I thought perhaps it was the driver updates… or the systems somehow starting to fail or behave differently for some localized reason. If a few weeks of depressed PPD is normal, though, that's fine.
GlueFactoryBJJ
- Posts: 21
- Joined: Sat Apr 25, 2020 8:22 pm
- Hardware configuration: AMD R9-3950X
ASUS Crosshair VIII X570 Motherboard
64GB DDR4 RAM
Gigabyte GTX 1080 Ti
Sabrent 2TB M.2 NVME Gen 4 SSD
1000W PSU
~30TB total storage attached (plus NAS) - Location: Memphis, TN
Re: 3950x LOW PPD
I was looking over the LAR Folding database and noticed that almost all systems are seeing a decline in average PPD (per system) over the last 12 weeks. I wish I could get a daily per-system average summary for Folding@Home for the last couple of years to see whether there has actually been a drop in PPD per system.
I can't help but think that this is either a F@H Core programming issue or an issue with project programming. Perhaps F@H is becoming a victim of its own success? When it was less powerful, programming projects efficiently was more important to get timely results, but now that it has gotten more powerful, there isn't as much of an incentive to program efficiently when it's "free"...
If anything, you'd think that average system PPD would be rising as programming efficiency improved.
Frankly, I wish they'd do something to increase transparency about the projects. If users could determine which projects are the most efficient, then maybe the programmers would attempt to code their projects better. All computing is about math, but with various hardware there are poor, good, and "better" ways to do the calculations.
I'm not disparaging those people who have gotten this project started. The math is way beyond me, but, like so many, I know enough to be dangerous. IMO, more communication with the user base would help them even more.
PS. FWIW, AMD released an updated GPU driver that is supposed to about double PPD in F@H systems. A 6900XT is supposed to be getting 5M+ PPD now as opposed to the old 2.5-3M PPD.
PPS. @Lazvon - How did you get that "ranking" in your "tag" at the bottom of your posts?
Lazvon
- Posts: 100
- Joined: Wed Jan 05, 2022 1:06 am
- Hardware configuration: 4080 / 12700F, 3090Ti/12900KS, 3090/12900K, 3090/10940X, 3080Ti/12700K, 3080Ti/9900X, 3080Ti/9900X
Re: 3950x LOW PPD
Hrmph. Again? As long as the research that needs to be done first is getting done, I'm happy. I just don't like things sitting around half idle.
You include the “img” tag in your Signature in your Profile settings. You can also link it to a page. You can get the image URL by right-clicking on mine and choosing “open image in new tab”; then you can see where to change my user ID to yours.
GlueFactoryBJJ
- Posts: 21
- Joined: Sat Apr 25, 2020 8:22 pm
- Hardware configuration: AMD R9-3950X
ASUS Crosshair VIII X570 Motherboard
64GB DDR4 RAM
Gigabyte GTX 1080 Ti
Sabrent 2TB M.2 NVME Gen 4 SSD
1000W PSU
~30TB total storage attached (plus NAS) - Location: Memphis, TN
Re: 3950x LOW PPD
Lazvon wrote: ↑Thu Aug 04, 2022 1:09 am Hrmph. Again? As long as the research that needs to be done first is getting done, I'm happy. I just don't like things sitting around half idle.
You include the “img” tag in your Signature in your Profile settings. You can also link it to a page. You can get the image URL by right-clicking on mine and choosing “open image in new tab”; then you can see where to change my user ID to yours.
I hear you about stuff sitting around not being fully utilized! Right now my AMD 3950X is running at about 17% utilization (but, then again, I'm running a "test" to see what the PPD is for each CPU thread count (i.e. 1, 2, 3, 4, etc.) to see where the PPD maxes out). I'm running this test to see if there is an SMT issue above, say, 8 cores or whatever.
I'm going to post my findings. So far, there seems to be a fairly linear increase as I add cores, but it isn't perfectly linear, probably because of inter-project variability. It should "be good enough for government work", though...
I'm systematically testing to see if there is a peak number of cores/threads that maximizes my PPD for the CPU. I'm also going to be looking for the point where the CPU core/thread count begins to impact GPU production.
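For anyone curious how I plan to read the results: I'll compare each run against perfect linear scaling from the smallest run and look at the marginal PPD per added thread, which should make any SMT knee obvious. The numbers below are placeholders, not my actual results:
Code: Select all
# Placeholder measurements: CPU slot thread count -> average PPD at that setting.
measured_ppd = {4: 95_000, 8: 180_000, 12: 250_000,
                16: 300_000, 24: 330_000, 32: 340_000}

baseline_threads = min(measured_ppd)
baseline_ppd = measured_ppd[baseline_threads]

prev = None
for threads in sorted(measured_ppd):
    ppd = measured_ppd[threads]
    # Scaling efficiency vs. perfectly linear scaling from the smallest run.
    efficiency = ppd / (baseline_ppd * threads / baseline_threads)
    line = f"{threads:2d} threads: {ppd:>8,} PPD  ({efficiency:4.0%} of linear)"
    if prev is not None:
        gain = (ppd - measured_ppd[prev]) / (threads - prev)
        line += f", +{gain:,.0f} PPD per added thread"
    print(line)
    prev = threads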
Over the last week or so, the PPD on my GPU has varied from 1.1M to 2.6M (the latter only once), with an average of about 1.8-2.0M PPD.
I checked the link. Unfortunately, it says that my two user IDs are "not found/not available". I'm assuming that you have to be a part of the Extreme Overclockers Folding Team to get these stats?
I also noticed that my original ID was just over #24,000. Then again, I've been doing this, off and on, since the very early 2000s. Now I do more in one day than the 100+ PCs I had working back then did in weeks. But back then they didn't have Passkey bonuses, either.
Anyway, I'll keep checking in to see what you and others are finding out. I, for one, would very much like to get to the bottom of this apparent PPD decline.
Scott
GlueFactoryBJJ
- Posts: 21
- Joined: Sat Apr 25, 2020 8:22 pm
- Hardware configuration: AMD R9-3950X
ASUS Crosshair VIII X570 Motherboard
64GB DDR4 RAM
Gigabyte GTX 1080 Ti
Sabrent 2TB M.2 NVME Gen 4 SSD
1000W PSU
~30TB total storage attached (plus NAS) - Location: Memphis, TN
Re: 3950x LOW PPD
@Lazvon,
Ahh! While I wasn't able to get my user to show up, at least I was able to get the team I'm currently contributing to to display! Woot! Thanks for your help!
Scott
Lazvon
- Posts: 100
- Joined: Wed Jan 05, 2022 1:06 am
- Hardware configuration: 4080 / 12700F, 3090Ti/12900KS, 3090/12900K, 3090/10940X, 3080Ti/12700K, 3080Ti/9900X, 3080Ti/9900X
Re: 3950x LOW PPD
Replace my user number after the u= with your user number… find your user stats page, and you should see your user number in the left-hand column.
Code: Select all
https://folding.extremeoverclocking.com/sigs/sigimage.php?u=1206282
This isn’t the forums, I don’t use their forums.
BobWilliams757
- Posts: 515
- Joined: Fri Apr 03, 2020 2:22 pm
- Hardware configuration: ASRock X370M PRO4
Ryzen 2400G APU
16 GB DDR4-3200
MSI GTX 1660 Super Gaming X
Re: 3950x LOW PPD
GlueFactoryBJJ wrote: ↑Sat Aug 13, 2022 8:21 pm I hear you about stuff sitting around not being fully utilized! Right now my AMD 3950X is running at about 17% utilization (but, then again, I'm running a "test" to see what the PPD is for each CPU thread count (i.e. 1, 2, 3, 4, etc.) to see where the PPD maxes out). I'm running this test to see if there is an SMT issue above, say, 8 cores or whatever.
You include the “img” tag in your Signature in your Profile settings. Can also link it to a page. The image URL you can by right clicking on mine and “open image in new tab” then you can see where to change my userID for yours.
I'm going to post my findings. So far, there seems to be a fairly linear increase as I add cores, but it isn't perfectly linear, probably because of inter-project variability, but should "be good enough for government work"...
I'm systematically testing to see if there is a peak number of cores/threads that maximizes my PPD for the CPU. I'm also going to be looking for where the CPU cores/threads begins to impact the GPU production.
Over the last week or so, I've had PPD on my GPU that have varied from 1.1M-2.6M (that last only one time), with an average of about 1.8-2.0M PPD.
I checked the link. Unfortunately, it says that my two user IDs are "not found/not available". I'm assuming that you have to be a part of the Extreme Overclockers Folding Team to get these stats?
I also noticed that my original ID was just over #24,000. Then again, I've been doing this, off and on, since the very early 2000s. Now I do more in one day than the 100+ PCs I had working back then did in weeks. But back then they didn't have Passkey bonuses, either.
Anyway, I'll keep checking in to see what you and others are finding out. I, for one, would very much like to get some clarity about/to the bottom of this apparent PPD decline.
Scott
You might want to check out this thread: viewtopic.php?t=35286
He's done a bunch of the testing you are in the process of doing, and you should find similar results. Keep in mind that most of his testing was with the A7 CPU core, IIRC, and much if not most of the A8 core work has rewarded higher PPD returns. He also tests for efficiency in terms of watts per point and has some good suggestions in that respect.
As for the variations, if you look at HFM and sort by atom counts you will generally find trends. Large cards with lots of shaders and cores usually run fast on higher atom counts. Cards with fewer shaders usually run better on smaller atom counts. But even then, you have variations in the length of the simulation, memory speed factors, driver factors, OS factors, and so on. As such, which work units are most efficient for any individual machine will vary. The work units that give low average returns on your system can be, and often are, the ones that "lesser" machines run very well on. I've seen cases where my onboard integrated graphics would approach the PPD return of cards that are much more powerful and capable, but my system is strong with smaller atom counts, and the others start to stretch their legs as atom counts go up.
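If you want to put a rough number on that for your own machine, pair each work unit's atom count with the PPD it returned and look at the correlation; the sign tells you whether the card leans toward big or small systems. The data below is made up purely for illustration (statistics.correlation needs Python 3.10+):
Code: Select all
from statistics import correlation  # Pearson r; available in Python 3.10+

# Made-up (atom_count, ppd) pairs for one GPU; replace with your own HFM data.
observations = [
    (62_000, 2_600_000), (95_000, 3_900_000), (210_000, 5_100_000),
    (305_000, 6_300_000), (450_000, 7_800_000), (1_100_000, 8_900_000),
]

atoms = [a for a, _ in observations]
ppd = [p for _, p in observations]

r = correlation(atoms, ppd)
print(f"Pearson r between atom count and PPD: {r:+.2f}")
print("Positive r suggests the card prefers larger systems; "
      "negative r suggests it does better on small atom counts.")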
The above applies more to GPU folding. Though I don't do a lot of CPU folding, I find somewhat the opposite is true even for my little 4-core/8-thread Ryzen 2400G. If the work unit has an atom count that is too small, it doesn't load the system as much, nor the memory, so to some extent the larger atom count projects usually return more PPD. As with GPU folding there are exceptions to the rule, but the trends are clear. Since your system has four times as many cores, there is much more room for variation in returns.
I think more than likely the larger variations in recent times are due to the larger variation in hardware available to FAH over time, combined with variations among individual work units and their priority levels. The priority level will dictate that at times work units might fold on systems other than the optimum ones, but that ensures the work keeps flowing. If they restricted work units only to the machines that are "best" for them, we would have instances where some people aren't getting any work because their system doesn't fall into the proper capability bracket.
Fold them if you get them!
GlueFactoryBJJ
- Posts: 21
- Joined: Sat Apr 25, 2020 8:22 pm
- Hardware configuration: AMD R9-3950X
ASUS Crosshair VIII X570 Motherboard
64GB DDR4 RAM
Gigabyte GTX 1080 Ti
Sabrent 2TB M.2 NVME Gen 4 SSD
1000W PSU
~30TB total storage attached (plus NAS) - Location: Memphis, TN
Re: 3950x LOW PPD
Lazvon wrote: ↑Sun Aug 14, 2022 12:51 am Replace my user number after the u= with your user number… find your user stats page, and you should see your user number in the left-hand column.
Code: Select all
https://folding.extremeoverclocking.com/sigs/sigimage.php?u=1206282
This isn't their forums; I don't use their forums.
I tried that and it said something like, "User not found". I don't know why, but the team is good enough for now...
Scott
GlueFactoryBJJ
- Posts: 21
- Joined: Sat Apr 25, 2020 8:22 pm
- Hardware configuration: AMD R9-3950X
ASUS Crosshair VIII X570 Motherboard
64GB DDR4 RAM
Gigabyte GTX 1080 Ti
Sabrent 2TB M.2 NVME Gen 4 SSD
1000W PSU
~30TB total storage attached (plus NAS) - Location: Memphis, TN
Re: 3950x LOW PPD
BobWilliams757 wrote: ↑Sun Aug 14, 2022 5:56 am You might want to check out this thread...
viewtopic.php?t=35286
He's done a bunch of the testing you are in the process of doing, and you should find similar results. Keep in mind that most of his testing was the A7 cpu core IIRC, and many if not most of the A8 core work has rewarded higher PPD returns. He also test for efficiency in terms of watts/point and has some good suggestions in that aspect.
As for the variations, if you look at HFM and sort by atom counts you will generally find trends. Large cards with lots of shaders and cores usually run fast on higher atom counts. Cards with less shader cores usually run better on smaller atom counts. But even then, you have variations in the length of the simulation, memory speed factors, driver factors, OS factors, and on and on. As such, for any individual system, what the most efficient work units are for that machine will vary. The work units that give low average returns on your system could and often are the ones that "lesser" machines run very well on. I've seen cases where my onboard integrated graphics would approach the PPD return of cards that are much more powerful and capable, but my system is strong with smaller atom counts, and the others start to stretch their legs as atom counts go up.
The above applies more to GPU folding. Though I don't do a lot of CPU folding, I find somewhat the opposite is true even for my little 8 core Ryzen 2400G. If the work unit has an atom count too small, it doesn't load the system as much, nor the memory. So to some extent the larger atom count project usually return more PPD. As with GPU folding there are exceptions to the rule, but the trends are clear. Being your system has 4 times as many cores, much more room for variation in returns.
I think more than likely the larger variations in recent times is due to the larger variations in hardware available to FAH over time, combined with the variations with individual work units and their priority level. The level of priority will dictate that at times the work units might fold on systems other than the optimum, but that ensures the work keeps flowing. If they restrict work units only to machines that are "best" for them, then we have instances where some people aren't getting work to do since their system doesn't fall into the proper capability bracket.
While I am having problems with the GPU (it has been steadily declining from its highs of about 12 months ago), as you noted, this thread is about AMD CPUs, more specifically the Zen 2 CPUs (and possibly also Zen 3 and beyond).
I have a 3600X (about 3.8-3.9 GHz) that is doing about 165K PPD. My 3950X hit a high (so far in my testing) of about 300K PPD at 4.0-4.2 GHz before dropping back down. Technically, I should be getting about 2.67 times what the 3600X (6 cores, 12 threads) is doing, before even accounting for the higher clock speed of the 3950X. I'm getting less than double right now.
In the distant past (10+ years ago) I ran into problems with Intel compilers (the non-enterprise/server versions; the server parts had 40-80 threads per physical CPU) that stopped scaling at thread counts above what Intel's mainstream CPUs could handle. Since mainstream Intel CPUs (until the 12th generation) maxed out at 8 cores / 16 threads, that roughly matches what I'm seeing in my, admittedly, less than perfectly set up testing right now.
I would expect that the compilers for the 12th-plus generation will support more threads, but I doubt that they will EVER maximize AMD Zen CPU performance. I haven't done any programming for a long time and can only hope that newer third-party compilers will continuously improve their AMD optimizations as much as they have for Intel CPUs.
Anyway, back to testing...
Scott