nchowlett,
Without knowing specifics, it's hard to say whether overclocking the CPUs would help. If the CPU and motherboard combination is old enough, and taxed at 100% currently, it might. With newer hardware I would say it's doubtful, but I could be wrong about that. On my Ryzen 2400G, I decided it wasn't worth even allowing the PBO auto boost features to run. They didn't really add points on the GPU side of things, and having the folding loads plus our other uses on the system made it harder to nail down good fan curves that didn't get loud at times.... for no real performance boost.
But what works for one system might not work for the next; the specifics of the hardware may favor certain settings on one system more than another. If you are running multiple GPUs on an older CPU, then higher CPU clocks might help. And if that 100% usage is partly from CPU folding alongside the GPUs, you might still see an increase in CPU folding performance with higher clocks.
But everything has tradeoffs. Sometimes maximum points also means more fan noise, less power efficiency, and less overhead left for other tasks if you use that computer for things other than folding. Only you know what you will tolerate to get more points. As for me, keeping it 100% stable, reasonably quiet, fairly efficient for the GPU I run, and trouble-free is worth a lot. That way I only deal with it on the rare occasions I have updates, fan maintenance and such, or power issues.
The ONLY single thing I have found with no negatives at all is enabling Hardware-Accelerated GPU Scheduling in Windows. Points go up, and I see no downside to the change. Power use increases very slightly, but the bump in productivity more than covers it. Even then, it's not a huge change in PPD, just a minor tweak.
The one way to know for sure is to test changes to your own system and see. Do your best to keep conditions the same, gather enough samples to filter out the noise, and see for yourself. As the examples above show, some of the claims people make don't hold up 100% across all the various factors tested. Compare only the same projects to each other, and run enough of them to allow for project-to-project variance. As an example, some projects are harder on a CPU and will raise temps much more than others, and some will draw less power than others at any given power cap.
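One way to keep that per-project bookkeeping honest is to log one PPD estimate per finished work unit and average within each project only. A minimal sketch, assuming you keep such a log yourself as plain `project ppd` lines (the log file and the sample numbers below are my own invention, not anything the FAH client produces):

```shell
#!/bin/sh
# Average PPD per project, so only like-for-like runs get compared.
# Input format (one line per finished work unit): <project> <ppd>
# The here-doc stands in for a hypothetical hand-kept log file.
awk '
{ sum[$1] += $2; n[$1]++ }
END {
    for (p in sum)
        printf "project %s: %d samples, mean PPD %.0f\n", p, n[p], sum[p] / n[p]
}' <<EOF
18201 410000
18201 395000
18202 520000
EOF
```

With more samples per project you can also eyeball the spread; if two runs of the same project differ by more than a few percent, the test conditions probably weren't as equal as intended.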
Best bang for the buck build?
Moderator: Site Moderators
Forum rules
Please read the forum rules before posting.
-
- Posts: 519
- Joined: Fri Apr 03, 2020 2:22 pm
- Hardware configuration: ASRock X370M PRO4
Ryzen 2400G APU
16 GB DDR4-3200
MSI GTX 1660 Super Gaming X
Re: Best bang for the buck build?
Fold them if you get them!
-
- Posts: 48
- Joined: Tue Feb 28, 2023 4:05 am
- Hardware configuration: Modified 'mining' rig with 4 parallel GPUs. For more info check out this article: https://tessellate.science/microscoped- ... ercomputer
- Location: Melbourne, Australia.
Re: Best bang for the buck build?
I was going to do this, but didn't want to cook my system if evidence already existed. But it turns out I can't overclock my Xeon CPU.

Bob wrote: The one way to know for sure is to test changes to your system and see.
Testing my hypothesis that GPU output depends on the CPU will have to wait until the next folding machine is built - using a K-series or X-series CPU instead of a Xeon.
Thanks, as always, Bob.
-
- Posts: 511
- Joined: Mon May 21, 2018 4:12 pm
- Hardware configuration: Ubuntu 22.04.2 LTS; NVidia 525.60.11; 2 x 4070ti; 4070; 4060ti; 3x 3080; 3070ti; 3070
- Location: Great White North
Re: Best bang for the buck build?
I have done some testing with NVIDIA GPUs and found that peak efficiency tends to sit around the base clock frequency for most modern (2000-series or newer) GPUs. Thus I run all my GPUs clock-limited.

pyrocyborg wrote: ↑Tue Oct 10, 2023 12:42 am ...
With Nvidia cards, it seems like adjusting the power limit is the way to go. How low it can go depends on the card family, the model itself, and whether it won the silicon lottery or not. For example, my EVGA FTW3 RTX 3090 can go as low as 55% with a loss of maybe 10% in PPD. Anything lower than that and the PPD drops in a non-linear fashion. A Gigabyte Eagle 3060 Ti could be set at 71% before it would lose a lot of PPD in exchange for a few watts.
Overclocking or downclocking is a no-go with Nvidia RTX 3000-series cards when it comes to folding. Reducing the core or memory values tends to reduce stability (which we don't want) and gives nothing in return. Increasing them tends to add a risk of errors and gives almost nothing in exchange, so yeah, stock values it is.
I never tested the limit, but it's possible the Nvidia cards would crash or throw errors at some point.
Code: Select all
nvidia-smi -i 0 -lgc 0,2205
Code: Select all
nvidia-smi -i 0 -pl 150
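If you want to find your own card's sweet spot rather than trust someone else's numbers, the power-limit command above lends itself to a simple sweep. A rough sketch, assuming one GPU at index 0; the wattage list is an arbitrary example, not a recommendation, and the `DRY_RUN=echo` guard just prints the commands so nothing touches the GPU until you clear it:

```shell
#!/bin/sh
# Step a GPU through several power limits, leaving time at each step
# to record PPD and wall power. Clear DRY_RUN to actually apply limits.
GPU=0
DRY_RUN=echo            # prints each command instead of running it

for watts in 220 200 180 160 150; do
    $DRY_RUN nvidia-smi -i "$GPU" -pl "$watts"
    # ...fold for a few work units at this limit, then log the PPD...
done
```

Plotting PPD against watts from a run like this is what shows the knee pyrocyborg describes, where the drop stops being linear.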
As my mission is to do the most amount of "work" for the least cost, I don't run CPU folding, since it is so inefficient compared to GPU folding. But to squeeze more efficiency out of my systems, I also clock-limit my CPUs to 2.8 GHz, which shaves about 20-50 W off running at the stock settings.
As I am fortunate to have time-of-use electricity rates, I also run my systems for only 12 hours overnight during the summer, when rates are cheapest ("Off-Peak"). During the spring, fall, and winter I run them for 18 hours, adding the "Mid-Peak" periods, since the extra heat load doesn't need to be removed by the air conditioning and the waste heat reduces the heating required.
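That kind of schedule can be automated, since the v7 FAHClient accepts pause/unpause commands from the command line. A sketch of the idea as cron entries; the hours are made-up examples of off-peak boundaries and the FAHClient path may differ on your system:

```
# m  h   dom mon dow  command
# Start folding at the (example) off-peak start...
0    19  *   *   *    /usr/bin/FAHClient --send-unpause
# ...and pause again when the cheap rate ends in the morning.
0    7   *   *   *    /usr/bin/FAHClient --send-pause
```

Seasonal variations (the 18-hour schedule) would need either separate crontabs or a small wrapper script that checks the month.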
-
- Posts: 519
- Joined: Fri Apr 03, 2020 2:22 pm
- Hardware configuration: ASRock X370M PRO4
Ryzen 2400G APU
16 GB DDR4-3200
MSI GTX 1660 Super Gaming X
Re: Best bang for the buck build?
Gordonbb,
Do you have anything supported by data?
Seriously though, that's some in depth testing and solid methods. I am curious though, did you always limit clocks along with power limits?
Just out of curiosity I did a bit of minor testing using Afterburner, though not quite so in depth. Starting from stock, I did find that production went up slightly when limiting clocks, while power use stayed about the same. However, I also found that my usual overclock (done simply with the Afterburner OC Scanner) still performed best in points per watt, though it is reasonably aggressive at lower power levels.... over 100 MHz in the lower power ranges.
I might do some further testing just out of curiosity.
I'm also wondering if you have ever done any memory clock testing on your GPUs. After seeing some posters note it here, I did some testing and found small but consistent gains from reducing the memory clocks at any limited power level. The theory is that most GPUs have excess memory bandwidth for FAH's needs, so "wasting" any extra power on memory serves no real purpose.
In my case, only running a 1660 Super, power really isn't a concern, but it's shocking to me that so many people running multiple higher-powered GPUs don't do some tweaking to save on the energy bill. Long term they are tossing money at the energy bill that would be better put toward newer gear run how and when it's efficient.
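For anyone wanting to try that memory underclock, recent nvidia-smi builds expose a memory-clock lock alongside the graphics-clock lock shown earlier in this thread. A dry-run sketch; the 5001 MHz figure is a placeholder rather than a tuned value, and `-lmc` is only supported on newer GPUs and drivers:

```shell
#!/bin/sh
# Lock the memory clock to a lower range while folding, then reset it.
# DRY_RUN=echo just prints the commands; clear it to really apply them.
GPU=0
DRY_RUN=echo

$DRY_RUN nvidia-smi -i "$GPU" -lmc 0,5001   # lock memory clock range (MHz)
# ...fold a few work units and compare PPD and wall power to stock...
$DRY_RUN nvidia-smi -i "$GPU" -rmc          # reset memory clocks afterwards
```

As with the power-limit tests, comparing only the same projects before and after is what makes any small gain believable.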
Fold them if you get them!
-
- Posts: 62
- Joined: Fri Apr 03, 2020 4:49 pm
- Hardware configuration: Manjaro Linux - AsRock B550 Taichi - Ryzen 5950X - NVidia RTX 4070ti
FAH v8-4.3 - Location: Yorktown, Virginia, USA
Re: Best bang for the buck build?
Agree - great discussion and plenty of data packed for me to unravel.
Linux Manjaro B550 AsRock motherboard, Ryzen 5950X with RTX 4070 Super and 4070 producing a combined 18-20,000 ppd.
Checking the plain jane NVidia X-Server Settings shows each card running PCIe Gen4 at 98% GPU utilization and 2% PCIe bandwidth.
I cannot see where boosting CPU power overclocking would do much if anything to speed things up.
-Phil
Re: Best bang for the buck build?
Think you may have missed some zeroes on your PPD.

pcwolf wrote: ↑Fri Aug 30, 2024 2:39 am Agree - great discussion and plenty of data packed for me to unravel.
Linux Manjaro B550 AsRock motherboard, Ryzen 5950X with RTX 4070 Super and 4070 producing a combined 18-20,000 ppd.
Checking the plain jane NVidia X-Server Settings shows each card running PCIe Gen4 at 98% GPU utilization and 2% PCIe bandwidth.
I cannot see where boosting CPU power overclocking would do much if anything to speed things up.
-Phil
i7 7800x, RTX 3070, OS = Win10. AMD 3700x, RTX 2080 Ti, OS = Win10.
Team page: https://www.rationalskepticism.org/viewtopic.php?t=616