I’ve been looking at the PPD (Points Per Day) stats for the 2026 hardware lineup, and the gap has become absurd. An RTX 5090 is pulling 100 million PPD, while a high-end CPU struggles to hit 2 or 3 million. Even though the 5090 uses double the watts, it's still 20 to 40 times more efficient per unit of science.
Why are we still allowing this overlap in the scheduler? It feels like we’re subsidizing massive energy waste just to keep "inclusive" support for hardware that should have been retired years ago. Every work unit that sits on a CPU for days is a unit that a modern GPU could have cleared in minutes.
How can we justify this "carbon tax" on the project in 2026? Are we afraid to set an efficiency floor because it might offend people with older rigs, or is there a genuine scientific reason to keep wasting cycles like this? I’m curious how the rest of you balance the "volunteer spirit" with the reality that a single modern GPU does more for the science than a room full of legacy CPUs ever will.
Why is FAH still assigning the same work to CPUs that an RTX 5090 finishes 40x faster?
muziqaz
- Posts: 2537
- Joined: Sun Dec 16, 2007 6:22 pm
- Hardware configuration: 9950x, 9950x3d, 7950x3d, 5950x, 5800x3d, 7900xtx, RX9070, Radeon 7, 5700xt, 6900xt, RX550, Intel B580
- Location: London
Re: Why is FAH still assigning the same work to CPUs that an RTX 5090 finishes 40x faster?
Not you again.
CPU projects are there for a reason.
GPUs cannot do them. Plain and simple.
Any workloads that can run on GPUs are already being run on GPUs.
While GPUs are super fast, they cannot do certain simulations.