
Re: Points CPU vs GPU

Posted: Thu May 07, 2020 10:49 am
by PantherX
There are some architectural differences between the CPU and GPU. This is a nice summary:
...A GPU can handle large amounts of data in many streams, performing relatively simple operations on them, but is ill-suited to heavy or complex processing on a single or few streams of data. A CPU is much faster on a per-core basis (in terms of instructions per second) and can perform complex operations on a single or few streams of data more easily, but cannot efficiently handle many streams simultaneously...
https://www.howtogeek.com/128221/why-ar ... d-of-gpus/

When it comes down to the mathematics (F@H uses specific instruction sets), there is a fundamental difference between them:
...CPUs dedicate the majority of their core real-estate to scalar/superscalar operations...GPUs, on the other hand, dedicate most of their core real-estate to a Single Instruction Multiple Data (SIMD)...
https://www.quora.com/Why-does-a-GPU-pe ... than-a-CPU
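
If it helps to make the scalar-vs-SIMD distinction concrete, here is a toy Python/NumPy sketch (purely illustrative, not F@H code; NumPy's vectorized operations just stand in for the "one instruction over many data elements" model, and the explicit loop stands in for scalar, one-element-at-a-time processing):

[code]
# Toy illustration of scalar vs SIMD-style processing.
# NumPy's vectorized ops are a rough stand-in for SIMD/data-parallel
# hardware; the explicit Python loop stands in for scalar work.
# The absolute times are meaningless, only the gap matters.
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# "Scalar" style: one multiply-add per iteration.
t0 = time.perf_counter()
scalar_out = [a[i] * b[i] + 1.0 for i in range(n)]
t_scalar = time.perf_counter() - t0

# "SIMD" style: the same multiply-add applied to all elements at once.
t0 = time.perf_counter()
simd_out = a * b + 1.0
t_simd = time.perf_counter() - t0

print(f"loop: {t_scalar:.3f}s  vectorized: {t_simd:.3f}s  "
      f"speedup: ~{t_scalar / t_simd:.0f}x")
[/code]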

Re: Points CPU vs GPU

Posted: Thu May 07, 2020 11:59 am
by HaloJones
For what it's worth, I have a P13872 running on a Ryzen 3600 with 11 threads for an estimated 367K PPD. That's pretty serious points for a 90 W part.

Re: Points CPU vs GPU

Posted: Thu May 07, 2020 12:04 pm
by Neil-B
... I like that particular Project, not because of the points but because it is so very quick to process (TPF 17s) - for some reason that makes me feel good. The points are definitely anomalous (there is no way my CPU slot is doing 4x the normal level of science), but they turn up every now and then - I see 655k PPD for the 28 minutes or so they take to process (32-core slot on a slowish Xeon) ... but it is perhaps the exception rather than the rule :)

Re: Points CPU vs GPU

Posted: Thu May 07, 2020 4:46 pm
by MeeLee
It may seem unfair, but I just ran the numbers on whether a Ryzen 3900X gets more points than the work it does, compared to an RTX 2080 Ti.

A Ryzen 3900X is rated at 820 GFLOPS, which puts it right in between a GT 730 and a GT 1030; and more often than not it will deliver much less than that, since it never reaches the optimal 4.4 GHz on all cores.
An RTX 2080 Ti, by contrast, is rated at ~14-15 TFLOPS (aftermarket models have a higher boost frequency, but the reference card is rated at 14.2 TFLOPS).

That makes an RTX 2080 Ti about 17.5x faster in raw FLOPS terms. So a fair rating for a 3900X, compared against an RTX 2080 Ti earning 3.2-4M PPD, would be between roughly 180k and 228k PPD.
And that's about where it's reported to land: one user mentioned getting 170-200k PPD on his Ryzen 3900X.
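
The arithmetic, spelled out as a short back-of-the-envelope sketch (it only uses the rated FLOPS and reported PPD figures quoted above, all of which are approximate):

[code]
# Back-of-the-envelope proportionality check using the figures above.
# All inputs are approximate vendor ratings / user reports, not measurements.
gpu_tflops = 14.2          # reference RTX 2080 Ti rating
cpu_gflops = 820           # Ryzen 3900X peak, all cores at boost
ratio = gpu_tflops * 1000 / cpu_gflops    # ~17.3x

for gpu_ppd in (3_200_000, 4_000_000):
    fair_cpu_ppd = gpu_ppd / ratio
    print(f"GPU at {gpu_ppd / 1e6:.1f}M PPD -> proportional CPU PPD "
          f"~= {fair_cpu_ppd / 1e3:.0f}k")
# Prints roughly 185k and 231k, which is close to the 170-200k PPD
# users actually report on a 3900X.
[/code]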

So while GPUs fold a lot more efficiently, and the PPD difference may seem unfair, it appears I was wrong about the scoring.
The scoring IS fair between the two!
You get about the same score per unit of performance.
Although I think a CPU should get a slightly higher bonus by default, for being more flexible, for the engineers writing code in that old, inefficient language, and for the users actually crunching on it.
A lot of users probably crunch out of pity; if people knew about the efficiency differences, everyone would fold on GPUs, and CPU folding would come nearly to a halt.