Tensor Processing for FaH?

A forum for discussing FAH-related hardware choices and info on actual products (not speculation).

Moderator: Site Moderators

Forum rules
Please read the forum rules before posting.
Post Reply
dapple26
Posts: 2
Joined: Thu Apr 20, 2023 2:11 am

Tensor Processing for FaH?

Post by dapple26 »

I have been looking around trying to find out whether TPU cards such as the ASUS AI Accelerator PCIe Card or the Coral Edge TPU, both of which are claimed to improve the processing power of the system they are plugged into, could be used to improve my performance with Folding@home. Hard information on this is limited right now.
Joe_H
Site Admin
Posts: 8118
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Studio M1 Max 32 GB smp6
Mac Hack i7-7700K 48 GB smp4
Location: W. MA

Re: Tensor Processing for FaH?

Post by Joe_H »

I looked up the specs for those TPU cards; it is unlikely they would be useful for F@h. Calculations on these TPUs are 8-bit based, while F@h uses mostly 32-bit single precision (FP32), with some double precision (FP64) calculations where needed to maintain accuracy.
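A quick illustration of why the precision gap matters (not F@h or OpenMM code, just a hedged sketch): quantizing values to 8 bits and accumulating, the way TPU-style accelerators do, introduces errors many orders of magnitude larger than FP32 rounding, relative to an FP64 reference.

```python
import numpy as np

# Sketch: compare an 8-bit quantized dot product against FP32/FP64.
# All names and the toy data here are illustrative, not from F@h.
rng = np.random.default_rng(0)
a = rng.standard_normal(1000)  # float64 by default
b = rng.standard_normal(1000)

ref64 = np.dot(a, b)                                          # FP64 reference
ref32 = np.dot(a.astype(np.float32), b.astype(np.float32))    # FP32 result

# Naive symmetric int8 quantization (TPU-style inference arithmetic)
scale_a = np.abs(a).max() / 127.0
scale_b = np.abs(b).max() / 127.0
qa = np.clip(np.round(a / scale_a), -127, 127).astype(np.int8)
qb = np.clip(np.round(b / scale_b), -127, 127).astype(np.int8)
int8_result = np.dot(qa.astype(np.int32), qb.astype(np.int32)) * scale_a * scale_b

err32 = abs(ref32 - ref64)
err8 = abs(int8_result - ref64)
print(f"FP32 error vs FP64: {err32:.2e}")
print(f"int8 error vs FP64: {err8:.2e}")  # typically far larger than FP32 error
```

Errors of that size accumulating across millions of MD timesteps would destroy the trajectory, which is why 8-bit hardware is a poor fit for this workload.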
muziqaz
Posts: 1722
Joined: Sun Dec 16, 2007 6:22 pm
Hardware configuration: 9950x, 7950x3D, 5950x, 5800x3D
7900xtx, RX9070, Radeon 7, 5700xt, 6900xt, RX 550 640SP
Location: London
Contact:

Re: Tensor Processing for FaH?

Post by muziqaz »

It is entirely possible that at some point in the future AI algorithms become stable and reliable enough to be incorporated into OpenMM, so that tensor cores could assist CUDA cores with simulations. However, using tensor cores as a replacement for CUDA cores will never happen, since they use very low precision, as Joe mentioned.
FAH Omega tester
dapple26
Posts: 2
Joined: Thu Apr 20, 2023 2:11 am

Re: Tensor Processing for FaH?

Post by dapple26 »

Joe_H wrote: Thu Apr 20, 2023 10:27 pm I looked up specs for the TPU cards, it is unlikely they would be useful for F@h. Calculations are 8-bit based on these TPUs, F@h uses mostly 32-bit single precision (FP32) with some double precision (FP64) calculations where needed to maintain accuracy.
Thanks for making that clear before I spent money trying to make a tensor card run Folding.
Post Reply