Tensor Processing for FaH?

A forum for discussing FAH-related hardware choices and info on actual products (not speculation).

Moderator: Site Moderators

Forum rules
Please read the forum rules before posting.
dapple26
Posts: 2
Joined: Thu Apr 20, 2023 2:11 am

Tensor Processing for FaH?

Post by dapple26 »

I have been looking around trying to find out whether TPU cards such as the ASUS AI Accelerator PCIe Card or the Coral Edge, both of which are supposed to add processing power to the system they are plugged into, could be used to improve my performance with Folding@home. Hard information on this is limited right now.
Joe_H
Site Admin
Posts: 7922
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
Location: W. MA

Re: Tensor Processing for FaH?

Post by Joe_H »

I looked up the specs for those TPU cards, and it is unlikely they would be useful for F@h. These TPUs do 8-bit calculations, while F@h mostly uses 32-bit single precision (FP32), with some double precision (FP64) calculations where needed to maintain accuracy.
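To give a rough sense of that gap, here is a quick NumPy sketch (not F@h code, and the values are made up): round-tripping force-sized numbers through FP32 barely changes them, while an 8-bit integer round-trip of the kind these TPUs are built around loses several significant digits on every value.

Code: Select all

# Illustrative only: FP32 vs. 8-bit quantization error on hypothetical force-sized values.
import numpy as np

rng = np.random.default_rng(0)
forces = rng.normal(0.0, 50.0, size=10_000)   # made-up per-atom force components

# FP32 round-trip: error stays tiny
fp32 = forces.astype(np.float32).astype(np.float64)

# Symmetric 8-bit integer round-trip, as an int8 TPU pipeline would require
scale = np.abs(forces).max() / 127.0
q = np.clip(np.round(forces / scale), -127, 127).astype(np.int8)
dequant = q.astype(np.float64) * scale

print("max abs error, FP32:", np.abs(fp32 - forces).max())
print("max abs error, int8:", np.abs(dequant - forces).max())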

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
muziqaz
Posts: 938
Joined: Sun Dec 16, 2007 6:22 pm
Hardware configuration: 7950x3D, 5950x, 5800x3D, 3900x
7900xtx, Radeon 7, 5700xt, 6900xt, RX 550 640SP
Location: London
Contact:

Re: Tensor Processing for FaH?

Post by muziqaz »

It is entirely possible that at some point in the future AI algorithms become stable and reliable enough to be incorporated into OpenMM, so that tensor cores could assist CUDA cores with simulations. However, using tensor cores as a replacement for CUDA cores will never happen, since they use very low precision, as Joe mentioned.
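For what it's worth, precision is already something you choose explicitly in OpenMM today; there is no tensor-core or TPU backend. A minimal sketch of a plain OpenMM run on the CUDA platform (the filenames and settings are placeholders, this is not how the F@h cores are configured):

Code: Select all

# Sketch of an OpenMM simulation on the CUDA platform with mixed precision.
# 'input.pdb' is a hypothetical structure file; settings are illustrative only.
from openmm import app, Platform, LangevinMiddleIntegrator, unit

pdb = app.PDBFile('input.pdb')
forcefield = app.ForceField('amber14-all.xml', 'amber14/tip3p.xml')
system = forcefield.createSystem(pdb.topology, nonbondedMethod=app.PME)

platform = Platform.getPlatformByName('CUDA')
properties = {'Precision': 'mixed'}   # CUDA platform accepts 'single', 'mixed', or 'double'

integrator = LangevinMiddleIntegrator(300*unit.kelvin, 1/unit.picosecond,
                                      0.002*unit.picoseconds)
simulation = app.Simulation(pdb.topology, system, integrator, platform, properties)
simulation.context.setPositions(pdb.positions)
simulation.step(1000)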
FAH Omega tester
dapple26
Posts: 2
Joined: Thu Apr 20, 2023 2:11 am

Re: Tensor Processing for FaH?

Post by dapple26 »

Joe_H wrote: Thu Apr 20, 2023 10:27 pm I looked up specs for the TPU cards, it is unlikely they would be useful for F@h. Calculations are 8-bit based on these TPUs, F@h uses mostly 32-bit single precision (FP32) with some double precision (FP64) calculations where needed to maintain accuracy.
Thanks for making that clear before I spent money trying to get a tensor card running Folding.