Tensor Processing for FaH?
Moderator: Site Moderators
Forum rules
Please read the forum rules before posting.
Tensor Processing for FaH?
I have been looking around trying to find out whether TPU cards such as the ASUS AI Accelerator PCIe Card or the Coral Edge, both of which are supposed to boost the processing power of the system they are plugged into, could be used to improve my performance with Folding@Home. Hard information on this is hard to come by right now.
- Site Admin
- Posts: 7922
- Joined: Tue Apr 21, 2009 4:41 pm
- Hardware configuration: Mac Pro 2.8 quad 12 GB smp4, MacBook Pro 2.9 i7 8 GB smp2
- Location: W. MA
Re: Tensor Processing for FaH?
I looked up the specs for those TPU cards, and it is unlikely they would be useful for F@h. Calculations on these TPUs are 8-bit, while F@h uses mostly 32-bit single precision (FP32), with some double precision (FP64) calculations where needed to maintain accuracy.
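To put a rough number on that precision gap, here is a minimal sketch (not F@h code; the force value and the [-1, 1) quantization range are made up for the example) comparing the error from 8-bit integer quantization against plain float32:

[code]
import numpy as np

# Hypothetical force component (arbitrary, roughly unit-scaled value)
force = 0.137429684

# float32 keeps about 7 significant decimal digits
f32 = np.float32(force)

# int8 quantization over an assumed range of [-1, 1): only 256 levels
scale = 127.0
q8 = np.round(force * scale).astype(np.int8) / scale

print(f"float32 error: {abs(float(f32) - force):.1e}")  # ~1e-8
print(f"int8    error: {abs(float(q8) - force):.1e}")   # ~4e-3
[/code]

Per-operation errors on the order of 1e-3 are many orders of magnitude larger than what FP32/FP64 arithmetic produces, which is why 8-bit hardware is not a fit for this kind of simulation.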
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
- Posts: 938
- Joined: Sun Dec 16, 2007 6:22 pm
- Hardware configuration: 7950x3D, 5950x, 5800x3D, 3900x, 7900xtx, Radeon 7, 5700xt, 6900xt, RX 550 640SP
- Location: London
Re: Tensor Processing for FaH?
It is entirely possible that at some point in the future AI algorithms will become stable and reliable enough to be incorporated into OpenMM, so that tensor cores could assist CUDA cores with simulations. However, using tensor cores as a replacement for CUDA cores will never happen, since they use very low precision, as Joe mentioned.
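For what it's worth, OpenMM already lets you pick the floating-point precision its GPU kernels run in, and the lowest option is still full single precision. A minimal sketch (assumes a recent OpenMM install and a CUDA-capable GPU; the 'Precision' platform property is the one documented for the CUDA platform):

[code]
from openmm import Platform

# Select the CUDA platform and request mixed precision:
# 'mixed' computes forces in FP32 and does integration in FP64.
platform = Platform.getPlatformByName('CUDA')
properties = {'Precision': 'mixed'}   # other options: 'single', 'double'

# These are passed when constructing a Simulation, e.g.:
#   simulation = Simulation(topology, system, integrator, platform, properties)
[/code]

There is no 8-bit mode anywhere in that list, which is the precision regime TPU/tensor hardware is built around.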
FAH Omega tester
Re: Tensor Processing for FaH?
Thanks for making that clear before I spent money trying to make a tensor card run Folding.