foldy wrote: Oh my god, it's Nvidia Volta: 5120 shaders, 15 TFlops, ~1700k PPD

That's really impressive. My 1080Ti maxes out at 1.5M PPD.
Tesla V100-SXM2-16GB
Moderators: Site Moderators, FAHC Science Team
-
- Posts: 147
- Joined: Mon May 21, 2012 10:28 am
Re: Tesla V100-SXM2-16GB
Thinkmate is selling V100 rackmount systems for purchase right now, including a 4U 2P 10-GPU variant. 17 million PPD out of a single box! That's more than what most TEAMS make.
http://www.thinkmate.com/system/gpx-xt24-24s1-10gpu
Re: Tesla V100-SXM2-16GB
FAH should work, but I'm getting an error if I add it to the GPU list file manually.
I just want it to be officially supported ASAP.
FAHBench works. FYR:
Loading plugins from plugin directory
Number of registered plugins: 3
Deserializing input files: system
Deserializing input files: state
Deserializing input files: integrator
Creating context (may take several minutes)
Checking accuracy against reference code
Creating reference context (may take several minutes)
Comparing forces and energy
Starting Benchmark
Benchmarking finished
Final score: 230.1101
Scaled score: 230.1101 (23558 atoms)
Re: Tesla V100-SXM2-16GB
You cannot add to GPUs.txt manually -- the server's copy must match.
Run fahclient --lspci or obtain the lspci identifiers elsewhere and post them here.
At the present time, FAH uses OpenCL, so you're going to be limited to what can be done with OpenCL. OpenMM is not written to support tensor math so performance is going to be reduced to whatever can be done with the CUDA cores. My guess is that FAH won't load up that many CUDA cores simultaneously, either.
What version of CUDA is installed and what version of OpenCL is supported?
Please describe your hardware.
Posting FAH's log:
How to provide enough info to get helpful support.
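The lspci identifiers being asked for are the bracketed vendor:device hex pairs that `lspci -nn` prints for each PCI device. A minimal sketch of extracting them with Python, using a made-up sample line for a V100 (the actual output on your machine will differ):

```python
# Sketch: pull the vendor:device ID out of an `lspci -nn` line so it can
# be posted here. The sample line below is an assumption, not real output.
import re

sample = ("00:1e.0 3D controller [0302]: NVIDIA Corporation "
          "GV100 [Tesla V100 SXM2] [10de:1db1] (rev a1)")

# Match bracketed pairs of four hex digits separated by a colon;
# the class code [0302] has no colon, so only the vendor:device pair matches.
ids = re.findall(r"\[([0-9a-f]{4}):([0-9a-f]{4})\]", sample)
print(ids[-1])  # ('10de', '1db1') -- 10de is NVIDIA's vendor ID
```

The device ID (here `1db1`) is what distinguishes the V100 variants and is what needs to land in GPUs.txt.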
-
- Posts: 2040
- Joined: Sat Dec 01, 2012 3:43 pm
- Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441
Re: Tesla V100-SXM2-16GB
@Luscious: Price for the rack with 10 Nvidia Teslas: only $100,000
-
- Site Moderator
- Posts: 6349
- Joined: Sun Dec 02, 2007 10:38 am
- Location: Bordeaux, France
- Contact:
Re: Tesla V100-SXM2-16GB
I added 0x1db1 / GV100 [Tesla V100 SXM2] and 0x1db4 / GV100 [Tesla V100 PCIe] to the GPU.txt file ... let us know if something goes wrong ...
Re: Tesla V100-SXM2-16GB
It's working now :)
Thank you, guys!
-
- Posts: 2040
- Joined: Sat Dec 01, 2012 3:43 pm
- Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441
Re: Tesla V100-SXM2-16GB
Can you post some PPD numbers? I'd guess current work units are too small for the Tesla, so you may not get more than 1000k PPD currently.
-
- Posts: 1
- Joined: Thu Jun 28, 2018 9:25 pm
Re: Tesla V100-SXM2-16GB
Burning off some AWS EC2 credit since it's the end of the month; it would have expired otherwise. Right now AWS EC2 spot instances in USE1 (us-east-1) are $7.80/hour; on-demand is normally $24.
Forgot to mention this is a single p3.16xlarge instance.
57 CPUs at 2.7GHz & 8x Tesla V100s (16GB)
15.5M PPD
Full size image here -> https://ibb.co/dTWduo
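Taking the spot price and PPD quoted above at face value, a back-of-the-envelope cost-per-point calculation looks like this (a rough sketch only; real spot prices fluctuate constantly):

```python
# Rough cost math for the p3.16xlarge numbers quoted in the post.
spot_per_hour = 7.80        # USD/hour, us-east-1 spot price as quoted
ppd = 15_500_000            # points per day reported for the 8x V100 instance

cost_per_day = spot_per_hour * 24
usd_per_million_points = cost_per_day / (ppd / 1_000_000)
print(f"${cost_per_day:.2f}/day -> ${usd_per_million_points:.2f} per 1M points")
# -> $187.20/day -> $12.08 per 1M points
```

So at the quoted spot rate, those 15.5M points cost roughly $187 a day, or about $12 per million points; at the $24/hour on-demand rate it would be about three times that.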
-
- Posts: 2040
- Joined: Sat Dec 01, 2012 3:43 pm
- Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441
Re: Tesla V100-SXM2-16GB
That's some folding power! Be sure to have enough CPUs left to feed the GPUs.