Tesla V100-SXM2-16GB

Post requests to add new GPUs to the official whitelist here.

Moderators: Site Moderators, FAHC Science Team

csvanefalk
Posts: 147
Joined: Mon May 21, 2012 10:28 am

Re: Tesla V100-SXM2-16GB

Post by csvanefalk »

foldy wrote:Oh my god it's Nvidia Volta: 5120 shaders, 15 TFlops, ~1700k PPD
That's really impressive. My 1080Ti maxes out at 1.5M ppd.
Luscious
Posts: 49
Joined: Sat Oct 13, 2012 6:38 am

Re: Tesla V100-SXM2-16GB

Post by Luscious »

Thinkmate is selling V100 rackmount systems for purchase right now, including a 4U 2P 10-GPU variant. 17 million PPD out of a single box! That's more than what most TEAMS make.

http://www.thinkmate.com/system/gpx-xt24-24s1-10gpu
84036980
Posts: 23
Joined: Fri Feb 06, 2015 7:18 pm

Re: Tesla V100-SXM2-16GB

Post by 84036980 »

FAH should work, but I'm getting some errors when I add it to the GPU list file manually.
I just want it to be officially supported ASAP.


FAHBench works. FYR:

Loading plugins from plugin directory
Number of registered plugins: 3
Deserializing input files: system
Deserializing input files: state
Deserializing input files: integrator
Creating context (may take several minutes)
Checking accuracy against reference code
Creating reference context (may take several minutes)
Comparing forces and energy
Starting Benchmark

Benchmarking finished
Final score: 230.1101
Scaled score: 230.1101 (23558 atoms)
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Tesla V100-SXM2-16GB

Post by bruce »

You cannot add to GPUs.txt manually -- the server's copy must match.

Run fahclient --lspci or obtain the lspci identifiers elsewhere and post them here.

At the present time, FAH uses OpenCL, so you're going to be limited to what can be done with OpenCL. OpenMM is not written to support tensor math so performance is going to be reduced to whatever can be done with the CUDA cores. My guess is that FAH won't load up that many CUDA cores simultaneously, either.

What version of CUDA is installed and what version of OpenCL is supported?

Please describe your hardware.
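If `fahclient --lspci` isn't handy, the vendor:device pair bruce is asking for can also be read out of plain `lspci -nn` output. A minimal sketch in Python (the `pci_ids` helper and the sample line are illustrative, not part of any FAH tool):

```python
import re

def pci_ids(lspci_output):
    """Extract (vendor, device) hex ID pairs for NVIDIA devices
    from `lspci -nn` output. 0x10de is NVIDIA's PCI vendor ID."""
    ids = []
    for line in lspci_output.splitlines():
        # lspci -nn appends IDs in brackets, e.g. [10de:1db1]
        m = re.search(r"\[(10de):([0-9a-f]{4})\]", line)
        if m:
            ids.append((m.group(1), m.group(2)))
    return ids

sample = ("00:1e.0 3D controller [0302]: NVIDIA Corporation "
          "GV100GL [Tesla V100 SXM2 16GB] [10de:1db1]")
print(pci_ids(sample))  # [('10de', '1db1')]
```

The device ID (here `1db1`) is what gets posted in this thread for whitelisting.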
foldy
Posts: 2040
Joined: Sat Dec 01, 2012 3:43 pm
Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441

Re: Tesla V100-SXM2-16GB

Post by foldy »

@Luscious: Price for the rack with 10 Nvidia Teslas: only $100,000
toTOW
Site Moderator
Posts: 6349
Joined: Sun Dec 02, 2007 10:38 am
Location: Bordeaux, France
Contact:

Re: Tesla V100-SXM2-16GB

Post by toTOW »

I added 0x1db1 / GV100 [Tesla V100 SXM2] and 0x1db4 / GV100 [Tesla V100 PCIe] to the GPUs.txt file ... let us know if something goes wrong ...
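Once an update like this lands, you can grep your client's local copy of GPUs.txt for your device ID before restarting. A quick sketch in Python (the `whitelisted` helper and the sample entries are illustrative; the real file's line format may differ):

```python
def whitelisted(gpus_txt, device_id):
    """Return True if the given hex device ID (e.g. '0x1db1')
    appears on any non-comment line of a GPUs.txt-style file."""
    needle = device_id.lower()
    if needle.startswith("0x"):
        needle = needle[2:]
    for line in gpus_txt.splitlines():
        line = line.strip().lower()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if needle in line.replace("0x", ""):
            return True
    return False

# Illustrative entries only; not the actual GPUs.txt format.
sample = "0x1db1:GV100 [Tesla V100 SXM2]\n0x1db4:GV100 [Tesla V100 PCIe]"
print(whitelisted(sample, "0x1db1"))  # True
```

Note that, as bruce says above, a local edit alone isn't enough: the server's copy has to match.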

Folding@Home beta tester since 2002. Folding Forum moderator since July 2008.
84036980
Posts: 23
Joined: Fri Feb 06, 2015 7:18 pm

Re: Tesla V100-SXM2-16GB

Post by 84036980 »

it's working now : )

Thank you guys,
foldy
Posts: 2040
Joined: Sat Dec 01, 2012 3:43 pm
Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441

Re: Tesla V100-SXM2-16GB

Post by foldy »

Can you post some PPD numbers? I guess current work units are too small for the Tesla, so you may not get more than 1000k PPD currently.
84036980
Posts: 23
Joined: Fri Feb 06, 2015 7:18 pm

Re: Tesla V100-SXM2-16GB

Post by 84036980 »

icemanncsu
Posts: 1
Joined: Thu Jun 28, 2018 9:25 pm

Re: Tesla V100-SXM2-16GB

Post by icemanncsu »

Burning off some AWS EC2 credit since it's the end of the month; it would have expired otherwise :) Right now AWS EC2 spot instances in USE1 are $7.80/hour; on-demand is normally $24.

Forgot to mention this is a single p3.16xlarge instance.

57 CPUs at 2.7 GHz and 8x Tesla V100 16GB

15.5M PPD

Full size image here -> https://ibb.co/dTWduo
foldy
Posts: 2040
Joined: Sat Dec 01, 2012 3:43 pm
Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441

Re: Tesla V100-SXM2-16GB

Post by foldy »

That's some folding power! Be sure to have enough CPUs left to feed the GPUs.