GeForce RTX 3080 and 3090 support enabled !

Post requests to add new GPUs to the official whitelist here.

Moderators: Site Moderators, FAHC Science Team

ir_cow
Posts: 4
Joined: Sat Sep 19, 2020 2:18 am

Re: GeForce RTX 3080 and 3090 support enabled !

Post by ir_cow »

MeeLee wrote:@flarbear: I tend to agree with you.
The question I would ask here is whether the PCIe 4.0 bandwidth also consumes 1.5% more energy than 3.0. If your system runs at 350W, the extra 3.5W may be worth it, but it may not be if the power draw is more like 10W higher...
Performance and power draw on PCIe 4.0 vs 3.0, and at x16 vs x8 vs 4.0 x4 speeds, also need to be tested.
I don't see how the PCIe slot can "consume" more power. I also tried the Founders Edition, which has a limit of 370 watts. No difference in PPD, just those massive swings depending on the WU. Also, the RTX 3080 doesn't even use 8x PCIe 3.0 for folding; it doesn't use the full 16x in games either. That 0.5% uplift is how the bits are encoded, which lowers the overhead. Funny enough, you "gain" 0.5% with PCIe, but being on AMD at lower resolutions you lose 15-30% FPS depending on the game. It is only when you reach 4K that the CPU doesn't matter much. But we are talking folding here, and I don't see any reason why PCIe 4.0 would help in folding.

Now what needs to happen is FAHCores being written for Tensor cores. If the RTX 2080 Ti gets ~3.5 million PPD, an extra 4,000 CUDA cores, higher clock speed, and higher memory frequency should make the 3080 at least 50% faster. But at 4.5 million tops, it is only 28% faster. This tells me things need to be optimized better for CUDA first; then add Tensor WUs.
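For anyone curious about the encoding overhead mentioned above: PCIe 1.0/2.0 use 8b/10b line coding (20% overhead), while 3.0/4.0 use 128b/130b (~1.5% overhead). A minimal sketch of theoretical link bandwidth, illustrative only and not a folding measurement:

# Theoretical PCIe link bandwidth from transfer rate and line-code efficiency.
# Illustrative only: these are spec numbers, not folding measurements.
PCIE_GENERATIONS = {
    "1.0": (2.5, 8 / 10),     # 8b/10b encoding, 20% overhead
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding, ~1.5% overhead
    "4.0": (16.0, 128 / 130),
}

def effective_bandwidth_gb_s(gen, lanes):
    """Usable GB/s for a link: (GT/s per lane) * efficiency / 8 bits * lanes."""
    transfer_rate_gt_s, efficiency = PCIE_GENERATIONS[gen]
    return transfer_rate_gt_s * efficiency / 8 * lanes

for gen, lanes in [("3.0", 8), ("3.0", 16), ("4.0", 4), ("4.0", 16)]:
    print(f"PCIe {gen} x{lanes}: {effective_bandwidth_gb_s(gen, lanes):.1f} GB/s")
# Note: PCIe 4.0 x4 ends up with roughly the same bandwidth as PCIe 3.0 x8.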
ipkh
Posts: 173
Joined: Thu Jul 16, 2015 2:03 pm

Re: GeForce RTX 3080 and 3090 support enabled !

Post by ipkh »

The Nvidia driver interprets the OpenCL and CUDA (Core 22 version 13) instructions, so it is up to Nvidia's optimizations to make the dual FP32 work. For games, the basic rule was that 30% of the instructions were INT32, so expect some reduction from the doubling of performance.
FAH has a difficult time here, as it has to split the work over many more (effective) FP32 cores, and smaller WUs will be very inefficient on large GPUs. We already see this disparity in the gaps between the 900 series, 10 series, and 20 series. But I have no doubt that they are working on it. I'm sure Nvidia has a vested interest in helping as well.
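To put the INT32 point in concrete terms, here is a toy model (my own illustration, not Folding@home or Nvidia code) comparing an Ampere-style SM partition, which pairs an FP32-only datapath with a shared FP32/INT32 datapath, against a Turing-style partition with dedicated FP32 and INT32 datapaths:

# Toy throughput model for an Ampere-style SM partition (one FP32-only
# datapath plus one shared FP32/INT32 datapath) versus a Turing-style
# partition (dedicated FP32 and INT32 datapaths). It ignores scheduling,
# memory and latency effects entirely; it only illustrates the INT32 mix point.

def relative_cycles_turing(int_fraction):
    # FP32 and INT32 pipes run in parallel; the busier one sets the pace.
    return max(int_fraction, 1.0 - int_fraction)

def relative_cycles_ampere(int_fraction):
    # INT32 work is confined to the shared pipe; whatever capacity is left
    # there, plus the dedicated FP32 pipe, handles the FP32 work.
    return max(int_fraction, 0.5)

for int_fraction in (0.0, 0.3, 0.5):
    speedup = relative_cycles_turing(int_fraction) / relative_cycles_ampere(int_fraction)
    print(f"{int_fraction:.0%} INT32 -> ~{speedup:.2f}x FP32 throughput vs Turing")
# 0% INT32 gives the full 2x; a 30% INT32 mix trims it to about 1.4x.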
kiore
Posts: 921
Joined: Fri Jan 16, 2009 5:45 pm
Location: USA

Re: GeForce RTX 3080 and 3090 support enabled !

Post by kiore »

What is being seen, with F@H not seeming to make the most of new hardware, has happened previously with new generations. It can come down to a number of factors, such as project cores not yet aligned to new standards, drivers not yet fully utilizing the hardware's capabilities, or a combination of these. However, the project seems to be ahead of the curve this time, with new core versions coming online, new benchmarking, and new ways to use the new generations of hardware, like running multiple work units on a single GPU, under development. I am optimistic that the work underway will bring significant optimization improvements not too far into the future.
i7 7800x RTX 3070 OS= win10. AMD 3700x RTX 2080ti OS= win10 .

Team page: https://www.rationalskepticism.org/viewtopic.php?t=616
PantherX
Site Moderator
Posts: 6986
Joined: Wed Dec 23, 2009 9:33 am
Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB

Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
Location: Land Of The Long White Cloud
Contact:

Re: GeForce RTX 3080 and 3090 support enabled !

Post by PantherX »

F@H can't use all the new GPU features, since it doesn't render anything. Instead, it will use whatever features help it in protein simulation. There are some really cool ideas floating around, and some are easier to implement than others. Time will tell what happens next, but it is definitely a good thing for F@H since new and exciting times lie ahead for GPU folding :)
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time

Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
HaloJones
Posts: 906
Joined: Thu Jul 24, 2008 10:16 am

Re: GeForce RTX 3080 and 3090 support enabled !

Post by HaloJones »

will be very interested to see what 0.0.13 can do with a 3080
single 1070

PantherX
Site Moderator
Posts: 6986
Joined: Wed Dec 23, 2009 9:33 am
Location: Land Of The Long White Cloud
Contact:

Re: GeForce RTX 3080 and 3090 support enabled !

Post by PantherX »

HaloJones wrote:will be very interested to see what 0.0.13 can do with a 3080
Some quick numbers from Project 11765 in Linux:

TPF 73s - GTX 1080 Ti running OpenCL / 1.554 M PPD
TPF 57s - GTX 1080 Ti running CUDA / 2.253 M PPD
TPF 49s - RTX 2080 Ti running OpenCL / 2.826 M PPD
TPF 39s - RTX 2080 Ti running CUDA / 3.981 M PPD
TPF 36s - RTX 3080 running OpenCL / 4.489 M PPD
TPF 31s - RTX 3080 running CUDA / 5.618 M PPD

I expect the numbers to improve once the drivers have matured a bit, generally in about 6 months. By then, we might also have a new version of FahCore_22 that unlocks more performance!
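For anyone wondering how TPF turns into PPD: credit is the base award times a quick-return bonus, and PPD is that credit times the number of WUs completed per day. A minimal sketch below, using placeholder constants rather than Project 11765's real base credit, k-factor, or timeout:

import math

# How TPF relates to PPD via Folding@home's quick-return bonus:
#   credit = base_credit * max(1, sqrt(k_factor * timeout_days / wu_days))
# The base_credit, k_factor and timeout figures below are placeholders,
# NOT the real constants for Project 11765.

FRAMES_PER_WU = 100  # a work unit reports progress in 1% frames

def estimate_ppd(tpf_seconds, base_credit, k_factor, timeout_days):
    wu_days = tpf_seconds * FRAMES_PER_WU / 86400.0   # days per work unit
    bonus = max(1.0, math.sqrt(k_factor * timeout_days / wu_days))
    return base_credit * bonus / wu_days              # (credit per WU) * (WUs per day)

# Hypothetical project constants, purely to show the shape of the curve:
for tpf in (73, 57, 49, 39, 36, 31):
    print(f"TPF {tpf}s -> ~{estimate_ppd(tpf, 30000, 0.75, 2.0):,.0f} PPD")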
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time

Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
MeeLee
Posts: 1339
Joined: Tue Feb 19, 2019 10:16 pm

Re: GeForce RTX 3080 and 3090 support enabled !

Post by MeeLee »

ir_cow wrote:
MeeLee wrote:@flarbear: I tend to agree with you.
The question I would ask here is whether the PCIe 4.0 bandwidth also consumes 1.5% more energy than 3.0. If your system runs at 350W, the extra 3.5W may be worth it, but it may not be if the power draw is more like 10W higher...
Performance and power draw on PCIe 4.0 vs 3.0, and at x16 vs x8 vs 4.0 x4 speeds, also need to be tested.
I don't see how the PCIe slot can "consume" more power. I also tried the Founders Edition, which has a limit of 370 watts. No difference in PPD, just those massive swings depending on the WU. Also, the RTX 3080 doesn't even use 8x PCIe 3.0 for folding; it doesn't use the full 16x in games either. That 0.5% uplift is how the bits are encoded, which lowers the overhead. Funny enough, you "gain" 0.5% with PCIe, but being on AMD at lower resolutions you lose 15-30% FPS depending on the game. It is only when you reach 4K that the CPU doesn't matter much. But we are talking folding here, and I don't see any reason why PCIe 4.0 would help in folding.
11th gen Intel CPUs support PCIe Gen 4.
While the primary PCIe x16 slot is generally laned directly to the CPU and should have very little wattage overhead, other slots (especially x4 slots or M.2 slots) can go via a PCIe bridge chip, consuming extra power.
Those bridges actually use a controller that requires active cooling (a tiny 40mm fan in most cases), so I'd estimate ~15-20W max.
You make a point about AMD CPUs being slower than Intel CPUs in PCIe data transfer.
Even though a 2080 Ti doesn't use more than a PCIe 3.0 x8 slot's worth of bandwidth, connecting it to an x16 slot gives a marginal performance improvement (<10%, usually 1-5%).
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: GeForce RTX 3080 and 3090 support enabled !

Post by bruce »

ipkh wrote:The Nvidia driver interprets the OpenCL and CUDA (Core 22 version 13) instructions, so it is up to Nvidia's optimizations to make the dual FP32 work. For games, the basic rule was that 30% of the instructions were INT32, so expect some reduction from the doubling of performance.
It's impossible to write code without integers, but I'd expect the ratio of INT to FP32 in a game to be inferior to FAH ... though the benchmarking results will be examined carefully and then the drivers will be improved, making them obsolete. 8-)
MeeLee
Posts: 1339
Joined: Tue Feb 19, 2019 10:16 pm

Re: GeForce RTX 3080 and 3090 support enabled !

Post by MeeLee »

I don't think there'll be a lot of people running the 3090.
Its theoretical performance is at most 20-25% higher than the 3080's, at twice the price.
I think the 3080 will be the best GPU for most people looking for a new high-performance GPU.
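A back-of-the-envelope perf-per-dollar check, using the 3080 CUDA PPD quoted earlier, launch MSRPs, and the assumed +25% ceiling for the 3090 (approximate figures, not measurements):

# Rough perf-per-dollar comparison. The 3080 PPD is the CUDA number quoted
# earlier in the thread, prices are launch MSRPs, and the 3090 PPD assumes
# the +25% ceiling mentioned above. All approximate; none are measurements.
cards = {
    "RTX 3080": {"ppd": 5.618e6, "price_usd": 699},
    "RTX 3090": {"ppd": 5.618e6 * 1.25, "price_usd": 1499},
}
for name, card in cards.items():
    print(f"{name}: ~{card['ppd'] / card['price_usd']:,.0f} PPD per dollar")
# Under these assumptions the 3090 delivers roughly 40% less PPD per dollar.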
road-runner
Posts: 227
Joined: Sun Dec 02, 2007 4:01 am
Location: Willis, Texas

Re: GeForce RTX 3080 and 3090 support enabled !

Post by road-runner »

Yeah, for the price of those I could buy a lot of electricity to run the 1080 Ti.
gunnarre
Posts: 559
Joined: Sun May 24, 2020 7:23 pm
Location: Norway

Re: GeForce RTX 3080 and 3090 support enabled !

Post by gunnarre »

MeeLee wrote: Other slots (especially x4 slots or M.2 slots) can go via a PCIe bridge chip, consuming extra power.
Those bridges actually use a controller that requires active cooling (a tiny 40mm fan in most cases), so I'd estimate ~15-20W max.
This is not a feature inherent to the PCIe Gen 4 standard, right? It has more to do with having to use a less power-efficient chip for the X570 chipset, which made an active chipset cooling fan necessary. In future chipsets from ASMedia, Intel, or AMD, we might see PCIe 4 support with lower power dissipation.
Online: GTX 1660 Super + occasional CPU folding in the cold.
Offline: Radeon HD 7770, GTX 1050 Ti 4G OC, RX580
MeeLee
Posts: 1339
Joined: Tue Feb 19, 2019 10:16 pm

Re: GeForce RTX 3080 and 3090 support enabled !

Post by MeeLee »

gunnarre wrote: This is not a feature inherent to the PCIe Gen 4 standard, right? It has more to do with having to use a less power-efficient chip for the X570 chipset, which made an active chipset cooling fan necessary. In future chipsets from ASMedia, Intel, or AMD, we might see PCIe 4 support with lower power dissipation.
I'm not sure.
I think it'll be like the USB 3.0 protocol: it uses more power than USB 2.0, but data also moves at a higher rate.
However, the question would be: if you stick a USB 3.0 stick running at USB 2.0 speeds into a USB 3.0 port, will it draw more or less power than in a USB 2.0 port?
My estimate is that a PCIe 4.0 x4 port uses nearly the same power as a PCIe 3.0 x8 port.
It saves a bit of power with fewer lanes, but spends more to feed the GPU at a faster data rate.
It saves power again, because faster transactions mean the PCIe interface idles sooner.
But it uses more power both at idle and under load.

If the load isn't 100% but a constant 25%, PCIe 4.0 should have slightly higher power consumption than a modern 3.0 implementation.

I think ultimately power consumption will depend on the CPU, so it'll depend on the process node the CPU is made on.
As with many chips, a "10nm" CPU doesn't mean the entire die is made on a 10nm process; sometimes parts are still 14nm, or even 28nm.

So I think a new PCIe 4.0 port will consume less power than an old 3.0 port.
Things will get more interesting when comparing 4.0 to 3.0 on CPUs of the same node.

In the grand scheme of things, the answers to these questions will more than likely be moot, as we're going to PCIe 4.0 regardless; and PCIe 5.0 and 6.0 are on the table already.
Both 5.0 and 6.0 may make it harder to find good risers that can support those speeds.
Lockheed_Tvr
Posts: 14
Joined: Thu Aug 03, 2017 12:23 pm

Re: GeForce RTX 3080 and 3090 support enabled !

Post by Lockheed_Tvr »

PantherX wrote:
HaloJones wrote:will be very interested to see what 0.0.13 can do with a 3080
Some quick numbers from Project 11765 in Linux:

TPF 73s - GTX 1080 Ti running OpenCL / 1.554 M PPD
TPF 57s - GTX 1080 Ti running CUDA / 2.253 M PPD
TPF 49s - RTX 2080 Ti running OpenCL / 2.826 M PPD
TPF 39s - RTX 2080 Ti running CUDA / 3.981 M PPD
TPF 36s - RTX 3080 running OpenCL / 4.489 M PPD
TPF 31s - RTX 3080 running CUDA / 5.618 M PPD

I expect the numbers to improve once the drivers have matured a bit, generally in about 6 months. By then, we might also have a new version of FahCore_22 that unlocks more performance!
Is there any way to force it to use CUDA or is that just for that new Beta core that recently came out?
kiore
Posts: 921
Joined: Fri Jan 16, 2009 5:45 pm
Location: USA

Re: GeForce RTX 3080 and 3090 support enabled !

Post by kiore »

Only with the new core. The new core is under beta-level testing; there are still a few bugs it seems, as some WUs 'escaped' to general users and some issues were found. Serious progress for optimization though; let us see, I am optimistic.
i7 7800x RTX 3070 OS= win10. AMD 3700x RTX 2080ti OS= win10 .

Team page: https://www.rationalskepticism.org/viewtopic.php?t=616
PantherX
Site Moderator
Posts: 6986
Joined: Wed Dec 23, 2009 9:33 am
Location: Land Of The Long White Cloud
Contact:

Re: GeForce RTX 3080 and 3090 support enabled !

Post by PantherX »

Lockheed_Tvr wrote:...Is there any way to force it to use CUDA or is that just for that new Beta core that recently came out?
In addition to what kiore mentioned, do note that you can't "force" it to use CUDA... upon initialization, FahCore_22 follows this logic (simplified steps):
1) Let me see how many platforms I have access to
2) Let me try to use CUDA since you're on an Nvidia GPU
3) Okay, I tried to use CUDA and failed, so let me try to use OpenCL
4) Oh no, I can't use any platforms, so let me collect all the information in an error report and send it back for debugging

Do note that AMD GPUs would skip step 2 since CUDA isn't present.
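In pseudocode, that fallback order looks roughly like the sketch below (illustrative only; the names are invented and this is not FahCore_22 source):

# Illustrative sketch of the platform fallback described above.
# NOT FahCore_22 source code; the function and parameter names are invented.

def select_compute_platform(available_platforms, run_sanity_check):
    """Try CUDA first, then OpenCL; raise with collected errors if both fail."""
    errors = {}
    for platform in ("CUDA", "OpenCL"):
        if platform not in available_platforms:
            errors[platform] = "platform not present"  # e.g. CUDA on an AMD GPU
            continue
        try:
            run_sanity_check(platform)   # e.g. create a context, run a test kernel
            return platform              # first platform that works wins
        except RuntimeError as exc:
            errors[platform] = str(exc)  # remember why it failed and keep going
    # Neither platform worked: bundle everything into a report for debugging.
    raise RuntimeError(f"No usable compute platform: {errors}")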
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time

Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues