PCI-e bandwidth/capacity limitations
Re: PCI-e bandwidth/capacity limitations
It's like you say: there is no chip/logic on the USB risers. It's just cheaper to use standard cable and connector types.
Re: PCI-e bandwidth/capacity limitations
So, I purchased a pair of risers a while back, and started testing them out. Unfortunately, I haven't been able to even get to the testing stage, since I'm having issues getting the cards to fold at all in the first place.
Those of you with riser experience, any help? System setup is currently:
2x GTX1080 connected directly via x16
1x GTX1080 connected via x1 USB powered riser
Linux Mint 18.2, kernel 4.11
What I know so far:
1: Power is not an issue. The system usually runs 4x 1080s directly off the slots
2: GPU is not faulty. In the standard hardware configuration, they run and fold just fine
3: Slots aren't faulty. Wouldn't work under standard configuration otherwise
No matter what happens, POST completes successfully, and every boot attempt gets me at least to the OS loading screen. With the slots running at Gen3, the system freezes shortly after loading the desktop. F@H has not started at that point, so high load is not the cause. The few times it did start up successfully, X Server did not detect the GPU on the riser. Running the slots at Gen2, however, consistently results in a successful boot, and the riser card shows as detected.
This is as far as I can get. While the system is in this three-card testing configuration, FAHClient does not start on its own, and FAHControl is stuck trying to connect. Manually starting FAHClient lets FAHControl connect, but I am then shown a message saying I need to configure my identity, just like the first time one installs F@H. Naturally, I also need to reconfigure my slots, which seems to work until the work units start downloading. All of them immediately show as bad work units, including those on the non-riser cards.
I still don't know if the risers are faulty; it seems unlikely that both would be. And having all work units fail, riser GPU or not, just raises further questions. Returning the system to its original quad-card configuration reverts everything back to normal, including the work units and slot configs from before the testing. At least that was convenient!
So what do you guys think? How should I proceed? I'm still set on doing a whole heap of testing to make these comparisons.
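(A possible first check, not something from the original post: verify from a terminal whether the riser-mounted GPU is even enumerated on the bus, and at what link width/speed. A minimal sketch; the bus ID 01:00.0 is only a placeholder - take the real one from the lspci output.)
Code:
# List the NVIDIA devices the kernel can see; the riser card should show up here
lspci | grep -i nvidia
# Show the negotiated link speed/width for one device (replace 01:00.0 with the real bus ID)
sudo lspci -vv -s 01:00.0 | grep -i lnksta
# What the NVIDIA driver itself detects
nvidia-smi -L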
Re: PCI-e bandwidth/capacity limitations
When you start FAHClient, are you using the script that's provided with the install?
sudo /etc/init.d/FAHClient start
Re: PCI-e bandwidth/capacity limitations
I did not. All I did was type FAHClient. I can try that out later on, though. What are the differences between the two?
Re: PCI-e bandwidth/capacity limitations
FAHClient is designed to run as a service. The script is SUPPOSED to install itself so that FAHClient starts at boot time (just like other services) and runs forever. FAHClient doesn't actually process anything except commands from either FAHControl or WebControl, and if you want it to be idle, you manage that also from either of those control programs.
The script runs it as another Linux user, not as you, and it has its own set of permissions.
The configuration is also managed by *Control, NOT by an editor.
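(A minimal sketch of what that looks like in practice, assuming the packaged init script installed itself and follows the usual start/stop convention:)
Code:
# Is a service-managed FAHClient already running? ([F] keeps grep from matching itself)
ps aux | grep [F]AHClient
# Manage it through the init script instead of launching FAHClient by hand
sudo /etc/init.d/FAHClient stop
sudo /etc/init.d/FAHClient start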
Re: PCI-e bandwidth/capacity limitations
I'm also running "FAHClient" in a terminal, because I cannot see "folding@home" in the OpenSUSE menu. I know that's a completely different issue, but what would be the downside to running it in the terminal (and seeing extra info using the control software)? Also, since "folding@home" is not listed in the menu (and I purposely broke the install by not having "bzip2-libs"), would that be the reason it is (I assume) not starting at boot?
PCIe lanes and PPD
Mod: merged with existing topic - j
Is there a PPD difference in a GPU at x16 PCIe 3.0 vs x8 PCIe 3.0?
Re: PCI-e bandwidth/capacity limitations
Can't help you on the bzip2 issue, but generally you can add missing libraries using whichever package manager comes with your distro (common ones are yum, apt and pacman).
As for running it on the console, there isn't really any downside. The client will still offer you the web interface for checking on it and stopping/starting it. Personally, though, I would recommend some sort of startup script that launches it using screen and checks that the client is still running every minute or so. If you want to learn how to do that, have a look for a game server startup/check script, modify it to look for FAHClient instead, and add it to crontab with a check every minute.
Or you can use something like this if you wish.
checkfah.sh
Code:
#!/bin/sh
# Restart FAHClient inside a detached screen session if it isn't already running.
process=`ps auxw | grep FAHClient | grep -v grep | awk '{print $13}'`
if [ -z "$process" ]; then
    echo "Couldn't find FAH running, restarting it."
    cd /path/to/fah
    nohup screen -DmS fahgpu ./FAHClient &
    echo ""
fi
exit
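Add an entry like this to your crontab so the check runs every minute: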
Code:
0-59 * * * * /path/to/checkfah.sh 1> /dev/null
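To re-attach to the screen session and watch the client output: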
Code:
screen -r fahgpu
To detach it again, hold down Ctrl and tap 'A' then 'D'.
If you have issues re-attaching because FAH is running under a fah (or other) user and you switched to that user using 'su -', then run the command:
Code:
script /dev/null
Re: PCI-e bandwidth/capacity limitations
It has been tested and shown that a PCI-E 2.0 8x slot has a noticeable negative performance impact on FAH work vs a PCI-E 2.0 16x slot OR a PCI-E 3.0 8x slot.
PCI 3.0 8x and 16x AT THIS POINT seem to have no significant performance difference even on Pascal-based Titan and GTX 1080ti cards.
For reference - a rig I built a couple of months back used a G4600 (a dual-core Kaby Lake Pentium WITH hyperthreading) and a pair of GTX 1080 Ti cards - and feeding those GPUs was CPU-limited, saturating both the 2 actual cores AND the 2 "hyperthreading cores" on many work units, and coming very close on the rest.
THAT MUCH DATA.
Cryptocoin mining doesn't involve nearly as much data, which is WHY you can get away with 1x riser rigs with no noticeable performance impact, and can run multiple GPUs on old single-core CPUs like the Sempron 145.
This would also apply to some BOINC work (Moo Wrapper specifically, probably Yoyo as well; not sure about other projects).
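(If you want to see for yourself how much data folding pushes over the bus, nvidia-smi can report PCIe throughput while a work unit is running - a sketch, assuming a reasonably recent driver:)
Code:
# Sample PCIe Rx/Tx throughput (MB/s) roughly once a second while folding
nvidia-smi dmon -s t
# Show the PCIe generation and lane width each GPU is currently running at
nvidia-smi --query-gpu=name,pcie.link.gen.current,pcie.link.width.current --format=csv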
GPU folding on PCI-e 2.0 and 1.0
Mod note: merged with existing topic in appropriate sub-forum - j
Hi. I wanted to know if there are problems folding with a PCI-e 3.0 GPU put on a 2.0 or 1.0 slot, or if the tasks will slow down. Thanks.
Re: PCI-e bandwidth/capacity limitations
What GPU do you have, and what mainboard? PCIe speed is given as a generation (e.g. 3.0) and a lane width (e.g. x4) - which do you have? Each PCIe generation doubles the bandwidth of the previous one (3.0 is twice as fast as 2.0, which is twice as fast as 1.0). PCIe limits mainly occur on Windows; Linux has only marginal limits.
-
- Posts: 50
- Joined: Mon Jan 16, 2017 11:40 am
- Hardware configuration: 4x1080Ti + 2x1050Ti
- Location: Russia, Moscow
Re: PCI-e bandwidth/capacity limitations
Good news!
Dual 1080 Ti GPU rig on Linux Mint 17.1, one card @ PCI-e v3.0 x16, the second @ PCI-e v2.0 x4
power limit 180 W per card
both cards show the same productivity on similar tasks, over 1M PPD each
in my case it was a Ryzen platform
but now I guess that legacy platforms with PCI-e v2.0 are still good for building a folding rig
Re: PCI-e bandwidth/capacity limitations
Power limit seems a little low for a GTX 1080 Ti - what PPD do you get with 250 watts?
PCIe 2.0 x4 would match PCIe 3.0 x2, so it is double the speed of a PCIe 3.0 x1 riser.
Good to hear there is no PCIe bottleneck on Linux. But Windows would suck.
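(Rough per-lane numbers behind that comparison, for anyone who wants to sanity-check it: PCIe 1.0 carries about 250 MB/s per lane, 2.0 about 500 MB/s, 3.0 about 985 MB/s. A throwaway shell sketch:)
Code:
#!/bin/sh
# Approximate usable bandwidth per lane in MB/s: 1.0 = 250, 2.0 = 500, 3.0 = 985
echo "PCIe 2.0 x4 : $((500 * 4)) MB/s"   # ~2000 MB/s
echo "PCIe 3.0 x2 : $((985 * 2)) MB/s"   # ~1970 MB/s - effectively the same
echo "PCIe 3.0 x1 : $((985 * 1)) MB/s"   # half of either of the above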
Re: PCI-e bandwidth/capacity limitations
I run my 1080 Tis at 180 W and 1080s at 130 W. PPD is maybe 10% down, but it's worth it for the power saving - I'm not actually concerned about cost or power per se, but I need to maximise what I can achieve with a 4 kW thermal budget in a small space.
(Also worth mentioning that folding struggles to get above 220 W on a 1080 Ti, regardless of power limit.)
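(For anyone wanting to reproduce those limits, nvidia-smi can set them - a sketch; the GPU index 0 and the 180 W figure are just examples, and the setting does not survive a reboot unless you script it:)
Code:
# Keep the driver loaded (useful on headless boxes), then cap GPU 0 at 180 W
sudo nvidia-smi -pm 1
sudo nvidia-smi -i 0 -pl 180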