PCI-e bandwidth/capacity limitations

PS3EdOlkkola wrote: @boristsybin I've got a sample size of 12 Titan X Pascal GPUs, with about half on PCIe 3.0 x8 and the other half on PCIe 3.0 x16 interfaces. There is no discernible difference in frame time or PPD between the two interfaces. All of them are on 2011-v3 motherboards with 40-PCIe-lane CPUs.

If anyone would know, it would be you. Could you tell me how you determined that? I'm trying to come up with a way to find the average effectiveness of each of my different GPUs. I'm on Linux, though. Also, which motherboard do you like? Thanks!
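One way to get a per-GPU average on Linux is to time the gap between successive frame-progress lines in the FAHClient log, per folding slot. A minimal sketch in Python, assuming the v7 client's "Completed N out of M steps" line format and a default log path (both are assumptions; adjust for your install):

[code]
# Estimate average frame time per folding slot from the FAHClient log.
# Assumes v7-style lines like:
#   12:34:56:WU01:FS00:0x21:Completed 1050000 out of 2100000 steps (50%)
import re
from collections import defaultdict
from datetime import datetime

LOG_PATH = "/var/lib/fahclient/log.txt"  # assumed default path; adjust as needed
FRAME_RE = re.compile(r"^(\d\d:\d\d:\d\d):WU\d+:(FS\d+):.*Completed \d+ out of \d+ steps")

frame_times = defaultdict(list)  # slot id -> list of frame durations (seconds)
last_seen = {}                   # slot id -> timestamp of previous frame line

with open(LOG_PATH) as log:
    for line in log:
        m = FRAME_RE.match(line.strip())
        if not m:
            continue
        stamp, slot = m.groups()
        t = datetime.strptime(stamp, "%H:%M:%S")
        if slot in last_seen:
            delta = (t - last_seen[slot]).total_seconds()
            if delta > 0:  # skip negative deltas at midnight rollover
                frame_times[slot].append(delta)
        last_seen[slot] = t

for slot, times in sorted(frame_times.items()):
    print(f"{slot}: avg frame time {sum(times) / len(times):.1f}s over {len(times)} frames")
[/code]

HFM (mentioned below) does the same bookkeeping for you if you'd rather not script it.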
- Posts: 389
- Joined: Fri Apr 15, 2016 12:42 am
- Hardware configuration: PC 1:
Linux Mint 17.3
Three GTX 1080 GPUs, one on a powered header
Motherboard: Asus Sabertooth 990FX
CPU: AMD FX-8320 eight-core, 3.5 GHz
PC2:
Linux Mint 18
Open air case
Motherboard: ASUS Crosshair V Formula-Z (AM3+, AMD 990FX)
CPU: AMD FX-6300 six-core Black Edition, with Cooler Master Hyper 212 EVO cooler
Three GTX 1080s,
one GTX 1080 Ti on a powered header
Re: PCI-e bandwidth/capacity limitations
GTX 1080 and 1080 Ti GPUs on Linux Mint
- Posts: 177
- Joined: Tue Aug 26, 2014 9:48 pm
- Hardware configuration: 10 SMP folding slots on an Intel Xeon Phi "Knights Landing" system, configured as 24 CPUs/slot
9 AMD GPU folding slots
31 Nvidia GPU folding slots
50 total folding slots
Average PPD/slot = 459,500
- Location: Dallas, TX
Re: PCI-e bandwidth/capacity limitations
Two tools help determine frame times: FAHBench and HFM. I just looked at HFM for project 13500, and the variation was at most 2 seconds on that work unit's average frame time (between 38 and 40 seconds) across all the Titan X Pascals, which translates to between 1,042,347 and 1,125,711 PPD. Your numbers would likely be better with Linux.

The majority of the motherboards hosting the Titan X Pascals are ASRock X99 OC Formula boards using either a Xeon E5-1620, an i7-5930K, or, recently, an i7-6850K, all with 16GB of DDR4. The motherboard choice was made primarily because it has the spacing to host 3 air-cooled GPUs (a GPU takes up 2 slots, plus one slot for an air gap before the adjacent GPU) and because the last PCIe slot doesn't have fan and USB connectors sticking up that would prevent the GPU from seating correctly. The CPU choices would be considered overkill since I don't use them for CPU folding, but I wanted 40 PCIe lanes, and these are generally the lowest-end parts of each CPU class with 40 lanes that still deliver decent single-core performance.

I'm almost through consolidating all the Titan X Pascals to 3 GPUs per motherboard; two systems are currently running with just two Titan X Pascals. All nine Titan X Maxwells are already in the 3-per-board consolidated configuration (3 systems).
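For reference, the frame-time-to-PPD conversion HFM performs follows the published quick-return-bonus formula. A sketch of that calculation; the base credit, k-factor, and deadline below are illustrative placeholders, not project 13500's actual constants:

[code]
# Frame time -> PPD, using the published quick-return-bonus formula:
#   credit = base_credit * max(1, sqrt(k * deadline_days / wu_days))
# The constants below are placeholders for illustration -- look up the
# real values for a given project.
from math import sqrt

def estimate_ppd(tpf_seconds, base_credit, k_factor, deadline_days, frames=100):
    wu_days = tpf_seconds * frames / 86400             # days to finish one WU
    bonus = max(1.0, sqrt(k_factor * deadline_days / wu_days))
    return base_credit * bonus / wu_days               # points per day

for tpf in (38, 40):  # the frame-time spread quoted above
    print(f"TPF {tpf}s -> "
          f"{estimate_ppd(tpf, base_credit=9405, k_factor=0.75, deadline_days=3):,.0f} PPD")
[/code]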
Hardware config viewtopic.php?f=66&t=17997&p=277235#p277235
- Posts: 389
- Joined: Fri Apr 15, 2016 12:42 am
- Hardware configuration: PC 1:
Linux Mint 17.3
Three GTX 1080 GPUs, one on a powered header
Motherboard: Asus Sabertooth 990FX
CPU: AMD FX-8320 eight-core, 3.5 GHz
PC2:
Linux Mint 18
Open air case
Motherboard: ASUS Crosshair V Formula-Z (AM3+, AMD 990FX)
CPU: AMD FX-6300 six-core Black Edition, with Cooler Master Hyper 212 EVO cooler
Three GTX 1080s,
one GTX 1080 Ti on a powered header
Re: PCI-e bandwidth/capacity limitations
PS3EdOlkkola wrote: The CPU choices...to get 40 PCIe lanes...

That's a sweet MB! Is it worth it to go 40 lanes versus 16?
In Science We Trust
- Posts: 177
- Joined: Tue Aug 26, 2014 9:48 pm
- Hardware configuration: 10 SMP folding slots on an Intel Xeon Phi "Knights Landing" system, configured as 24 CPUs/slot
9 AMD GPU folding slots
31 Nvidia GPU folding slots
50 total folding slots
Average PPD/slot = 459,500
- Location: Dallas, TX
Re: PCI-e bandwidth/capacity limitations
Having 3 GPUs per system means each GPU gets at least a PCIe 3.0 x8 interface, and ideally two run at x16 with one at x8. The x16 capability might not make a difference in the future, but with new folding cores on the way, I wanted to be prepared in case that turns out to be the optimal configuration.
Hardware config viewtopic.php?f=66&t=17997&p=277235#p277235
- Posts: 2040
- Joined: Sat Dec 01, 2012 3:43 pm
- Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slot)
Windows 7 64-bit
Intel Core i5-2500K @ 4 GHz
Nvidia GTX 1080 Ti, driver 441
Re: PCI-e bandwidth/capacity limitations
If you want 4 GPUs running at PCIe 3.0 x8, you can use a mainboard with a PLX PCIe switch chip and still use a cheap 16-lane CPU to get x8/x8/x8/x8.
e.g. the Asus Z170-WS for $350: https://www.newegg.com/Product/Product. ... -_-Product
But even at PCIe 3.0 x4 you would only lose about 10% performance.
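For context, the raw per-direction bandwidth of a PCIe 3.0 link scales linearly with width (8 GT/s per lane with 128b/130b encoding, so roughly 1 GB/s per lane); the ~10% figure is an empirical folding observation, not something this arithmetic predicts:

[code]
# Per-direction PCIe 3.0 bandwidth by link width.
GT_PER_LANE = 8e9        # transfers per second per lane
ENCODING = 128 / 130     # 128b/130b line-coding efficiency

for width in (1, 4, 8, 16):
    gb_s = GT_PER_LANE * ENCODING * width / 8 / 1e9  # bits -> bytes -> GB
    print(f"x{width:<2}: ~{gb_s:5.2f} GB/s per direction")
# x1: ~0.98, x4: ~3.94, x8: ~7.88, x16: ~15.75 GB/s
[/code]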
Re: PCI-e bandwidth/capacity limitations
foldy, that's a sweet MB, but it sure is expensive. How do you know it has a PLX PCIe switch chip? I was looking at its web page and I don't see it mentioned.
I'd never have a need for dual M.2 or 4 RAM slots. It would be nice to find a cheaper version.
In Science We Trust
- Posts: 2040
- Joined: Sat Dec 01, 2012 3:43 pm
- Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slot)
Windows 7 64-bit
Intel Core i5-2500K @ 4 GHz
Nvidia GTX 1080 Ti, driver 441
Re: PCI-e bandwidth/capacity limitations
Cheaper mainboards have only 3 instead of 4 x8 GPU slots, or only x4 slots, which is OK.
It has a PLX chip because the spec page lists quad x8/x8/x8/x8 mode:
https://www.asus.com/Motherboards/Z170- ... fications/
and the Overclockers board review found the PLX chip under a heatsink:
http://www.overclockers.com/asus-z170-w ... rd-review/
Another disadvantage of quad-x8 mainboards, besides the high price: if you want to sell one in the future, demand is not high, because gamers mostly run only dual SLI.
But you can put in a cheap $200 4-core CPU (matching 4 GPUs). The Z170-WS can carry up to four dual-slot graphics cards.
- Posts: 2040
- Joined: Sat Dec 01, 2012 3:43 pm
- Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slot)
Windows 7 64-bit
Intel Core i5-2500K @ 4 GHz
Nvidia GTX 1080 Ti, driver 441
Re: PCI-e bandwidth/capacity limitations
@PS3EdOlkkola: Would you say (Asus Z170-WS $350 + i5-6500 $200) is a good alternative to (ASRock X99 OC Formula $300 + E5-1620 v3 $300)?
Both having quad x8 pcie slots for dual-slot GPUs, what are the differences or advantages?
- Posts: 177
- Joined: Tue Aug 26, 2014 9:48 pm
- Hardware configuration: 10 SMP folding slots on an Intel Xeon Phi "Knights Landing" system, configured as 24 CPUs/slot
9 AMD GPU folding slots
31 Nvidia GPU folding slots
50 total folding slots
Average PPD/slot = 459,500
- Location: Dallas, TX
Re: PCI-e bandwidth/capacity limitations
Even though I regularly maintain each system unit by blowing out dust, etc., they run 24/7/365, so I couldn't in good conscience sell any motherboards or GPUs on the aftermarket. I've given away a bunch of stuff and have a whole lot more to give away as soon as I can package it up for FedEx (AMD Fury Xs, Nvidia 980s, 980 Tis; looking around, it's about a dozen GPUs). After about 2 to 3 years, I upgrade each system unit with a new motherboard and GPUs, and every 5 years I drop in a new power supply. It keeps my sanity in check by not chasing down intermittent hardware failures when equipment gets too old.
In terms of the comparison between the Asus Z170-WS and the ASRock X99 OC Formula: with the way I configure my systems, the Asus only has enough room between the PCIe slots for two GPUs. There needs to be enough room between each GPU for adequate cooling, which means having three full slots available per GPU. Putting one GPU right next to another creates an enormous amount of heat, which not only hurts performance but shortens the life of the GPUs. Looking at the picture of the Z170-WS, PCIe slot 1 could host an x16 GPU, but to get three full slots of clearance the next slot would be slot 4, and slot 4 is physically an x8 slot, so an x16 GPU won't fit even though it's electrically compatible. The next slot the second x16 GPU fits in is slot 5, and although a 3rd GPU would fit in the last slot, it would sit right up against the second GPU. This MB wouldn't work for me: I like running 3 GPUs per system, and this board lacks properly spaced slots for adequate cooling. In addition, there appears to be an internal USB 2.0 header plus internal power and reset buttons that would almost certainly interfere with properly seating a GPU in the last slot. I'm sure this motherboard also has a PLX chip to provide the four x8 PCIe slots, since a socket 1151 CPU natively supports only 16 lanes. That may not be an issue, but any time a switch is involved, some level of performance is lost, and another component is one more reliability risk.
On the ASRock X99 OC, there is enough space to mount 3 GPUs using 3 full slot widths for cooling. See the Newegg pic here: https://www.newegg.com/Product/Product. ... 6813157596 . However, your case has to have room for 8 slots. Assuming it does, 3 GPUs can be mounted with enough cooling space. You'll note there is also a USB 2.0 header, but it's located near the back of the last slot, so it only slightly interferes with mounting the 3rd GPU, which still "clicks" into place. The board also has an extra molex connector turned 90 degrees, so an additional power connector can augment power to the PCIe lanes without interfering with the GPU mounted just above it. There also isn't a PLX chip on the board, which should help reliability and latency. It's rather surprising how few motherboards allow for this configuration.
In terms of "only" a 10% loss of performance by going to an x4 interface, that becomes a pretty big PPD penalty when you're using Titan X pascal cards. Right now, a Titan X pacal is processing an 11402 work unit with a frame time of 1:33 and a PPD estimate of 1,215,842. A 10% hit is 121,584 PPD. Over a month of folding that's about 3.6 million points, over a year 44ish million points given up by using an x4 slot vs an x8 slot.
FYI, a rack-mountable case like the Chenbro RM-41300 https://www.newegg.com/Product/Product. ... 6811123178 has 8 slots and comes with 3x 120mm fans. I modify each case by adding another front 120mm fan, cutting a 120mm hole in each side of the case and mounting two more 120mm fans blowing into the case, and adding an 80mm fan in the front (it comes with two 80mm fans in the rear). I also cut an 80mm hole on the left side, up and toward the rear of the case, and rivet an 80mm fan grill in place. The airflow is then six 120mm fans and one 80mm fan blowing into the case and two 80mm fans blowing out. Needless to say, an enormous amount of air exits between the GPUs and through the 80mm hole at the rear. All the GPUs run at about 58 to 70 deg C with a small overclock (temperature varies with core and work unit type).
I'm not endorsing any vendor over another, I'm simply referencing what has worked for me. I'm sure there are other configurations that work just as well using components from other manufacturers. Hope this helps!
Hardware config viewtopic.php?f=66&t=17997&p=277235#p277235
Re: PCI-e bandwidth/capacity limitations
PS3EdOlkkola wrote: I've given away a bunch of stuff and have a whole lot more to give away as soon as I can package it up for FedEx (AMD Fury Xs, Nvidia 980s, 980 Tis; looking around, it's about a dozen GPUs). After about 2 to 3 years, I upgrade each system unit with a new motherboard and GPUs, and every 5 years drop in a new power supply. It keeps my sanity in check by not chasing down intermittent hardware failures when equipment gets too old.

Ed, if you sent any of that gear to me, I promise to dedicate it to protein folding and maintain it the best I can. I easily have the facility, and some of the gear, to expand my PPD 20-fold or more. I just don't have the cash to buy graphics cards.
BTW, I'm #10 on the Hit Parade in PPD: http://folding.extremeoverclocking.com/ ... 1&t=224497
In Science We Trust
- Posts: 50
- Joined: Mon Jan 16, 2017 11:40 am
- Hardware configuration: 4x1080Ti + 2x1050Ti
- Location: Russia, Moscow
Re: PCI-e bandwidth/capacity limitations
PS3EdOlkkola wrote: In terms of the comparison between the Asus Z170-WS and the ASRock X99 OC Formula

The GA-X99-Ultra Gaming has a good PCIe slot combination for 3 air-cooled GPUs, providing PCIe 3.0 x16+x8+x8 and a slot of space between cards.
The GA-X99-Designare EX is similar, but more expensive.
- Posts: 2040
- Joined: Sat Dec 01, 2012 3:43 pm
- Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slot)
Windows 7 64-bit
Intel Core i5-2500K @ 4 GHz
Nvidia GTX 1080 Ti, driver 441
Re: PCI-e bandwidth/capacity limitations
Thank you PS3EdOlkkola for your expert insight. So the key is a big mainboard with three x8 GPU slots and plenty of space between them, so heat (and noise) aren't a problem. Mainboards with narrowly spaced GPU slots can only run 4 GPUs on x16 risers, which then don't fit in standard cases.
@boristsybin: the GA-X99-Ultra Gaming looks good for 3 well-spaced GPUs at $270.
The GA-X99-Designare EX at $470 is too expensive for the same 3 well-spaced GPU slots.
- Posts: 177
- Joined: Tue Aug 26, 2014 9:48 pm
- Hardware configuration: 10 SMP folding slots on an Intel Xeon Phi "Knights Landing" system, configured as 24 CPUs/slot
9 AMD GPU folding slots
31 Nvidia GPU folding slots
50 total folding slots
Average PPD/slot = 459,500
- Location: Dallas, TX
Re: PCI-e bandwidth/capacity limitations
@Aurum, I've promised the GPUs to the guys who run the CureCoin team so they can begin building out their Cloud Folding program as a 501(c)(3) nonprofit. We're on the same team, so ultimately the donations of the GPUs, motherboards, processors, etc. will all be going to a cause we both support. If they can't utilize what I'm sending them, I'll shoot it your way. Just PM me your physical mailing address.
@foldy You've very succinctly summarized my point, thank you! The "3 slot rule" per air-cooled GPU provides the best compromise between heat, performance, longevity and noise. Using lots of fans running at 60-70% of max RPM (Noctua fans) lets the rigs run pretty quietly in the Chenbro case.
Other motherboards that support 3 GPUs with a space between them, with a few pluses and minuses:
- EVGA Classified X99 at $303 https://www.newegg.com/Product/Product. ... 6813188163 (note: the fan connectors and additional PCIe power connectors are rotated 90 degrees, so no GPU seating issue. I have 3 Titan X Maxwells on this board)
- MSI X99A GODLIKE Gaming Carbon at $579 https://www.newegg.com/Product/Product. ... 6813130921 (note: has very few connectors interfering with the 3rd GPU seating correctly)
- ASRock X99 WS at $273 https://www.newegg.com/Product/Product. ... 6813157536 (note: internal USB 2.0 headers will prevent the 3rd GPU from seating, and it covers up the power and reset buttons)
- MSI X99 XPOWER GAMING at $409 https://www.newegg.com/Product/Product. ... 6813130935 (note: should easily fit a 3rd GPU; the USB 2.0 header is near the back of the board)
- Asus X99-E WS at $500 https://www.newegg.com/Product/Product. ... 2F84D69469 (note: the internal USB 2.0 header might prevent proper seating of the last GPU, and the internal power and reset buttons will likely be covered by the GPU)
- ASRock X99 WS-E at $399 https://www.newegg.com/Product/Product. ... 6813157538 (note: the diagnostic LEDs and the internal power and reset buttons would be covered by the 3rd GPU); same for the similar ASRock X99 WS-E/10G at $599 https://www.newegg.com/Product/Product. ... 6813157537
- Asus Rampage V Extreme X99 at $507 https://www.newegg.com/Product/Product. ... 6813132262 (note: the internal USB 3.0 header will almost certainly keep the 3rd GPU from seating in the slot, though obviously not an issue if the USB 3.0 header won't be used); same for the similar Asus ROG RAMPAGE V EXTREME/U3.1 at $479 https://www.newegg.com/Product/Product. ... 6813132505
- MSI Gaming X99A GODLIKE Gaming at $524 https://www.newegg.com/Product/Product. ... 6813130878 looks like a good option (note: if the TPM interface is not used, the USB 2.0 header furthest to the rear of the board won't interfere with GPU seating)
- ASRock X99 Professional at $329 https://www.newegg.com/Product/Product. ... 6813157539 (note: the internal power and reset buttons will be covered by the 3rd GPU)
Hardware config viewtopic.php?f=66&t=17997&p=277235#p277235
Re: PCI-e bandwidth/capacity limitations
I propose that there's a direct relationship between the minimum PCIe bus speed required and the GFLOPS of the GPU. For example, some of my really old machines have an x16 slot and an x1 slot. Shouldn't I be able to add a really low-performance GPU using an x1 riser? If so, at what point would the "really low-performance GPU" become too fast to be worthwhile?
How do I know if my system has enough lanes to add an x1 device without degrading the primary slot?
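One way to answer that on Linux: sudo lspci -vv shows LnkCap (what the device and slot support) versus LnkSta (what was actually negotiated), and the same data can be read from sysfs. A minimal sketch of the sysfs approach; the attribute names are standard on recent kernels, and devices that don't expose them are simply skipped:

[code]
# Report each PCI device's negotiated vs. maximum PCIe link width and speed
# by reading sysfs. If a GPU shows a current width below its maximum, the
# link was trained narrower than the slot/device supports.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    def read(attr):
        try:
            with open(os.path.join(dev, attr)) as f:
                return f.read().strip()
        except OSError:
            return None  # attribute not exposed for this device
    cur_w, max_w = read("current_link_width"), read("max_link_width")
    cur_s = read("current_link_speed")
    if not (cur_w and max_w and cur_s):
        continue
    note = "  <-- below max width" if cur_w != max_w else ""
    print(f"{os.path.basename(dev)}: x{cur_w} of x{max_w} at {cur_s}{note}")
[/code]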