Rack-Mount 8-GPU Dedicated Folding Rig
Moderator: Site Moderators
Forum rules
Please read the forum rules before posting.
-
- Posts: 41
- Joined: Thu Oct 09, 2008 8:59 pm
Rack-Mount 8-GPU Dedicated Folding Rig
Dedicated GPU servers are now common. They often come in 4U rack-mountable boxes that support up to 8 GPUs with PCIe 3.0.
I'm currently running eight "beige desktops" with 2 nVidia GPUs per PC, so I have all of the administration and inefficiencies of eight power supplies, 8 OS images to update, etc.
I would consider an 8-GPU server that ran on a dedicated 20amp household 110v circuit. But I'm over my head already. For instance, the GPU server product literature talks about support for Quadro and Tesla workstation cards, but I use RTX 20xx gaming cards (mostly blower design). I don't know what questions to ask.
So, what are the gotchas, tradeoffs, and lessons learned from such a consolidation endeavor?
Re: Rack-Mount 8-GPU Dedicated Folding Rig
I'm not familiar with those 8-GPU boxes, but one thing I'd look at is how many PCIe lanes you get per GPU. If I remember correctly, GPU folding performance starts getting significantly affected if you drop below 8 lanes per card.
Another thing to look at is how many CPU cores are available. The current recommendation is 1 core per GPU, I think.
-
- Posts: 41
- Joined: Thu Oct 09, 2008 8:59 pm
Re: Rack-Mount 8-GPU Dedicated Folding Rig
Agree. The specs say eight PCIe 3.0 x16 slots. The motherboards support one or two Xeons, with anywhere from 6 cores up to more than I can afford. That part is easy to fulfill, since folding works fine on a core running at 1.5 GHz and up.
Re: Rack-Mount 8-GPU Dedicated Folding Rig
Well, the specs on my PC motherboards say 3 PCIe 3.0 x16 slots, but you only get 16 lanes in slot 1 if slots 2 & 3 are empty. Fill slots 1 & 2 and you get 8 lanes each; fill 3 slots and you get 8/4/4. So you need to read the fine print and find out how many lanes each slot gets if all 8 slots are actually occupied. My guess is it won't be anywhere close to 16 lanes each, not unless you have some exotic CPU in there. Which is not impossible - I hear AMD has been introducing some CPUs with a higher number of PCIe lanes, but I haven't been keeping up with the latest info on that.
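As a back-of-envelope for what those lane splits cost you: PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, which works out to roughly 985 MB/s of usable bandwidth per lane. A quick sketch (rough figures only; protocol overhead is ignored):

```shell
# Approximate usable PCIe 3.0 bandwidth: ~985 MB/s per lane
# (8 GT/s line rate with 128b/130b encoding; protocol overhead ignored).
pcie3_bw() { echo $(( $1 * 985 )); }   # MB/s for a link $1 lanes wide

pcie3_bw 16   # x16: 15760 MB/s
pcie3_bw 8    # x8:   7880 MB/s
pcie3_bw 4    # x4:   3940 MB/s
```

So an 8/4/4 split still leaves each card several GB/s, which is why the lane count matters less than you might expect for folding.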
-
- Posts: 511
- Joined: Mon May 21, 2018 4:12 pm
- Hardware configuration: Ubuntu 22.04.2 LTS; NVidia 525.60.11; 2 x 4070ti; 4070; 4060ti; 3x 3080; 3070ti; 3070
- Location: Great White North
Re: Rack-Mount 8-GPU Dedicated Folding Rig
Most of those 4U servers are built for fanless Tesla and Quadro cards; the chassis has 4000 rpm fans with shrouds (really loud) to force air over the cards to cool them.
Current desktop AMD processors (Ryzen 2000) have 24 PCIe lanes: 16 for the GPU, 4 for NVMe m.2 slots, and 4 for the link to the chipset, which usually feeds the third GPU slot at PCIe Gen2.
To get lots of PCIe lanes you have to move to High-End Desktop (HEDT) platforms (Threadripper or Intel Extreme) or server platforms (Epyc or Xeon).
In general most GPUs under a 2080Ti can be fed off a PCIe3 x4 and see little to no loss in performance.
I’m taking advantage of this and using a mining frame with m.2-to-PCIe x4 risers to run 6 GPUs off one Ryzen 2700.
-
- Posts: 2040
- Joined: Sat Dec 01, 2012 3:43 pm
- Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441
Re: Rack-Mount 8-GPU Dedicated Folding Rig
The CPU should be 2.5+ GHz to feed fast GPUs. On Windows a fast GPU needs a minimum of PCIe 3.0 x4, and on Linux PCIe 3.0 x1, or else it bottlenecks.
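To see what link each card has actually trained to, the driver's `nvidia-smi` tool can report the current PCIe generation and width. A sketch that parses sample query output (the `sample` variable stands in for the live command, since the values vary per machine):

```shell
# Show each GPU's current PCIe generation and link width.
# $sample stands in for live output from:
#   nvidia-smi --query-gpu=index,pcie.link.gen.current,pcie.link.width.current --format=csv,noheader
sample='0, 3, 16
1, 3, 4'
echo "$sample" | awk -F', ' '{ printf "GPU %s: PCIe gen%s x%s\n", $1, $2, $3 }'
# GPU 0: PCIe gen3 x16
# GPU 1: PCIe gen3 x4
```

Note that the link can train down at idle on some cards, so check it while a work unit is running.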
Last edited by foldy on Fri Mar 06, 2020 5:31 pm, edited 1 time in total.
-
- Posts: 2522
- Joined: Mon Feb 16, 2009 4:12 am
- Location: Greenwood MS USA
Re: Rack-Mount 8-GPU Dedicated Folding Rig
As you can see from gordonbb's post, if you list an actual model, the help is there. 'In general' just gets wishy washy answers.
So post what you are thinking of, so folks can research just how many PCIE slots are supported, at what speed.
(Spoiler alert, it will never be as fast as separate PCs)
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
-
- Posts: 41
- Joined: Thu Oct 09, 2008 8:59 pm
Re: Rack-Mount 8-GPU Dedicated Folding Rig
Here’s an example of the Asus server. Supermicro and others have similar gpu servers.
- One or two Xeons
- Active or passively cooled GPUs
- Dual PLX supports 8 GPUs
https://www.asus.com/us/Commercial-Serv ... SC8000-G4/
-
- Site Moderator
- Posts: 6359
- Joined: Sun Dec 02, 2007 10:38 am
- Location: Bordeaux, France
- Contact:
Re: Rack-Mount 8-GPU Dedicated Folding Rig
This is going to be very expensive and noisy ...
You can't mount gaming GeForce cards in such a case. You'll probably need very expensive Quadro or Tesla cards ...
For instance, nothing like these will fit:
https://fr.msi.com/Graphics-card/GeForc ... VENTUS-11G
https://fr.msi.com/Graphics-card/GeForc ... UKE-11G-OC
You usually need something like this:
http://www.pny.eu/fr/professional/explo ... datacenter
-
- Posts: 41
- Joined: Thu Oct 09, 2008 8:59 pm
Re: Rack-Mount 8-GPU Dedicated Folding Rig
No, junking all my RTX cards for $$$ Quadros is not in my plans. Plus, at close to $5,000, the server itself is pricey.
What would a workable mining-type rig with PCIe risers look like? Motherboard support for lots of gpus, very large power supply, mining case, etc. Have folders done this?
-
- Posts: 41
- Joined: Thu Oct 09, 2008 8:59 pm
Re: Rack-Mount 8-GPU Dedicated Folding Rig
Thank you, gordonbb.
-
- Posts: 511
- Joined: Mon May 21, 2018 4:12 pm
- Hardware configuration: Ubuntu 22.04.2 LTS; NVidia 525.60.11; 2 x 4070ti; 4070; 4060ti; 3x 3080; 3070ti; 3070
- Location: Great White North
Re: Rack-Mount 8-GPU Dedicated Folding Rig
Catalina588 wrote: No, junking all my RTX cards for $$$ Quadros is not in my plans. What would a workable mining-type rig with PCIe risers look like? Have folders done this?
If one of your existing motherboards supports PCIe bifurcation in the BIOS, then you might be able to reuse it, driving 5 GPUs off one board with a PCIe-to-m.2 breakout card in the x16 slot plus the m.2 slots on the motherboard.
With the effective death of GPU mining, many of the mining frames can be found quite inexpensively. I used the Veddha 6-GPU miner case, Pro version, which I got delivered for around $40 CAD.
-
- Posts: 41
- Joined: Thu Oct 09, 2008 8:59 pm
Re: Rack-Mount 8-GPU Dedicated Folding Rig
Gordonbb,
I read your PCIe m.2 rig build post several times and I think I’ve got it all down except the riser extension lengths. I looked at all the parts list you provided and found price and availability reasonable.
My “best” motherboards with a PLX are PCIe 2.0, and they’re obsolescent. The current generation rigs (six of them) are ASRock Intel Z270, Z370, and Z390 boards with two mobo m.2 slots each, plus a WiFi slot for which a PCIe riser card exists (but I’m unlikely to use it).
If I follow your approach, I think I can safely drive at 4x speeds:
- 4 gpus @ PCIe 4x with the Asus m.2 to PCIe x16 card (or the newer ASRock equivalent)
- 2 gpus on the top and bottom m.2 slots
- maybe 1 gpu on the bottom x16 slot running at 4x. Might work.
I’d need two PSUs and a PSU “connector” (which I have), an SSD on an unused SATA 6 port, and a 6c/6t or 4c/8t processor. My usual rigs have 8 GB of memory, but I’d go 16 GB here so as to not bottleneck the CPU-GPU transfers. Linux: I’m happy with Mint.
Thoughts?
-
- Posts: 511
- Joined: Mon May 21, 2018 4:12 pm
- Hardware configuration: Ubuntu 22.04.2 LTS; NVidia 525.60.11; 2 x 4070ti; 4070; 4060ti; 3x 3080; 3070ti; 3070
- Location: Great White North
Re: Rack-Mount 8-GPU Dedicated Folding Rig
Catalina588 wrote: I read your PCIe m.2 rig build post several times and I think I’ve got it all down except the riser extension lengths.
The riser lengths will depend on the PCIe x16 slot location, the m.2 break-out card used, and the location of the additional m.2 slots on the motherboard.
Once I got the mining frame in, I used the mounting posts on the frame to estimate the location of the PCIe m.2 connectors, and an old Radeon HD5670 card to estimate the height of the m.2 break-out card. I then took an old IDE ribbon cable, removed its connectors, and marked lines on it at 5 cm intervals. I used that to measure the length from the appropriate m.2 header to the PCIe x16 connector of the GPU now mounted on the upper bar of the frame, rounding up to the next 5 cm length when it was close; it is better for the cable to be too long than too short.
Catalina588 wrote: My “best” motherboards with a PLX are PCIe 2.0, and they’re obsolescent. The current generation rigs (six of them) are ASRock Intel Z270, Z370, and Z390.
Ideally, reusing some of your existing motherboards, CPUs, memory, and power supplies would keep the cost down, but I took a quick look at a couple of ASRock z390 motherboard manuals and saw no BIOS settings for PCIe bifurcation.
Looking at the ASRock Ultra Quad M.2 card manual, the supported motherboards all seem to be x299 and x399 variants, so you'll need to look for a new motherboard. The Gigabyte z390 Gaming X manual shows PCIe bifurcation support and 2 m.2 slots, so it would be an inexpensive option that meets the requirement for 6 GPUs.
Yes, you will need at least 1.5 GB of RAM per slot, as recent WUs are more memory hungry, so ideally 16 GB of DDR4, which would leave 7 GB for the OS. I was running with 8 GB and had some slots fail due to insufficient memory.
I'm currently running 2 x 750 W Corsair RM750x power supplies, but with 5 RTX 2070 Super Hybrids and 1 RTX 2080 it was tight: both power supplies were running close to capacity, which is not ideal for their long-term life. The 2070s pull between 185-215 W each and the 2080 205-230 W, but the power draw can be reduced using:
Code: Select all
nvidia-smi -i <GPU ID> -pl <Target Power Limit in Watts>
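To put rough numbers on how tight that was, here's the worst-case arithmetic from the figures above (GPUs only; CPU, motherboard, and fans come on top of this):

```shell
# Worst-case GPU draw vs. total PSU capacity, per the figures in this post:
# 5 x RTX 2070 Super at ~215 W, 1 x RTX 2080 at ~230 W, 2 x 750 W PSUs.
gpu_draw=$(( 5 * 215 + 230 ))   # 1305 W
psu_total=$(( 2 * 750 ))        # 1500 W
echo "GPUs: ${gpu_draw} W, PSUs: ${psu_total} W, headroom: $(( psu_total - gpu_draw )) W"
```

Under 200 W of headroom for everything else is why dialing the power limit down a bit is worthwhile.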
Catalina588 wrote: If I follow your approach, I think I can safely drive at 4x speeds: 4 gpus with the Asus m.2 to PCIe x16 card, 2 gpus on the top and bottom m.2 slots, and maybe 1 gpu on the bottom x16 slot.
It’s hard to find actual block diagrams for motherboards these days, but in general, on current AMD and Intel desktop platforms, the CPU directly provides 4 PCIe lanes to the upper m.2 slot and 16 PCIe lanes to the PCIe slots. On “SLI”-capable boards there are 2 PCIe x16 slots, the upper wired at x16 and the lower at x8, with PCIe “switches” that redirect the upper 8 lanes from the upper slot to the lower, allowing x8/x8 operation with 2 GPUs installed.
There are an additional 4 PCIe lanes (or, on Intel, a DMI link of about the same bandwidth) from the CPU that feed the chipset and all the other peripherals connected to it.
So unless you are using a Ryzen 3000-series processor on x570, which uses PCIe Gen4 for the chipset connection, you will likely have a bottleneck running GPUs on both the lower m.2 and PCIe slots.
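Back-of-envelope on that chipset link: DMI 3.0 is roughly equivalent to a PCIe 3.0 x4 link at ~985 MB/s per lane, and everything hanging off the chipset (lower m.2 slot, SATA, USB, NICs) shares it:

```shell
# DMI 3.0 is roughly a PCIe 3.0 x4 link (~985 MB/s per lane), shared by
# every chipset-attached device: lower m.2 slot, SATA, USB, NICs.
dmi_bw=$(( 4 * 985 ))
per_gpu=$(( dmi_bw / 2 ))   # best case with two GPUs behind the chipset
echo "DMI total: ${dmi_bw} MB/s; ~${per_gpu} MB/s each for two GPUs"
```

So two chipset-attached GPUs get well under the ~3940 MB/s a dedicated x4 link would give each of them, before counting any other chipset traffic.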
I use Ubuntu but Mint should work similarly as they are both based off Debian and both work well for Folding.
I would pick up whichever 4-port m.2 card is least expensive since, aside from some VRMs that generate minor voltage rails for the m.2 NVMe drives (not needed in this use case), they are essentially just passive traces splitting the 16 PCIe lanes into 4 x 4 lanes to the m.2 connectors.
If you are planning on building more than one of these the mining frames I used come with plates to stack frames on top of each other.
For the cooling fans I just used 120 mm Fractal Design fans I had from various cases (I typically remove those from their cases and replace them with Noctua iPPC fans, so it was nice to use a few of them up). Having them push air over the GPUs seems to work better than having them pull air from the GPUs, and having 5 fan headers on the motherboard allowed me to cable them with no need for fan “Y” splitters or extension cables.
One benefit is that managing the heat in the open frames is much easier than in enclosed cases, and the 6-GPU rig actually runs quieter than most of my dual-GPU rigs.
The downside is that if you do have to reboot the rig due to a stuck slot, the other 5 GPUs all roll back to their last checkpoint, so you lose some production.
Re: Rack-Mount 8-GPU Dedicated Folding Rig
Linux takes a lot less of a hit from 1x PCI-E connections than Windows does.
Newegg sells a few versions of "Rosewill" rack mount case designed to handle 6 or 8 GPUs in one case - but there aren't a LOT of motherboards that have more than 6 PCI-E slots per motherboard, outside of server-specific designs and some of the "mining" motherboards.
Most of my folding rigs run Xubuntu (I much prefer XFCE as a UI over the stock Ubuntu UI) in an open shelf/rack custom-built setup.
The BIG issue with many-card rigs is that if you have an issue, you lose a LOT more production - and stability isn't as good each time you add one more card.
Also, you really need at least 1 CPU core PER GPU for max throughput - hyperthreaded "cores" don't count except on lower-end cards, you want REAL cores if you're going to run something like 1080ti or high end 2xxx series cards.
If you look at "crypto mining rigs" for ideas, that's a good source - except do NOT try to use the low-end CPUs most such rigs use and plan to use LINUX to avoid much hit on the cards running on risers.
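On Linux, a quick way to count real cores (as opposed to hyperthreaded siblings) is to count unique core IDs from `lscpu -p`. A sketch using canned output standing in for `lscpu -p=CORE,CPU` on a 2-core/4-thread CPU:

```shell
# Each non-comment line of `lscpu -p=CORE,CPU` is core_id,cpu_id.
# Hyperthread siblings repeat the core_id, so unique core IDs = real cores.
sample='# CORE,CPU
0,0
0,1
1,2
1,3'
echo "$sample" | grep -v '^#' | cut -d, -f1 | sort -u | wc -l   # prints 2
```

Compare that number, not the thread count, against how many fast GPUs you plan to feed.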