I don't see how the PCIe slot can "consume" more power. I also tried the Founders Edition, which has a limit of 370 watts. No difference in PPD, just those massive swings depending on the WU. The RTX 3080 doesn't even saturate PCIe 3.0 x8 when folding, and it doesn't use the full x16 in games either. That 0.5% uplift comes from how the bits are encoded, which lowers the link overhead. Funny enough, you "gain" 0.5% with PCIe, but on AMD at lower resolutions you lose 15-30% FPS depending on the game; it is only at 4K that the CPU stops mattering much. But we are talking folding here, and I don't see any reason why PCIe 4.0 would help in folding.
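For reference, here's the napkin math on the raw link bandwidth. Both PCIe 3.0 and 4.0 use 128b/130b line encoding (~1.5% overhead, versus 20% for the 8b/10b of gen 1/2); gen 4 just doubles the transfer rate per lane. A quick sketch:

```python
# Theoretical usable PCIe bandwidth per link, counting only the
# 128b/130b line encoding shared by gen 3 and gen 4.

def effective_gbps_per_lane(transfer_rate_gt: float) -> float:
    """Usable gigabits per second per lane after 128b/130b encoding."""
    return transfer_rate_gt * 128 / 130

for gen, rate in (("PCIe 3.0", 8.0), ("PCIe 4.0", 16.0)):
    per_lane = effective_gbps_per_lane(rate)
    for lanes in (4, 8, 16):
        # Divide by 8 to convert Gb/s to GB/s.
        print(f"{gen} x{lanes}: {per_lane * lanes / 8:.2f} GB/s")
```

That puts PCIe 3.0 x8 at roughly 7.9 GB/s (and 4.0 x4 at the same figure), so if folding doesn't come close to saturating that, doubling the transfer rate has nothing to speed up.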
MeeLee wrote:
@flarbear: I tend to agree with you.
The question I would ask here is whether PCIe 4.0 bandwidth also consumes 1.5% more energy than 3.0. If your system runs at 350 W, the extra 3.5 W may be worth it, but it may not be if the power draw is more like 10 W higher...
And performance and power draw on PCIe 4.0 vs 3.0, and at x16 vs x8 vs 4.0 x4 link widths, also need to be tested.
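It's easy to put numbers on the trade-off MeeLee raises. A minimal sketch, where the baseline PPD and the 0.5% uplift are pure placeholders, not measurements:

```python
# Hypothetical numbers only: a 0.5% PPD uplift from PCIe 4.0 weighed
# against two guesses at the extra power draw, in PPD-per-watt terms.

base_ppd = 3_500_000     # assumed baseline points per day
base_watts = 350.0       # system power from MeeLee's example
uplift_ppd = base_ppd * 1.005  # assumed +0.5% from PCIe 4.0

base_eff = base_ppd / base_watts
for extra_watts in (3.5, 10.0):
    eff = uplift_ppd / (base_watts + extra_watts)
    print(f"+{extra_watts} W: {eff:,.0f} PPD/W "
          f"({eff / base_eff - 1:+.2%} vs. baseline)")
```

With these placeholder numbers, even the +3.5 W case slightly lowers PPD/W, since 3.5 W is already 1% of 350 W against a 0.5% points gain.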
Now what needs to happen is FAHCores being written for Tensor cores. If the RTX 2080 Ti gets ~3.5 million PPD, an extra ~4,000 CUDA cores plus higher clock speed and memory frequency should make the RTX 3080 at least 50% faster. But at 4.5 million tops, it is only ~28% faster. This tells me things first need to be better optimized for CUDA. Then add Tensor WUs.
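For the math behind that 28%: the 2080 Ti has 4352 CUDA cores and the 3080 has 8704 on NVIDIA's spec sheets, so core count alone doubles, which makes a 50% expectation conservative. A quick check using the rough PPD figures above:

```python
# Back-of-the-envelope scaling check, using the rough PPD figures
# quoted above (~3.5M for the 2080 Ti, ~4.5M tops for the 3080).

ppd_2080ti, ppd_3080 = 3_500_000, 4_500_000
cores_2080ti, cores_3080 = 4352, 8704  # spec-sheet FP32 CUDA core counts

print(f"core-count ratio:   {cores_3080 / cores_2080ti:.2f}x")  # 2.00x
print(f"observed PPD uplift: {ppd_3080 / ppd_2080ti - 1:.1%}")  # 28.6%
print(f"PPD at +50% scaling: {ppd_2080ti * 1.5:,.0f}")          # 5,250,000
```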