ProDigit wrote: Do you think that the future of computing will be graphics cards or CPU?

For DC work, that depends of course on the algorithms/science they are trying to implement. At the moment, I see more CPU (and less GPU) work out there. I think it will depend a lot on the direction that AI and machine learning take, and on which work is best farmed out to the crunchers versus done in-house. I would keep my options as open as possible.
When the core 22 will be tested?
Re: When the core 22 will be tested?
JimF wrote: Wasn't Core 22 going to be CUDA? .... But if it is still OpenCL, the new AMD Navi chips could be cost-effective.

If it's OpenCL, then it's AMD or nVidia. If it turns out to be CUDA as well, then those with nVidia hardware will be very happy, since I predict that CUDA will produce each unit of science faster than OpenCL, but only on nVidia.
Re: When the core 22 will be tested?
ProDigit wrote: Do you think that the future of computing will be graphics cards or CPU? CPUs are heading towards hexacores and more; it wouldn't surprise me if a home computer in 10 years had between 10 and 50 low-power cores, and eventually a system consisting entirely of 'GPU cores' as CPU cores. Makes sense from a power perspective. All the way until quantum computing takes over.

The rate at which CPU core counts increase is actually pretty insignificant. If you think a hexacore CPU is powerful, consider that a single GPU has anywhere from, say, 16 to 12,000 single-precision computing cores. As long as the calculations can be ordered in a way that allows a large number of parallel floating-point operations, the GPU will win, hands down.
See https://en.wikipedia.org/wiki/Stream_processing
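To make that concrete, here is a minimal sketch (not FAH code; the per-atom update is a made-up placeholder) of the kind of loop that maps well to stream processing. No iteration depends on any other iteration's result, which is exactly the property that lets a GPU run one work-item per atom instead of looping on a few CPU cores:
[code]
#include <cstddef>
#include <vector>

// Hypothetical per-atom update using Newton's second law, a = F/m.
// Every iteration is independent, so thousands of GPU shaders could
// each handle one atom simultaneously.
void accelerations(const std::vector<float>& force,
                   const std::vector<float>& mass,
                   std::vector<float>& accel)
{
    for (std::size_t i = 0; i < accel.size(); ++i)
        accel[i] = force[i] / mass[i];
}
[/code]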
Re: When the core 22 will be tested?
bruce wrote: The rate at which CPU core counts increase is actually pretty insignificant. If you think a hexacore CPU is powerful, consider that a single GPU has anywhere from, say, 16 to 12,000 single-precision computing cores. As long as the calculations can be ordered in a way that allows a large number of parallel floating-point operations, the GPU will win, hands down. See https://en.wikipedia.org/wiki/Stream_processing

I know.
But looking back to 2014, we all had dual-core PCs at home, with the occasional quad-core.
The server market was coming out with 10+ core designs then.
Today, Qualcomm has 24-core, 48-thread CPUs.
I can see our home PCs having 24 cores within 5 to 10 years.
And a CPU core is much faster than a GPU core.
I don't think there will be any limit on CPU core counts other than thermal limits.
And with smaller lithography, more cores will fit.
Perhaps we will even see a hybrid system (just as phones have a big.LITTLE setup, perhaps in the future we will see 4 full CPU cores plus tens to hundreds of smaller, GPU-like cores).
Re: When the core 22 will be tested?
A CPU core isn't really "faster" than a GPU core. A GPU core is more specialized while the CPU core is more general. We need both.
FAH depends mostly on the GFLOPS that can be produced, and on a CPU, the limited number of FPUs is the primary issue slowing down FAH calculations. Advanced instruction sets like SSE and AVX seem to be making progress faster than the addition of independent cores. Stream processing, on the other hand, offers almost unlimited GFLOPS, but it requires the work to be compiled into kernels (blocks of GPU work) and transferred to GPU memory to be processed, and the results then have to come back to main RAM so they can be reorganized, stored to disk, and uploaded to the internet.
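That round trip looks roughly like the host-side sketch below. This is a hedged illustration using the standard OpenCL C API, not FAH's actual code; the kernel, the buffer size, and the omitted error checking are all simplifications:
[code]
#include <CL/cl.h>
#include <vector>

// A trivial stand-in kernel; real MD kernels are far more involved.
const char* kSrc =
    "__kernel void scale(__global float* x) {"
    "    size_t i = get_global_id(0);"
    "    x[i] *= 2.0f;"
    "}";

int main() {
    cl_platform_id plat;  clGetPlatformIDs(1, &plat, nullptr);
    cl_device_id dev;     clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);

    // 1. The driver compiles the kernel source into GPU code at run time.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "scale", nullptr);

    // 2. Transfer the work to GPU memory over PCIe.
    std::vector<float> data(1 << 20, 1.0f);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                data.size() * sizeof(float), nullptr, nullptr);
    clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, data.size() * sizeof(float),
                         data.data(), 0, nullptr, nullptr);

    // 3. Process on the GPU.
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    size_t global = data.size();
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);

    // 4. Transfer results back to main RAM for reorganizing, disk, and upload.
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, data.size() * sizeof(float),
                        data.data(), 0, nullptr, nullptr);

    clReleaseMemObject(buf); clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
[/code]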
Writing code for OpenCL/CUDA is a more specialized skill than writing code for x86/AMD64, and as I suggested above, it's only useful for processes that can be parallelized to exploit the GPU's capabilities. Given the masses and the forces between each pair of 50,000 atoms, calculating Newton's law F=MA (solved for the acceleration) becomes a single process on the GPU, whereas on an 8-way CPU you have to divide the same work into 8 groups of somewhat more than 6,250 atoms (duplicating some of the atoms near the boundaries of those groups), calculate F=MA in each group, combine the results from those groups, and then calculate new positions for all of the atoms (and then repeat those steps for a day or so).
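A hedged sketch of that CPU-side split, with 50,000 atoms divided across 8 worker threads; the force and mass values are placeholders, not a real force field:
[code]
#include <cstddef>
#include <thread>
#include <vector>

int main() {
    const std::size_t nAtoms = 50000, nThreads = 8;
    std::vector<float> force(nAtoms, 1.0f), mass(nAtoms, 12.0f), accel(nAtoms);

    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < nThreads; ++t) {
        workers.emplace_back([&, t] {
            // Each thread owns roughly nAtoms / nThreads atoms.
            std::size_t begin = t * nAtoms / nThreads;
            std::size_t end   = (t + 1) * nAtoms / nThreads;
            for (std::size_t i = begin; i < end; ++i)
                accel[i] = force[i] / mass[i];   // a = F/m per atom
        });
    }
    for (auto& w : workers) w.join();   // combine: wait for all groups
    // ...new positions would be computed here, then the cycle repeats.
    return 0;
}
[/code]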
The existing FAHCore_a7 software can easily manage systems with, say, 64 CPUs without any changes.
Re: When the core 22 will be tested?
If the PCIe bus is the current limitation, they should optimize the program.
Especially since the data and the program itself are fairly small, they could be loaded into VRAM, using two GPU threads to distribute tasks locally.
Basically, getting everything done on the GPU.
There's no reason why so much data needs to hop back and forth on the PCIe bus; just as the PC connects to the FAH servers, it should send and receive only a few small packages between WUs.
Even Intel iGPUs have at least 8 processing pipelines, of which 2 could be used for task distribution; dedicated graphics cards have more.
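Roughly what I have in mind, as a hedged OpenCL sketch (the kernel is a stand-in, not real MD code): allocate the state in VRAM once, run many timesteps entirely on the GPU, and cross the PCIe bus only at the start and end of a work unit:
[code]
#include <CL/cl.h>
#include <vector>

const char* kStep =
    "__kernel void step(__global float* s) {"
    "    size_t i = get_global_id(0);"
    "    s[i] += 0.001f * s[i];"
    "}";   // placeholder for one MD timestep

int main() {
    cl_platform_id plat;  clGetPlatformIDs(1, &plat, nullptr);
    cl_device_id dev;     clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kStep, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
    cl_kernel step = clCreateKernel(prog, "step", nullptr);

    std::vector<float> state(1 << 20, 1.0f);
    size_t bytes = state.size() * sizeof(float), global = state.size();

    // One upload per work unit...
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, nullptr, nullptr);
    clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, bytes, state.data(), 0, nullptr, nullptr);
    clSetKernelArg(step, 0, sizeof(buf), &buf);

    // ...thousands of timesteps with no PCIe traffic in between...
    for (int s = 0; s < 10000; ++s)
        clEnqueueNDRangeKernel(q, step, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);

    // ...and one download at the end of the work unit.
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, bytes, state.data(), 0, nullptr, nullptr);

    clReleaseMemObject(buf); clReleaseKernel(step); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
[/code]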
Re: When the core 22 will be tested?
bruce wrote: No news doesn't suggest that nobody is working on it. In fact, it's just the opposite. No news is more likely to suggest that more people are working more hours on fixing whatever else needs to be fixed before it's ready for a semi-formal beta test of the new code.
I second foldy's comment. What do you mean by "take advantage of better hardware"? What hardware do you have that isn't being taken advantage of when using Core_21?
The main method of taking advantage of new hardware is to update the drivers, since that's where hardware features are really supported. FAH talks directly to OpenCL, which either talks to CUDA, which talks to the drivers, or talks directly to the drivers (depending on how your GPU manufacturer installs the drivers).

RTX cards run Core 21, but the GPU has evolved a lot since it was originally released.
Re: When the core 22 will be tested?
scott@bjorn3d wrote: RTX does Core 21 but the GPU has evolved a lot since it was originally released.

Yes, and which of those changes are of any benefit for the calculations done for folding? The half-precision calculations added for RTX are of no use for folding, and the rest of the features connected with RTX are aimed at improving a video picture and are also not useful.
Re: When the core 22 will be tested?
bruce wrote: What do you mean "take advantage of better hardware"? What hardware do you have that isn't being taken advantage of when using Core_21 (or Core_22?)

In fact, you may find that many projects don't have enough atoms to keep the increasing numbers of shaders continuously busy. That's certainly not true for all projects, but it does happen.
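As a hedged back-of-the-envelope (both numbers below are made up, not from any real project): if a work unit simulates ~20,000 atoms and the card has ~4,000 shaders, there are only a few atoms' worth of work per shader per step, so wider GPUs start to sit idle:
[code]
#include <cstdio>

int main() {
    const int atoms   = 20000;   // hypothetical small project
    const int shaders = 4352;    // hypothetical high-end GPU
    // Roughly how many atoms each shader gets per force evaluation.
    std::printf("atoms per shader: %.1f\n",
                static_cast<double>(atoms) / shaders);
    return 0;
}
[/code]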
Re: When the core 22 will be tested?
There are currently 5 projects listed on psummary using OPENMM_22. I have no idea when they first started appearing there.
Re: When the core 22 will be tested?
OPENMM_22 may be the internal alpha test core running with OpenMM 7.2.1. Once it is verified successfully, new projects will be set up on this core, while old projects keep running on Core_21.
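For context, "a core built on OpenMM" means the FAH core drives the OpenMM library, which in turn picks a Platform (Reference, CPU, OpenCL, CUDA) to do the math. A minimal, hedged sketch using OpenMM's public C++ API (this is not FAH's code, the numbers are arbitrary, and it assumes the OpenCL platform plugin is installed):
[code]
#include <OpenMM.h>
#include <vector>

int main() {
    OpenMM::System system;
    system.addParticle(39.95);                   // one argon-like atom (amu)
    OpenMM::VerletIntegrator integrator(0.002);  // 2 fs timestep (in ps)

    // The platform choice is where OpenCL vs CUDA enters the picture.
    OpenMM::Platform& gpu = OpenMM::Platform::getPlatformByName("OpenCL");
    OpenMM::Context context(system, integrator, gpu);
    context.setPositions(std::vector<OpenMM::Vec3>{OpenMM::Vec3(0, 0, 0)});

    integrator.step(1000);                       // run 1000 MD steps
    return 0;
}
[/code]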
Re: When the core 22 will be tested?
scott@bjorn3d wrote: RTX does Core 21 but the GPU has evolved a lot since it was originally released.

It's not a problem with OpenCL: kernels are compiled and optimized by the driver upon execution.
The behaviour is different with CUDA, which requires recompilation of the binaries to include support for new CUDA specifications (new compute capabilities or new CUDA features).
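The contrast, in hedged sketch form: an OpenCL application ships kernel source and lets whatever driver is installed compile it for the GPU actually present, even one newer than the application, whereas a CUDA application typically bakes in code for specific compute capabilities at build time (e.g. nvcc -gencode arch=compute_75,code=sm_75), so a new GPU generation can require a rebuilt core. The no-op kernel below is just for illustration:
[code]
#include <CL/cl.h>
#include <cstdio>

int main() {
    // The application carries only source; no GPU-specific binary is baked in.
    const char* src = "__kernel void noop(__global float* x) { }";

    cl_platform_id plat;  clGetPlatformIDs(1, &plat, nullptr);
    cl_device_id dev;     clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, nullptr);

    // The installed driver, not the app, turns the source into machine code,
    // so a future GPU is handled by a future driver with no app rebuild.
    if (clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr) != CL_SUCCESS) {
        char log[4096];
        clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG,
                              sizeof(log), log, nullptr);
        std::printf("driver build log:\n%s\n", log);
    }
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}
[/code]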
Re: When the core 22 will be tested?
Joe_H wrote: Yes, and which of those changes are of any benefit for the calculations done for folding? The half-precision calculations added for RTX are of no use for folding, and the rest of the features connected with RTX are aimed at improving a video picture and are also not useful.

Core_22 will not use half-precision or the new Tensor/RT cores. Both Core_21 and Core_22 will use the newer/faster standard shaders, faster memory, and the other hardware enhancements of the RTX family. Get RTX 20-series hardware ONLY if you want better gaming and better AI, or if the minor improvements to the shaders are worth the extra cost.
Core_22 is NOT tied to the new hardware, or vice versa. Internally, it will allow future FAH projects to use new scientific methods, so it's important to the scientists, but it won't magically make use of new hardware features which are not also available to Core_21.
Re: When the core 22 will be tested?
The wait is over! Public testing has begun.
Re: When the core 22 will be tested?
It would appear that the wait should have been longer - out of all of my machines, only ONE has not had massive BAD WORK UNIT issues - so far.
No overclock on ANY of them, usually running somewhat less than design TDP so if anything they're UNDERclocking - they were 100% stable with CORE_21