Re: Intel® HD Graphics
Posted: Sun Apr 09, 2017 3:21 am
by bruce
I have an NVidia GT 740. Its performance is below that of the "average GPU." A 1-minute FAHBench run on dhfr gave me a score of 8.90. It has 384 shaders running at 993 MHz, which gives 762 SP GFLOPS. It easily makes the deadlines.
I also have a GT 710 (LP), which has 192 shaders @ 954 MHz, producing 366 SP GFLOPS. I'll need to benchmark it, but that would probably be a FAHBench score of about 4.27.
Compare that to the Intel 6000 series, which has 48 execution units @ 1000 MHz. I'd expect them to be about 25% as fast. This puts them somewhat above many CPUs but still considerably below most supported GPUs. I can't find anything except the Iris Pro Graphics 580 with more than 48 EUs (it has 72). Do you have a source that rates the Intel Graphics in terms of single-precision GFLOPS?
Question: Do you know anybody who considers themselves a gamer and would accept the frame rates that an iGPU can produce? Everybody I know has an NV GTX 1080 (2560 shaders @ 1607 MHz) or a Titan (3584 shaders @ 1417 MHz), which produce something like 8,000-10,000 GFLOPS.
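The shader-count arithmetic in this post follows the usual peak-throughput rule of thumb: two FLOPs (one fused multiply-add) per shader per cycle. A minimal sketch, purely illustrative; the function name is mine:

```python
def sp_gflops(shaders: int, clock_mhz: float) -> float:
    """Peak single-precision GFLOPS, assuming one FMA (2 FLOPs)
    per shader per clock cycle."""
    return 2 * shaders * clock_mhz / 1000.0

print(sp_gflops(384, 993))    # GT 740   -> 762.624
print(sp_gflops(192, 954))    # GT 710   -> 366.336
print(sp_gflops(2560, 1607))  # GTX 1080 -> 8227.84
```

This is a theoretical peak; real FAHBench scores land well below it, which is why the benchmark run matters.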
Re: Intel® HD Graphics
Posted: Sun Apr 09, 2017 1:36 pm
by darkbasic
bruce wrote:Do you have a source that rates the Intel Graphics in terms of Single Precision GFLOPS?
Here it is:
https://en.wikipedia.org/wiki/Intel_HD_ ... s_Graphics
Mine is an Ultra Low Power HD Graphics 5500, capable of 364.8 GFLOPS and scores 3.8875 points with an unoptimized OpenCL stack.
Iris Plus Graphics 650, a modern Kaby Lake part but still an Ultra Low Power iGPU, is capable of 883.2 GFLOPS, which you can interpolate to 9.4118 points with an unoptimized OpenCL stack.
Iris Pro Graphics 580 is a bit older (it's based on the Skylake architecture instead of Kaby Lake), but it's a desktop iGPU rather than an Ultra Low Voltage mobile one, so it's capable of 1152 GFLOPS, which you can interpolate to 12.2763 points with an unoptimized OpenCL stack.
Intel still hasn't released its top-end Kaby Lake desktop iGPU, but since from Broadwell (Iris Pro Graphics 6200, 883.2 GFLOPS) to Skylake (Iris Pro Graphics 580, 1152 GFLOPS) we had a 30.43% increase in GFLOPS,
we can expect the soon-to-be-released top-end Kaby Lake iGPU to be capable of about 1503 GFLOPS, which you can interpolate to 16.0168 points with an unoptimized OpenCL stack.
Considering the Beignet stack is still far from well optimized, we can expect to squeeze another 30-50% out of it, which means we could score up to 20.8218-24.0251, which is 70-80% of an HD 7950/R9 280 at stock frequencies.
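All the score estimates above scale one measured FAHBench result linearly with peak GFLOPS. A minimal sketch of that interpolation, using the HD Graphics 5500 figures quoted above as the baseline (the function name is mine):

```python
BASE_GFLOPS = 364.8  # HD Graphics 5500 peak SP GFLOPS
BASE_SCORE = 3.8875  # measured FAHBench score on that iGPU

def estimate_score(gflops: float) -> float:
    """Linearly extrapolate a FAHBench score from peak GFLOPS,
    assuming score scales proportionally with raw throughput."""
    return BASE_SCORE * gflops / BASE_GFLOPS

print(round(estimate_score(883.2), 4))   # Iris Plus 650 -> 9.4118
print(round(estimate_score(1152.0), 4))  # Iris Pro 580  -> 12.2763
print(round(estimate_score(1503.0), 4))  # projected top Kaby Lake iGPU -> 16.0168
```

Linear scaling is itself an assumption; driver quality and memory bandwidth can break it in either direction.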
bruce wrote:Question: Do you know anybody who considers themselves a gamer and would accept the frame rates that an iGPU can produce? Everybody I know has an NV GTX 1080 (2560 shaders @ 1607 MHz) or a Titan (3584 shaders @ 1417 MHz), which produce something like 8,000-10,000 GFLOPS.
It's not a matter of who you know, it's a matter of market share.
Intel has about 70% of the GPU market share with their iGPUs; nVidia and AMD together have the remaining 30%. Nvidia has 66% of that remaining share, and 1080s are probably a tenth of all Nvidia cards, which means that 1080s have 0.1 × 0.66 × 0.3 = 0.0198, i.e. roughly 2% of the market. Yeah, 1080s are fast, but they account for
only about 2% of the total market share.
Re: Intel® HD Graphics
Posted: Sun Apr 09, 2017 2:51 pm
by bruce
Nobody disputes that iGPUs are great for displaying the desktop images produced by your OS. My question is in regard to high-performance video as demanded by video games and by FAH. What kind of GPU is in a "game-ready" or "VR ready" computer and what is their market share in that market?
Today's ad for a local electronics store advertises a name-brand computer with a "6th Gen Intel Core i7 processor" and "Radeon RX-480 graphics." Does it qualify as being part of Intel's 70%?
Re: Intel® HD Graphics
Posted: Sun Apr 09, 2017 3:47 pm
by Joe_H
Yes, an nVidia 1080 is at the high end and represents a smaller fraction of the market, but I only used it as an example where I could find a FAHBench score fairly quickly. There are plenty of other mid-tier GPUs from AMD and nVidia that perform quite well and are several times as powerful as the Iris Plus 650.
The Iris Plus 650 you use as an example is one of the most powerful iGPUs currently offered in the Intel lineup. It is only offered in a fraction of the Kaby Lake mobile processors, and not at all for the desktop at present. The same applies to the lesser Iris Plus 640: mobile only. In terms of GFLOPS, only a few top-end iGPU offerings in the recent Intel CPU families perform at that level.
So, basically, the highest-end Intel iGPUs are reaching a minimum performance threshold that may make them suitable for folding. Absent answers to the question of how long they can run at near full performance without being throttled by thermal considerations, that at least makes them "interesting". Thermal throttling may be less of an issue with a desktop computer. With future generations of Intel processors likely to bring further improvements in iGPUs, they are worth watching and possibly investigating further. Whether that leads to an Intel GPU folding core, time will tell. PG will still have to weigh the costs in time against other programming and support needs.
Re: Intel® HD Graphics
Posted: Sun Apr 09, 2017 10:36 pm
by darkbasic
bruce wrote:Today's ad for a local electronics store advertises a name brand computer with a "6th Gen Intel Core i7 processor" and "Radeon RX-480 graphics." Does it qualify as being part of the Intel's 70%?
It counts for Intel as well as AMD; in fact, the owner of such a computer will be able to fold on both GPUs. Yet Intel still has 70% of the market share, while Nvidia has only 20%.
Joe_H wrote:In terms of GFLOPS, there are a few top end iGPU offerings in the recent Intel CPU families that perform at that level.
True, but since Haswell the average desktop iGPU has at least 432 GFLOPS, which you can interpolate to 4.6036 points with an unoptimized OpenCL stack.
Considering the Beignet stack is still far from well optimized, we can expect to squeeze another 30-50% out of it, which means the average iGPU could score 6-7.
Since Intel has 70% of the market share while the 1080s (which score about 110) have only about 2%, if you do the math you will discover that
they account for roughly the same order of magnitude of aggregate computing power.
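The aggregate-throughput argument can be put into numbers. The shares and scores below are the rough estimates quoted in this thread, not measured market data, and the variable names are mine:

```python
# Back-of-the-envelope aggregate FAHBench throughput, weighting each
# population's typical score by its estimated share of installed GPUs.
igpu_share, igpu_score = 0.70, 6.5        # ~70% of GPUs; average iGPU scoring ~6-7
gtx1080_share, gtx1080_score = 0.02, 110.0  # ~2% of GPUs; GTX 1080 scoring ~110

igpu_total = igpu_share * igpu_score          # ~4.55 "share-weighted" points
gtx1080_total = gtx1080_share * gtx1080_score  # ~2.2

print(igpu_total, gtx1080_total)  # same order of magnitude
```

Under these assumptions the two populations are within a factor of about two of each other, which is the point being made: slow iGPUs win on sheer numbers.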
I'm not saying that Folding@home should absolutely release an Intel GPU core, but they should at least seriously consider it, because the trend is towards more powerful APUs, and even if iGPUs don't have raw power, they have
numbers, extremely huge numbers. I don't know how hard it would be to adapt core x22 (the one currently in development, AFAIK) to run on Intel GPUs, but if it doesn't require too much effort it should be evaluated, especially because no one is going to optimize the OpenCL stack for a compute task until it comes into general use, and such optimizations will surely take a long time.
Joe_H wrote:Absent answers to the question of how long they can run at near full performance without being throttled by thermal considerations, that at least makes them "interesting". Thermal throttling may be less of an issue with a desktop computer.
My laptop doesn't have such a problem, but it's a high-end Dell XPS 13. I really don't know about others.
Re: Intel® HD Graphics
Posted: Sun Apr 09, 2017 11:07 pm
by Joe_H
The iGPU runs within the same TDP as the CPU, so that is a practical limitation. Either can be working and needing to dump heat; once that passes the TDP set for the chip, the CPU, the GPU, or both will be throttled to prevent overheating. I have not yet been able to find figures on how much power the iGPUs in these later chips require to reach their max GFLOPS.
You are still assuming that the high-end power of these chips is representative of the whole; it is not. They also represent only a small fraction of sales, and an even smaller portion of the installed base. Yes, that may change over time, but so too are the computing needs of the folding projects changing. And due to the sequential nature of the WUs needed to complete runs, faster GPUs are more useful than a large number of slower units that take longer to complete any WU; that delays the start of the next WU in the sequence.
To summarize, the highest performance iGPU's that Intel is including in some of their recent offerings have reached parity with some of the slowest GPU's currently usable. That by itself is probably not enough to get development time, but may be enough for further investigation as to capabilities and potential use. Against that is how much they can already contribute using the existing A7 CPU core.
Re: Intel® HD Graphics
Posted: Mon Apr 10, 2017 2:01 pm
by darkbasic
Joe_H wrote:Against that is how much they can already contribute using the existing A7 CPU core.
Looking at the benchmarks, the iGPU is already much faster than the CPU, but the real question is: do you prefer raw power or the problem-solving flexibility of a general-purpose CPU?
If PPD is a true estimate of how useful a certain computation is for a certain project, then CPUs are just a waste of power, despite being able to solve a much wider range of problems. But somehow I suspect that PPD is far from representative of the truth...
Re: Intel® HD Graphics
Posted: Tue Apr 11, 2017 1:49 am
by bruce
darkbasic wrote:Joe_H wrote:Against that is how much they can already contribute using the existing A7 CPU core.
Looking at the benchmarks, the iGPU is already much faster than the CPU, but the real question is: do you prefer raw power or the problem-solving flexibility of a general-purpose CPU?
If PPD is a true estimate of how useful a certain computation is for a certain project, then CPUs are just a waste of power, despite being able to solve a much wider range of problems. But somehow I suspect that PPD is far from representative of the truth...
AFAIK, the words "solve a much wider range of problems" no longer apply. I believe the old distinction of different limitations between implicit and explicit solvent calculations has been removed.
Recent improvements to the CPU cores (using GROMACS) allow better use of modern hardware (AVX). Recent improvements to the GPU cores (using OpenMM) still focus on both better hardware utilization and enhanced scientific capabilities.
Joe_H wrote:To summarize, the highest performance iGPU's that Intel is including in some of their recent offerings have reached parity with some of the slowest GPU's currently usable. That by itself is probably not enough to get development time, but may be enough for further investigation as to capabilities and potential use. Against that is how much they can already contribute using the existing A7 CPU core.
Assuming that some scientific needs of FAH can be met by the highest-performance Intel iGPUs, FAH
might decide to support those hardware platforms. Speaking now as a member of the FAH support team who staff this forum, I'm sure this would open the floodgates to gripes from folks whose hardware is one increment slower than whatever minimum performance is defined.
The overall progress of FAH projects is measured by the average speed of all active machines working on a particular project. When new top-of-the-line GPUs come to market, FAH is able to accomplish increasing amounts of science per year. When the number of machines with minimum-performance hardware increases, the opposite is true.
FAH has two categories of projects: projects needing lots of GFLOPS in a short time, and projects which either need fewer GFLOPS or which can simply run at a slower pace. The former are assigned to GPUs and the latter to CPUs. Adding AVX support to the CPU cores was a significant help to CPU projects. Adding more GPUs at the low end of the "average" GPU range needs to be carefully evaluated, as it may or may not benefit time-critical FAH projects.
Re: Intel® HD Graphics
Posted: Wed Dec 12, 2018 4:02 am
by ProDigit
Just read up on this thread.
How are things 1.5 years later?
If Android phones of old were supported, and a P4 @ 1.4 GHz is supported, current iGPUs will be much faster.
In light of this, I understand why no research is being done in this area: when a single server gets 200 million points per month, it takes hundreds of desktop PCs with iGPUs to achieve the same result.
Still, it might feel good to be able to contribute to FAH with older phones, and it may make people feel better if their dual-core CPU can now fold two-thirds faster thanks to the iGPU folding simultaneously with the CPU.
Intel has a great market in iGPUs, and across their Core i chips their iGPUs are pretty much the same, save for the number of stream units or the MHz.
Intel isn't particularly ingenious in redesigning or improving its graphics.
One thing Intel does have is hardware encoders for audio and video.
Using those hardware encoders (on the GPU), and only the GPU, nearly doubles performance across nearly their whole CPU line.
Then again, using CPU and GPU together might overheat those APUs.
It appears Folding@home is moving from the home into industry, and should think of renaming itself Folding at Company, or something...
Because most people wouldn't be able to afford serious hardware for computations if it weren't for a momentary dip in prices for old server hardware.
I'm not sure whether this trend will continue or prices will bounce back to unaffordable, and whether people with small netbooks or chromebooks, Raspberry Pis, and cellphones will have to opt out of this program in favor of people who have the money to deliver 20-200x better results with advanced hardware farms...
Re: Intel® HD Graphics
Posted: Wed Dec 12, 2018 4:20 am
by JimboPalmer
You can never say never: Intel is looking to get back into the GPU market and may fund an F@H core to improve its credibility. But...
Pande Group did not fund the Android Client, Sony did.
No new work had to be done to keep the ancient A4 core alive; it just still works, even on P4s.
If you found a deep-pocketed funding source for Intel OpenCL development, I can't imagine Pande Group would turn down the cash. No Intel GPU has an OpenCL driver to my knowledge.
Re: Intel® HD Graphics
Posted: Wed Dec 12, 2018 6:59 am
by darkbasic
JimboPalmer wrote:No Intel GPU has an OpenCL driver to my knowledge.
Intel GPUs have MULTIPLE OpenCL drivers (Neo, Beignet...), at least on Linux. They're even capable of OpenCL 2+, which Nvidia doesn't support.
Re: Intel® HD Graphics
Posted: Wed Dec 12, 2018 12:45 pm
by foldy
The problem is not having an OpenCL driver but getting it supported by FAH, because each OpenCL driver on Nvidia, AMD, or Intel behaves a little differently. I heard the former AMD GPU chief architect switched from AMD to Intel, so maybe Intel will have some faster GPUs too, which would make them more interesting for FAH in the future.
Re: Intel® HD Graphics
Posted: Wed Dec 12, 2018 1:52 pm
by JimboPalmer
darkbasic wrote:JimboPalmer wrote:No Intel GPU has an OpenCL driver to my knowledge.
Intel GPUs have MULTIPLE OpenCL drivers (Neo, Beignet...), at least on Linux. They're even capable of OpenCL 2+, which Nvidia doesn't support.
As I read Wikipedia's
https://en.wikipedia.org/wiki/OpenCL#Implementations, those look like CPU implementations, not GPU implementations; all the platforms mentioned (Ivy Bridge, Skylake, Broadwell) are CPUs.
I am not saying you are wrong, only that there is no mention of GPU drivers by Intel in that article.
Re: Intel® HD Graphics
Posted: Wed Dec 12, 2018 8:51 pm
by foldy
OpenCL 2.0 is supported on Intel GPUs too, starting with the (slow) "Intel HD Graphics 4000" integrated iGPU; search your Wikipedia link for "HD Graphics".
Intel says OpenCL for its CPUs, but it means the iGPU integrated into those CPUs.
https://software.intel.com/en-us/articl ... ph-section
Execute OpenCL™ applications on Intel® Processors with Intel® Graphics Technology.
Specifically target Intel® HD Graphics, Intel® Iris® Graphics, and Intel® Iris® Pro Graphics if available on Intel® Processors.
Re: Intel® HD Graphics
Posted: Wed Dec 12, 2018 10:20 pm
by bruce
ProDigit wrote:
If Android phones of old were supported, and a P4 @ 1.4 GHz is supported, current iGPUs will be much faster.
You already got an answer about Sony funding the development of that core.
The issue with the (single-core) P4 is that the old code still works with zero development cost and zero support cost. FAH doesn't disable stuff that still works, but they can be pretty picky when searching for ways to fund development costs.
Intel iGPUs (and iGPUs in general) have had a less-than-stellar history. For a long time, Intel shipped software that did a reasonable job of supporting graphics but didn't fix bugs that produced an unacceptable number of calculation errors when the hardware was used for scientific computing. Then, too, relatively speaking, iGPUs are rather low-performance devices and often impose new requirements that can be costly. With that kind of history, it's hard to gather support when FAH has been repeatedly burned.
The same logic also applies to why GPUs for the Mac are not supported: except for iGPUs, there are really very few Mac GPUs that might be productive. Spend a lot of money debugging and testing a new FAHCore, and you increase the global productivity of FAH by a percent or two.
Sony lost interest; the same could be said for the Xbox. On the other hand, NVidia and ATI compete with each other, successfully bringing out faster, lower-power, and (if you don't insist on buying their flagship models) cheaper hardware that is 100% backwards compatible as far back as ten years --- which is a long time in dog-years (i.e., GPU-years).
Put a GTX 500- or GTX 700-series GPU (or better) in almost any Win/Lin PC chassis and you can fold with it. Then figure out how to use the iGPU concurrently for graphics.