Re: New core = significant production drop GPU
Posted: Sat Dec 01, 2012 1:40 am
by Grandpa_01
kiore wrote:chaosdsm wrote:Related question, I'm seeing people on several team forums saying to just dump core 2.25 and go back to core 2.22 if you're on a Fermi card. Just wondering if PG has an official stance on this subject???
I do understand why people would suggest such a thing. From a donor's point of view it does appear that exactly the same science is done, but around 30% faster, by manipulating (or whatever you choose to call it) the core you run against the one the project installs. Maybe this is a 'bad' core and the Kepler folders will not share this opinion, or maybe it is the core that the project prefers. Whatever the answer, the project is asking us to run 2.25 by pushing this update, and it takes deliberate donor intervention to prevent the change. I guess the decision for the donor is whether they or the project managers understand best what is best for the project.
Personally I have been disappointed by the sudden reduction in my output this has caused, but since I really don't know the details, changing the core without being told it is OK by those who really know goes against my preference, which is to do the best science. Yes, I enjoy the points competition and it is a serious motivator, but cherry-picking cores seems not so different from cherry-picking work units, which I am opposed to. So until told otherwise by those who really know, I will stick with whatever core is assigned, just as I will fold whatever unit is assigned.
kiore, I believe the answer is that PG requires the 2.25 core only for the 762x WU's; for any other WU it does not, and the servers will not change the core on anything other than a 762x. If you never get a 762x WU, the core never gets changed. That says to me that the 2.22 core produces the same quality science as 2.25, only faster, since it is an optimised core for cards other than Kepler. The 2.25 core and the 762x WU's were both designed for Kepler so those cards could be utilised. PG could easily set the servers up to require the 2.25 core across the board, and would if there were a problem with the science being done with the 2.22 core; they have had no problem doing so in the past. There is really no reason to penalise folders folding on cards other than Kepler, IMHO; that would just create more discontent.
Re: New core = significant production drop GPU
Posted: Sat Dec 01, 2012 4:10 am
by chaosdsm
Grandpa_01 wrote:kiore, I believe the answer is that PG requires the 2.25 core only for the 762x WU's; for any other WU it does not, and the servers will not change the core on anything other than a 762x. If you never get a 762x WU, the core never gets changed. That says to me that the 2.22 core produces the same quality science as 2.25, only faster, since it is an optimised core for cards other than Kepler. The 2.25 core and the 762x WU's were both designed for Kepler so those cards could be utilised. PG could easily set the servers up to require the 2.25 core across the board, and would if there were a problem with the science being done with the 2.22 core; they have had no problem doing so in the past. There is really no reason to penalise folders folding on cards other than Kepler, IMHO; that would just create more discontent.
If that were the case, then why have the 2.25 core & 762x WU's run on anything other than Kepler cards? I know it's possible to filter out non-Kepler cards, the same way Stanford was able to filter the QMD WU's to run only on Intel CPU's years ago. All Kepler cards have a unique identifier, just as all Fermi cards have a unique identifier.
What, he's got a GF104 (GTX 460, aka Fermi)? Well, no 762x for him. A GF108 (GT 430, aka Fermi)? Nope, no 762x for that machine either. It's already being done: you can't run these work units on anything older than a Fermi, so don't even try to say they can't filter them out for anything older than Kepler. It can be done, even if it's not being done. Sadly, this actually seems to reinforce your opinion, even if it might be an incorrect opinion...
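Purely to illustrate the kind of filter I mean (the chip IDs, project numbers, and function below are made up by me, not Stanford's actual assignment-server code), it boils down to a simple family lookup:
Code:
# Hypothetical sketch only; chip IDs, project numbers, and logic are
# illustrative, not Stanford's real assignment-server code.

GPU_FAMILY = {
    "GF104": "Fermi",   # e.g. GTX 460
    "GF108": "Fermi",   # e.g. GT 430
    "GK104": "Kepler",  # e.g. GTX 680
}

# Projects restricted to certain families (762x assumed Kepler-only here).
PROJECT_WHITELIST = {
    7620: {"Kepler"},
    7621: {"Kepler"},
}

def can_assign(project_id, gpu_chip):
    """Return True if a WU from this project may go to this GPU."""
    family = GPU_FAMILY.get(gpu_chip)
    allowed = PROJECT_WHITELIST.get(project_id)
    if allowed is None:            # unrestricted project
        return family is not None  # any recognised GPU will do
    return family in allowed

print(can_assign(7620, "GF104"))   # False: no 762x for a GTX 460
print(can_assign(7620, "GK104"))   # True: Kepler gets the 762x WU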
Without knowing the programming specifics, there's no way to know what you are or aren't doing to a work unit by changing the core version. Sure, you may get more points & quicker turn-around, but if it's corrupting the data in any way, it's not worth it.
There's only one reason that people should be folding, and that's for the science. Science that might one day save your great-grandchild's life, or their great-grandchild's life, through better understanding & thus potential cures for cancers & other deadly / debilitating diseases. Corrupted data means bad science & further delays.
I know for a fact that this swapping of cores is happening quite frequently and on a large number of teams. But unless Stanford makes a statement about it one way or the other, I will never support it & will continue to run what Stanford gives me with the software they give me, even if that means my hardware isn't turning in as much work as it once was on the exact same work units.
As I said in my team forum, Stanford is the big loser with core 2.25. A 30% - 40% reduction in performance on Fermi cards means an overall reduction in completed work assignments turned in every day. It would benefit Stanford to release an official statement on this matter, especially if any kind of data corruption may be happening. The only thing worse than a reduction in completed work is completed work that is useless. Unfortunately, it could take weeks or months to determine if any harm is being done.
Re: New core = significant production drop GPU
Posted: Sat Dec 01, 2012 4:24 am
by Grandpa_01
If it was not returning the same work, I doubt PG would allow it to continue. Why would they? Bad science is bad science, good science is good, and unusable work is worthless, so the problem would be fixed. There was such a large public outcry for Kepler support that they had to do something, which was 2.25 and the 762x WU's. Any time there has been a problem caused by donor actions, PG has come out and said "we recommend against doing that." I have yet to hear such a comment from PG about the 2.22 core.
Re: New core = significant production drop GPU
Posted: Sat Dec 01, 2012 5:09 am
by 7im
When v2.25 becomes the minimum version for all WUs, that will answer the question.
Re: New core = significant production drop GPU
Posted: Sun Dec 02, 2012 5:12 am
by proteneer
We are planning core 17 right now. It will make all of these things much easier in the future.
Re: New core = significant production drop GPU
Posted: Sun Dec 02, 2012 6:12 am
by Spongebob25
proteneer wrote:We are planning core 17 right now. It will make all of these things much easier in the future.
I hope so!!
Re: New core = significant production drop GPU
Posted: Thu Dec 06, 2012 8:32 am
by TheWolf
If 2.22 was bad, why has it taken so long to pull it in the first place?
What about the thousands, tens of thousands, and hundreds of thousands of work units that have been worked over the course of v2.22's existence?
Were they all in vain, a waste of the donors' time and money?
Re: New core = significant production drop GPU
Posted: Thu Dec 06, 2012 9:07 am
by Joe_H
Where has anyone identified the 2.22 core as being bad? It had one known failing: it did not work on Kepler-based GPU's. 2.25 may have additional features or capabilities needed to process future WU's, or not. But at some point in software support it is simpler to have only one version to provide to users as the current release. Then the resources can be transferred to developing the next release.
28.8% faster TPF folding on v2.22 than v2.25!!!
Posted: Fri Dec 07, 2012 7:58 am
by championlly
Did some research & found a way to optimize my Fermi (GT555M).
The following is a simple apples-to-apples benchmark comparison; I confirmed the result on my own system.
*system spec as per my signature
methodology:
*same working environment
*same WU (P8054, R0, C1935, G73)
*same clocks; paused, swapped in v2.22 & continued folding
*result: significant improvement of 28.8%!!! TPF reduced from 6m51s to 5m19s
see screenshots below:
[screenshot: v2.25]
[screenshot: v2.22]
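For anyone checking the arithmetic, the 28.8% is just the throughput gain implied by the two TPF values above (this only recomputes the numbers from my run; the variable names are mine):
Code:
# Recomputing the speed-up from the TPF values quoted above.
tpf_v225 = 6 * 60 + 51   # 6m51s on v2.25, i.e. 411 seconds per frame
tpf_v222 = 5 * 60 + 19   # 5m19s on v2.22, i.e. 319 seconds per frame

speedup = (tpf_v225 - tpf_v222) / tpf_v222
print(f"{speedup:.1%}")  # 28.8%: that many more frames per hour on v2.22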
Re: New core = significant production drop GPU
Posted: Fri Dec 07, 2012 2:01 pm
by HaloJones
Agree that a single core is easier to support but what's the ratio between Kepler and Fermi? If all the Fermi cards are slowed down by 30% does the improvement in Kepler outweigh that? I would be surprised if it does. So my assumption is that this code is damaging the amount of science that PG is doing.
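Purely as a made-up back-of-the-envelope (the card split and the Kepler gain below are guesses of mine, not project statistics), the net effect hinges entirely on that ratio:
Code:
# Made-up numbers only: the point is that the net effect depends on the mix.
fermi_share, kepler_share = 0.7, 0.3   # assumed shares of active GPUs (guess)
fermi_change = -0.30                   # Fermi roughly 30% slower on v2.25
kepler_change = 0.40                   # assumed Kepler gain (pure guess)

net = fermi_share * fermi_change + kepler_share * kepler_change
print(f"net change in GPU throughput: {net:+.0%}")   # -9% with these guesses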
Re: New core = significant production drop GPU
Posted: Fri Dec 07, 2012 4:16 pm
by 7im
No one but PG can say either way, so every assumption is just that. Fahcore performance is always a balance between speed and the level of detail in the simulation (among several other factors). Adjustments are made in each version. Focusing on a single revision is like wearing blinders. There are always bigger-picture issues to consider.
Like QRB for instance.
Re: New core = significant production drop GPU
Posted: Fri Dec 07, 2012 5:29 pm
by Grandpa_01
HaloJones wrote:Agree that a single core is easier to support but what's the ratio between Kepler and Fermi? If all the Fermi cards are slowed down by 30% does the improvement in Kepler outweigh that? I would be surprised if it does. So my assumption is that this code is damaging the amount of science that PG is doing.
I would venture to say you are correct here. Running my 460 and 580's at the same clock speeds on 2.25 as I did on 2.22 = fail. I dropped the OC on the 580's from a 925 core clock down to 900 and still fail 70% of the WU's using 2.25; I had 0 failures with 2.22. So I have finally given up on F@H with the 3 580's and the 460; they are working on other projects now with no problems and are back up to their old clocks. The 2.25 core was designed for Kepler viewtopic.php?f=74&t=22793#p227079 . They have a 2.22 core that works great with Fermi cards, but if it is their choice to force a core that does not work as well for Fermi so that Kepler cards can be competitive, or for whatever other reason, that is their choice. Just as I have made my choice for now with my GPU's, and from reading the other forums quite a few others are either shutting down their Fermis or doing other work with them.
I am a little puzzled as to why PG would cripple one group of folders in the same class for another, since it is easy to allow both cores and assign 2.25 to Kepler and 2.22 to Fermi; they already have the ability to read the families of cards, so it should not be difficult to do. Anyway, it sure seems to defeat the statement about needing things done as quickly as possible. In the long run it will not matter; whatever happens happens, and they will either recover from it or they won't.
Re: New core = significant production drop GPU
Posted: Fri Dec 07, 2012 5:43 pm
by widsss
It'd be ideal to have a hybrid core, where multiple optimizations could be contained. Support for new architectures could be added as needed, without affecting older GPUs.
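Something along these lines is what I have in mind (a rough, invented sketch only; the function and family names are mine, and nothing here reflects how the real FahCores are built):
Code:
# Invented sketch of a "hybrid" core; nothing here is real FahCore code.

def run_fermi_path(wu):
    """Placeholder for a code path tuned for Fermi hardware."""
    return f"folded {wu} with the Fermi-optimized path"

def run_kepler_path(wu):
    """Placeholder for a code path tuned for Kepler hardware."""
    return f"folded {wu} with the Kepler-optimized path"

# Adding a new architecture would mean adding one entry here,
# without touching the older, already-validated paths.
PATH_BY_FAMILY = {
    "Fermi": run_fermi_path,
    "Kepler": run_kepler_path,
}

def run_hybrid_core(wu, gpu_family):
    path = PATH_BY_FAMILY.get(gpu_family)
    if path is None:
        raise RuntimeError(f"no optimized path for {gpu_family}")
    return path(wu)

print(run_hybrid_core("P8054 (R0, C1935, G73)", "Fermi"))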
Re: New core = significant production drop GPU
Posted: Sat Dec 08, 2012 4:27 am
by chaosdsm
widsss wrote:It'd be ideal to have a hybrid core, where multiple optimizations could be contained. Support for new architectures could be added as needed, without affecting older GPUs.
I don't see that happening from a single core. As new technologies come about, it becomes increasingly difficult to "fully" support the old and new hardware simultaneously. This is the reason that nVidia's unified GPU driver package these days is over 200MB while just 5 years ago it was under 50MB.
Re: New core = significant production drop GPU
Posted: Sat Dec 08, 2012 11:12 am
by TheWolf
Joe_H wrote:Where has anyone identified the 2.22 core as being bad? It had one known failing: it did not work on Kepler-based GPU's. 2.25 may have additional features or capabilities needed to process future WU's, or not. But at some point in software support it is simpler to have only one version to provide to users as the current release. Then the resources can be transferred to developing the next release.
First off, you answered a question with a question.
But I'll take it that your answer was there is no problem with v2.22 & it's a proven core that returns good usable results.
So on one hand we have v2.22 & on the other we have v2.25. One has proven to return good usable results.
The other is still beta with very few returned results; more than likely nobody even knows if those results are usable yet.
So why force all Fermi & Kepler cards onto one unknown core with a possibly fatal outcome,
when you could be getting known good results from an already proven core like v2.22 on Fermi?
Looks like it would make more sense to get something back that is usable than a "maybe this will work."
If this new v2.25 trashes all this work, you will have a huge backlog of work that will have to be redone.
At least by continuing to use v2.22 with Fermi, you have some good results to work with until this is all
ironed out with the next-generation core.