p2665 points/deadline?

Moderators: Site Moderators, FAHC Science Team

kasson
Pande Group Member
Posts: 1459
Joined: Thu Nov 29, 2007 9:37 pm

Re: p2665 points/deadline?

Post by kasson »

Regarding the platform question:

The problem is that the components we rely on for the SMP core--MPI libraries and the Gromacs code--are less well supported (MPI) or not officially supported at all (Gromacs) under Windows. So it's much easier for us to develop functional and optimized versions of the core for Linux and OSX than it is for Windows. Our development path is typically to get something functional on the platforms that our components support well and then broaden the base. Windows has proven to be tricky on occasion, but we still push forward with it. (One reason for this strategy is that e.g. if something in Gromacs is breaking under OSX or Linux, we can email the developers and they're much more interested in fixing it. If the same thing happens in Windows, they'll give us advice because they're nice people, but we're much more on our own. So we try to work out the bugs initially using "supported" platforms.)

That said, we are committed to achieving good Windows ports, and once we have the next generation of cores ported to Windows it should be much more efficient to run a quad in native Windows than as two Linux VMs. The problem with VMs is that unless you have one of the fancy bare-metal installs, VMs don't do resource sharing as well (and in part because of this don't typically support >2 processors). I'm not a VM technology expert--if there are good solutions to this, that would be wonderful.

BTW, none of the SMP code is in Fortran (unless there's something hiding in the MPI library). Our code is C-based, and the current Gromacs libraries are C and hand-optimized assembly (that's part of why they're fast... the Gromacs developers have different assembly kernels for about 20 different CPU architectures, from SSE to BlueGene. Talk about cross-platform support!).
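If you're curious how one C codebase carries that many assembly kernels, the usual trick is to probe the CPU once at startup and dispatch through a function pointer. A minimal sketch follows; this is not actual Gromacs code, and the kernel signature and the CPU probe are invented for illustration.

Code: Select all

/* Sketch of per-architecture kernel dispatch (NOT actual Gromacs code). */
#include <stdio.h>

typedef void (*nb_kernel_t)(int n);   /* hypothetical kernel signature */

static void kernel_generic(int n) { printf("generic C kernel, n=%d\n", n); }
static void kernel_sse(int n)     { printf("SSE assembly kernel, n=%d\n", n); }

/* Stand-in for a real CPUID-style feature probe. */
static int cpu_has_sse(void) { return 1; }

int main(void) {
    /* Choose the kernel once; every later call takes the fast path. */
    nb_kernel_t kernel = cpu_has_sse() ? kernel_sse : kernel_generic;
    kernel(1024);
    return 0;
}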

Hope this clarifies things a bit!
cancersux
Posts: 5
Joined: Wed Mar 05, 2008 2:45 pm

Re: p2665 points/deadline?

Post by cancersux »

So are they going to change these 1275-point WUs or not? I'm consistently getting 50% of the PPD with them versus virtually every other work unit I've ever seen.
Foxery
Posts: 118
Joined: Mon Mar 03, 2008 3:11 am
Hardware configuration: Intel Core2 Quad Q9300 (Intel P35 chipset)
Radeon 3850, 512MB model (Catalyst 8.10)
Windows XP, SP2
Location: Syracuse, NY

Re: p2665 points/deadline?

Post by Foxery »

BillR wrote: One other big commonality they all share is the use of any number of the “ix” OSes: Linux, Unix, OSX (Jobs gives a lot of hardware to schools, go figure) and any other equivalent OS. There is no good reason not to use these OSes; in fact, they are perfectly suited to the task.
...
What Stanford won’t do, pay attention here, is ask as many users as possible to switch to Linux. With one OS to deal with, neither science nor points would suffer at all; in fact, both would gain in a really big way.
Do you even listen to yourself speak? DC projects are successful because they use spare resources on existing machines, and, listen carefully, do not interfere with the owner's daily tasks. No one is going to reformat their gaming rig or office machines to Linux solely to run a DC program. Perhaps this table can clear up some things for you:

http://fah-web.stanford.edu/cgi-bin/mai ... pe=osstats
77% of the project's x86 computing power comes from Windows machines. That more than makes up for the fact that Windows is only 85-90% efficient.
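A quick back-of-the-envelope check of that claim, plugging in the numbers from this post and assuming (generously) that the non-Windows share runs at full efficiency:

Code: Select all

/* Effective x86 contribution: a 77% Windows share at ~85-90% efficiency
   still dwarfs the remaining 23%. Numbers taken from the post above. */
#include <stdio.h>

int main(void) {
    double windows = 0.77 * 0.875;   /* share times mid-range efficiency */
    double others  = 0.23 * 1.0;     /* assume full efficiency elsewhere */
    printf("effective share: Windows %.2f vs others %.2f\n", windows, others);
    return 0;   /* prints roughly 0.67 vs 0.23: about 3x the throughput */
}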

Seriously, guy, take off the tinfoil hat.
Core2 Quad/Q9300, Radeon 3850/512MB (WinXP SP2)
BillR
Posts: 8
Joined: Tue May 13, 2008 1:45 pm

Re: p2665 points/deadline?

Post by BillR »

Foxery wrote:
BillR wrote: One other big commonality they all share is the use of any number of the “ix” OSes: Linux, Unix, OSX (Jobs gives a lot of hardware to schools, go figure) and any other equivalent OS. There is no good reason not to use these OSes; in fact, they are perfectly suited to the task.
...
What Stanford won’t do, pay attention here, is ask as many users as possible to switch to Linux. With one OS to deal with, neither science nor points would suffer at all; in fact, both would gain in a really big way.
Do you even listen to yourself speak? DC projects are successful because they use spare resources on existing machines, and, listen carefully, do not interfere with the owner's daily tasks. No one is going to reformat their gaming rig or office machines to Linux solely to run a DC program. Perhaps this table can clear up some things for you:

http://fah-web.stanford.edu/cgi-bin/mai ... pe=osstats
77% of the project's x86 computing power comes from Windows machines. That more than makes up for the fact that Windows is only 85-90% efficient.

Seriously, guy, take off the tinfoil hat.
Well, there is a lot more action over at [H]forums on this issue than here at the official forum. When I asked the same question there, nobody accused me of owning a tinfoil hat.

That said, you might want to take a peek:

http://www.hardforum.com/showthread.php?t=1306647

Kasson is still trying to point out the bigger picture, and so far the main issue here is that they (Stanford) probably should not have released the new work without the tools to do the job.
Xilikon
Posts: 155
Joined: Sun Dec 02, 2007 1:34 pm

Re: p2665 points/deadline?

Post by Xilikon »

kasson, I can understand the challenges of having the Windows port perform the same as the Linux one, and the problem of trying to satisfy everyone. IMHO, there is a solution to your challenge which I believe would be very fair: split the WUs into 2 groups, one for Linux with X points (assuming the newest Gromacs core increased efficiency) and one for Windows with Y points (with the known issue of reduced efficiency). Then limit the assignment of each WU to a certain group (Linux/OSX vs Windows) and everyone will see similar results despite the difference in the amount of work.
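Purely as an illustration of the idea (this is not the actual assignment server, and the X/Y point values are placeholders, since the post leaves them unspecified), the routing could be as simple as a per-OS credit table:

Code: Select all

/* Sketch of the split-assignment proposal: each OS group gets its own
   WU credit. Values are hypothetical stand-ins for X and Y above. */
#include <stdio.h>
#include <string.h>

struct wu_group { const char *os; double credit; };

static const struct wu_group groups[] = {
    { "linux/osx", 1920.0 },   /* X points: placeholder */
    { "windows",   1760.0 },   /* Y points: placeholder */
};

static double credit_for(const char *client_os) {
    for (size_t i = 0; i < sizeof groups / sizeof groups[0]; i++)
        if (strcmp(groups[i].os, client_os) == 0)
            return groups[i].credit;
    return 0.0;   /* unknown OS: no assignment */
}

int main(void) {
    printf("windows WU credit: %.0f\n", credit_for("windows"));
    return 0;
}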

I'm aware this solution might break your stance about the benchmark value and avoiding differences between platforms. However, it's probably your only solution to stop the frustration of the community.
ppetrone
Pande Group Member
Posts: 115
Joined: Wed Dec 12, 2007 6:20 pm
Location: Stanford

Re: p2665 points/deadline?

Post by ppetrone »

Hey guys/girls

Thank you for the heads-up on the psummary list.
Happy weekend
paula
Mitsimonsta
Posts: 30
Joined: Tue Feb 12, 2008 1:53 am

Re: p2665 points/deadline?

Post by Mitsimonsta »

Hey Xilikon... you have what appears to be a relatively easy fix here, and I think it would be better. However, consider this:

THE CURRENT SMP BENCHMARK MACHINE BEARS ZERO RESEMBLANCE TO THE AVERAGE SMP MACHINE THAT CONTRIBUTORS USE.

That's right. I'd hazard an (educated) guess that on both Windows AND Linux, the clear majority of contributors would be using a Q6600 with 2GB of DDR2 RAM. I know that is all I run on my home farm now. The underlying architecture is the problem here.

Now, with everything going on with new cores, new units, things apparently benchmarked wrong, etc., it's time to make the benchmark machine as representative of the contributors as possible.

We could run around calling the WU an Asshat, the benchmarking an Asshat, VJ & Kasson Asshats, certain moderators here Asshats - but in this case the only thing here that is truly an Asshat is the Benchmark Machine (through no fault of its own, mind you - they are still nice machines, but the CPUs need to communicate via the MCH and as such are basically two dual-core machines in one case). It's the dual-socketedness of it that has caused the issue here.

I propose that the following occurs:
1) The current A1 core is benchmarked on the current Dual Woodcrest box. Any new units for the A1 core should be targeted at dual core machines.
2) The upcoming A2 core should be benchmarked on a Q6600 with 2GB of DDR2 RAM since the core is designed to scale better. Work Units requiring the A2 core should be targeted at natively quad core machines (as in 4 cores per socket).
3) That Stanford place a value on the work units processed by the A2 core higher than the A1 target daily PPD of 1760 points, as there are obviously more resources going into it. 3520 PPD (doubled) might be too high; maybe 1.5x the current target (about 2640 PPD) would be a better point to start from.
4) That Stanford understand what machines the majority are running these clients on, and use a benchmark machine that would reasonably be assumed to be the standard hardware. When the Q6600 halved in price last year, it should have been announced that the benchmark SMP machine was now going to be a Q6600 @ 2.4GHz with 2GB of RAM. Obviously you do not go back and re-benchmark current units and adjust their points, but any NEW units being released would be benchmarked on the new hardware platform and points allocated accordingly.

VJ / Kasson, you can take the suggestion or leave it.
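For what it's worth, the arithmetic behind item 3's suggested targets, using only the figures quoted in the proposal:

Code: Select all

/* Item 3's candidate A2 point targets, derived from the A1 target of
   1760 PPD quoted above. */
#include <stdio.h>

int main(void) {
    const double a1_target_ppd = 1760.0;
    printf("2.0x: %.0f PPD\n", a1_target_ppd * 2.0);   /* 3520, likely too high */
    printf("1.5x: %.0f PPD\n", a1_target_ppd * 1.5);   /* 2640, suggested start */
    return 0;
}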
Xilikon
Posts: 155
Joined: Sun Dec 02, 2007 1:34 pm

Re: p2665 points/deadline?

Post by Xilikon »

Mitsimonsta wrote:Hey Xilikon... you have what appears to be a relatively easy fix here, and I think it would be better. However, consider this:

THE CURRENT SMP BENCHMARK MACHINE BEARS ZERO RESEMBLANCE TO THE AVERAGE SMP MACHINE THAT CONTRIBUTORS USE.

That's right. I'd hazard an (educated) guess that on both Windows AND Linux, the clear majority of contributors would be using a Q6600 with 2GB of DDR2 RAM. I know that is all I run on my home farm now. The underlying architecture is the problem here.

Now, with everything going on with new cores, new units, things apparently benchmarked wrong, etc., it's time to make the benchmark machine as representative of the contributors as possible.

We could run around calling the WU an Asshat, the benchmarking an Asshat, VJ & Kasson Asshats, certain moderators here Asshats - but in this case the only thing here that is truly an Asshat is the Benchmark Machine (through no fault of its own, mind you - they are still nice machines, but the CPUs need to communicate via the MCH and as such are basically two dual-core machines in one case). It's the dual-socketedness of it that has caused the issue here.

I propose that the following occurs:
1) The current A1 core is benchmarked on the current Dual Woodcrest box. Any new units for the A1 core should be targeted at dual core machines.
2) The upcoming A2 core should be benchmarked on a Q6600 with 2GB of DDR2 RAM since the core is designed to scale better. Work Units requiring the A2 core should be targeted at natively quad core machines (as in 4 cores per socket).
3) That Stanford place a value on the work units processed by the A2 core higher than the A1 target daily PPD of 1760 points, as there are obviously more resources going into it. 3520 PPD (doubled) might be too high; maybe 1.5x the current target (about 2640 PPD) would be a better point to start from.
4) That Stanford understand what machines the majority are running these clients on, and use a benchmark machine that would reasonably be assumed to be the standard hardware. When the Q6600 halved in price last year, it should have been announced that the benchmark SMP machine was now going to be a Q6600 @ 2.4GHz with 2GB of RAM. Obviously you do not go back and re-benchmark current units and adjust their points, but any NEW units being released would be benchmarked on the new hardware platform and points allocated accordingly.

VJ / Kasson, you can take the suggestion or leave it.
Misti, that's already been brought up in the thread (and in other discussions as well). That's precisely the issue: using a benchmark machine which is not representative of the majority messes up the estimates. Anyway, VJ said there will be a points-system overhaul coming soon to better reflect the science value of everything, so there are no more inconsistencies. I dunno if that means they will get a different benchmark machine or just rely on testers to redo the baseline more accurately.

That's why I said in the end that my solution could break their usual position about the benchmark value, but they already took my advice by adjusting these units to 1920 points.
Mitsimonsta
Posts: 30
Joined: Tue Feb 12, 2008 1:53 am

Re: p2665 points/deadline?

Post by Mitsimonsta »

While it was mentioned in this thread briefly, I have not been into other threads on the subject of benchmarking. I try to stay out of here as much as I can, for reasons that are obvious to both of our teams.

Now they have changed the points after pressure from the community, and the SMP contributors out there will more than likely be happier with the outcome (as opposed to pleased or satisfied), but I know this will not be the last time that we have a discussion about a project whose points seem to be well out of whack with others.

I welcome the idea of a Work Unit's points being based on its value to the science. There is no easy answer to this issue at all. Maybe Stanford could come up with a tightly defined proposal on how they will allocate points, and then ask for PROPER feedback from teams (as opposed to individual users bitching) on how they could make it better.

After all is said and done, transparency is what we all want. I want to be able to go to the Points Summary and find up-to-date information: the base WU points allocated, any bonuses applied and what types of bonuses, and the current points awarded per WU. It would be nice to know which clients get it (platform & version), whether it is a small/normal/big WU, and whether it is assigned with AdvMethods on, AdvMethods off, or both.

Maybe we could move to a free-market model eventually, where the demand for a WU defines its point value. Like a stock or futures market for F@H Work Units. This kind of sounds ridiculous, but the more I think about it, the more it makes sense. As demand for particular units goes up, the point value comes down. Less demand for a unit, and its points go up. The project wants more results on a particular project? Put a priority bonus on it. If we can control our clients enough to get the 'good' units, then as their supply dries up the others will be more attractive.
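To sketch what that could look like (all constants invented; this is only a thought experiment, like the paragraph above): points drift down when a unit is over-subscribed and up when it is starved.

Code: Select all

/* Toy demand-based repricing for the free-market idea above. All
   numbers are made up; the clamp keeps point swings small. */
#include <stdio.h>

static double reprice(double points, double demand, double supply) {
    double ratio = supply / demand;   /* popular WUs lose value, starved WUs gain */
    if (ratio > 1.25) ratio = 1.25;
    if (ratio < 0.80) ratio = 0.80;
    return points * ratio;
}

int main(void) {
    printf("over-subscribed: %.0f\n", reprice(1920.0, 2000.0, 1000.0)); /* 1536 */
    printf("starved:         %.0f\n", reprice(1920.0,  500.0, 1000.0)); /* 2400 */
    return 0;
}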

I dunno, it's after midnight here now, so any of the above may or may not make sense, and may or may not be a good idea.
Aivas47a
Posts: 1
Joined: Fri Feb 08, 2008 5:44 am

Re: p2665 points/deadline?

Post by Aivas47a »

It seems to me the guiding principle of the points system should be to prioritize work in the way that most benefits the scientific objectives of the project. That's what this is all about -- the points are fun but that's not really why we're all here. We're here for science.

Rather than trying to create a "fair" outcome between users with different kinds of systems, the points system should be an objective guide for users to follow in structuring their folding activities. Relative point production should reflect relative value of the work to Stanford.

If the points system could really be implemented in this way, it would eliminate arguments such as the one over whether it is better to run a single SMP even though in many configurations dual SMPs result in more points per day. The points system itself should be structured to incentivize the behavior that is most valuable to the project (including placing value on time of completion).

My $.02 :mrgreen:
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: p2665 points/deadline?

Post by bruce »

Mitsimonsta wrote:consider this: THE CURRENT SMP BENCHMARK MACHINE BEARS ZERO RESEMBLANCE TO THE AVERAGE SMP MACHINE THAT CONTRIBUTORS USE.
Let's break this down into two distinct parts.

1) Does any quad machine represent any dual machine? (and also for other numbers of cores)
2) Does the benchmark machine represent common quad machines?

The developers are looking very hard at the first issue right now. Any time Kasson mentions the word "scaling," that's what he's talking about. Ideally, a duo should take twice as long as a quad and earn half of the PPD. This was clearly NOT true for FahCore_a1, and steps are being taken to correct this problem. Points and deadlines assigned to specific projects have been modified in some cases, and I'm sure there will be more adjustments as time goes on. Nevertheless, the goal is to make the points fair for any number of cores that you're prepared to donate to FAH.

Second, there's so much confusion about item 1 right now that it's too early to worry about item 2. Let's get FahCore_a2 out to everyone and see if that solves the whole problem before proposing any other changes. Then, based on how well it does what it's supposed to, the decision to change the benchmark machine from one quad to another quad can be considered.
Mitsimonsta
Posts: 30
Joined: Tue Feb 12, 2008 1:53 am

Re: p2665 points/deadline?

Post by Mitsimonsta »

bruce wrote:1) Does any quad machine represent any dual machine? (and also for other numbers of cores)
Of course it does not - and even more so when the underlying architecture is totally different. You virtually cannot buy a platform like the benchmark machine anymore... there aren't many dual-socket dual-core boxes sold when the biggest push is for virtualisation. You can get 8 cores on the same hardware for about 10% more. Ask any Dell sales rep, ask any HP reseller. They will tell you the same thing: 4 cores per socket is the entry level now, even if 2 cores per socket is still available.
bruce wrote:2) Does the benchmark machine represent common quad machines?
It's not a quad, Bruce, so it cannot be representative. It's a dual-socket, dual-core box. While it does have 4 cores, it is still dual-core. The processor sockets communicate via the MCH, which bears no comparison to a 'quad'-equipped machine, which will be natively faster for inter-core communication. It therefore cannot be representative of an average quad machine, defined as 4 cores per socket.
bruce wrote:Ideally, a duo should take twice as long as a quad and earn half of the PPD.
That is incorrect, Bruce. There is a law of diminishing returns when adding cores; you will never get 100% of the extra performance that the extra core(s) should have added. This has been known since the very start of the PentiumD and A64 X2 days. A dual should take slightly less than twice the time of a quad, and receive slightly more than half the PPD of a quad.

With all that said, there may be some efficiencies from running 4 threads on 4 cores instead of running 2 threads per core. Less swapping of threads and cache, for one. It may even out in the end.
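The diminishing-returns point can be made concrete with Amdahl's law; the parallel fraction below is an assumed figure for illustration, not a measurement of any FahCore.

Code: Select all

/* Amdahl's law: speedup(n) = 1 / ((1 - p) + p/n), with p the parallel
   fraction of the work. p = 0.95 is an assumption, not a measurement. */
#include <stdio.h>

static double speedup(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main(void) {
    const double p = 0.95;
    printf("2 cores: %.2fx\n", speedup(p, 2));   /* ~1.90x, not 2.00x */
    printf("4 cores: %.2fx\n", speedup(p, 4));   /* ~3.48x, not 4.00x */
    return 0;   /* so a dual takes a bit less than twice a quad's time */
}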
mikeb12
Posts: 28
Joined: Tue Feb 12, 2008 11:51 am
Location: South Carolina USA

Re: p2665 points/deadline?

Post by mikeb12 »

Since the points were changed to 1920 on p2665, it looks to be performing on par with the 30xx's... I'm fine with the points now.

3 quads (X4, X5, and X6), all running dual SMP clients w/ AC:
Project : 2665
Core : SMP Gromacs
Frames : 100
Credit : 1920


-- X4-A-Vista-Q6600-3.4 ghz --

Min. Time / Frame : 15mn 17s - 1809.03 ppd
Avg. Time / Frame : 15mn 35s - 1774.20 ppd
Cur. Time / Frame : 16mn 37s - 1663.87 ppd
R3F. Time / Frame : 16mn 14s - 1703.16 ppd
Eff. Time / Frame : 16mn 01s - 1726.20 ppd


-- X4-B-Vista-Q6600-3.4ghz --

Min. Time / Frame : 16mn 12s - 1706.67 ppd
Avg. Time / Frame : 16mn 54s - 1635.98 ppd
Cur. Time / Frame : 17mn 40s - 1564.98 ppd
R3F. Time / Frame : 16mn 53s - 1637.59 ppd
Eff. Time / Frame : 17mn 49s - 1551.81 ppd


-- X5-B-XP-Q6600 3.4ghz --

Min. Time / Frame : 15mn 53s - 1740.69 ppd
Avg. Time / Frame : 16mn 52s - 1639.21 ppd
Cur. Time / Frame : 16mn 29s - 1677.33 ppd
R3F. Time / Frame : 16mn 29s - 1677.33 ppd
Eff. Time / Frame : 16mn 57s - 1631.15 ppd


-- X5-A-XP-Q6600-3.4ghz --

Min. Time / Frame : 15mn 53s - 1740.69 ppd
Avg. Time / Frame : 16mn 32s - 1672.26 ppd
Cur. Time / Frame : 16mn 28s - 1679.03 ppd
R3F. Time / Frame : 16mn 29s - 1677.33 ppd
Eff. Time / Frame : 17mn 00s - 1626.35 ppd


-- X6-B-XP-Q6600-3.4ghz --

Min. Time / Frame : 15mn 34s - 1776.10 ppd
Avg. Time / Frame : 15mn 38s - 1768.53 ppd
Cur. Time / Frame : 15mn 38s - 1768.53 ppd
R3F. Time / Frame : 15mn 38s - 1768.53 ppd
Eff. Time / Frame : 15mn 42s - 1761.02 ppd


-- X6-A-XP-Q6600-3.4ghz --

Min. Time / Frame : 15mn 38s - 1768.53 ppd
Avg. Time / Frame : 16mn 09s - 1711.95 ppd
Cur. Time / Frame : 16mn 14s - 1703.16 ppd
R3F. Time / Frame : 16mn 16s - 1699.67 ppd
Eff. Time / Frame : 16mn 35s - 1667.22 ppd
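For reference, every PPD figure above follows directly from the frame time; a minimal sketch of the calculation, checked against the first X4-A entry:

Code: Select all

/* PPD from frame time: PPD = credit * 86400 / (frames * sec_per_frame).
   Checked against the first X4-A entry above. */
#include <stdio.h>

int main(void) {
    const double credit = 1920.0, frames = 100.0;
    const double sec_per_frame = 15.0 * 60.0 + 17.0;   /* 15mn 17s */
    double ppd = credit * 86400.0 / (frames * sec_per_frame);
    printf("PPD = %.2f\n", ppd);   /* 1809.03, as in the table */
    return 0;
}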
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: p2665 points/deadline?

Post by bruce »

Mitsimonsta wrote:It's not a quad Bruce, so therefore it cannot be representative. It's a dual socket, dual core. While it does have 4 cores, it is still dual-core. The processor sockets communicate via the MCH and has zero comparability to a 'quad' processor equipped machine which will be natively faster for inter-core communication. It therefore cannot be representative of an average quad machine, defined as 4 cores per socket.
I see you like to argue.

You probably should note that the Q6600 is not a quad either, then, because it consists of two dual-core dies in the same package. Communication between the two halves goes through the FSB.

What do you call a machine with four CPUs in individual sockets?
cancersux
Posts: 5
Joined: Wed Mar 05, 2008 2:45 pm

Re: p2665 points/deadline?

Post by cancersux »

So there aren't going to be any changes with this 1275-point work unit? It's a little frustrating to see my point production get halved on all my SMP machines. :x I know points vary with the work units, but 50% of the average is a little excessive with this one.

Also, I noticed that the deadlines for these work units are 6 days instead of the normal 4. So you KNOW it's going to take more time than other SMP work units, but you award LESS points for it???