Bigadv points change

Moderators: Site Moderators, FAHC Science Team

GreyWhiskers
Posts: 660
Joined: Mon Oct 25, 2010 5:57 am
Hardware configuration: a) Main unit
Sandybridge in HAF922 w/200 mm side fan
--i7 2600K@4.2 GHz
--ASUS P8P67 DeluxeB3
--4GB ADATA 1600 RAM
--750W Corsair PS
--2Seagate Hyb 750&500 GB--WD Caviar Black 1TB
--EVGA 660GTX-Ti FTW - Signature 2 GPU@ 1241 Boost
--MSI GTX560Ti @900MHz
--Win7Home64; FAH V7.3.2; 327.23 drivers

b) 2004 HP a475c desktop, 1 core Pent 4 HT@3.2 GHz; Mem 2GB;HDD 160 GB;Zotac GT430PCI@900 MHz
WinXP SP3-32 FAH v7.3.6 301.42 drivers - GPU slot only

c) 2005 Toshiba M45-S551 laptop w/2 GB mem, 160GB HDD;Pent M 740 CPU @ 1.73 GHz
WinXP SP3-32 FAH v7.3.6 [Receiving Core A4 work units]
d) 2011 lappy-15.6"-1920x1080;i7-2860QM,2.5;IC Diamond Thermal Compound;GTX 560M 1,536MB u/c@700;16GB-1333MHz RAM;HDD:500GBHyb w/ 4GB SSD;Win7HomePrem64;320.18 drivers FAH 7.4.2ß
Location: Saratoga, California USA

Re: Bigadv points change

Post by GreyWhiskers »

bruce wrote:The Demand part is directly related to keeping the servers balanced.

If the active science projects require 10000 trajectories for bigadv and 80000 trajectories for smp and 50000 trajectories for GPU and 60000 trajectories for uniprocessors, servers are configured based on those numbers. Now if 50000 donors want to run bigadv, there will be 40000 clients who can't get their choice of assignments. Demand exceeds Supply in that category. Having them continually banging on the servers that have run out of bigadv projects does not contribute anything to science and it certainly contributes to the frustration level of a lot of donors.

(I invented all those numbers -- they probably are totally unrealistic, but that doesn't matter -- there's still a fixed number of trajectories on each server which are either checked out to a client who is processing it or waiting for someone to request that assignment. Those fixed numbers do change when new projects are started or when projects end. The number of donors seeking work in each classification changes, too, for a number of obvious reasons but there has to be a reasonable balance between the two.)
I've been thinking about how the work servers are fed, and thought I'd crunch some real numbers from the logs. See the graph I created from almost a month's worth of every-35-minute data.

Format of the graph: the blue line is the WUs Available in each interval, plotted against the left-hand scale. The red line is the WUs Received in each reporting interval, plotted against the right-hand scale.

I don't know what the real back-end processing is, but there appears to be an almost steady-state replenishment of the WUs available for the bigadv and bigbeta folders (the logs don't break out how many are bigadv and how many are bigbeta). I don't think this real-time replenishment comes from new work being created by the FAH scientists; it appears to represent creation of the next-generation WUs along a trajectory, based on just-returned WUs.

There has been considerable discussion in the forum about running out of work for the bigadv and/or bigbeta clients. It would be helpful (not the first time I've made this suggestion) if the project leads could share the design of the projects. For instance, how many more generations are anticipated along each trajectory's timeline? Is the back-end processing able to keep up with all of the newly folded WUs? Are there any real serialization issues, or are there enough projects in the hopper to tolerate a large volume of two-day returns? Are some of the problems in the 2684/6900/etc. projects reaching the end of their simulation period, almost ready for final back-end crunching to generate scientific conclusions and papers?

[Graph: WUs Available (left scale) vs. WUs Received (right scale) per 35-minute interval, roughly one month of data]
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands

Re: point system is getting ridiculous...

Post by MtM »

noorman wrote:Very true; in the Team I was part of for nearly 6 years (and in which I started Folding) we always pointed that out; the actual science is measured in finished WUs, not in the credits awarded to them!
We lost that as a 'flatline' with GPU WUs, though.
SKeptical_Thinker
Posts: 76
Joined: Tue Apr 29, 2008 11:02 pm
Hardware configuration: XP-32 Pro SP-3
Antec NSK-2480 with two Thermaltake 120mm Smart Fans
Gigabyte ga-ma78gm-s2h 780G IGP
BE-2350 with 10.5 x multiplier, 1.250V in BIOS, clock at 272 (2.856GHz)
EVGA 8800 GS
Ninja Mini CPU HS
GeIL 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 800
Seagate 500GB SATA hard drive
ASUS 18X DVD±R DVD Burner PATA Model DRW-1814BL

Re: Bigadv points change

Post by SKeptical_Thinker »

GreyWhiskers wrote:
[graph: WUs Available vs. WUs Received]
That graph seems to blow a big hole in the demand outstripping supply argument.
Grandpa_01
Posts: 1122
Joined: Wed Mar 04, 2009 7:36 am
Hardware configuration: 3 - Supermicro H8QGi-F AMD MC 6174=144 cores 2.5Ghz, 96GB G.Skill DDR3 1333Mhz Ubuntu 10.10
2 - Asus P6X58D-E i7 980X 4.4Ghz 6GB DDR3 2000 A-Data 64GB SSD Ubuntu 10.10
1 - Asus Rampage Gene III 17 970 4.3Ghz DDR3 2000 2-500GB Segate 7200.11 0-Raid Ubuntu 10.10
1 - Asus G73JH Laptop i7 740QM 1.86Ghz ATI 5870M

Re: Bigadv points change

Post by Grandpa_01 »

SKeptical_Thinker wrote:
GreyWhiskers wrote:
[graph: WUs Available vs. WUs Received]
That graph seems to blow a big hole in the demand outstripping supply argument.
I think you and a lot of people misunderstood what Kasson said; I cannot figure out where anybody came up with that theory. Kasson does not say anywhere in his statement that there is a shortage of bigadv WUs. What I get from his statement is that too many people are choosing to do bigadv and not enough are choosing to do other work, which is true. People are using hacks to chase the carrot, choosing to run WUs that were never intended to run on their hardware, so Stanford is attempting to make bigadv less attractive. In my opinion they are failing at this too; the only way they are going to stop it is to shorten the deadlines significantly. People are still using the hack, maybe even more so now than before, because the biggest carrot was just made even bigger in the eyes of those using it. The ones using the hack do not care about the projects, only about the points, so why would they stop? (They will not, until the incentive is not there.) :wink: I think Stanford may have made a mistake here: they may be coming closer to balancing the point system, but they have not addressed the issue of the point system / WUs / projects being manipulated by points chasers.
bigadv points changes

Post by kasson » Fri Jul 01, 2011 5:11 pm
After much discussion, we are adjusting the points bonus for bigadv. Bigadv work units have been given a 50% base points bonus over standard SMP; the rationale for this was to compensate for the increased system load, increased memory requirements, and increased upload/download bandwidth requirements. As judged from the high demand for bigadv work units, this has been very much a success, perhaps a little too much so. We would like to continue to offer a bonus for bigadv to offset the above factors, but we don't want demand for bigadv to overwhelm the rest of the project or imbalance the points system.

We are therefore dropping the bigadv base points bonus from 50% to 20%, effective for all work units issued from this time onwards.

We very much appreciate the donors who have volunteered to run bigadv work units; these projects add substantially to our scientific capabilities. We do important science with all classes of work units, however, and we want the points system to reflect that. Based on extensive feedback, we are considering renormalizing other parts of the system but have not finalized decisions in that regard.

Thanks again for folding!
2 - SM H8QGi-F AMD 6xxx=112 cores @ 3.2 & 3.9Ghz
5 - SM X9QRI-f+ Intel 4650 = 320 cores @ 3.15Ghz
2 - I7 980X 4.4Ghz 2-GTX680
1 - 2700k 4.4Ghz GTX680
Total = 464 cores folding
Haitch
Posts: 34
Joined: Tue Dec 04, 2007 4:34 pm

Re: point system is getting ridiculous...

Post by Haitch »

This started off as a reply over on the beta forum, but as Dr Pande has requested specific suggestions here, I'm consolidating assorted posts and reposting it here.

Rather than having a one-size-fits-all exponential curve over the full range of TPF, how about having a cut-off point, then using a linear equation for each reduction in TPF beyond it?

Add an additional variable, say C, to the WU, where C is the TPF in minutes at which the bonus transitions from exponential to linear.

Call Pc the value of the bonus if the WU is completed with a TPF of C minutes.

For TPF > C, the existing exponential QRB remains in effect (preferably with the non-renormalized values).

For TPF < C, the points for the WU are:

Points = Pc + ((C - n) * (Pc / C))

where n is the TPF in minutes.

This gives an exponential bonus for "normal" SMP systems, then an increasing but linear bonus for the "extreme" SMP systems.

Assume a bigadv WU gets 200,000 points with a TPF of 10 minutes, so C = 10 and Pc = 200,000.

If it is done in 9 minutes, it gets 220,000
8 minutes: 240,000
...
2 minutes: 360,000
1 minute: 380,000
0 minutes: 400,000

Going from 2 minutes to 1 minute yields about 6% additional points for the WU, rather than ~40% more. Yes, it will get slightly more than 2x PPD, but it is producing 2x WUs per day.

It's a linear progression for TPF < C minutes. The maximum points per WU is capped at 2 x Pc. The trend toward infinite points per WU as TPF trends to zero is removed, and the incentive for improving TPF remains, without the exponential effect: double your efficiency and you get around 2.1x the PPD and around 1.06x the points per WU.

A hypothetical machine that can do a 6903 with a 0.001-minute TPF and C = 12 would get 1,328,782 points, rather than 72,783,395.

Spreadsheet and graph at: http://goo.gl/eWKkR
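
To make the proposal concrete, here is a minimal sketch in Python; the linear formula and the worked numbers are from the post above, the function name is mine, and the exponential region is left as a placeholder since its constants live in the existing QRB.

Code: Select all

def linear_region_points(tpf_minutes, c, pc):
    """Points for a WU finished with TPF below the transition point C.

    tpf_minutes -- time per frame, in minutes (must be below c)
    c  -- transition TPF where the bonus switches from exponential to linear
    pc -- points awarded for completing the WU at exactly TPF = c
    """
    # Above C the existing exponential QRB would apply instead.
    assert tpf_minutes < c, "for TPF >= C use the existing exponential QRB"
    # Each minute of TPF below C adds Pc/C points, so the total
    # approaches a cap of 2 * Pc as TPF approaches zero.
    return pc + (c - tpf_minutes) * (pc / c)

# Worked example from the post: C = 10 minutes, Pc = 200,000 points
for n in (9, 8, 2, 1, 0):
    print(f"TPF {n} min -> {linear_region_points(n, 10, 200_000):,.0f} points")
# TPF 9 -> 220,000 ... TPF 2 -> 360,000, TPF 1 -> 380,000, TPF 0 -> 400,000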


Haitch.
bess43
Posts: 2
Joined: Thu Jun 30, 2011 6:46 am

Re: point system is getting ridiculous...

Post by bess43 »

Having read through this and other threads, I think there is something missing in the whole analysis of this points issue. The way things are presently set up, it's just one huge project, with crunchers belonging to teams and a degree of competition among teams. Into the fray come some huge work units allowing a lot of points for some really high-powered systems compared to the average. So go take a look at the teams out there, and the high rollers in each team, and what do you see? The heavy hitters running the large-point units. Now those folks are not only ahead of the others in points, but waaaaaay ahead. So this points thing is basically forcing crunchers to run whatever they can in order to accumulate the highest number of points. Is this the way the sports world works, or boxing, or car racing? Nope; they brought classes into play to keep the high-end stuff from overrunning the lesser-powered stuff.

In spite of folks saying it's the science that counts, and others haranguing about the points thing, this whole basic issue is going to continue as long as everything is lumped into one big puddle. If PG wants people running some of the smaller-point units because of the benefits they give for refining the science of their work, then the whole system needs to be modified to take all these factors into account. The basic fact is that one size just doesn't fit all, no matter how you look at it.

If the project were split into three groupings, there would be some incentive for folks to run stuff other than the high-point units. As for the science, it would still be accomplished. This approach works for sports teams and auto racing, to mention a couple of readily apparent examples. Human nature being what it is, you can't just put out a huge assortment of work units with varying points and expect folks to jump at the chance to run the small stuff when the whole team-competition thing is totally overwhelmed by the high-end technology systems. PG could have the Classic units, then the GPU and mid-level SMP stuff, then the bigadv units.

Undoubtedly there will be a lot of poo-pooing of this comment, which I fully expect. However, the method PG is now using, manipulating the point system to modify how volunteers crunch units, isn't going to reach the satisfaction level PG is chasing for the various volunteer groupings. You rob from Peter to pay Paul, and Peter is going to complain, and that's going to continue basically no matter what you do with this one-size-fits-all grouping.
SKeptical_Thinker
Posts: 76
Joined: Tue Apr 29, 2008 11:02 pm
Hardware configuration: XP-32 Pro SP-3
Antec NSK-2480 with two Thermaltake 120mm Smart Fans
Gigabyte ga-ma78gm-s2h 780G IGP
BE-2350 with 10.5 x multiplier, 1.250V in BIOS, clock at 272 (2.856GHz)
EVGA 8800 GS
Ninja Mini CPU HS
GeIL 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 800
Seagate 500GB SATA hard drive
ASUS 18X DVD±R DVD Burner PATA Model DRW-1814BL

Re: Bigadv points change

Post by SKeptical_Thinker »

Grandpa_01 wrote:
SKeptical_Thinker wrote:
GreyWhiskers wrote:
[graph: WUs Available vs. WUs Received]
That graph seems to blow a big hole in the demand outstripping supply argument.
I think you and a lot of people misunderstood what Kasson said; I cannot figure out where anybody came up with that theory. Kasson does not say anywhere in his statement that there is a shortage of bigadv WUs. What I get from his statement is that too many people are choosing to do bigadv and not enough are choosing to do other work, which is true.
This is why I made the argument that this is *not* a supply-versus-demand issue. There is plenty of supply, as the graph indicates. If PG feels the need to renormalize points to drive donors to other projects, they should consider increasing the points to be made on those other projects. My bigadv rig (12 real Xeon cores, 24 with Hyper-Threading) is now getting ~85,000 PPD. If it were fed a steady diet of regular SMP projects, it would make ~55,000. If I cared about points, I would care about that. I don't. I fold what they send me. Send me regular SMP projects and I will happily fold them. If there is too much "demand" for bigadv projects, send SMP projects instead. They will get returned real soon now.

Point inflation is a fact of life and will only get worse as more capable hardware comes online. Trying to preserve the value of points earned on a P-III should not be the goal here. Those points had value at that time. These days a person would have to have a particular need to buy a new system that isn't SMP-capable.

IMHO, since point allocation is supposed to be science based, there should be a QRB on everything.
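
For context, the standard quick-return bonus works roughly like this; a minimal sketch of the published bonus formula, where the k value, base points, and deadline below are illustrative stand-ins rather than figures from any specific project.

Code: Select all

import math

def qrb_points(base_points, k, deadline_days, elapsed_days):
    # Quick-return bonus: credit grows with the square root of
    # k * (deadline length / time taken), and never drops below base.
    return base_points * max(1.0, math.sqrt(k * deadline_days / elapsed_days))

# Illustrative numbers only: 1,000 base points, k = 26, 6-day deadline.
print(round(qrb_points(1000, 26, 6, 1)))   # returned in 1 day  -> 12490
print(round(qrb_points(1000, 26, 6, 3)))   # returned in 3 days -> 7211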
PantherX
Site Moderator
Posts: 6986
Joined: Wed Dec 23, 2009 9:33 am
Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB

Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
Location: Land Of The Long White Cloud

Re: Bigadv points change

Post by PantherX »

SKeptical_Thinker wrote:...Send me regular SMP projects and I will happily fold them. If there is too much "demand" for bigadv projects, send SMP projects instead. They will get returned real soon now...
The issue that I see is that 4/6/8-core systems are folding bigadv while 24/32/48-core systems are either sitting idle or crunching normal SMP WUs. While you fold any WU sent (which is great), the bigadv WUs aren't reaching their target audience, and that makes some donors upset. The fallback to normal SMP WUs when bigadv is lacking might have been fixed by now, but the "shortage" is caused by the slower machines. Hopefully a system is being worked on which will favor bigadv reaching its target audience.
SKeptical_Thinker wrote:...IMHO, since point allocation is supposed to be science based, there should be a QRB on everything.
Correct; hence a QRB is already in place on some Classic projects, and it will eventually come to GPUs, when it is ready 8-)
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time

Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
orion
Posts: 135
Joined: Sun Dec 02, 2007 12:45 pm
Hardware configuration: 4p/4 MC ES @ 3.0GHz/32GB
4p/4x6128 @ 2.47GHz/32GB
2p/2 IL ES @ 2.7GHz/16GB
1p/8150/8GB
1p/1090T/4GB
Location: neither here nor there

Re: Bigadv points change

Post by orion »

PantherX wrote:The issue that I see is that 4/6/8-core systems are folding bigadv while 24/32/48-core systems are either sitting idle or crunching normal SMP WUs. While you fold any WU sent (which is great), the bigadv WUs aren't reaching their target audience, and that makes some donors upset. The fallback to normal SMP WUs when bigadv is lacking might have been fixed by now, but the "shortage" is caused by the slower machines. Hopefully a system is being worked on which will favor bigadv reaching its target audience.
If we want to see these WUs get to their intended target audience, PG needs to do three things:
1. shorten the deadlines
2. move bigadv/big-bigadv to v7 only
3. fix v7 to defeat any workarounds.

As long as the deadlines allow unintended systems to get QRB points, people will set those systems up to get them, no matter what the argument against doing so is.
iustus quia...
GreyWhiskers
Posts: 660
Joined: Mon Oct 25, 2010 5:57 am
Hardware configuration: a) Main unit
Sandybridge in HAF922 w/200 mm side fan
--i7 2600K@4.2 GHz
--ASUS P8P67 DeluxeB3
--4GB ADATA 1600 RAM
--750W Corsair PS
--2Seagate Hyb 750&500 GB--WD Caviar Black 1TB
--EVGA 660GTX-Ti FTW - Signature 2 GPU@ 1241 Boost
--MSI GTX560Ti @900MHz
--Win7Home64; FAH V7.3.2; 327.23 drivers

b) 2004 HP a475c desktop, 1 core Pent 4 HT@3.2 GHz; Mem 2GB;HDD 160 GB;Zotac GT430PCI@900 MHz
WinXP SP3-32 FAH v7.3.6 301.42 drivers - GPU slot only

c) 2005 Toshiba M45-S551 laptop w/2 GB mem, 160GB HDD;Pent M 740 CPU @ 1.73 GHz
WinXP SP3-32 FAH v7.3.6 [Receiving Core A4 work units]
d) 2011 lappy-15.6"-1920x1080;i7-2860QM,2.5;IC Diamond Thermal Compound;GTX 560M 1,536MB u/c@700;16GB-1333MHz RAM;HDD:500GBHyb w/ 4GB SSD;Win7HomePrem64;320.18 drivers FAH 7.4.2ß
Location: Saratoga, California USA

Re: Bigadv points change

Post by GreyWhiskers »

Good observations in the last day or so.

One preface to the discussion below: the definition of WU Avail at the bottom of the Server Stats pages is clear, but the one for WU RCV is a bit ambiguous. In any case, I put the Server Stats definitions on the charts. I presume, and would appreciate a confirmation or correction, that WU RCV represents the number of WUs returned by clients per reporting interval (35 minutes) after folding. It is conceivable that it instead counts new WUs received from the project back end to keep the hopper filled, but that's not the interpretation I've taken below.

I wanted to update the chart I posted yesterday; it seems that the psummaryC.html page identifies only p6900 with 130.237.232.141. Taking a closer look at the numbers: the Server Stats detail page updates every 35 minutes, so there are about 41 observations per day. Adding the moving-average line, we see about 10 WUs returned per 35 minutes, or about 410 per 24-hour day. The supply is only running about 700, so this is pretty fragile to back-end hiccups. It wouldn't take much to run that well dry.

The wild card is not knowing on average how long it takes to complete a p6900, in this case. Is it averaging 24 hours? 48 hours? 72 hours?

I also included a chart for 130.237.232.237, the server that hosts p6901 and the "big bigadv" projects for 12-core systems, p6903/p6904.

The stats there show an average of about 2 WUs received per 35 minutes, or ~82 per day. I had guessed that there were a couple of hundred of these monster systems, each eating a work unit like p6903 in somewhere between 18 and 72 hours (per Grandpa's table in the p6903 thread). I still don't know the average time the big-iron folders take on one of these WUs, but one can see that WU Avail is almost 1200 against ~82 WUs received per day. That's about a 14-day supply.
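
To make the arithmetic explicit, a quick back-of-the-envelope script; the inputs are the figures quoted above, and reading WU RCV as client returns is still the presumption stated earlier.

Code: Select all

SAMPLES_PER_DAY = 24 * 60 / 35                  # ~41 Server Stats updates/day

# 130.237.232.141 (p6900): ~10 WUs returned per 35-minute interval
p6900_returns_per_day = 10 * SAMPLES_PER_DAY    # ~411 WUs/day
print(700 / p6900_returns_per_day)              # ~1.7 days of supply

# 130.237.232.237 (p6901/p6903/p6904): ~2 WUs per interval
big_returns_per_day = 2 * SAMPLES_PER_DAY       # ~82 WUs/day
print(1200 / big_returns_per_day)               # ~14.6 days of supply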


[Chart: 130.237.232.141 (p6900) - WUs Available vs. WUs Received per 35-minute interval]

[Chart: 130.237.232.237 (p6901/p6903/p6904) - WUs Available vs. WUs Received per 35-minute interval]
Grandpa_01
Posts: 1122
Joined: Wed Mar 04, 2009 7:36 am
Hardware configuration: 3 - Supermicro H8QGi-F AMD MC 6174=144 cores 2.5Ghz, 96GB G.Skill DDR3 1333Mhz Ubuntu 10.10
2 - Asus P6X58D-E i7 980X 4.4Ghz 6GB DDR3 2000 A-Data 64GB SSD Ubuntu 10.10
1 - Asus Rampage Gene III 17 970 4.3Ghz DDR3 2000 2-500GB Segate 7200.11 0-Raid Ubuntu 10.10
1 - Asus G73JH Laptop i7 740QM 1.86Ghz ATI 5870M

Re: Bigadv points change

Post by Grandpa_01 »

I think you may be misreading the server stats, but I am not sure. I just fished for two big-bigadv 6903s, which come from server 130.237.232.237. If there were 1,200 WUs available I should have received either a 6901, 6903, or 6904, but I did not; both machines got assigned to 171.67.108.22. The first machine made 6 attempts to get a WU and received the message "Attempt to get work failed, and no other work to do" before it was assigned a 2684; the second only had to make 2 attempts and got a 2689. I just looked at 130.237.232.237 and it says 1177 WUs Avail, 1177 WUs to Go, and 1177 WUs Wait, which I am pretty sure means the outstanding WUs have not been returned yet and none are available. I also looked at 171.67.108.22 and it says 105 WUs Avail, 103 WUs to Go, and 2390 WUs Wait, and I had to wait to get WUs from this server. I do not know if my assumption is right or wrong; I only know what happened in my case.
2 - SM H8QGi-F AMD 6xxx=112 cores @ 3.2 & 3.9Ghz
5 - SM X9QRI-f+ Intel 4650 = 320 cores @ 3.15Ghz
2 - I7 980X 4.4Ghz 2-GTX680
1 - 2700k 4.4Ghz GTX680
Total = 464 cores folding
GreyWhiskers
Posts: 660
Joined: Mon Oct 25, 2010 5:57 am
Hardware configuration: a) Main unit
Sandybridge in HAF922 w/200 mm side fan
--i7 2600K@4.2 GHz
--ASUS P8P67 DeluxeB3
--4GB ADATA 1600 RAM
--750W Corsair PS
--2Seagate Hyb 750&500 GB--WD Caviar Black 1TB
--EVGA 660GTX-Ti FTW - Signature 2 GPU@ 1241 Boost
--MSI GTX560Ti @900MHz
--Win7Home64; FAH V7.3.2; 327.23 drivers

b) 2004 HP a475c desktop, 1 core Pent 4 HT@3.2 GHz; Mem 2GB;HDD 160 GB;Zotac GT430PCI@900 MHz
WinXP SP3-32 FAH v7.3.6 301.42 drivers - GPU slot only

c) 2005 Toshiba M45-S551 laptop w/2 GB mem, 160GB HDD;Pent M 740 CPU @ 1.73 GHz
WinXP SP3-32 FAH v7.3.6 [Receiving Core A4 work units]
d) 2011 lappy-15.6"-1920x1080;i7-2860QM,2.5;IC Diamond Thermal Compound;GTX 560M 1,536MB u/c@700;16GB-1333MHz RAM;HDD:500GBHyb w/ 4GB SSD;Win7HomePrem64;320.18 drivers FAH 7.4.2ß
Location: Saratoga, California USA

Re: Bigadv points change

Post by GreyWhiskers »

I'm sure the statistics would show there were some periods when the server was quite busy. I don't know how big the uploads and downloads for the 6903s are, but if the downloads are, say, 50 MB and the uploads, say, 200 MB, there could be quite a bottleneck. I see there is a peak of 9 WUs returned in one 35-minute period.
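
As a rough sanity check on that bottleneck worry, a one-liner using the sizes guessed above (they are guesses, not measurements):

Code: Select all

# Peak observed: 9 WUs returned in one 35-minute window,
# assuming ~200 MB per upload (the post's guess, not a measured size).
total_mb = 9 * 200                      # ~1800 MB arriving at the server
window_s = 35 * 60                      # 2100 seconds
print(total_mb * 8 / window_s)          # ~6.9 Mbit/s sustained inbound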
Grandpa_01
Posts: 1122
Joined: Wed Mar 04, 2009 7:36 am
Hardware configuration: 3 - Supermicro H8QGi-F AMD MC 6174=144 cores 2.5Ghz, 96GB G.Skill DDR3 1333Mhz Ubuntu 10.10
2 - Asus P6X58D-E i7 980X 4.4Ghz 6GB DDR3 2000 A-Data 64GB SSD Ubuntu 10.10
1 - Asus Rampage Gene III 17 970 4.3Ghz DDR3 2000 2-500GB Segate 7200.11 0-Raid Ubuntu 10.10
1 - Asus G73JH Laptop i7 740QM 1.86Ghz ATI 5870M

Re: Bigadv points change

Post by Grandpa_01 »

You are pretty close on the upload/download sizes, though I have uploaded and downloaded simultaneously before. It takes me 9 minutes to upload to the server and about 1 1/2 minutes to download. This server is in Sweden, though.
2 - SM H8QGi-F AMD 6xxx=112 cores @ 3.2 & 3.9Ghz
5 - SM X9QRI-f+ Intel 4650 = 320 cores @ 3.15Ghz
2 - I7 980X 4.4Ghz 2-GTX680
1 - 2700k 4.4Ghz GTX680
Total = 464 cores folding
Nathan_P
Posts: 1164
Joined: Wed Apr 01, 2009 9:22 pm
Hardware configuration: Asus Z8NA D6C, 2 x5670@3.2 Ghz, , 12gb Ram, GTX 980ti, AX650 PSU, win 10 (daily use)

Asus Z87 WS, Xeon E3-1230L v3, 8gb ram, KFA GTX 1080, EVGA 750ti , AX760 PSU, Mint 18.2 OS

Not currently folding
Asus Z9PE- D8 WS, 2 E5-2665@2.3 Ghz, 16Gb 1.35v Ram, Ubuntu (Fold only)
Asus Z9PA, 2 Ivy 12 core, 16gb Ram, H folding appliance (fold only)
Location: Jersey, Channel islands

Re: Bigadv points change

Post by Nathan_P »

Here is something else to throw into the ring: I run a 3.2 GHz 12c/24t twin-CPU machine that does a "normal" bigadv per day. The points change has knocked 20% off its PPD potential, to the point where if I get a string of 2684s it is more beneficial points-wise to run straight SMP.

I know that part of the change was to bring down the points curve and push the more marginal systems back onto standard SMP, but this machine is exactly what Stanford wants running -bigadv. Its slightly slower twin (2.53 GHz) is supposed to come online in the next couple of days, but I am now debating whether to let it run -bigadv at all, especially as not doing so would reduce the load on my net connection, and any 2684 makes the difference marginal at best.

Not exactly in the best interests of the project to allow the most capable hardware to earn nearly the same points on the smaller, easier WUs, is it?
phoenicis.
Posts: 9
Joined: Fri Feb 25, 2011 8:14 pm

Re: Bigadv points change

Post by phoenicis. »

bruce wrote:Though a client that can automatically reconfigure itself has not been proposed, I can conceive that it might be possible to write one. Suppose a machine with N cores (N presumed to be large) can't get bigadv and is reassigned to standard SMP. That works, but there's constant bitching. If it went one step further and automatically kicked off N uniprocessor WUs if, for some reason, it couldn't get SMP WUs, I can't imagine the uproar we would hear. We certainly need a points system that encourages big hardware to work on tough problems, but we also need acceptable solutions for what to do with those systems when the current tough problems have been solved and we're waiting for the scientists to propose new problems, generate projects, perform appropriate stability and validity testing on those projects, and release them to FAH. (This second case probably applies more when N = 2, 3, or 4 than when N = 32 or 48, but the concept is the same.)
I really quite like this.

As part of a package of measures aimed at improving planning and communication (to assist in guiding purchasing decisions), having the client communicate the Project's needs at short intervals would be so much better than guessing with points as the guide. This would probably need an opt-in flag/reward, and it clearly couldn't happen overnight.

In the interim, if the Project were to communicate guidance via this site, or even better by email to forum members, then short-term changes in priority could be swiftly understood and responded to, e.g. "For reason X we need a big push on regular SMP or uniprocessor work units next week/month." I, and I'm sure many others, would respond to this, points be damned. There'd be immense satisfaction in knowing that you're aiding the Project to the max, rather than just thinking that you are, especially if there was regular feedback on the impact of the push.

A problem would be that, if uniprocessor work units were required, I'd have to figure out how to swiftly set them up across a large number of processor cores :?

Similarly, perhaps there could be a system for shifting the hardware requirements based on supply shortages. This, combined with some limited deadline adjustments, would perhaps avoid the pitfalls you describe in reducing the deadlines too much. For example, when there is a plentiful supply of the new bigadv units, 12 cores is adequate, whereas when there's not, the threshold is raised. If all the rule-sets were made clear, donors might be able to approach their purchasing decisions in a more enlightened fashion.

Edit: Please disregard the last paragraph above. I thought core hacking was rare but after further reading it turns out that at least one forum has published a guide and their site admin appears to support the practice. Increasing the core threshold may only serve to make this approach even more widespread than it already is.
Nathan_P wrote:Here is something else to throw into the ring: I run a 3.2 GHz 12c/24t twin-CPU machine that does a "normal" bigadv per day. The points change has knocked 20% off its PPD potential, to the point where if I get a string of 2684s it is more beneficial points-wise to run straight SMP.

I know that part of the change was to bring down the points curve and push the more marginal systems back onto standard SMP, but this machine is exactly what Stanford wants running -bigadv. Its slightly slower twin (2.53 GHz) is supposed to come online in the next couple of days, but I am now debating whether to let it run -bigadv at all, especially as not doing so would reduce the load on my net connection, and any 2684 makes the difference marginal at best.

Not exactly in the best interests of the project to allow the most capable hardware to earn nearly the same points on the smaller, easier WUs, is it?
I saw this too when I switched a couple of machines to standard SMP. Some SMP projects pretty much match the PPD of a 2684, but there are also some that are not quite so close.
Last edited by phoenicis. on Mon Jul 11, 2011 12:25 pm, edited 1 time in total.