Lopsided points when folding -smp
Moderators: Site Moderators, FAHC Science Team
-
- Posts: 1122
- Joined: Wed Mar 04, 2009 7:36 am
- Hardware configuration: 3 - Supermicro H8QGi-F AMD MC 6174=144 cores 2.5Ghz, 96GB G.Skill DDR3 1333Mhz Ubuntu 10.10
2 - Asus P6X58D-E i7 980X 4.4Ghz 6GB DDR3 2000 A-Data 64GB SSD Ubuntu 10.10
1 - Asus Rampage Gene III i7 970 4.3Ghz DDR3 2000 2-500GB Seagate 7200.11 0-Raid Ubuntu 10.10
1 - Asus G73JH Laptop i7 740QM 1.86Ghz ATI 5870M
Lopsided points when folding -smp
I recently switched all of my 12-thread machines over to SMP from bigadv, mostly running 6903/6904 WUs, after learning there was a backlog of SMP WUs that needed to be done. Bear in mind that all 5 of these rigs average within 4,000 PPD of each other when folding bigadv WUs. I was quite surprised to see how big the variation is when it comes to SMP; it does not matter whether these rigs run a3 or a4 WUs. The low I have seen is around 30,000 PPD and the high is around 50,000 PPD, which is quite a spread. It does not appear to me that the benchmark machine is a very good way to set point values for WUs, at least not for higher-powered rigs; that is far too big a variation in point values. It does not seem right that these 5 virtually identical rigs should see such a big variation in pay for basically doing the same work day in and day out.
It sure is a good thing computers are just inanimate objects because if they were humans they would be screaming discrimination and going on strike.
2 - SM H8QGi-F AMD 6xxx=112 cores @ 3.2 & 3.9Ghz
5 - SM X9QRI-f+ Intel 4650 = 320 cores @ 3.15Ghz
2 - I7 980X 4.4Ghz 2-GTX680
1 - 2700k 4.4Ghz GTX680
Total = 464 cores folding
-
- Posts: 260
- Joined: Tue Dec 04, 2007 5:09 am
- Hardware configuration: GPU slots on home-built, purpose-built PCs.
- Location: Eagle River, Alaska
Re: Lopsided points when folding -smp
Grandpa, I've noticed nearly the same. Lately I've been folding standard SMP, as the -bigadv WU servers have dried up/are being cleaned up (temporarily?). Anyway, on my Lynnfield 8-thread processors, SMP production varies from 12K to 20+K PPD. Wow, I hadn't folded standard SMP in a while and had no idea there was such a divergence in points depending on the project series.
~ 12K for a 101xx? Not a complaint from me, just head scratching surprise.
Re: Lopsided points when folding -smp
I too see widely varying PPD on my C2Qs and i7s: from 7,000 to 12,000 PPD on a Q9450, and from around 30K PPD to 45K PPD (p7500) on the 2600K. It makes me wonder if the benchmark procedure is being followed on all the WUs or if there are some performance assumptions going on. It would be nice to hear from someone running an i5. If they see similar performance across all WUs, we can chalk the variations up to hardware differences.
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: Lopsided points when folding -smp
Don't forget that with the bonus points, any small variation in performance due to hardware differences is exaggerated. It's not like in the old days where +/-10% was the difference between 100 ppd and 120 ppd on Gromacs work units.
10% variation today nets a points difference in the 1000s.
If something is undervalued (very possible), commenting on it won't change it. As always, if you think you see a pattern, then document it. Hard numbers (specs and frame times) are much more persuasive.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
-
- Posts: 1122
- Joined: Wed Mar 04, 2009 7:36 am
- Hardware configuration: 3 - Supermicro H8QGi-F AMD MC 6174=144 cores 2.5Ghz, 96GB G.Skill DDR3 1333Mhz Ubuntu 10.10
2 - Asus P6X58D-E i7 980X 4.4Ghz 6GB DDR3 2000 A-Data 64GB SSD Ubuntu 10.10
1 - Asus Rampage Gene III i7 970 4.3Ghz DDR3 2000 2-500GB Seagate 7200.11 0-Raid Ubuntu 10.10
1 - Asus G73JH Laptop i7 740QM 1.86Ghz ATI 5870M
Re: Lopsided points when folding -smp
What I am pointing out here is that there is too big a variation in the benchmarking system; if you really want to see it, just use some higher-powered equipment and it stands out like a sore thumb. There is currently a backlog of SMP WUs, so I decided to switch to help clean them up. But why would the average bigadv folder switch to folding SMP when there is such a large variation in PPD? I believe this is a problem Stanford needs to look into and fix if possible; there should not be that much difference between one WU and the next. Hey, how about if you complete WU X in X amount of time, you get XXX points?
7im wrote:Don't forget that with the bonus points, any small variation in performance due to hardware differences is exaggerated. It's not like in the old days where +/-10% was the difference between 100 ppd and 120 ppd on Gromacs work units.
10% variation today nets a points difference in the 1000s.
If something is undervalued (very possible), commenting on it won't change it. As always, if you think you see a pattern, then document it. Hard numbers (specs and frame times) are much more persuasive.
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: Lopsided points when folding -smp
Actually, there can be (and has been) a lot of difference between one project and the next; FAH history has shown that repeatedly. Some projects can be cache intensive, while others are not. You may recall how the "feathered vesicles" work units processed more quickly on systems with a larger cache back in the Conroe days.
Look at the recent GPU projects. Some of the newer, larger work units run just fine on a GTX 480 but choke on a GTX 450, because a WU with more atoms overwhelms the smaller number of shaders available. We've had very large differences in PPD reported there as well. All very logical once the details were considered.
Tell me again how 1 WU is not that much different from the next.
Seriously, I understand your concern, but without more details, nothing will change.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
-
- Posts: 1024
- Joined: Sun Dec 02, 2007 12:43 pm
Re: Lopsided points when folding -smp
The benchmark machines do not get bonus points. If you want to complain about benchmarking, you need to recalculate the PPD so it does not include the bonus and start gathering data on the baseline points. Otherwise your complaints will fall on deaf ears. Even after you've done that, the answer will probably be that different hardware reacts to project variations in different ways.
Grandpa_01 wrote:What I am pointing out here is that there is too big a variation in the benchmarking system; if you really want to see it, just use some higher-powered equipment and it stands out like a sore thumb. There is currently a backlog of SMP WUs, so I decided to switch to help clean them up. But why would the average bigadv folder switch to folding SMP when there is such a large variation in PPD? I believe this is a problem Stanford needs to look into and fix if possible; there should not be that much difference between one WU and the next. Hey, how about if you complete WU X in X amount of time, you get XXX points?
7im wrote:Don't forget that with the bonus points, any small variation in performance due to hardware differences is exaggerated. It's not like in the old days where +/-10% was the difference between 100 ppd and 120 ppd on Gromacs work units.
10% variation today nets a points difference in the 1000s.
If something is undervalued (very possible), commenting on it won't change it. As always, if you think you see a pattern, then document it. Hard numbers (specs and frame times) are much more persuasive.
It sounds like your complaint may be more along the lines of "The QRB is unfair." If so, then you're going to have to gather entirely different data to support your claim. Once again, it's going to vary depending on your hardware, but the QRB is only interested in three things: (benchmarked) baseline points, how long it took you to finish the WU, and how long the preferred deadline is.
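To make those three inputs concrete, here is a minimal Python sketch, assuming the commonly cited QRB form credit = base x max(1, sqrt(k x deadline / elapsed)); the k value and the example WU below are purely illustrative, not taken from any real project.
Code: Select all
# Sketch of the Quick Return Bonus as commonly described:
# final credit = base credit * max(1, sqrt(k * deadline / elapsed)).
# k is set per project; 0.75 and the WU below are illustrative only.
import math

def qrb_credit(base_credit, elapsed_days, deadline_days, k=0.75):
    bonus = math.sqrt(k * deadline_days / elapsed_days)
    return base_credit * max(1.0, bonus)

base, elapsed, deadline = 1600, 8 / 24, 3.0   # hypothetical: 1600 base pts, 8 h run, 3-day deadline
credit = qrb_credit(base, elapsed, deadline)
print(f"{credit:,.0f} points, {credit / elapsed:,.0f} PPD")   # ~4,157 points, ~12,471 PPD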
Re: Lopsided points when folding -smp
Good point, codysluder.
The bigadv flag suggests to the Assignment Server that you'd prefer WUs with short deadlines. The AS also checks your hardware in an attempt to determine if it's capable of meeting those tighter deadlines. If both are true, the QRB awards you accordingly.
In some sense, WUs with longer deadlines (relative to the amount of computing to be done) are "easier" to fold. That degree of difficulty is an obscure project characteristic, since it's not directly related to any of the things that are explicitly measured. Traditionally, classic WU deadlines could be met by a 400MHz - 500MHz Pentium. Deadlines and points per WU vary, but the "difficulty" was essentially constant. Clearly it's impossible for that type of hardware to meet the SMP deadlines, even if the AS permitted it. (There's a dual-core Atom chip that could be configured for SMP, but at 2x1.80 GHz or 2x1.5 GHz, it would be severely challenged by most SMP deadlines.)
OK, back to the original question. Suppose an SMP project is running too slowly for the needs of science. If I were the project owner, I might choose to configure a new project based on the same protein. I could increase the number of cores required and expect it to be folded by more powerful hardware. That would speed up the WUs that are being folded, but it wouldn't do anything for the occasional WU that isn't returned. To deal with those, I would shorten the deadline. If my project has to wait for a long deadline to expire, it delays the project more than if it only has to wait for a shorter deadline to expire.
Looking at it from another perspective, I have now reconfigured the same computation with a shorter deadline, so I've increased the difficulty rating of the project. I have compensated for the increased difficulty by increasing the number of cores required, and I can expect my results sooner. Neither of these changes would lead to a change in baseline points (benchmarking). I expect both the deadline and the average elapsed time to change, so from the perspective of the researcher, the QRB didn't change. From the perspective of an individual donor, though, elapsed time did not change but the deadline did.
By the way, my example is fictitious. I do not know whether Pande Group researchers have actually made changes like I've described to their projects. What I do know is that donors with bigadv-capable hardware report lower PPD when they're assigned standard SMP projects. Similarly, SMP-capable hardware earns lower PPD when it's running classic WUs with longer deadlines. This whole post was written in an attempt to explain your lower PPD by looking at two facts: 1) the QRB formula, and 2) the fact that identical hardware finds it easier to complete a standard SMP WU before its deadline than to complete a bigadv WU before its deadline.
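To put a number on that donor-side effect, here is a hedged sketch using the same assumed QRB form with entirely hypothetical base points, k, deadlines, and run time: elapsed time unchanged, deadline shortened, bonus reduced.
Code: Select all
# Same assumed QRB form, comparing two preferred deadlines for an identical run.
# All numbers are hypothetical.
import math

def final_credit(base, elapsed_days, deadline_days, k=0.75):
    return base * max(1.0, math.sqrt(k * deadline_days / elapsed_days))

base, elapsed = 1600, 8 / 24                  # same WU, same hardware, finished in 8 hours
for deadline in (3.0, 2.0):                   # original vs. shortened preferred deadline
    credit = final_credit(base, elapsed, deadline)
    print(f"{deadline:.0f}-day deadline: {credit:,.0f} points, {credit / elapsed:,.0f} PPD")
# 3-day deadline: ~4,157 points; 2-day deadline: ~3,394 points for the same elapsed time.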
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 1122
- Joined: Wed Mar 04, 2009 7:36 am
- Hardware configuration: 3 - Supermicro H8QGi-F AMD MC 6174=144 cores 2.5Ghz, 96GB G.Skill DDR3 1333Mhz Ubuntu 10.10
2 - Asus P6X58D-E i7 980X 4.4Ghz 6GB DDR3 2000 A-Data 64GB SSD Ubuntu 10.10
1 - Asus Rampage Gene III i7 970 4.3Ghz DDR3 2000 2-500GB Seagate 7200.11 0-Raid Ubuntu 10.10
1 - Asus G73JH Laptop i7 740QM 1.86Ghz ATI 5870M
Re: Lopsided points when folding -smp
Hey guys, I am not complaining here; I do not care about the points myself. If I did, I would be folding bigadv right now, not SMP. I switched my rigs because there is a backlog of SMP and it is holding back progress and research; this is voluntary. I am just pointing out that in order to get the average bigadv folder to switch over and help out, there would probably need to be more consistency in the points an average SMP WU makes. I have found there basically is no consistency when you use an upper-range CPU to fold -smp. Hell, I had a rig folding a 7019 (I think) this morning that was making 63,000 PPD, while another rig was folding a different WU that was making 28,000; that is a 35,000 PPD difference. Do you mean to say that WU X is worth 35,000 PPD more than WU Y when they are both SMP WUs? Is this acceptable or desirable? All I am doing is pointing out what I perceive to be a problem with the benchmark system.
2 - SM H8QGi-F AMD 6xxx=112 cores @ 3.2 & 3.9Ghz
5 - SM X9QRI-f+ Intel 4650 = 320 cores @ 3.15Ghz
2 - I7 980X 4.4Ghz 2-GTX680
1 - 2700k 4.4Ghz GTX680
Total = 464 cores folding
Re: Lopsided points when folding -smp
Grandpa, benchmarking and this perceived points-inconsistency issue have been on the agenda of the Donor Advisory Board for a while. I know they were when I was on the board, so I hope they still are and that the board is making progress at either narrowing the points gap, although this is a very difficult problem, or at better explaining why such a gap exists.
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: Lopsided points when folding -smp
The perceived inconsistency in PPD when the donor's hardware varies significantly from the benchmark hardware has been the longest-standing criticism of the benchmarking system (while ignoring its many benefits over other projects' methods), ever since FAH deviated from the 1 WU = 1 point model a long time ago. However, none of the many changes suggested over those years would have resulted in a net improvement over the current system. Each suggestion to fix one part either breaks another part, or adds too much cost and/or too many man-hours to manage.
PPD variance with hardware variance remains one of the last hurdles that PG has not been able to solve yet. I suspect they have some long term solutions to resolve this, but short term we will need to continue to educate people through the forum, and through the additional Points FAQ that was written to help with this topic.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Re: Lopsided points when folding -smp
I know that, and I'm sure the FAH researchers appreciate your willingness to help out where needed.
Grandpa_01 wrote:Hey guys, I am not complaining here; I do not care about the points myself....
Stanford wants the FAH system to achieve the best overall balance of processing across all projects. This optimum drifts as projects come and go and as donors upgrade their hardware or as donors themselves come and go. As you've heard, right now it's slightly out of balance for SMP. Actual assignment decisions are made jointly by the Assignment Servers and by the settings you can modify on your client. For you, that information was all you needed to adjust your client. For many others, the points system is the primary incentive that might encourage them to adjust their client settings.
As Tobit suggests, points can be considered, too, but a lot depends on whether the imbalance is temporary or a long-term trend. Dynamically adjusting the points system based on a temporary change in the SMP/bigadv balance is not a good idea, so other factors need to be considered as well. I don't know whether they're looking at a temporary or a long-term imbalance. Certainly what's considered high-end hardware is migrating upward, and that's a continuing trend.
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 1122
- Joined: Wed Mar 04, 2009 7:36 am
- Hardware configuration: 3 - Supermicro H8QGi-F AMD MC 6174=144 cores 2.5Ghz, 96GB G.Skill DDR3 1333Mhz Ubuntu 10.10
2 - Asus P6X58D-E i7 980X 4.4Ghz 6GB DDR3 2000 A-Data 64GB SSD Ubuntu 10.10
1 - Asus Rampage Gene III i7 970 4.3Ghz DDR3 2000 2-500GB Seagate 7200.11 0-Raid Ubuntu 10.10
1 - Asus G73JH Laptop i7 740QM 1.86Ghz ATI 5870M
Re: Lopsided points when folding -smp
So how do we convince others that their computing power is needed in a certain area? I am pretty sure that if you could convince 15% of the bigadv folders to do SMP for a week or two, they would put a pretty big dent in the oversupply of SMP WUs. Does Stanford need to make an announcement that they need people to fold a certain class of WU, and then temporarily raise the points on that class of WU? I know there is not as big a discrepancy when folding SMP on the average CPU, but when you go to the higher-end rigs there is a big discrepancy, which I believe may cause a problem for some folders. But what is important here is keeping the science moving forward at the fastest possible pace.
2 - SM H8QGi-F AMD 6xxx=112 cores @ 3.2 & 3.9Ghz
5 - SM X9QRI-f+ Intel 4650 = 320 cores @ 3.15Ghz
2 - I7 980X 4.4Ghz 2-GTX680
1 - 2700k 4.4Ghz GTX680
Total = 464 cores folding
-
- Posts: 1165
- Joined: Wed Apr 01, 2009 9:22 pm
- Hardware configuration: Asus Z8NA D6C, 2 x5670@3.2 Ghz, 12gb Ram, GTX 980ti, AX650 PSU, win 10 (daily use)
Asus Z87 WS, Xeon E3-1230L v3, 8gb ram, KFA GTX 1080, EVGA 750ti , AX760 PSU, Mint 18.2 OS
Not currently folding
Asus Z9PE- D8 WS, 2 E5-2665@2.3 Ghz, 16Gb 1.35v Ram, Ubuntu (Fold only)
Asus Z9PA, 2 Ivy 12 core, 16gb Ram, H folding appliance (fold only) - Location: Jersey, Channel islands
Re: Lopsided points when folding -smp
Love to have me one of those 400Ghz pentiums (EDIT by B: Of course I meant MHz. Fixed.)
bruce wrote:Good point, codysluder.
In some sense, WUs with longer deadlines (relative to the amount of computing to be done) are "easier" to fold. That degree of difficulty is an obscure project characteristic, since it's not directly related to any of the things that are explicitly measured. Traditionally, deadlines for classic WUs can be completed by a 400GHz - 500GHz Pentium.
On a serious note, the points system is never going to be fair. The benchmark machine is not updated often enough, and what is the current benchmark machine for the -bigadv/bigbigadv units? The same i7 870 that they use for SMP, or something with a bit more power? Whatever changes are made will make people unhappy: nerf -bigadv again and watch the big-iron donors scream; cut back on SMP and watch everyone try to run -bigadv on a Phenom II X4 965; raise SMP points to attract more donors to process them and watch the old hands moan about their years of work being devalued, the -bigadv guys moan that there is no difference between -bigadv and SMP, and the GPU donors moan that they should be given more points as it costs more to donate GPU time in terms of electricity.
Another point that needs to be raised is that the SMP points variation has gotten worse since the introduction of SMP on the A4 core. I'm not sure how, but the really high PPD is being reported by the newer projects running on that core. It could be a Windows/Linux thing, as A3 performs about the same on Windows and Linux.
Atom count seems to be all over the place with A4 as well - 500 atoms up to 94,000. If A4 defaults to SMP on a multi-thread machine and gets a 500-atom WU, PPD is likely to go through the roof. That won't help.
Re: Lopsided points when folding -smp
For you guys who wanted some numbers to back up the observations:
WU variability on a Q6600 @ 3.24 GHz, running Ubuntu 11.04 without GPUs.
Current WUs:
Code: Select all
Project ID: 6026
Core: GRO-A3
Credit: 517
Frames: 100
Name: CR L 1
Path: \\Cr-l1\fah\SMP\
Number of Frames Observed: 100
Min. Time / Frame : 00:04:43 - 9,741 PPD
Avg. Time / Frame : 00:04:45 - 9,638 PPD
Base Average ppd: 1566
____________________________________________________________
Project ID: 6072
Core: GRO-A3
Credit: 481
Frames: 100
Name: CR L 1
Path: \\Cr-l1\fah\SMP\
Number of Frames Observed: 300
Min. Time / Frame : 00:04:51 - 8,735 PPD
Avg. Time / Frame : 00:05:06 - 8,101 PPD
Base Average ppd: 1374
______________________________________________________
Project ID: 6098
Core: GRO-A3
Credit: 1593
Frames: 100
Name: CR L 1
Path: \\Cr-l1\fah\SMP\
Number of Frames Observed: 99
Min. Time / Frame : 00:12:47 - 11,802 PPD
Avg. Time / Frame : 00:12:58 - 11,553 PPD
Base Average ppd: 1770
____________________________________________________________
Project ID: 6099
Core: GRO-A3
Credit: 1588
Frames: 100
Name: CR L 1
Path: \\Cr-l1\fah\SMP\
Number of Frames Observed: 300
Min. Time / Frame : 00:12:56 - 11,561 PPD
Avg. Time / Frame : 00:13:40 - 10,643 PPD
Cur. Time / Frame : 00:13:28 - 10,823 PPD
R3F. Time / Frame : 00:13:30 - 10,793 PPD
All Time / Frame : 00:13:35 - 10,718 PPD
Eff. Time / Frame : 00:13:39 - 10,659 PPD
Base Average ppd: 1672
______________________________________________________________
Project ID: 7500
Core: GRO-A3
Credit: 529
Frames: 100
Name: CR L 1
Path: \\Cr-l1\fah\SMP\
Number of Frames Observed: 300
Min. Time / Frame : 00:03:55 - 13,321 PPD
Avg. Time / Frame : 00:04:02 - 12,748 PPD
Base Average ppd: 1889
________________________________________________________________
Project ID: 7800
Core: GRO-A4
Credit: 1622.91
Frames: 100
Name: CR L 1
Path: \\Cr-l1\fah\SMP\
Number of Frames Observed: 200
Min. Time / Frame : 00:13:44 - 9,592 PPD
Avg. Time / Frame : 00:13:45 - 9,575 PPD
Base Average ppd: 1708
__________________________________________________________________
One of the slower, not currently assigned but relatively recent WUs:
Code: Select all
Project ID: 6701
Core: GRO-A3
Credit: 921
Frames: 100
Name: CR L 1
Path: \\Cr-l1\fah\SMP\
Number of Frames Observed: 300
Min. Time / Frame : 00:10:35 - 7,250 PPD
Avg. Time / Frame : 00:10:42 - 7,132 PPD
Base Average ppd: 1239
So I'm seeing the base average ppd on a WU vary from the mean by +/- 16% (32% total).
I have years of data on this machine. Looking back to the beginning of the QRB, the average base ppd was in the 1200 range (7,000 ppd with bonus), so the benchmark has been sliding up, or the later WUs run better on a Q6600 than on an i5. Before the QRB, it made about 5,600 ppd.
One of my Windows XP x64 rigs (also a Q6600 @ 3.24) is getting p11051 with a base average of 1697 ppd, compared to p6072 at a base average of 1265 ppd. Even greater variability.
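For anyone who wants to reproduce the arithmetic behind the figures above, here is a short Python sketch, assuming HFM's "Base Average ppd" is simply credit divided by the projected days per WU (100 frames times the average frame time). That assumption reproduces the posted values to within about 1%, and it shows the spread of base PPD across these WUs.
Code: Select all
# Recompute base PPD from credit and average frame time for the WUs listed above,
# assuming base PPD = credit / (100 frames * avg seconds per frame / 86400 s per day).
wus = {  # project: (credit, avg. seconds per frame)
    6026: (517,      4 * 60 + 45),
    6072: (481,      5 * 60 + 6),
    6098: (1593,    12 * 60 + 58),
    6099: (1588,    13 * 60 + 40),
    7500: (529,      4 * 60 + 2),
    7800: (1622.91, 13 * 60 + 45),
    6701: (921,     10 * 60 + 42),
}

base_ppd = {p: credit / (100 * sec / 86400.0) for p, (credit, sec) in wus.items()}
mean = sum(base_ppd.values()) / len(base_ppd)
for p, ppd in sorted(base_ppd.items()):
    print(f"p{p}: {ppd:6.0f} base PPD ({100 * (ppd - mean) / mean:+.0f}% vs. mean)")
# Spread runs from roughly -23% (p6701) to +18% (p7500) around the mean;
# dropping the older p6701 brings it close to the +/- 16% quoted above.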