TBH, this whole debate highlights why statistics can be as clear as mud, and why you shouldn't just look at graphs.
Using the available data, it is possible to interpret the information any way you want, taking into account the strengths and weaknesses of each method of measurement.
For example, measuring by TFLOPs is inaccurate: a FLOP doesn't always equal a FLOP, and what happens if, thanks to a major algorithmic breakthrough, Stanford suddenly halves the number of FLOPs required to compute each step of a simulation?
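To put a toy number on that point (all figures below are invented, purely to show the arithmetic): a halving of FLOPs per step halves the headline TFLOPs figure even though the project does exactly as much science per day.

```python
# Toy calculation with made-up numbers: an algorithmic breakthrough that
# halves the FLOPs needed per simulation step halves the reported TFLOPS,
# even though the science done per day is unchanged.
steps_per_day = 1_000_000        # hypothetical simulation throughput
flops_per_step_old = 2.0e9       # before the breakthrough
flops_per_step_new = 1.0e9       # after: half the FLOPs per step

seconds_per_day = 86_400
tflops_old = steps_per_day * flops_per_step_old / seconds_per_day / 1e12
tflops_new = steps_per_day * flops_per_step_new / seconds_per_day / 1e12

print(f"reported before: {tflops_old:.2f} TFLOPS")
print(f"reported after:  {tflops_new:.2f} TFLOPS")
# Same steps/day in both cases, yet the headline TFLOPS figure drops by half.
```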
Alternatively, measuring by PPD over a long period is often confounded by the bonus system, or by, say, a new instruction set that improves performance everywhere except on the benchmarking machine. Also, and I'm not accusing Stanford of this, it is possible for projects to 'inflate' points over time, which could give the impression of greater computing power even when actual performance is fixed. Measuring by points is therefore an estimate of production, not a definitive figure.
Measuring by WUs completed is confounded by differently sized WUs. Could we measure performance by the number of publications released per year? Or perhaps by the release of new simulation methods?
As Rattledagger said, measuring by clients/active CPUs is difficult because 1x 8-thread SMP client uses exactly the same resources as 8x classic clients, or 2x 4-thread SMP clients. You could hypothetically drop the client count to a quarter of what it was and still be using exactly the same resources.
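A back-of-the-envelope sketch of what I mean, using a purely hypothetical fleet and counting "core-equivalents" rather than clients:

```python
# Minimal sketch with a hypothetical fleet: count core-equivalents rather
# than clients, so consolidation doesn't look like a loss of resources.
fleet_before = [1] * 8      # eight classic clients, one thread each
fleet_after = [8]           # one 8-thread SMP client on the same hardware

def core_equivalents(fleet):
    """Total threads across all clients -- a rough proxy for resources in use."""
    return sum(fleet)

print(f"before: {len(fleet_before)} clients, {core_equivalents(fleet_before)} core-equivalents")
print(f"after:  {len(fleet_after)} clients, {core_equivalents(fleet_after)} core-equivalents")
# Client count collapses 8 -> 1 while the cores actually folding stay at 8.
```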
What would be interesting from an analysis point of view would be to study the effect of key points in time. For example: was there a marked drop in SMP clients when VMware Player 3.0.0 (with its ability to run 8 threads at once) came on the market? Was there a drop in Linux SMP clients when bigadv work units became available? In both cases, you could suggest that client consolidation occurred. I believe they didn't have the information in the past, but my understanding is that the client is now polled for how many cores it makes available? That would allow Stanford to track the average number of cores per SMP client, for example.
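As a rough sketch of how that before/after comparison might look (the data layout here is entirely made up, since I don't know what Stanford actually logs):

```python
# Rough sketch, assuming we had per-client snapshots of (date, cores reported).
# All dates and numbers are invented; 'event' stands in for something like the
# VMware Player 3.0.0 release or bigadv work units going live.
from datetime import date
from statistics import mean

snapshots = [                      # (snapshot date, cores the client reports)
    (date(2009, 10, 1), 4), (date(2009, 10, 1), 4), (date(2009, 10, 1), 8),
    (date(2010, 2, 1), 8),  (date(2010, 2, 1), 8),  (date(2010, 2, 1), 12),
]

event = date(2009, 11, 19)         # hypothetical key point in time

before = [cores for day, cores in snapshots if day < event]
after = [cores for day, cores in snapshots if day >= event]

print(f"avg cores per SMP client before the event: {mean(before):.1f}")
print(f"avg cores per SMP client after the event:  {mean(after):.1f}")
# A rise in this average combined with a fall in client numbers would point
# to consolidation rather than donors leaving the project.
```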
The introduction of Bonus Points/A3 cores would be another time point of interest: did it result in another consolidation of SMP clients, or in a shift from Linux to Windows (which would result in better efficiency)? Did the introduction of Bonus Points also decrease the number of ATI GPUs folding, given that running a GPU increased the turnaround time of the SMP client and thus, under the bonus scheme, decreased its PPD by an even greater factor?
Following on from the above, further and more difficult analysis could be done. Perhaps we could examine the role of rising power costs by comparing the average cost of electricity in the US over time with US production. Perhaps we could analyse production cycles, i.e. the cyclical effect of the seasons and thus ambient temperature on production: how much does Northern Hemisphere production fluctuate with the weather? And of particular interest to me: how does the economic cycle affect point production?
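If monthly figures were available, the seasonality question at least could be sketched as a simple correlation. Everything below is placeholder data, just to show the shape of the calculation; the same template would work for electricity prices or an economic indicator.

```python
# Sketch with invented monthly figures: correlate project production with
# average ambient temperature. Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

monthly_ppd = [9.8, 9.9, 9.5, 9.1, 8.7, 8.2, 8.0, 8.1, 8.6, 9.2, 9.6, 9.9]  # arbitrary units
avg_temp_c  = [2, 4, 8, 13, 18, 23, 26, 25, 21, 14, 8, 3]                    # Northern Hemisphere

r = correlation(monthly_ppd, avg_temp_c)
print(f"PPD vs ambient temperature, Pearson r = {r:.2f}")
# A strongly negative r would be consistent with summer heat (and cooling
# costs) suppressing production; swap in electricity prices or an economic
# index to ask the other questions the same way.
```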
To me, this sort of analysis would be of more interest than simply looking at any given graph: how much of what we see can we attribute to things we can identify? That is different from just saying 'oh look, this graph is trending downwards'. Of course, that may just be the statistician in me; my bias towards looking at key points in time stems, I admit, from my academic training (human/economic/political geography). Perhaps my training just makes me look for things that can be turned into publications.