From Nature (see http://www.nature.com/ngeo/journal/vaop/ncurrent/full/ngeo761.html):
“The precipitation anomaly of the past few decades in Law Dome is the largest in 750 years, and lies outside the range of variability for the record as a whole, suggesting that the drought in Western Australia may be similarly unusual.”
Climate science has a colorful history of hyperbole: hurricanes, droughts, floods, fires, famines. Old habits die hard, and so do true believers. I want to turn attention to the phrase "lies outside the range of variability for the record as a whole" and what it really means.
The central, neglected issue is how observing variation in behaviour can reliably warrant claims of the form "x is not natural, therefore it must be due to human influence".
What does "lies outside the range of variability for the record as a whole" mean (in increasing order of strength)?
1. The observed anomaly is larger than any other in the observed record. True, but this alone does not quantify the importance of the observation: in any record, some value must be the largest, and the largest of two events is meaningless.
2. The observed anomaly has a low probability. Whether it is 0.05, 0.001 or 0.00034, the probability is still within the range of variability, just rare. It is ‘inside the range of variability’ not ‘outside the range of variability’.
3. The system is outside its normal operating parameters. A doctor would have to see more than one case of a rare disease in a short period to conclude that 'something unusual is happening', because there is always some probability of seeing a rare disease.
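The weakness of reading (1) as evidence can be made concrete with a small simulation. This is a sketch under an assumption the original claim never tests: that annual values are independent and identically distributed. Under that assumption, the most recent value in an n-year record is the record maximum with probability exactly 1/n, so "largest in the record" is guaranteed to happen somewhere in any record, with no human influence required.

```python
import random

def prob_latest_is_record(n_years, trials=20000, seed=42):
    """Estimate the chance that the final value of an i.i.d. series
    of length n_years is the largest value in the whole series."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        series = [rng.gauss(0.0, 1.0) for _ in range(n_years)]
        if series[-1] == max(series):
            hits += 1
    return hits / trials

# Analytically the probability is 1/n: under the i.i.d. assumption,
# every position in the series is equally likely to hold the maximum.
print(prob_latest_is_record(20))   # close to 1/20 = 0.05
```

For a 750-year record the analytic figure is 1/750 ≈ 0.0013: rare, but an event of this kind occurs in every such record, which is why being the largest observation cannot by itself carry the argument.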
The argument is used in climate science a lot: temperatures are the highest in recorded history, the highest in the last millennium, improbable, and so on, followed by a jump to the conclusion that the system is behaving unnaturally. I am trying to clarify the point I was trying to make at the end of the last post. Claims are made. In science they need justification. To prove a system is outside its normal operating range requires more than the largest, or an unlikely, observation.
I don't see much evidence of that sort of rigor; instead there are frequent jumps between (1), (2) and (3). Here is an example of what a 'scientific' defense of a claim sounds like, exemplifying concern with the test and its power, without statistical slop (see http://content.nejm.org/cgi/content/abstract/361/9/868).
Results: For samples obtained on admission, the diagnostic accuracy was highest with the sensitive troponin I assay (area under the receiver-operating-characteristic curve [AUC], 0.96), as compared with the troponin T assay (AUC, 0.85) and traditional myocardial necrosis markers. With the use of the sensitive troponin I assay (cutoff value, 0.04 ng per milliliter) on admission, the clinical sensitivity was 90.7%, and the specificity was 90.2%. The diagnostic accuracy was virtually identical in baseline and serial samples, regardless of the time of chest-pain onset. In patients presenting within 3 hours after chest-pain onset, a single sensitive troponin I assay had a negative predictive value of 84.1% and a positive predictive value of 86.7%; these findings predicted a 30% rise in the troponin I level within 6 hours. A troponin I level of more than 0.04 ng per milliliter was independently associated with an increased risk of an adverse outcome at 30 days (hazard ratio, 1.96; 95% confidence interval, 1.27 to 3.05; P=0.003).
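Notice how the predictive values in the abstract are distinct from sensitivity and specificity: how often a positive test means disease depends on how common the disease is in the population tested. A sketch of that relationship via Bayes' theorem, using the abstract's sensitivity and specificity but an assumed prevalence of 40% purely for illustration (the study's actual cohort prevalence is not given here):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from test
    characteristics and disease prevalence (Bayes' theorem)."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Sensitivity 90.7% and specificity 90.2% are from the abstract;
# the 40% prevalence is an assumed figure for illustration only.
ppv, npv = predictive_values(0.907, 0.902, 0.40)
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")
```

The point of the exercise: a rigorous claim states not just that a signal was detected, but how often the test would raise the same alarm when nothing is wrong. That is exactly the quantity missing from "outside the range of variability".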
Conclusions: The use of a sensitive assay for troponin I improves early diagnosis of acute myocardial infarction and risk stratification, regardless of the time of chest-pain onset.