A couple of recent posts challenging global warming science have not yet been picked up by other observers. While real scientists find Climategate distasteful, it does not in itself challenge the pillars of AGW logic. These latest developments do, and give the insightful a heads-up on the direction of challenges to come.
The first is Loehle, Craig. 2010. The estimation of historical CO2 trajectories is indeterminate: Comment on “A new look at atmospheric carbon dioxide.” Atmospheric Environment, in press.
Loehle critiques a paper by Hoffman, arguing that the exponential model used to characterize CO2 trajectories from historical data is not properly identified. Using the past 51 years of CO2 data, he shows that three different models each capture the historical pattern of CO2 increase with R² > 0.98, yet forecast very different future trajectories.
In blog-speak, the exponential curve (which, surprise, surprise, gives the highest forecast levels) is cherry-picked. In Bayesian statistical terminology this is 'inductive bias'; in financial parlance, 'model risk'.
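Loehle's point is easy to reproduce. The sketch below fits three models (linear, quadratic, exponential) to a synthetic CO2-like series; the data values are illustrative, not the actual Mauna Loa record, but the behaviour is the same: all three fits have R² well above 0.98 over the sample, yet their 2100 extrapolations spread widely.

```python
import numpy as np

# Synthetic CO2-like series: 51 years of roughly exponential growth
# (illustrative numbers, NOT the actual Mauna Loa record)
years = np.arange(1959, 2010)
t = years - 1959
rng = np.random.default_rng(0)
co2 = 315 * np.exp(0.004 * t) + rng.normal(0, 0.1, t.size)

def r2(y, yhat):
    """Coefficient of determination."""
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

# Three competing models, all fit by least squares
lin = np.polyfit(t, co2, 1)                 # linear
quad = np.polyfit(t, co2, 2)                # quadratic
b1, b0 = np.polyfit(t, np.log(co2), 1)      # exponential (log-linear fit)

print("linear R2:     ", r2(co2, np.polyval(lin, t)))
print("quadratic R2:  ", r2(co2, np.polyval(quad, t)))
print("exponential R2:", r2(co2, np.exp(b0 + b1 * t)))

# All three agree on the data, but diverge badly out of sample
tf = 2100 - 1959
print("linear 2100:     ", np.polyval(lin, tf))
print("quadratic 2100:  ", np.polyval(quad, tf))
print("exponential 2100:", np.exp(b0 + b1 * tf))
```

The in-sample fit statistic simply cannot discriminate between the models, so picking the exponential (and its highest 2100 forecast) is a modelling choice, not a finding.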
The breakthrough, as I see it, is that most analysts would view this result as 'bleeding obvious': so obvious that such comments almost never get published, because the writer, reviewers, or editors dismiss them as trivial, a negative result, or some such party-killer. But I think he's right, and it needs to be addressed. It also really irritates the warmies, as the discussion shows.
Clearly such comments do not need to be long or complicated. Like my note on Rahmstorf's 'the climate is more sensitive than expected' meme, Craig's paper is only two pages. I reckon countless such comments are possible, as climate science is full of such trivial errors; the challenge is getting them published, and Craig has been very good at it.
The second post makes a statistical point. Suppose 95% of the statements claimed on the basis of statistical observations are right and 5% are wrong. Is that a good enough success rate on which to build a science? Someone unfamiliar with science, or with rational thinking in general, might think that being right 95% of the time is good enough.
Regardless of the character and interpretation of the hypotheses and theories, it's clear that a working scientific discipline requires something like 5-sigma standards if its insights are to be quasi-reliably reused in realistic, slightly longer chains of reasoning, which can run to six steps or more.
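The compounding problem takes one line of arithmetic. If each link in a chain of reasoning holds with probability p, and the links are independent, a six-step chain holds with probability p⁶. At the 95% level that is only about 0.74; taking 5-sigma as roughly 0.9999994 per step, the chain stays reliable:

```python
# Probability that a chain of n independent inferences all hold,
# when each individual step is right with probability p.
def chain_reliability(p: float, n: int) -> float:
    return p ** n

print(chain_reliability(0.95, 6))       # ~0.735: a six-step "95%" chain fails about 1 time in 4
print(chain_reliability(0.9999994, 6))  # 5-sigma steps: the chain is still near-certain
```

The independence assumption is a simplification, but correlated errors generally make the 95% case worse, not better.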
Once again, to most analysts this is obvious, and it seems to piss people off to say it, but the basis for most climate statements is too uncertain to be useful.
The third post I noticed was 'Rahmstorf (2009): Off the mark again (part 1)'.
Tom Moriarty is highly critical of Stefan Rahmstorf's ever more fanciful models for scaring the world about sea level rise. That the only really solid empirical relationship between sea level and temperature is a linear correlation, and that the notion that an increase in the rate of sea level rise would take a millennium to damp out has no empirical basis, appears to matter not at all to reviewers or editors.
Tom’s argument against this madness is as follows:
If realistic data is applied to a model that is purported to explain a phenomenon, and the result is obviously unrealistic, then that model must be rejected. In this section I will explain how VR2009 apply a realistic temperature scenario to their model, namely a linear increasing temperature, to explain the effect of the counter-intuitively negative value of the model parameter, b. Their result is satisfying. But in the next section I will apply another realistic temperature scenario to their model, and the result will be outrageously bogus. This will force the rejection of their model and its predictive power for the 21st century.
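The flavour of Moriarty's test can be sketched with a toy integration of a VR2009-style dual model, dH/dt = a(T − T0) + b·dT/dt. The parameter values and the oscillating temperature scenario below are illustrative assumptions, not the published VR2009 fit or Moriarty's actual scenario; the sketch only shows the mechanism: a negative b makes modelled sea level respond perversely whenever temperature swings quickly.

```python
import numpy as np

# Toy VR2009-style dual model: dH/dt = a*(T - T0) + b*dT/dt
# ILLUSTRATIVE parameter values (assumed for this sketch, not the published fit)
a, b, T0 = 5.6, -49.0, -0.4   # mm/yr/K, mm/K, K

def sea_level(T, dt=1.0):
    """Integrate modelled sea level (mm) over a temperature anomaly series T (K)."""
    dTdt = np.gradient(T, dt)
    dHdt = a * (T - T0) + b * dTdt
    return np.cumsum(dHdt) * dt

years = np.arange(2000, 2101)
# Scenario 1: smooth linear warming -- the model output looks plausible
linear = 0.02 * (years - 2000)
# Scenario 2: the same warming plus a modest decadal wiggle -- the negative b
# term dominates during rapid swings, so sea level falls while temperature rises
wiggly = linear + 0.3 * np.sin(2 * np.pi * (years - 2000) / 10)

H_lin, H_wig = sea_level(linear), sea_level(wiggly)
print("linear scenario: sea level monotonically rising?", np.all(np.diff(H_lin) > 0))
print("wiggly scenario: sea level monotonically rising?", np.all(np.diff(H_wig) > 0))
```

A model whose output flips sign under one realistic forcing but not another is exactly the kind of behaviour Moriarty argues should disqualify it for 21st-century prediction.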
Three good posts, in good taste, and well worth digesting.