Tears of a Clown

False forecasts are not without consequences. How should we think about this?

Opposition to “Crying Wolf” is growing. A report by a British MP claims the World Health Organisation and other public health bodies have “gambled away” public confidence by overstating the dangers of the flu pandemic.

“This decline in confidence could be risky in the future,” says the report, seen by the Guardian. “When the next pandemic arises many persons may not give full credibility to recommendations put forward by WHO and other bodies. They may refuse to be vaccinated and may put their own health and lives at risk.”

False financial prophets are called ‘clowns’ by Nassim Taleb, I suppose in reference to their grotesque parody of reality, and to their value being entertainment only. He argues that excessive reliance on flawed Value-at-Risk models of portfolio risk contributed directly to the global financial crisis. How many are still suffering from this forecast failure?

I have been hard in the past on experts who claim forecasting ability based on speculative assumptions, particularly regarding mass extinctions, drought and sea level rise, but there are many more examples.

By some reports, governments and others have been recklessly investing in green energy on the strength of climate-model projections that are, by objective criteria, useless. More costly consequences:

Instead of spending just $1.3 billion on a new dam on the Mitchell River, this Government wasted $3.5 billion on a desalination plant that will produce a third of the water. And for insurance it’s wasted $750 million more on this pipeline to steal water from irrigators.

The State of the Climate report stated without reservation that:

“Australia will be hotter in coming decades”

“Much of Australia will be drier in coming decades”

Some statements in the report have been heavily criticized, on the blogosphere at least, here and here and here. Moreover, there is a history of forecasts by CSIRO scientists of increasing drought due to global warming, here. But the issue is not settled, as shown by other CSIRO scientists who disagree.

The question can be asked whether scientific organizations issuing authoritative statements of “fact” are acting ultra vires — beyond their mandate. Putting aside the issue of whether the BoM and CSIRO’s fact sheet is misleading, by what right do they issue a document presenting controversial claims as settled fact?

Questions are also being asked about the undue influence of groups such as the WWF, renewable energy and carbon trading interests over policy decisions.

Or as Peter Berger said: “It seems plausible that folly and fools, like religion and magic, meet some deeply rooted needs in human society.”

Australia's Relatively Stable Rainfall

The “State of the Climate” report from two of Australia’s lead agencies, the Bureau of Meteorology (BoM) and CSIRO, states that total rainfall was “relatively stable” over the last century, but omits their own data, which clearly show total rainfall increasing.

[Figure: BoM Australian annual rainfall anomalies with linear trend]

The fine print at the bottom of the BoM graph says rainfall has increased at the “relatively stable” rate of 6.3 mm per decade. Downloading the data and fitting a linear regression gives an upward slope significantly greater than zero:

Slope = 0.63 mm/yr, S.E. = 0.23, P = 0.003

When autocorrelation is taken into account, the values are:

Slope = 0.63 mm/yr, S.E. = 0.29, P = 0.014

The Chow break test shows a significant break at 1971-72, when annual rainfall increased by 60 mm. This is a 13% increase in the “relatively stable” rainfall (from 432±9 mm to 492±15 mm per annum).
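For readers wanting to replicate this, the trend fit and the break test can be sketched in Python. The data below are synthetic, a 60 mm step at 1971 plus modest noise standing in for the BoM series, so the numbers will not match the values above; all names are illustrative:

```python
import random

random.seed(0)
years = list(range(1900, 2001))
# Synthetic annual rainfall (mm): mean 432 before 1971, 492 after, plus noise
rain = [(432 if y < 1971 else 492) + random.gauss(0, 30) for y in years]

def ols(x, y):
    """Least-squares fit of y ~ a + b*x; returns (slope, intercept, RSS)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return b, a, rss

slope, _, rss_pooled = ols(years, rain)

# Chow test: does splitting the series at 1971 fit significantly better
# than a single pooled regression?
k = 2                              # parameters per regression
split = years.index(1971)
_, _, rss1 = ols(years[:split], rain[:split])
_, _, rss2 = ols(years[split:], rain[split:])
n = len(years)
F = ((rss_pooled - rss1 - rss2) / k) / ((rss1 + rss2) / (n - 2 * k))
print(slope, F)
```

An F value well above the critical value (about 3.1 at the 5% level for these degrees of freedom) rejects the no-break hypothesis.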

[Figure: Australian annual rainfall with the 1971-72 break]

In how many ways does this BoM statement fail to match the observations?

A slight increase in Australian annual mean rainfall is evident during the 20th Century although this is largely due to several wet years during the 1970s. The five year mean rainfall also shows a weak upward trend. However, the high year-to-year variability of Australian rainfall dominates any background trends.

How Much More Rain Will Global Warming Bring? This is the question asked by Frank J. Wentz. The fundamental physical relation between evaporation and temperature, the Clausius-Clapeyron relation, implies around a 6% increase in evaporation per kelvin of warming, a figure also borne out by Wentz’s analysis. If the ocean is not to migrate into the sky, evaporation must equal precipitation globally (though not locally), so precipitation must also increase by a “relatively stable” 6%.
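The 6% figure can be checked on the back of an envelope from the Clausius-Clapeyron relation, d(ln e_s)/dT = L/(R_v T²); the constants below are standard approximate values:

```python
L = 2.5e6    # latent heat of vaporization of water, J/kg (approximate)
Rv = 461.5   # specific gas constant for water vapour, J/(kg K)
T = 288.0    # a typical surface temperature, K

fractional_increase_per_K = L / (Rv * T ** 2)
print(f"{100 * fractional_increase_per_K:.1f}% per K")  # about 6.5% per K
```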

But Wentz found that the climate simulation models predict an “absolutely stable” increase in precipitation of only 1-3%, although a comment on the paper says that precipitation across the models is “relatively unstable” anyway, and so can’t really be trusted.

CSIRO and BoM experts, computer simulation models, the laws of physics, and observational data — two of these belong together, two of these don’t. Can you guess which ones?


Three good posts

A few recent posts challenging global warming science have not been picked up by other observers. While real scientists find Climategate distasteful, it does not necessarily challenge the pillars of AGW logic. These latest developments do, and perhaps give the insightful a heads-up on the direction of challenges to come.

The first is Loehle, Craig. 2010. The estimation of historical CO2 trajectories is indeterminate: Comment on “A new look at atmospheric carbon dioxide.” Atmospheric Environment, in press.

Loehle critiques a paper by Hofmann, claiming that the exponential model used to characterize historical CO2 trajectories is not properly determined by the data. He illustrates, with the past 51 years of CO2 data, that three different models capture the historical pattern of CO2 increase with R² > 0.98 yet forecast very different future trajectories.

In blog-speak, the exponential curve (which gives the highest forecast levels, surprise, surprise) is cherry-picked. In Bayesian statistical terminology it’s ‘inductive bias’; in financial parlance, ‘model risk’.
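Loehle’s point is easy to reproduce. In the sketch below (assuming numpy; the series is a smooth exponential standing in for Mauna Loa, not the actual data), linear, quadratic and exponential fits all achieve R² above 0.99 on the 51-year “history”, yet disagree by tens of ppm when extrapolated to 2100:

```python
import numpy as np

t = np.arange(51.0)                  # years since 1959 (illustrative)
co2 = 315.0 * np.exp(0.004 * t)      # smooth CO2-like series, ppm

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

lin = np.polyfit(t, co2, 1)
quad = np.polyfit(t, co2, 2)
loglin = np.polyfit(t, np.log(co2), 1)   # exponential via a log-linear fit

fits = {
    "linear": np.polyval(lin, t),
    "quadratic": np.polyval(quad, t),
    "exponential": np.exp(np.polyval(loglin, t)),
}
for name, yhat in fits.items():
    print(name, "R2 =", round(r2(co2, yhat), 4))

# Extrapolate each model to t = 141 (the year 2100): near-identical fits,
# divergent forecasts
t_future = 141.0
forecasts = {
    "linear": float(np.polyval(lin, t_future)),
    "quadratic": float(np.polyval(quad, t_future)),
    "exponential": float(np.exp(np.polyval(loglin, t_future))),
}
print(forecasts)
```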

The breakthrough, as I see it, is that most analysts would view this result as ‘bleeding obvious’, so obvious that such comments almost never get published, because either the writer, reviewers, or editors think it is trivial, a negative result or some such party killer. However, I think he’s right, and it needs to be addressed. It also really irritates warmies as the discussion shows.

Clearly such comments do not need to be long or complicated. Like my note on Rahmstorf’s ‘the climate is more sensitive than expected’ meme, Craig’s paper is only two pages. I reckon countless such comments are possible, as climate science is full of such trivial errors; the challenge is getting them published, and Craig has been very good at it.

The second development is Lubos taking the bat here and here to traditional certainty levels in environmental science.

So 5% of the statements claimed to be right because of statistical observations are wrong while 95% of them are right. Is it a good enough success rate to build science? Someone who is not familiar with science or rational thinking in general may think that it is good enough to make 95% of correct statements.

Regardless of the character and interpretation of the hypotheses and theories, it’s clear that a working scientific discipline requires at least the 5-sigma standards if its insights are going to be quasi-reliably reused in realistic, slightly longer chains of reasoning that can be as long as 6 steps or more.
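Lubos’s arithmetic is easy to verify. If each link in a 6-step chain of reasoning holds with probability 0.95, the whole chain holds only about 73% of the time; at 5-sigma standards it is essentially certain. A quick check (using the one-sided normal tail for 5 sigma):

```python
import math

def chain_reliability(p_error, steps=6):
    """Probability that every step in a chain of reasoning is correct."""
    return (1 - p_error) ** steps

p_95 = 0.05                                 # conventional 95% confidence
p_5sigma = math.erfc(5 / math.sqrt(2)) / 2  # one-sided 5-sigma tail, ~2.9e-7

print(round(chain_reliability(p_95), 3))    # 0.735: one chain in four fails
print(chain_reliability(p_5sigma))          # effectively 1.0
```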

Once again, to most analysts this is obvious, and it seems to piss people off to say it, but the basis for most climate statements is too uncertain to be useful.

The third post I noticed was Rahmstorf (2009): Off the mark again (part 1)

Tom Moriarty is highly critical of Stefan Rahmstorf’s ever more fanciful models for scaring the world about sea level rise. That the only really solid empirical relationship between sea level and temperature is a linear correlation, and that the notion of an increase in the rate of sea level rise taking a millennium to damp out has no empirical basis, appear to matter not at all to reviewers or editors.

Tom’s argument against this madness is as follows:

If realistic data is applied to a model that is purported to explain a phenomenon, and the result is obviously unrealistic, then that model must be rejected. In this section I will explain how VR2009 apply a realistic temperature scenario to their model, namely a linear increasing temperature, to explain the effect of the counter-intuitively negative value of the model parameter, b. Their result is satisfying. But in the next section I will apply another realistic temperature scenario to their model, and the result will be outrageously bogus. This will force the rejection of their model and its predictive power for the 21st century.
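To see what the quote is describing, here is a sketch of a dual-model of the VR2009 type, dH/dt = a*(T - T0) + b*dT/dt, driven by a linearly increasing temperature. The functional form and the parameter values here are my rough assumptions for illustration, not numbers taken from the paper:

```python
# Integrate dH/dt = a*(T - T0) + b*dT/dt under linear warming of 1 K/century
a, b, T0 = 0.56, -4.9, -0.41   # cm/(yr K), cm/K, K: illustrative values only

H = 0.0
levels = []
for year in range(100):
    T = 0.01 * year            # temperature anomaly, K
    dTdt = 0.01                # constant warming rate, K/yr
    H += a * (T - T0) + b * dTdt
    levels.append(H)

# Under this benign scenario the projected rise accelerates smoothly,
# which is the "satisfying" result the quote refers to
print(round(levels[-1], 1))
```

Moriarty’s objection is that other, equally realistic temperature scenarios fed into the same form give absurd results.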

Three good posts, in good taste, and well worth digesting.

Orders of Integration

I(0), I(1) or I(2)? What does it all mean? Below is a visual presentation of CO2 concentration from Mauna Loa and global temperature from GISS, demonstrating the difference in their order of integration.

[Figure: CO2 (blue), dCO2 (magenta) and ddCO2 (yellow) series]

The blue series is the increasing level of CO2 in annual steps. Differencing means successively subtracting the previous value at each step, giving the change at each step (the delta, or dCO2, shown in magenta). Differencing again gives ddCO2, shown in yellow. After these two differencings the values clearly fluctuate around the zero line (that is, they are stationary).
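The same differencing can be done in a couple of lines. Here a smooth quadratic stands in for the CO2 level (purely illustrative, not the Mauna Loa data); its first difference still trends, while its second difference is flat:

```python
# CO2-like level series growing at an increasing rate
co2 = [315 + 0.8 * t + 0.01 * t * t for t in range(50)]

def diff(series):
    """First difference: the change at each step."""
    return [b - a for a, b in zip(series, series[1:])]

d_co2 = diff(co2)     # dCO2: the annual change, still rising
dd_co2 = diff(d_co2)  # ddCO2: flat, i.e. stationary around a constant

print(d_co2[:3], dd_co2[:3])
```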


CSIRO and BoM Report

A short post, but it doesn’t take much to show that CSIRO and BoM are telling porkies again in their just-released State of the Climate report. Just click here to get a graph showing the INCREASING trend in rainfall.

The report states:

2. Rainfall
While total rainfall on the Australian continent has been relatively stable …

[Figure: BoM Australian annual rainfall anomalies with linear trend]

The fine print at the bottom left says: “Linear trend of 6.33 mm/decade.”

Independent Confirmation of Beenstock's Theory

Beenstock’s radical theory needs to be tested. As discussed here, he proposed that the CHANGE in greenhouse gases (delta GHG, or dGHG), not their absolute level, produces global warming. A simple test is to develop linear regression models predicting temperature, with and without GHG and dGHG. If Beenstock’s theory is correct, then models containing dGHG should be more accurate.

The protocol was to develop and test linear regression models on all the temperature data from 1900 to 2004 (internal test), and two external tests on held back data. That is, the data is divided in half, the model is developed in one half and tested on the other. This gives two external tests.

The index of fit is the Nash-Sutcliffe efficiency (NSE), a coefficient of model power. The NSE compares the skill of a prediction against simply using the mean value: it is positive if the prediction has more skill than the mean, zero if the skill is the same, and negative if it has less.
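For concreteness, a minimal implementation of the NSE as just described (the sample numbers are invented):

```python
def nse(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 - SS_error / SS_about_the_mean.
    1 = perfect skill, 0 = no better than the observed mean, < 0 = worse."""
    mean = sum(observed) / len(observed)
    ss_err = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_mean = sum((o - mean) ** 2 for o in observed)
    return 1 - ss_err / ss_mean

temps = [0.1, 0.2, 0.15, 0.3, 0.25]              # invented anomalies
mean_pred = [sum(temps) / len(temps)] * len(temps)

print(nse(temps, temps))       # 1.0: a perfect prediction
print(nse(temps, mean_pred))   # 0.0: same skill as the mean
```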

I chose the following variables based on previous models. I decided to include an ocean oscillation term as I have seen a 60-year cycle in the residuals (eg here), indicating the presence of an unexplained periodicity. Here are the variables:

TEMP — temperature
OO — The sum of the standardized AMO and PDO indices
GHG — The sum of all anthropogenic columns in RadF.txt, mostly the radiant effect of CO2.
dGHG — The first difference of the above
V — Stratospheric aerosols (a proxy for volcanic eruptions)
SS — Sunspot count, a proxy for solar insolation

1) Incredibly, in the first test, with all the variables included, GHG is not even significant, being entirely screened by dGHG.

TEMP ~ -0.49(***)+0.06*OO(***) + 0.72*GHG() -11.1*dGHG(***) + 4.0*V() -0.09*SS() R-squared: 0.8709

The NSE coefficients that follow are: model on 1900-1950 and testing on 1950-2000, model on 1950-2000 and testing on 1900-1950, and finally model development and testing on 1900-2000.

[1] -4.83 0.625 0.871

The NSE indicates the model has some difficulty predicting temperature post 1950 from a model developed on data prior to 1950.

I then ran the model again with only GHG and not dGHG. The predictions are shown on the graph, where blue is prediction from a model developed on pre 1950 data, green the prediction from a model developed on post 1950 data, and observed global temperature is black.

[Figure 1: observed temperature (black) with predictions from models developed on pre-1950 (blue) and post-1950 (green) data]


Cointegration Summary

It’s incredible that a global warming theory could agree with both the IPCC (a discernible anthropogenic influence) and the sceptics (low long-term risk from emissions), but there you are. Beenstock’s analysis suggests it is not the amount of greenhouse gases, particularly CO2, in the atmosphere that contributes to global warming, but the change in that amount. That is, when the rate at which CO2 is produced is increasing, as it was last century, global temperature increases. Conversely, if the rate of increase is constant, so is temperature.

[Figure: dCO2 against CRU temperature]
