Climatological observing windows error

The timing of the 24-h observing window occasionally affects the determination of daily extrema through a mischaracterization of the diurnal minimum, and by extension can lead to errors in determining the daily mean temperature.

via A comparison of climatological observing windows and their impact on detecting daily temperature extrema | SpringerLink
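To illustrate the point, here is a minimal sketch with synthetic data (not the paper's method): shifting the start of the 24-h observing window changes which hours get grouped together, and so can shift the detected minima and the (Tmax + Tmin)/2 daily mean.

```python
import numpy as np

# Three days of synthetic hourly temperatures: a diurnal cycle with the
# minimum near 03:00 and the maximum near 15:00, plus a slow warming trend.
hours = np.arange(72)
temps = 15.0 - 8.0 * np.cos(2 * np.pi * (hours - 3) / 24) + 0.05 * hours

def daily_extrema(temps, start_hour):
    """Daily (min, max) pairs using 24-h windows beginning at start_hour."""
    return [(temps[i:i + 24].min(), temps[i:i + 24].max())
            for i in range(start_hour, len(temps) - 23, 24)]

# Compare a midnight-to-midnight window with a 9 a.m. observing window.
for start in (0, 9):
    for tmin, tmax in daily_extrema(temps, start):
        print(f"window start {start:02d}:00  Tmin={tmin:5.2f}  "
              f"Tmax={tmax:5.2f}  mean={(tmin + tmax) / 2:6.2f}")
```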


Drought predictions for this century

In The National Science Foundation Funds Multi-Decadal Climate Predictions Without An Ability To Verify Their Skill, Roger Pielke Sr. links GCM skill at predicting drought to skill at predicting natural variation:

2. “Future efforts to predict drought will depend on models’ ability to predict tropical SSTs.”

In other words, there is NO way to assess the skill of these models at predicting drought, as they have not yet shown any skill in SST predictions on time scales longer than a season, nor in natural climate cycles such as El Niño [or the PDO, the NAO, etc.].

This seems a convoluted turn of phrase. There are ways to assess the skill of these models: by comparing them with past drought frequency and severity. Such assessments show the models have NO skill at predicting droughts.

The assumption is that IF they were able to predict cycles like the PDO they would be able to predict droughts. But even averaging over these cycles leaves the problem of overall trends in extreme phenomena, which accuracy at the PDO et al. would not necessarily capture.

His argument that drought-prediction skill hinges on PDO prediction is useful, however, as a basis for excluding applications of models to climate phenomena that rely on such cycles.

Roger is perhaps being polite about misleading policymakers when he continues:

Funding of multi-decadal regional climate predictions by the National Science Foundation which cannot be verified in terms of accuracy is not only a poor use of tax payer funds, but is misleading policymakers and others on the actual skill that exists in predicting changes in the frequency of drought in the future.

The review by Dai favours the PDSI drought index:

The PDSI was created by Palmer [22] with the intent to measure the cumulative departure in surface water balance. It incorporates antecedent and current moisture supply (precipitation) and demand (PE) into a hydrological accounting system. Although the PDSI is a standardized measure, ranging from about −10 (dry) to +10 (wet)…
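The full Palmer algorithm is involved, but the cumulative-departure idea at its core is simple. A toy sketch follows: the moisture anomaly here is crudely standardized, not the CAFEC-based Z-index of the real PDSI, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
months = 120
precip = rng.gamma(shape=2.0, scale=40.0, size=months)  # toy supply, mm/month
pe = np.full(months, 80.0)                              # toy demand (PE), mm/month

# Crude moisture anomaly: standardized departure of supply from demand.
# (Palmer's actual Z-index comes from a full water-balance accounting.)
z = (precip - pe) / (precip - pe).std()

# Palmer-style recursion carries past conditions forward:
#   X_t = 0.897 * X_{t-1} + z_t / 3
x = np.zeros(months)
for t in range(1, months):
    x[t] = 0.897 * x[t - 1] + z[t] / 3.0

print(f"index range: {x.min():+.1f} (driest) to {x.max():+.1f} (wettest)")
```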

I always search for the assessment of accuracy first, and as usual the skill of the models gets very little, non-quantitative coverage. Climate scientists are loath to judge the models, preferring to cloak their results in paragraphs of uncertainty and to present the “dire predictions” of GCMs in garish figures (his Figure 11).

They need to start acting like scientists and stop these misleading practises until it is shown, by rigorous empirical testing and for fundamental reasons, that the current GCMs are fit for the purpose of drought modelling.

Just to show I am not always negative, this recent report has a lot to recommend it. The authors of “Climate variability and change in south-eastern Australia” do quite a good job of describing the climatological features affecting the area, and of putting technical issues, climate, hydrology and social impact together in an informative report.

While they say:

The current rainfall decline is apparently linked (at least in part) to climate change, raising the possibility that the current dry conditions may persist, and even possibly intensify (as has been the case in south-west Western Australia).

They also admit they don’t know how to combine the output of multiple models:

Some research (Smith & Chandler, 2009) suggests that uncertainties in climate projections can be reduced by careful selection of the global climate models, with less weight being given to models that do not simulate current climate adequately. Other work suggests that explicit model selection may not be necessary (Watterson, 2008; Chiew et al., 2009c). Further research is being done to determine how to combine the output of global climate models to develop more accurate region-scale projections of climate change.

I would fault that there is no suggestion that anything other than GCMs might be used, and no evidence that the GCMs perform better than a mean value. If a model does no better than the long-term average, there is good reason to suppose it has no skill, and to throw it out. This is called ‘benchmarking’, but rejecting any of the IPCC GCMs is apparently an alien concept.
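As a sketch of what such a benchmark could look like (toy data, hypothetical models): compute a skill score against the long-term mean and reject any model that fails to beat it.

```python
import numpy as np

def skill_vs_climatology(obs, sim):
    """Skill score relative to the long-term-mean benchmark:
    SS = 1 - MSE(sim) / MSE(mean).  SS <= 0 means the model does
    no better than simply predicting the observed average."""
    mse_sim = np.mean((sim - obs) ** 2)
    mse_clim = np.mean((obs.mean() - obs) ** 2)
    return 1.0 - mse_sim / mse_clim

rng = np.random.default_rng(1)
obs = 100.0 + rng.normal(0.0, 1.0, 50).cumsum()   # toy observed series
model_a = obs + rng.normal(0.0, 0.5, 50)          # tracks the observations
model_b = rng.permutation(obs)                    # right statistics, no skill

for name, sim in (("model A", model_a), ("model B", model_b)):
    ss = skill_vs_climatology(obs, sim)
    verdict = "retain" if ss > 0 else "reject: no better than the mean"
    print(f"{name}: skill = {ss:+.2f} -> {verdict}")
```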

Show us your tests – Australian climate projections

My critique of models used in a major Australian drought study appeared in Energy and Environment last month (read Critique-of-DECR-EE here). It deals with validation of models (the subject of a recent post by Judith Curry), and regional model disagreement with rainfall observations (see post by Willis here).

The main purpose is summed up in the last sentence of the abstract:

The main conclusion and purpose of the paper is to provide a case study showing the need for more rigorous and explicit validation of climate models if they are to advise government policy.

It is well known that, despite persistent attempts and claims in the press, general circulation models are virtually worthless at projecting changes in regional rainfall; the IPCC says so, and the Australian Academy of Science agrees. The most basic statistical tests in the paper demonstrate this: the simulated drought trends are statistically inconsistent with the trend of the observations, a simple mean value shows more skill than any of the models, and drought frequency has dropped below the 95% CL of the simulations (see Figure).
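The flavor of the consistency check is easy to reproduce (synthetic series here, not the DECR data): fit a trend to each simulation run, and ask whether the observed trend falls inside the simulated 95% range.

```python
import numpy as np

def trend(y):
    """OLS slope of y against time (units per step)."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

rng = np.random.default_rng(2)
n_years, n_runs = 30, 20
obs = rng.normal(0, 1, n_years).cumsum() - 0.08 * np.arange(n_years)
sims = (rng.normal(0, 1, (n_runs, n_years)).cumsum(axis=1)
        + 0.10 * np.arange(n_years))          # simulations trend upward

obs_trend = trend(obs)
sim_trends = np.array([trend(run) for run in sims])
lo, hi = np.percentile(sim_trends, [2.5, 97.5])

print(f"observed trend {obs_trend:+.3f}; simulated 95% range [{lo:+.3f}, {hi:+.3f}]")
if not lo <= obs_trend <= hi:
    print("observed trend statistically inconsistent with the simulations")
```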

Rainfall has increased in tropical and subtropical areas of Australia since the 1970s, while some areas of the country, particularly the major population centres to the south-east and south-west, have experienced multi-year rainfall deficits. Overall, Australian rainfall is increasing.

The larger issue is how to acknowledge that there will always be worthless models, and that it is the task of genuinely committed modellers to identify and eliminate them. It is not convincing to argue that validation is too hard for climate models, that they are justified by physical realism, or to rely on the calibrated-eyeball approach. The study shows that the obvious testing regimes, had they been performed, would have eliminated these drought models from contention.

While scientists are mainly interested in the relative skill of models, where statistical measures such as root mean square (RMS) error are appropriate, decision-makers are (or should be) concerned with whether the models should be used at all, that is, whether they are fit for use. Because of this, model testing regimes for decision-makers must have the potential to completely reject some or all models if they do not rise above a predetermined standard, or benchmark.

There are a number of ways that benchmarking can be set up, familiar to engineers and others in critical disciplines, usually involving a degree of independent inspection, documentation of expected standards, and so on. My study makes the case that climate science needs to start adopting more rigorous validation practises. Until it does, regional climate projections should not be taken seriously by decision-makers.

It is up to the customers of these studies not to rely on the say-so of the IPCC, the CSIRO and the BoM, but to ask “Show me your tests”, as would be expected of any economic, medical or engineering study where the costs of making the wrong decision are high. Their duty of care requires them to be confident that all reasonable means have been taken to validate all of the models that support the key conclusions.

Projected future runoff of the Breede River under climate change

More evidence of worthless model predictions from CO2 Science:

All of the future flow-rates calculated by Steynor et al. exhibited double-digit negative percentage changes that averaged -25% for one global climate model and -50% for another global climate model; and in like manner the mean past trend of four of Lloyd’s five stations was also negative (-13%). But the other station had a positive trend (+14.6%). In addition, by “examination of river flows over the past 43 years in the Breede River basin,” Lloyd was able to demonstrate that “changes in land use, creation of impoundments, and increasing abstraction have primarily been responsible for changes in the observed flows” of all of the negative-trend stations.

Interestingly, Steynor et al. had presumed that warming would lead to decreased flow rates, as their projections suggested; and they thus assumed their projections were correct. However, Lloyd was able to demonstrate that those results were driven primarily by unaccounted for land use changes in the five catchments, and that in his newer study the one site that had “a pristine watershed” was the one that had the “14% increase in flow over the study period,” which was “contrary to the climate change predictions” and indicative of the fact that “climate change models cannot yet account for local climate change effects.” As a result, he concluded that “predictions of possible adverse local impacts from global climate change should therefore be treated with the greatest caution,” and that, “above all, they must not form the basis for any policy decisions until such time as they can reproduce known climatic effects satisfactorily.”

How Bad are the Models – UHI

Urban areas differ from rural areas in a number of well-known ways, but the IPCC summaries maintain that these effects have been effectively removed when they discuss the recent (post-1960) increases in global surface temperature.

Continuing the series on how bad climate models really are: another paper is in the pipeline on the long-standing influence of urban heat island (UHI) effects in the surface temperature data. Ross McKitrick reports that between one-third and one-half of the recent increase in temperature is due to this contamination (Ross’s website here).

The methodology uses the regression coefficients from the socioeconomic variables to estimate the trend distribution after removing the estimated non-climatic biases in the temperature data. On observational data this reduces the mean warming trend by between one-third and one-half, but it does not affect the mean surface trend in the model-generated data. Again this is consistent with the view that the observations contain a spatial contamination pattern not present in, or predicted by, the climate models.

Note that this rather gross bias is not present in or predicted by the climate models, meaning the climate models do not have the physical mechanisms to model it. One consequence is that if the models are to fit the recent increase in temperature, some other (incorrect) mechanism must be used (such as H2O feedback perhaps – I don’t know).
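A minimal sketch of the idea (synthetic data; the covariate names and coefficients are hypothetical, not McKitrick's): regress the gridded trends on socioeconomic variables, then subtract the fitted non-climatic component.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200  # grid cells

# Hypothetical socioeconomic covariates (say, population and GDP growth),
# positive on average where measurement contamination is plausible.
socio = rng.normal(1.0, 1.0, (n, 2))
true_trend = rng.normal(0.15, 0.05, n)          # deg C/decade climatic signal
contamination = socio @ np.array([0.06, 0.04])  # non-climatic bias (assumed)
observed = true_trend + contamination

# Regress observed trends on the covariates, then remove the estimated
# non-climatic component (the spirit of McKitrick's adjustment).
design = np.column_stack([np.ones(n), socio])
beta, *_ = np.linalg.lstsq(design, observed, rcond=None)
adjusted = observed - socio @ beta[1:]

print(f"mean trend before adjustment: {observed.mean():.3f} C/decade")
print(f"mean trend after adjustment:  {adjusted.mean():.3f} C/decade")
```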

Ross has written up the backstory of the all-too-common obstacles to publication of articles questioning the IPCC here:

In the aftermath of Climategate a lot of scientists working on global warming-related topics are upset that their field has apparently lost credibility with the public. The public seems to believe that climatology is beset with cliquish gatekeeping, wagon-circling, biased peer-review, faulty data and statistical incompetence. In response to these perceptions, some scientists are casting around, in op-eds and weblogs, for ideas on how to hit back at their critics. I would like to suggest that the climate science community consider instead whether the public might actually have a point.

How Bad are Climate Models? Temperature

Due to building the website for The Climate Sceptics I haven’t been able to post, despite some important events. My site and other files were deleted in some kind of attack, so I have had to rebuild it as well. I now have the WordPress 3.0 multiuser system, which enables easy creation and management of multiple blogs, so it’s an ill wind, eh?

The important event I refer to is the release of “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Series” by Ross McKitrick, Stephen McIntyre and Chad Herman (2010). Nobody is talking about it, and I don’t know why, as it has a history almost as long as the hockey stick on McIntyre’s blog (summary here), and it is a powerful condemnation of climate models in the peer-reviewed literature.

I feel a series coming on, as these results deliver a stunning blow to the last leg that alarmists have been standing on, i.e. model credibility. Also because I have a paper coming out in a similar vein, dealing with drought models in regional Australia.

They tested for a mismatch between modelled and observed trends in the tropical troposphere, using a rigorous methodology on 57 runs from 23 model simulations of the lower troposphere (LT) and mid-troposphere (MT), with forcing inputs from the realistic A1B emission scenario, and on four observational temperature series: two satellite-borne microwave sounding unit (MSU)-derived series and two balloon-borne radiosonde series, over two time periods, 1979-99 and 1999-2009. This represents a basic validation test of climate models over a 30-year period, a test which SHOULD be fundamental to any belief in the models and their usefulness for projections of global warming in the future.

The results are shown in their figure:

… the differences between models and observations now exceed the 99% critical value. As shown in Table 1 and Section 3.3, the model trends are about twice as large as observations in the LT layer, and about four times as large in the MT layer.
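A simplified version of such a test (synthetic monthly series; the paper itself uses a more sophisticated panel estimator with autocorrelation-robust covariances): fit trends, adjust the standard errors for lag-1 autocorrelation, and compare the model and observed slopes.

```python
import numpy as np

def trend_and_se(y):
    """OLS trend with a crude AR(1) effective-sample-size correction
    (a common shortcut; not the paper's panel estimator)."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]    # lag-1 autocorrelation
    n_eff = max(n * (1 - r1) / (1 + r1), 3.0)        # effective sample size
    se = resid.std(ddof=2) / (t.std() * np.sqrt(n_eff))
    return slope, se

rng = np.random.default_rng(4)
months = 360  # roughly 1979-2009
obs = 0.0008 * np.arange(months) + rng.normal(0, 0.15, months)   # ~0.10 C/decade
mod = 0.0017 * np.arange(months) + rng.normal(0, 0.15, months)   # ~0.20 C/decade

b_obs, se_obs = trend_and_se(obs)
b_mod, se_mod = trend_and_se(mod)
z = (b_mod - b_obs) / np.hypot(se_obs, se_mod)
print(f"model trend / observed trend = {b_mod / b_obs:.1f}")
print(f"difference z-score = {z:.1f} (|z| > 2.58 exceeds the 99% level)")
```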
