The timing of the 24-h observing window can affect the determination of daily extrema by mischaracterizing the diurnal minimum, and can thereby introduce errors into the daily mean temperature.
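A minimal sketch of the effect, using synthetic hourly data (the diurnal cycle, noise level and window choices are my own illustration, not station data): shifting the hour at which the 24-h window ends changes which readings are grouped into a "day", and hence the (min+max)/2 daily mean, because one cold morning can set the minimum of two adjacent windows.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 10)  # ten days of hourly readings
# Diurnal cycle peaking mid-afternoon, with weather noise on top
temps = 15 + 8 * np.sin(2 * np.pi * (hours - 9) / 24) + rng.normal(0, 0.5, hours.size)

def daily_means(temps, window_end_hour):
    """Daily (min+max)/2 for consecutive 24-h windows ending at window_end_hour."""
    means = []
    for day in range(1, 9):  # skip partial windows at the series edges
        start = day * 24 + window_end_hour - 24
        window = temps[start:start + 24]
        means.append((window.min() + window.max()) / 2)
    return np.array(means)

# A window ending at midnight vs one ending at 7 am groups the cold
# early-morning hours differently, shifting the derived daily means.
print(daily_means(temps, 0).mean(), daily_means(temps, 7).mean())
```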
The complete UAH v6.0 data for January have been released. Ken presents graphs for the various regions, along with summaries for easier comparison. Ken also includes graphs for the North and South Temperate regions (20-60 North and South), estimated from the Polar and Extra-Tropical data.
The Pause has ended globally…
In The National Science Foundation Funds Multi-Decadal Climate Predictions Without An Ability To Verify Their Skill Roger Pielke Sr. links GCM skill at predicting drought with natural variation:
2. “Future efforts to predict drought will depend on models’ ability to predict tropical SSTs.”
In other words, there is NO way to assess the skill of these models at predicting drought, as they have not yet shown any skill in SST predictions on time scales longer than a season, nor in natural climate cycles such as El Niño [or the PDO, the NAO, etc.].
This seems a convoluted turn of phrase. There are ways to assess the skill of these models — by comparing them with past drought frequency and severity. Such assessments show the models have NO skill at predicting droughts.
The assumption is that IF they were able to predict cycles like the PDO, then they would be able to predict droughts. But even if we average over these cycles, there is still the little problem of overall trends in extreme phenomena, which accuracy at predicting the PDO and its kin would not necessarily capture.
His argument that drought skill hinges on PDO prediction is useful, however, as a basis for excluding applications of models to climate phenomena that rely on such cycles.
Roger is perhaps being polite about misleading policymakers when he continues:
Funding of multi-decadal regional climate predictions by the National Science Foundation which cannot be verified in terms of accuracy is not only a poor use of tax payer funds, but is misleading policymakers and others on the actual skill that exists in predicting changes in the frequency of drought in the future.
The review by Dai favours the PDSI drought index:
The PDSI was created by Palmer22 with the intent to measure the cumulative departure in surface water balance. It incorporates antecedent and current moisture supply (precipitation) and demand (PE) into a hydrological accounting system. Although the PDSI is a standardized measure, ranging from about −10 (dry) to +10 (wet)…
I always search for the assessment of accuracy first, and as usual the skill of the models gets only brief, non-quantitative coverage. Climate scientists are loath to judge the models, preferring to cloak their results in paragraphs of uncertainty, and to present the “dire predictions” of GCMs in garish figures (his Figure 11).
They need to start acting like scientists and stop these misleading practices until it is shown, by rigorous empirical testing and for fundamental reasons, that the current GCMs are fit for the purpose of drought modelling.
Just to show I am not always negative, this recent report has a lot to recommend in it. The authors of “Climate variability and change in south-eastern Australia” do quite a good job of describing the climatological features impacting the area, and putting technical issues, climate, hydrology and social impact together in an informative report.
While they say:
The current rainfall decline is apparently linked (at least in part) to climate change, raising the possibility that the current dry conditions may persist, and even possibly intensify (as has been the case in south-west Western Australia).
They also admit they don’t know how to combine the output of multiple models:
Some research (Smith & Chandler, 2009) suggests that uncertainties in climate projections can be reduced by careful selection of the global climate models, with less weight being given to models that do not simulate current climate adequately. Other work suggests that explicit model selection may not be necessary (Watterson, 2008; Chiew et al., 2009c). Further research is being done to determine how to combine the output of global climate models to develop more accurate region-scale projections of climate change.
I would fault the report for not suggesting that anything other than GCMs might be used, and for offering no evidence that the GCMs perform better than a mean value. If a model does no better than the long-term average, there is good reason to suppose it has no skill, and to throw it out. This is called ‘benchmarking’, but rejecting any of the IPCC’s GCMs is apparently an alien concept.
My critique of models used in a major Australian drought study appeared in Energy and Environment last month (read Critique-of-DECR-EE here). It deals with validation of models (the subject of a recent post by Judith Curry), and regional model disagreement with rainfall observations (see post by Willis here).
The main purpose is summed up in the last sentence of the abstract:
The main conclusion and purpose of the paper is to provide a case study showing the need for more rigorous and explicit validation of climate models if they are to advise government policy.
It is well known that, despite persistent attempts and claims in the press, general circulation models are virtually worthless at projecting changes in regional rainfall; the IPCC says so, and the Australian Academy of Science agrees. The most basic statistical tests in the paper demonstrate this: the simulated drought trends are statistically inconsistent with the trend of the observations, a simple mean value shows more skill than any of the models, and drought frequency has dropped below the 95% CL of the simulations (see Figure).
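The mean-value benchmark can be sketched in a few lines. The numbers below are made up for illustration, not the paper's data: a model whose skill score against the climatological mean is zero or below does no better than the long-term average, and a benchmarking regime would throw it out.

```python
import numpy as np

# Made-up observed and modelled series for illustration only.
obs = np.array([5.2, 4.8, 6.1, 5.5, 4.9, 5.8])    # observed drought index
model = np.array([7.0, 7.4, 8.1, 8.0, 8.6, 9.1])  # a model run trending the wrong way
climatology = np.full_like(obs, obs.mean())       # the long-term mean benchmark

def skill_score(pred, obs, ref):
    """1 - MSE(pred)/MSE(ref): positive means pred beats the reference."""
    mse = lambda p: np.mean((p - obs) ** 2)
    return 1.0 - mse(pred) / mse(ref)

ss = skill_score(model, obs, climatology)
print(f"skill vs mean = {ss:.2f}; reject model: {ss <= 0}")
```

A perfect prediction scores 1; the climatology itself scores 0; anything negative is worse than the "monkey mean".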
Rainfall has increased in tropical and subtropical areas of Australia since the 1970s, while some areas of the country, particularly the major population centres in the south-east and south-west, have experienced multi-year rainfall deficits. Overall, Australian rainfall is increasing.
The larger issue is how to acknowledge that there will always be worthless models, and the task of genuinely committed modellers to identify and eliminate these. It’s not convincing to argue that validation is too hard for climate models, or they are justified by physical realism, or use the calibrated eyeball approach. The study shows that the obvious testing regimes would have eliminated these drought models from contention — if performed.
While scientists are mainly interested in the relative skill of models, where statistical measures such as root mean square (RMS) error are appropriate, decision-makers are (or should be) concerned with whether the models should be used at all (whether they are fit-for-use). Because of this, model testing regimes for decision-makers must have the potential to completely reject some or all models if they do not rise above a predetermined standard, or benchmark.
There are a number of ways that benchmarking can be set up, which engineers and others in critical disciplines would be familiar with, usually involving a degree of independent inspection, documentation of expected standards, and so on. My study makes the case that climate science needs to start adopting more rigorous validation practices. Until it does, regional climate projections should not be taken seriously by decision-makers.
It is up to the customers of these studies not to rely on the say-so of the IPCC, the CSIRO and the BoM, but to ask “Show me your tests”, as would be expected of any economic, medical or engineering study where the costs of making the wrong decision are high. Their duty of care requires that they be confident that all reasonable means have been taken to validate all of the models that support the key conclusions.
More evidence of worthless model predictions from CO2 Science:
All of the future flow-rates calculated by Steynor et al. exhibited double-digit negative percentage changes that averaged -25% for one global climate model and -50% for another global climate model; and in like manner the mean past trend of four of Lloyd’s five stations was also negative (-13%). But the other station had a positive trend (+14.6%). In addition, by “examination of river flows over the past 43 years in the Breede River basin,” Lloyd was able to demonstrate that “changes in land use, creation of impoundments, and increasing abstraction have primarily been responsible for changes in the observed flows” of all of the negative-trend stations.
Interestingly, Steynor et al. had presumed that warming would lead to decreased flow rates, as their projections suggested; and they thus assumed their projections were correct. However, Lloyd was able to demonstrate that those results were driven primarily by unaccounted for land use changes in the five catchments, and that in his newer study the one site that had “a pristine watershed” was the one that had the “14% increase in flow over the study period,” which was “contrary to the climate change predictions” and indicative of the fact that “climate change models cannot yet account for local climate change effects.” As a result, he concluded that “predictions of possible adverse local impacts from global climate change should therefore be treated with the greatest caution,” and that, “above all, they must not form the basis for any policy decisions until such time as they can reproduce known climatic effects satisfactorily.”
Urban areas differ from rural areas in a number of well known ways, but the IPCC summaries maintain that these effects have been effectively removed when they talk about the recent (post 1960) increases in global surface temperature.
Continuing the series on how bad climate models really are, another paper is in the pipeline on the long-standing influence of urban heat island (UHI) effects in the surface temperature data. Ross McKitrick reports that between one-third and one-half of the recent increase in temperature is due to this contamination (Ross’s website here).
The methodology uses the regression coefficients from the socioeconomic variables to estimate the trend distribution after removing the estimated non-climatic biases in the temperature data. On observational data this reduces the mean warming trend by between one-third and one-half, but it does not affect the mean surface trend in the model-generated data. Again this is
consistent with the view that the observations contain a spatial contamination pattern not present in, or predicted by, the climate models.
Note that this rather gross bias is not present in or predicted by the climate models, meaning the climate models do not have the physical mechanisms to model it. One consequence is that if the models are to fit the recent increase in temperature, some other (incorrect) mechanism must be used (such as H2O feedback perhaps – I don’t know).
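As I read the quoted methodology, the adjustment works roughly like this (a sketch with synthetic data; the covariates, coefficients and magnitudes are mine, not McKitrick's): regress the gridcell temperature trends on socioeconomic covariates, subtract the fitted non-climatic component, and compare the mean trend before and after.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Synthetic socioeconomic covariates with nonzero means (think population
# growth, GDP growth, coal use) and a hypothetical contamination effect.
socio = rng.normal(loc=1.0, scale=1.0, size=(n, 3))
true_trend = 0.15                                     # deg C/decade climatic signal
contamination = socio @ np.array([0.05, 0.03, 0.02])  # non-climatic bias
trends = true_trend + contamination + rng.normal(0, 0.05, n)

# OLS regression of trends on the covariates (with intercept)
X = np.column_stack([np.ones(n), socio])
beta, *_ = np.linalg.lstsq(X, trends, rcond=None)

# Subtract the estimated non-climatic component from each gridcell trend
adjusted = trends - socio @ beta[1:]
print(f"raw mean trend {trends.mean():.3f}, adjusted {adjusted.mean():.3f}")
```

With these invented numbers the adjusted mean falls back toward the underlying climatic trend; in McKitrick's observational data the reduction was one-third to one-half, while model-generated data showed no such reduction.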
Ross has written up the backstory of the all too common obstacles to publication of articles questioning the IPCC here:
In the aftermath of Climategate a lot of scientists working on global warming-related topics are upset that their field has apparently lost credibility with the public. The public seems to believe that climatology is beset with cliquish gatekeeping, wagon-circling, biased peer-review, faulty data and statistical incompetence. In response to these perceptions, some scientists are casting around, in op-eds and weblogs, for ideas on how to hit back at their critics. I would like to suggest that the climate science community consider instead whether the public might actually have a point.
Due to building the website for The Climate Sceptics I haven’t been able to post, despite some important events. My site and other files were deleted in some kind of attack, so I have had to rebuild it as well. I now have the WordPress 3.0 multiuser system, which enables easy creation and management of multiple blogs, so it’s an ill wind, eh?
The important event I refer to is the release of “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Series” by Ross McKitrick, Stephen McIntyre and Chad Herman (2010). Nobody is talking about it, and I don’t know why, as it has a history almost as long as the hockey stick on McIntyre’s blog (summary here), and is a powerful condemnation of climate models in the peer-reviewed literature.
I feel a series coming on, as these results deliver a stunning blow to the last leg that alarmists have been standing on, i.e. model credulity. Also because I have a paper coming out in a similar vein, dealing with drought models in regional Australia.
Using a rigorous methodology on 57 runs from 23 model simulations of the lower troposphere (LT) and mid-troposphere (MT), with forcing inputs from the realistic A1B emission scenario, and four observational temperature series (two satellite-borne microwave sounding unit (MSU)-derived series and two balloon-borne radiosonde series), over the periods 1979-99 and 1979-2009, they tested for a mismatch between modelled and observed trends in the tropical troposphere. This is a basic validation test of climate models over a 30-year period, a validation test which SHOULD be fundamental to any belief in the models and their usefulness for projections of global warming in the future.
The results are shown in their figure:
… the differences between models and observations now exceed the 99% critical value. As shown in Table 1 and Section 3.3, the model trends are about twice as large as observations in the LT layer, and about four times as large in the MT layer.
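The flavour of such a trend-equivalence test can be sketched as follows. This is a toy version with invented series and a crude AR(1) correction, not the paper's multivariate panel method: fit OLS trends to a model series and an observed series, inflate the trend variance for autocorrelation, and test whether the trends differ.

```python
import numpy as np

def trend_and_se(y):
    """OLS trend with a crude AR(1) inflation of the standard error."""
    t = np.arange(y.size)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    neff = y.size * (1 - r1) / (1 + r1)             # effective sample size
    se = resid.std(ddof=2) / np.sqrt(((t - t.mean()) ** 2).sum())
    return slope, se * np.sqrt(y.size / max(neff, 2.0))

rng = np.random.default_rng(2)
t = np.arange(360)  # 30 years of monthly anomalies, invented for illustration
obs = 0.010 / 12 * t + rng.normal(0, 0.15, t.size)  # ~0.10 C/decade observed
mod = 0.025 / 12 * t + rng.normal(0, 0.15, t.size)  # ~0.25 C/decade modelled

b_obs, se_obs = trend_and_se(obs)
b_mod, se_mod = trend_and_se(mod)
z = (b_mod - b_obs) / np.hypot(se_obs, se_mod)
print(f"trend ratio {b_mod / b_obs:.1f}, z = {z:.1f}")  # large z: trends inconsistent
```

A model trend twice the observed trend over 30 years of monthly data gives a z-statistic far beyond any conventional critical value, which is the shape of the result in their figure.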
- The Age reports that Climategate was a game changer. Judith Curry said Dr Jones had shown himself to be “genuinely repentant, and has been completely open and honest about what has been done and why … speaking with humility about the uncertainty in the data sets”.
So far it’s a case of the academic defense: “Oops, I lied.” Sir Muir Russell, the chairman of the Judicial Appointments Board for Scotland, notes that senior climate scientists say their world has been dramatically changed by the affair. We welcome senior climate scientists to the real world of professional transparency. Steve McIntyre has received overwhelming financial support from his readers for his trip to the Guardian’s debate in England.
- Senator Wong reminded a conference on the Gold Coast that scientists were responsible for this unpopular policy bind: “Remember why this debate started, why we all started talking about climate change and why people called for action?
“It is because of you that we understand that climate change is real and it is because of you that we understand that climate change is happening now … and that it is caused by carbon dioxide emissions.”
But she also challenged scientists to get their act together:
… the science behind the political debate cannot be over-estimated. Unfortunately in the recent past, science has not been able to speak with one voice on climate change, making it impossible for politicians to enact practical measures to address the phenomenon.
Reading between the lines, could it be that her political windsock no longer points towards the agenda of tenured liberal progressive moonbats and she is butching-up to the union bosses that put Ms Squiggle in command? Hmm…
- Lubos reviews a sloppy article by Rasmus Benestad on climate feedbacks. He explains the system as I see it, with many short run positive feedbacks in the atmosphere (and oceans) but stronger negative feedbacks in the long run, producing a “half-pipe” response profile.
- CSIRO is making science more accessible to decision-makers by “trialling different ways of presenting climate information”. And as if they couldn’t be more non-committal, they are presenting the regional forecasts of models that “are complex, and constantly being refined” in a slick interface. If, as my upcoming publication shows, the model forecasts are worthless, then you have to wonder — What is the point?
The rainfall simulations in the models run completely opposite to reality over the last 100 years. To make this clear to climate scientists: when rainfall decreases, the models increase; when rainfall increases, the models decrease. The best way decision-makers could use CSIRO model forecasts is as contrary indicators, i.e. buy when they say “sell”, and sell when they say “buy”.
Lubos makes me laugh:
Well, let me make it clear that there’s nothing controversial about negative feedbacks. In this battle between negative feedbacks and Rasmus Benestad, it is the latter who is an utterly controversial crackpot. The existence of crackpots may make basic concepts of science controversial among crackpots – and the remaining readers of Real Climate, if there are any – but it can’t make it controversial in the real science.
Having just returned from my leg of the tour, I have been offline for a while, but expect to catch up this week. Here is my PowerPoint presentation “Tweeter and the Monkey M(e)an — Negating Climate Change Policy” (4.3MB).
The title comes from a song by the Traveling Wilburys. The message is that without proper validation, climate models are no more credible than Tweets, and from my (and others’) validation testing, the model forecasts are not fit-for-forecasting, showing no more accuracy than the “Monkey Mean” — the average temperature and rainfall. I critique CSIRO and BoM reports and conclude with an example of how to make rational business decisions under climate forecast uncertainty.
Virial Paper 6_12_2010 submitted by Adolf J. Giger.
Allow me to make some more comments on the Virial Theorem (VT) as used by Ferenc Miskolczi (FM) for the atmosphere.
As I said on this blog back in February, a very fundamental derivation of the VT was made by H. Goldstein in Section 3-4 of “Classical Mechanics” (1980): PE = 2*KE (potential energy = 2 × kinetic energy). He then also derives the Ideal Gas Law (IGL), P*V = N*k*T, as a consequence of the VT, and shows that PE = 3*P*V and KE = (3/2)*N*k*T. The two laws, IGL and VT, are therefore two ways to describe the same physical phenomenon. Despite its seemingly restrictive name, we know that the IGL is a good approximation for many gases (monatomic, diatomic, polyatomic and even water vapor) as long as they remain very dilute. Goldstein’s derivations are made for an enclosure of volume V with constant gas pressure P and temperature T in a central force field like the Earth’s gravitational field. They also hold for an open volume V anywhere in the atmosphere. As for FM, he points out that the VT reflects the fact that the atmosphere is gravitationally bounded.
Ferenc Miskolczi in his papers [2,3] relates the total potential energy of the atmosphere, PEtot, to the total upward IR radiation Su at the surface. This relationship has to be considered a proportionality rather than an exact equality: Su = const * PEtot. This linkage makes sense, since Su determines the surface temperature Ts through the Stefan-Boltzmann law, Su = (5.6703/10^8)*Ts^4, and the IGL in turn ties together Ts, P(z=0) and PEtot.
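A quick check of the Stefan-Boltzmann numbers quoted above (Ts = 288 K is my assumed typical global mean surface temperature, not a figure from FM's papers):

```python
sigma = 5.6703e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
Ts = 288.0          # assumed global mean surface temperature, K
Su = sigma * Ts ** 4
print(f"Su = {Su:.1f} W/m^2")  # about 390 W/m^2, the usual surface IR figure
```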
FM then assigns the kinetic IR energy KE (temperature) in the atmosphere to the upward atmospheric IR emittance Eu, or Eu = const*KE. The flux Eu is made up of two terms, F + K, where F is due to thermalized absorption of short-wave solar radiation in atmospheric water vapor, and K is due to heat transfers from the Earth’s surface to air masses and clouds through evaporation and convection. Neither F nor K is directly radiated from the Earth’s surface; they represent radiation from the atmosphere itself. There is an obvious limitation to such an assignment, mainly because for the VT, or the IGL in general, the temperature (the KE) has to be measured with a thermometer, whereas Eu represents the radiative temperature (flux) that has to be measured with a radiometer, and these two measurements can give vastly different results, as we see for the two following extreme cases:
In between these two extremes we have the Earth, where FM’s version of the VT, Su = 2*Eu, applies reasonably well. We will see next, in a discussion of FM’s exact solution, how closely and for what types of atmospheres FM’s VT (Eu/Su = 0.5) holds, but we can say already that no physical principle is violated if it doesn’t. The VT that always holds for gases is not being violated; it is simply not fully captured by FM’s fluxes, which have to be measured by radiometers. This may be an indication that the VT is less important for FM’s theory than normally assumed.
On the other hand, the IPCC assumes a positive water vapor feedback and arrives at very imprecise predictions for the climate sensitivity, ranging from 1.5 to 5 K (and even more). It is clear that this wide range of numbers is caused by the assumed positive feedback system, which apparently is close to instability (or singing, as an electrical engineer would call it in an unstable microphone-loudspeaker system). With such large uncertainties in their outputs, true scientists should be reluctant to publish their results.
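The point about feedback and instability can be made with the standard feedback-gain relation dT = dT0/(1-f). The no-feedback response dT0 of about 1.2 K per CO2 doubling is the commonly cited figure; the feedback fractions f below are my own illustrative values:

```python
dT0 = 1.2  # K per CO2 doubling, no-feedback response (commonly cited value)
for f in (0.2, 0.5, 0.7, 0.8):
    dT = dT0 / (1.0 - f)
    print(f"feedback fraction f = {f:.1f} -> sensitivity dT = {dT:.1f} K")
# Near f = 1 the denominator vanishes: the system approaches instability,
# so modest uncertainty in f produces a very wide sensitivity range.
```

A feedback fraction anywhere between 0.2 and 0.8 spans sensitivities from 1.5 K to 6 K, which is how a modest uncertainty in the assumed feedback becomes the wide range quoted above.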