Douglass et al 2007 and Atmospheric Models

Douglass et al. 2007 may represent a history-of-science-in-the-making showdown between two theories: the infinitely-thick theory of atmospheres used in GCMs, and the semi-transparent atmospheric model proposed by Miskolczi.

It misses the point to focus on the details of the statistical test of the GCMs; the point is that the observations discriminate between theories. Understanding why the basic theory might be wrong takes, for me, a lot of work, as it's not my field. But it's much more interesting and profound than arguing about parameter uncertainty. Compare the Michelson–Morley experiment, one of the most important and famous experiments in the history of physics: it resulted in the trashing of an old theory, not just an adjustment to one of its parameters.

Here, the change in lapse rate of the tropical troposphere is the experiment. GCMs predict faster warming in the troposphere than at the surface, due to increased concentrations of greenhouse gases aloft. Faster mid-tropospheric warming is believed to be a NECESSARY condition for warming to be due to greenhouse gases. But observations show a slower rate of warming aloft than at the surface. Therefore, the warming is not due to greenhouse gases: QED.

Douglass et al. 2007 only goes so far as to conclude that projections of future climate based on these models should be viewed with caution (due to their lack of correspondence with reality).

However, as Boris states insightfully on CA and lucia liljegren's blog:

If the Douglass analysis is correct and the tropical troposphere is not warming faster than the tropical surface, then it’s not just GCMs that are wrong, but also theory (from RC).

I think the theory that recent warming occurs via warming of the troposphere by GHGs is at fault. A new theory of the greenhouse effect, the semi-transparent theory proposed by Miskolczi, predicts that very little warming of the troposphere is possible, due to atmospheric compensations (mainly a reduction in humidity) that hold the optical depth constant. It emerges from the solution of energy-conservation relationships that a constant, optimal greenhouse effect maximizes the efficiency of transfer of shortwave into longwave energy.


Figure from Douglass et al. 2007 “A comparison of tropical temperature trends with model predictions” annotated to compare forcing of the GHG theory (red arrow) and stratospheric compensation theory (blue arrows).

The effects of the theories are shown on the annotated figure of Douglass et al. 2007. This figure shows the decadal rates of change in temperature for the average of the models, compared to the observations, at various heights in the atmosphere. The two show a very different pattern, with models predicting much higher rates of increase in the lapse rate of the tropical troposphere than observed.

The difference between the theories can be seen from the annotations. The rate of change in temperature is none other than heating, measured in W/m2, or 'forcing'. Marked on the figure are the different points of maximum forcing for the two curves.

In the models, the main forcing is at 10 km, with the lapse-rate temperatures pushed up over the whole troposphere. This demonstrates a theory of warming due to 'blocking' of radiation in the troposphere, as embodied in the infinitely-thick model of planetary atmospheres.

In the observations, the main forcing is at the surface and at the stratosphere, while the troposphere is almost constant. Temperature changes occur in a 'seesaw' effect, with warming at the surface the inverse of the cooling in the stratosphere. This relationship is one of the main findings of the Miskolczi semi-transparent theory of planetary atmospheres, linking surface to outgoing radiation in an apparently novel effect called stratospheric compensation.

I have a note appearing in the AIG newsletter on stratospheric compensation, not the greenhouse effect, as the possible mechanism for global warming. If correct, the recent warming must be due to 1) warming of the surface due to albedo or emissivity changes, or 2) cooling of the stratosphere due to depletion of ozone, or 3) a combination of both.

Thus the most reasonable interpretation of these observations reported in Douglass et al. 2007 is that the theory behind GCMs based in the infinitely-thick atmosphere is falsified, and stratospheric compensation based in a semi-transparent atmosphere theory is confirmed.


Free CO2 For All

What are the implications and limitations of the semi-transparent atmosphere theory of Ferenc Miskolczi — a theoretical model for the greenhouse effect in the atmosphere?

Unlike current models, which suggest a 1.5C to 5C increase in global temperatures from a doubling of CO2 in the atmosphere, the semi-transparent theory suggests very little warming from increases in greenhouse gases, around 0.24C for CO2 doubling. This is because the Earth's atmosphere adjusts water vapor levels and cloud albedo to compensate for the CO2, in order to maintain an optimal level of greenhouse effect.


If Ferenc Miskolczi’s theory of a semi-transparent atmosphere is correct, CO2 emissions would be free from regulation.

If Ferenc’s theory is correct, there would be no concerns with ‘runaway warming’. More importantly, nations could continue their economic development with fossil fuels, producing as much CO2 as they like and increasing the levels in the atmosphere, without any fears of climate catastrophe.

This theory could be a counterpoint to AGW in the way particle theory counterpoints string theory. Unfortunately, the semi-transparent theory does not even appear on the radar of pro-AGW climate science blogs.

In a previous post, I wondered whether the approach Ferenc was taking was sound. The second question in my mind was how a theory with a constant greenhouse effect might produce changes in surface temperatures. The result of my ruminations on that will be published in the Australian Institute of Geoscientists (AIG) Newsletter shortly. Previously, they graciously published an article showing the circular reasoning in climate hockey-stick reconstructions.

Briefly, even though the temperature of the troposphere stays relatively constant, there are a number of ways surface temperature can still change. Variations in solar intensity can change temperatures. The temperature of the stratosphere could influence the surface temperature. Changes in emissivity and albedo are other ways of influencing surface temperature.

Rahmstorf 7 Finale

“According to a new U.N. report, the global warming outlook is much worse than originally predicted. Which is pretty bad when they originally predicted it would destroy the planet.” –Jay Leno

If ever there was a good example of alarmist views being given a free ride by a major journal, then the publication in Science of “Recent Climate Observations Compared to Projections” by Stefan Rahmstorf, Anny Cazenave, John A. Church, James E. Hansen, Ralph F. Keeling, David E. Parker, and Richard C. J. Somerville is it.

This paper claimed to show that:

The data available for the period since 1990 raise concerns that the climate system, in particular sea level, may be responding more quickly to climate change than our current generation of models indicates.

By way of recap, this paper figured prominently in the Interim Report of the Garnaut Review, where it is clearly used as a source of mainstream scientific opinion:

“Developments in mainstream scientific opinion on the relationship between emissions, accumulations and climate outcomes, and the Review’s own work on future business-as-usual global emissions, suggest that the world is moving towards high risks of
dangerous climate change more rapidly than has generally been understood.”

Interest in current weather has been growing, as people observe either sharp declines in temperature since last year, or relative stability in temperatures over about the last 10 years, and wonder how these fit into the picture of global warming. I did some posts putting this into context, showing that last year's temperature drop was not unusual here, that a particular 10-year period has been flat here, and that a number of climate indicators are showing decadal stability here.

The Blackboard has been spearheading rigorous statistical methods for checking IPCC projections, finding the post-2001 TAR projections consistently falsified by climate trends.

Contradicting these findings was the paper by Rahmstorf et al. 2007, published in Science by seven of the leading members of the IPCC scientific team. So I started to audit the paper, to see if it does in fact provide a more reliable perspective on whether climate is changing faster or slower than expected.

A number of bloggers ‘raised concerns’ about the vague description of the methodology, and argued at Niche Modeling and The Blackboard that there were important sources of uncertainty unaccounted for. Other blogs picked up the issue including Peter Gallagher and Mark Lawson.

Stefan Rahmstorf and I exchanged comments here.

His main defense was that the end-point uncertainty would only affect the last 5 points of the smoothed trend line with an 11-point embedding. Here the global temperatures were smoothed using a complex method called Singular Spectrum Analysis (SSA). I gave examples of SSA and other methods where the end-point uncertainty affected virtually ALL points in the smoothed trend line, and certainly more than the last 5 points. Stefan clearly had little idea of how SSA worked. His final message, without an argument, was:

[Response: If you really think you’d come to a different conclusion with a different analysis method, I suggest you submit it to a journal, like we did. I am unconvinced, though. -stefan]

So much for the recap. Keep in mind that the purpose of a scientific exchange like this is to clarify the points of agreement and disagreement and attempt to arrive at a resolution on the validity of the claims. Note that the problem I raised is not the only obvious problem, just the one I worked on. This is not meant to be a personal process. I am grateful when someone points out errors in my work and try to understand them, as I would rather not be blowing smoke unintentionally.

This example highlights the power of numbers to resolve an issue. Stefan can have his opinion, and I have opinions too, but the thing I love is the power of numbers to arbitrate and discriminate, and ultimately eliminate the unjustified ones.

Also, I wanted to address the Garnaut Review, as I feel it is abrogating a duty of diligence by not paying more critical attention to papers such as this one. Here was an opportunity to give a specific example of a paper with flaws so obvious that it SHOULD have been dismissed by anyone with statistical training or background knowledge.

So thank you readers for your patience with this process. I have put a submission into the Garnaut Review supported by documentation from the web sites involved.

Here is a good example of the use of blogs. As the time for comments has closed, I could not submit a critique to Science. It is better anyway to have a thorough and open discussion of the issues at hand before rushing to publish critical comments, so that both sides can gain a deeper understanding of the finer points. It is unfortunate that Stefan cut the discussion off, but to his credit he was responsive to the actual concerns in the replies he did make.

Examples of simple smoothers

How much error is there in smoothed climatic and financial series? How much does variability at the ends of a series affect the trend? Here we showed that certain ways of treating the end points introduce a lot of variability. Here we show that, in certain smoothers, variability at the ends can affect the whole smooth!

Below are three different methods with slightly varying end-point treatments. Two are causal smoothers (SSA and spline) and one is acausal (moving average). Causal smoothers do not use future data to create a trend to the end point of the series. Acausal smoothers (such as moving averages) need past and future data, and so stop half a window short of the end point (see wiki).
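The half-window shortfall of an acausal smoother is easy to demonstrate. Below is a minimal sketch in Python (an illustration on toy data, not the code used for the figures) of a centered moving average that stops short of the ends:

```python
import numpy as np

def centered_moving_average(x, window):
    # Acausal (centered) moving average: each output point needs
    # window // 2 values of past AND future data, so the smooth
    # stops half a window short of each end of the series.
    w = np.ones(window) / window
    return np.convolve(x, w, mode="valid")  # 'valid' drops the ends

x = np.arange(20, dtype=float)          # toy series, not real temperatures
smooth = centered_moving_average(x, 11)
print(len(x) - len(smooth))             # 10: five points lost at each end
```

With an 11-point window, ten smoothed points are lost relative to the input, five at each end; this is the gap visible at 2006 in the moving-average figure below.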

All data are global temperature data from GISS from 1973 to 2006.

1. Singular Spectrum Analysis.

Below are the results of two approaches using CaterpillarSSA with an 11-year embedding period. The red curve results from padding the end with data reflected around the final 2006 value, the so-called ‘minimum roughness condition’ or MRC. The blue trend is without padding. The green line is the simple linear regression over the 34 years.


The two approaches differ throughout the whole length, except where the two curves meet at 1999. The last seven points deviate quite a lot, illustrating the extra uncertainty at the end. Further discussion of this here.
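CaterpillarSSA itself is a commercial package, but the basic trend extraction SSA performs can be sketched in a few lines (a simplified illustration, not the actual CaterpillarSSA algorithm): embed the series in a trajectory matrix, keep the leading SVD component, and reconstruct by diagonal averaging.

```python
import numpy as np

def ssa_trend(x, L=11):
    # Minimal SSA trend extraction (illustration only):
    # embed the series in an L x K trajectory matrix, keep the leading
    # SVD component, and reconstruct by diagonal (Hankel) averaging.
    n = len(x)
    K = n - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X1 = s[0] * np.outer(U[:, 0], Vt[0])    # rank-1 leading component
    trend = np.zeros(n)
    counts = np.zeros(n)
    for j in range(K):                       # average over antidiagonals
        trend[j:j + L] += X1[:, j]
        counts[j:j + L] += 1
    return trend / counts

# sanity check: a constant series is reconstructed exactly
print(np.allclose(ssa_trend(np.ones(40)), 1.0))  # True
```

Because every reconstructed point averages over up to L antidiagonal entries, a change near the end of the series can propagate well back into the trend, which is the behavior at issue here.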

2. Smooth spline

The figure below shows a smooth spline method of fitting, and another approach to estimating uncertainty. This fits a higher-order non-linear regression line with 11 degrees of freedom to the points. In this figure, the last point, at 2006, has been altered to either the top or the bottom of the 95% channel range; that is, the last point covers the range of random variation that might reasonably be expected in 2006.


The two curves again differ, but this time they flex about the 11th point from the end. Further discussion of this method here.
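The end-point sensitivity can be illustrated numerically. The sketch below uses a degree-10 polynomial regression (11 coefficients, i.e. 11 degrees of freedom) as a stand-in for the smooth spline, with made-up temperature-like data and a hypothetical channel half-width; perturbing only the 2006 value flexes the whole fit, most strongly at the end:

```python
import numpy as np

years = np.arange(1973, 2007)                               # 34 years
t = (years - years.mean()) / (years - years.mean()).max()   # scale to [-1, 1]
rng = np.random.default_rng(1)
# made-up temperature-like data; trend and noise levels are illustrative
temps = 0.017 * (years - 1973) + rng.normal(scale=0.1, size=years.size)

def smooth_fit(y, df=11):
    # stand-in for the smooth spline: polynomial regression with
    # df coefficients, i.e. df degrees of freedom
    return np.polyval(np.polyfit(t, y, deg=df - 1), t)

delta = 0.2                        # hypothetical 95% channel half-width at 2006
hi, lo = temps.copy(), temps.copy()
hi[-1] += delta                    # final point at top of the channel
lo[-1] -= delta                    # final point at bottom of the channel

spread = np.abs(smooth_fit(hi) - smooth_fit(lo))
# perturbing only the final observation moves the whole fitted trend,
# with by far the largest effect at the end of the series
print(spread[-1] >= spread[:-1].max())  # True
```

The spread between the two fits is largest at 2006 because the end points of a polynomial regression have the highest leverage, which is exactly why end-point treatment matters so much for these smoothers.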

3. Moving average

The final figure below shows the result of running a moving average with an end point at 2006 of either 0.6 or 0.3.


The moving average stops 5 points short of the end of the series, and the last point of the trend varies as a result of this variation at 2006.


So this shows that the methods and the data variability result in uncertainty in the trend line, and the uncertainty is particularly pronounced at the end points.

There is a difference between the causal and acausal smoothers used here. In regression-type causal smoothers, the end variation can be propagated throughout the whole series. But the regression smoothers have the advantage of extending the smooth all the way to the end of the series (and further, if predictions are made). In moving averages the variation is more localized, but the smooth stops short of the end.

Below is a plot of monthly global temperatures from Hadley and GISS with their smooth splines (11df) and regression lines. This is suggestive of temperatures fluctuating more or less randomly above and below a long term trend line.


Thanks to Stefan Rahmstorf for prompting this comparison here.

Comments are closed to allow discussion here.

Rahmstorf et al. 2007 Update

Well, it is almost 24 hours since I posted the comment below to RealClimate, at the post by Stefan Rahmstorf about the inconsistency in the methodology used in their Science Brevia article to show that climate is trending higher than IPCC models. As yet the comment has not appeared. I can’t see how it breaches their moderation policy, so I guess I am being told to go pound sand.

Update: Stefan Rahmstorf replied at the post here.

To provide a bit more clarity, I have drawn a couple of lines on the figure at issue to illustrate possible trajectories of the trend. The thin red line is where I think the trend should have gone if the method described in the figure caption (SSA+MRC) had been used. The thin blue line is where I think the trend line should have gone if SSA only had been used.


Figure: Annotated Rahmstorf et al. 2007 Science Brevia figure showing global temperatures and trend line. SSA is where the trend line should be for SSA method only, SSA+MRC is the trend line for SSA with the ‘minimum roughness criterion’ applied. The published trend line passes between these possible outcomes.

The actual trend on the figure passes between these two obvious choices, so at this stage I don’t know what method was used. It seems clear that if they had used the SSA+‘minimum roughness criterion’ method as described, the trend line would not have supported their argument that ‘temperatures may be responding more quickly to climate change than our current generation of models indicates’.

Below is my post to RealClimate:


I would be grateful if you would clarify for me a puzzling aspect of your Rahmstorf et al. ’07 Science paper. You state in the figure caption that the ‘minimum roughness criterion’ was used to get the temperature trend line. Use of this method of data padding, as described by Mann 2004, should ‘pin’ the trend line to the 2006 temperature value. However, while the 2006 value lies in the center of the IPCC range, the trend line shown on the figure lies above the 2006 value, in the upper IPCC range.

I would like to clarify this apparent inconsistency. This is an important paper for the case that ‘the climate system is responding more quickly than climate models indicate’ and it is important to verify its technical correctness. More details and graphs can be found
Confidence Limits of Minimum Roughness Criterion

Here I show more humorous effects of smoothed trend lines under the ‘minimum roughness condition’ (MRC): the confidence limits blow out.

Fitting a straight line to data such as global temperatures is a common linear regression example problem. Linear regression of stock prices tells you your rate of appreciation. Smoothing (or filtering) is used to give a smooth, curved trend instead of a straight regression line. Instead of a linear regression model, many techniques, such as moving averages, splines, or singular spectrum analysis (SSA), can give a smooth trend line. One problem with these methods is what to do at the ends, where the data runs out.

One way of handling end points is the MRC. The MRC is referenced in papers including Rahmstorf et al. 2007, who state that the nonlinear trend lines “were computed with an embedding period of 11 years and a minimum roughness criterion at the end (Moore 2006)”. The MRC is described in a paper by Michael Mann (2004) as follows: “[O]ne pads the series with the values within one filter width of the boundary reflected vertically (i.e. about the y axis) relative to the final value.” He states that the intent of MRC padding at the end of a time series is to ensure a smooth trend line to the end of the series.
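In code, the MRC padding Mann describes amounts to a point reflection of the tail of the series through its final value. A minimal sketch (the function name and pad length m are mine):

```python
import numpy as np

def mrc_pad(x, m):
    # 'Minimum roughness' padding as described by Mann (2004):
    # reflect the last m values through the final point, i.e. flip
    # them both in time (about the boundary) and in value (about x[-1]).
    tail = 2 * x[-1] - x[-2:-m - 2:-1]
    return np.concatenate([x, tail])

x = np.array([0.1, 0.3, 0.2, 0.5, 0.6])
print(mrc_pad(x, 2))   # [0.1 0.3 0.2 0.5 0.6 0.7 1. ]
```

Note how the padded values continue whatever direction the series happened to be heading at its final point; that is the source of the end-point bias discussed below.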

But, as was noted in “Mannomatic Smoothing and Pinned End-points”, the MRC causes the trend line to pass through the value of the final point of the series (the pin). Willis Eschenbach also notes that his paper on the pinning property has been twice rejected by GRL.

When I wrote a little routine to implement Mannomatic smoothing, I noticed something really funny. I know that it seems bizarre that there can be humor in smoothing algorithms, but hey, this is the Team. Think about what happens with the Mannomatic smooth: you reflect the series around the final value both horizontally and vertically. Accordingly with a symmetric filter (as these things tend to be), everything cancels out except the final value. The Mannomatic pins the series on the end-point exactly the same as Emanuel’s “incorrect” smoothing.

Well if you take the pinning property a step further and estimate the confidence interval of the trend line, another humorous thing happens.

The figure below shows the confidence limits of the calculated trend in global temperature from GISS (solid black) and the MRC padding (dashed black lines). The upper and lower trend lines (red) were calculated using MRC padding originating at the limits of the 95% confidence interval at year 2006. The blue line is the linear regression of the GISS trend from 1975 to 2001.


Figure: Confidence intervals of a smooth spline trend line with ‘minimum roughness criterion’ padding of endpoints.

With the MRC, the confidence limits of the trend expand to the width of those of a single value, rather than of a mean value (solid red lines). This is considerably greater than the uncertainty of a trend line (dashed red line). The only effect of the MRC is to replace the narrow confidence interval of a trend line with the large confidence interval of a single point!
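The arithmetic behind this blow-out is simple: the 95% interval for a mean-like trend estimate shrinks with the square root of the sample size, while the interval for a single year's value does not. With hypothetical numbers (the residual standard deviation is made up for illustration):

```python
import numpy as np

resid_sd = 0.1   # hypothetical residual sd of detrended annual temperatures
n = 34           # years 1973-2006

# 95% half-width for a mean-like estimate shrinks with sqrt(n) ...
ci_mean = 1.96 * resid_sd / np.sqrt(n)
# ... while the half-width for a single year's value does not
ci_single = 1.96 * resid_sd

print(round(ci_single / ci_mean, 1))   # 5.8, i.e. sqrt(34)
```

Pinning the trend to a single end point thus inflates the relevant uncertainty by a factor of roughly sqrt(n), here nearly sixfold.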

Why would you want to do this? If you want to emphasize the direction the final point is heading, the MRC provides a strong bias to the trend, but you would only get away with it if you don’t present the full uncertainty. Such a practice borders on academic fraud.

To be fair, Mann 2004 cautions against applying the MRC because it is sensitive to outliers, and suggests careful evaluation of goodness of fit; this is a pathological example of MRC padding from climate science. In general, however, ad hoc methodologies such as the MRC should be avoided; uncertainty limits and formal tests of significance should be performed to support claims.


[1] Stefan Rahmstorf, Anny Cazenave, John A. Church, James E. Hansen, Ralph F. Keeling, David E. Parker, and Richard C. J. Somerville. Recent Climate Observations Compared to Projections. Science, 316(5825):709, 2007.

[2] J. C. Moore, A. Grinsted, and S. Jevrejeva. New tools for analyzing time series relationships and trends. Eos, 86(24), 2005.

[3] M. E. Mann. On smoothing potentially non-stationary climate time series. Geophys. Res. Lett., 31:L07214, doi:10.1029/2004GL019569, 2004.