Just for clarity: the influential Rahmstorf 2007 paper, which is contradicted by published evidence here, was irrevocably discredited by his own admission here that the apparent increase in climate sensitivity was only due to ‘weather’.

In hindsight, the averaging period of 11 years that we used in the 2007 Science paper was too short to determine a robust climate trend. The 2-sigma error of an 11-year trend is about +/- 0.2 ºC, i.e. as large as the trend itself. Therefore, an 11-year trend is still strongly affected by interannual variability (i.e. weather).

It’s not necessarily discredited by the smoothing method itself, which involves padding the data series, although padding a data series is problematic in a number of ways.

Questions about the validity of the smoothing in Rahmstorf et al 2007 (see CA here for a belly-laugh) invariably turn into questions about how to smooth in general. With so many ways to do it, what is the right or best way?

One aspect of the question is illustrated in the figure below, which is a replication of Rahmstorf 2007 done a few different ways. The blue lines are the trends using singular spectrum analysis (SSA) with embedding periods of m=11 (dashed) and m=14 (solid) respectively, and the minimum roughness criterion (MRC). The embedding period makes some difference. The red lines, however, are the result of applying SSA without MRC. This makes a bigger difference. (The smooths are translated for readability.)

The MRC is achieved by padding the end of the data series with a line whose slope equals the slope of the last m points. The red lines above are SSA without padding. Padding with points could be done in a number of ways, and the article by Mann 2004 lays out the rationale for these paddings. Padding may even be ‘implicit’ when there is no explicit padding. But the idea of padding the ends of the data series raises questions in most people’s minds. For example, here is a quote from an article in American Thinker.
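The padding rule just described can be sketched in a few lines (a sketch only: the function name, the least-squares reading of “the slope of the last m points”, and the pad length of m are my assumptions, not anyone’s published code):

```python
import numpy as np

def mrc_pad(y, m):
    # Pad the end of the series with a straight line whose slope is the
    # least-squares trend of the last m points -- a sketch of the MRC-style
    # padding idea described above, not Rahmstorf's or Mann's actual code.
    t = np.arange(m)
    slope, intercept = np.polyfit(t, y[-m:], 1)
    pad = intercept + slope * np.arange(m, 2 * m)
    return np.concatenate([y, pad])

# A rising series: the pad simply continues the recent trend.
y = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
padded = mrc_pad(y, 3)
```

On a run that is already linear, the pad continues the line exactly, which is the point: the padded values encode an expectation that the recent trend will carry on.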

Most damning, Harvard meteorologists have been unable to replicate the findings of the UN’s International Panel on Climate Change (IPCC) without the use of a technique called “data-padding.”[1] The IPCC actually admitted to engaging in this deceptive practice. Without this padding, the infamous warming trend falls by several degrees. In essence, the IPCC and its primary source manipulated data (dare we say “lied”?) to produce a desired result.

Rightly or wrongly, padding is synonymous with fabrication in most people’s eyes. The padded data on the graphs above can be seen as the straight lines at the right-hand end. Adding these data can be seen as an expression of an expectation that the data will continue to increase along the lines of the recent m points. However, the recent down-turn led to a modification of the parameters in the same graph in the Copenhagen Synthesis Report, increasing the m points so that the temperature trend continued to go up. So the implicit expectation is not so much that trends will continue, but that temperatures will go up. Why not just pad the data series with a line where you ‘think’ the temperature is going to go? It would be the same thing.

In another American Thinker article, Journalists protest Global Warming spin cycle, according to the Columbia Journalism Review, the NSF’s (National Science Foundation) Jeff Nesbit was met with “consternation” at the London conference for “attempting to ‘disguise’ publicity as objective reporting.” Apparently, the NSF is so heavily invested in propagating the Global Warming party line by producing content for news outlets that journalists are revolting.

Global warming researchers are sometimes accused of chasing research dollars with alarmist rhetoric. The NSF is also requesting a doubling of funding for “basic research” over the next decade.

The bottom line of the article is a warning of the dangers of misusing science as a crutch to prop up political ambitions. The problem with padding the end of the data series is that it can easily be seen as a crutch to prop up the faltering temperature trend.

**References**

[1] Willie H. Soon, David R. Legates, and Sallie L. Baliunas, “Estimation and representation of long-term (>40 year) trends of Northern-Hemisphere-gridded surface temperature: A note of caution,” Geophysical Research Letters, vol. 31, 14 February 2004, 2.

I strongly expect warming will resume. In the meantime, it’s unfortunate that someone like Rahmstorf is smoothing and padding end points rather than admitting that the current flat trend is, well, flattish.

McI seems to be chortling that Grinsted published in EOS, which is not quite a newsletter (like the Australian thing he referenced once… more like a news magazine such as Physics Today or C&E News), but Grinsted references a regular literature presentation, “Grinsted04”, in a nonlinear processes journal.

http://www.pol.ac.uk/home/research/waveletcoherence/download/npg-11-561.pdf

Actually, given the technical detail, EOS seems a notch more serious than either C&EN or PT. Maybe about like MRS Bulletin. Anyhoo, newsletter is definitely not the best characterization. My local alumni chapter has a newsletter. My overall alumni association has a magazine. And EOS is a cut above those, although I’m not sure how the reviewing and abstracting works…

David,

I’ve quoted elsewhere this advice from the Australian Bureau of Statistics:

The last sentence describes exactly the “padding” used by Rahmstorf. So how exactly is the 2007 paper discredited? On the authority of the “American Thinker”?

Hi,

The paper is discredited by Rahmstorf’s own admission here http://www.realclimate.org/index.php/archives/2009/06/a-warning-from-copenhagen/langswitch_lang/wp#comment-127955, that it was only due to weather that things looked worse than expected.

In hindsight, the averaging period of 11 years that we used in the 2007 Science paper was too short to determine a robust climate trend. The 2-sigma error of an 11-year trend is about +/- 0.2 ºC, i.e. as large as the trend itself. Therefore, an 11-year trend is still strongly affected by interannual variability (i.e. weather).

Of course, he then makes it worse in the Copenhagen Synthesis Report by appearing to try to prop up the result by changing the method surreptitiously.

It’s not necessarily discredited by the smoothing method, which involves padding the data series, although padding a data series creates a bad impression in many people’s eyes, which is the reference to American Thinker.

“So how exactly is the 2007 paper discredited?”

Basing past and current trends on end points padded with what you think the data might do in the future is just not credible.

Well, I’ve cited the ABS saying that it is the right thing to do – corrected link here. No-one is “basing past and current trends by padding end points with what you think it might do in the future”. They are using a formally defined linear extrapolation to construct non-predictive asymmetric filter functions. Here is another paper showing that this is just a (good) way of constructing filters. Key statement: they actually showed that they are the same.

Where do they say assuming that a trend of the recent past will continue into the future (e.g. MRC) is the right thing to do with data that is likely to be cyclic?

“They are using a formally defined linear extrapolation to construct non-predictive asymmetric filter functions.”

Correction: They are using a formally defined linear extrapolation to predict, i.e. guess, the future values. The papers you cite admit these are methods to predict the future values. Their discussion touches on which ways of predicting result, on average, in larger or smaller revisions in the smoothed values at end points. (Smaller is desirable; these changes are a problem.) These future values, which were predicted, i.e. guessed, and used to pad the data, are subject to revision when the real data arrive.

In the specific case of Rahmstorf, he didn’t like the way the smoothed data looked after 2 years of data were added, so he changed his method. He made no mention of this change in the body of the report, or in a footnote. While his caption may have simply been a typo, his failure to make any mention at all looks fishy.

Nothing in your paper goes on to discuss the specific use Rahmstorf made of these smoothed graphs. The economists are using theirs specifically to predict the future. In contrast, Rahmstorf was using them to test his predictions of the future. This use by Rahmstorf, his changing the method to make agreement look better, and his failure to make any mention of it do Rahmstorf no credit.

Lucia, as you might expect, I don’t accept that correction. Let’s take one problem at a time. If you have a given data interval, then no external guesses appear in the actual smoothed values. In the interior all the filters are the same, but in the end fringe region (m/2) there is a sequence of varying filters, culminating in a zero width filter at the endpoint. If you then take on another data point, what happens is that the filter sequence all moves along one. Each point in the fringe makes use of the new data point. So the smoothed values change in the fringe.

It’s true that you can interpret that as replacement of an extrapolated value by new data if you want, and that works, quantitatively, because it relates to the way the filter was constructed. But it doesn’t change the fact that each fixed interval uses just internal information.

I agree that if Rahmstorf changed his smoothing interval when new data came in to make the results look better for some purpose, then this should be criticised. But that is a different issue to the use of extrapolation in constructing smoothing filters, which is a legitimate and accepted procedure, and does not discredit his paper.
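Nick’s picture of “a sequence of varying filters, culminating in a zero width filter at the endpoint” can be made concrete with a shrinking symmetric window (my own minimal reading of the description, not any published filter): interior values use the full centered window, end values a narrower one, and when a new observation arrives only the end fringe is revised.

```python
import numpy as np

def shrinking_window_smooth(y, m):
    # Centered moving average of half-width m // 2 in the interior; the
    # window shrinks symmetrically near the ends so that only observed
    # data are used -- ending in a zero-width "filter" at the last point.
    h = m // 2
    out = np.empty(len(y))
    for i in range(len(y)):
        k = min(i, len(y) - 1 - i, h)   # largest symmetric half-width here
        out[i] = y[i - k:i + k + 1].mean()
    return out

y = np.linspace(0.0, 1.0, 20)           # a perfectly linear series
s1 = shrinking_window_smooth(y, 5)
y2 = np.append(y, 0.9)                  # the next observation breaks the trend
s2 = shrinking_window_smooth(y2, 5)
# Interior smoothed values are identical; only the end fringe is revised.
```

Each fixed interval really does use only its own data; the revision at the fringe when `0.9` arrives is the price of that, exactly the behaviour both sides of the argument are describing in different vocabularies.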

The authors of the paper you cite use the word “prediction” to describe the estimated points they use to smooth the end points. If you wish to re-interpret what they do as not being based on a prediction of future values simply because the prediction of the future values is computed from the past data, OK. But you appear to be in a fairly small group. Moreover, your interpretation seems at variance with that expressed in the papers you cite.

Not a small group, Lucia. The ABS is a heavyweight. And the second paper I cited said:

But my assertion is again this. Name any symmetric, time-invariant smoothing filter and any extrapolation procedure, and I can write down the asymmetric time-varying filters that they generate. They will be normal smoothing filters using only known data.
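That claim can be checked numerically: because both the least-squares extrapolation and the moving average are linear in the data, the pad-then-smooth value at the endpoint is exactly some asymmetric filter applied to the observed data, and its weights can be recovered by feeding in unit impulses. (A sketch under my own choices of filter and pad rule; `endpoint_weights` is a hypothetical helper, not from any of the cited papers.)

```python
import numpy as np

def endpoint_weights(n, m, w):
    # Recover the asymmetric filter implicitly applied to the LAST point of
    # an n-point series by: (1) fitting a least-squares line to the last m
    # points, (2) padding with h = len(w) // 2 extrapolated values, and
    # (3) applying the symmetric filter w centered on the last observation.
    # Linearity means unit impulses recover the equivalent weights.
    h = len(w) // 2
    A = np.vstack([np.arange(m), np.ones(m)]).T           # design matrix of the fit
    coef = np.linalg.lstsq(A, np.eye(m), rcond=None)[0]   # data -> (slope, intercept)
    future = np.vstack([np.arange(m, m + h), np.ones(h)]).T
    P = future @ coef                                     # data -> pad values
    weights = np.zeros(n)
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        padded = np.concatenate([e, P @ e[-m:]])
        weights[j] = np.dot(w, padded[n - 1 - h:n + h])   # smoothed endpoint
    return weights

w = np.full(5, 0.2)                 # plain 5-point moving average
wts = endpoint_weights(12, 4, w)    # equivalent one-sided endpoint filter
```

The recovered weights sum to one and act only on observed data, which is Nick’s point; Lucia’s point is that the particular weights depend entirely on the extrapolation rule you chose.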

Nick

I’m not suggesting ABS is not small. I read their paper, and they clearly indicate the methods described involve using forecast/predicted/guessed values to smooth.

For example, in the bit you quote:

“A common and natural approach involves forecasting the missing values…”

Yes. A common method is to forecast/predict/guess. You are claiming they don’t use predicted/forecast/guessed values.

I have never suggested the method of forecasting/guessing/predicting the data is not based on known data. In every specific example, I (and everyone on any of the blogs where you are posting) said the method of forecasting/predicting/guessing future values is based on past data.

The issue is: the smoothing at endpoints is based on forecast/guessed/predicted values. The fact that the authors did not read tea leaves to forecast the points does not make them any less forecast/predicted/guessed. It only makes the guess an educated guess.

Dang. I need an editing feature. I didn’t suggest ABS is small.

Nick, I do not understand your constant attempts to confuse people here and elsewhere.

Now, it seems that your central point comes down to this:

“the use of extrapolation in constructing smoothing filters, which is a legitimate and accepted procedure, and does not discredit his paper.”

You do not seem to understand that those “legitimate and accepted” procedures should (and do) also carry uncertainties for the smoothed values. If you consider those, there was nothing unusual in the end (“recent observations”) of the original graph to even discuss. However, Rahmstorf et al did not even realize that the phenomenon they speculated about might be a simple result of “weather noise” combined with their filter. So the failure to understand the uncertainties associated with the smoothing discredited the paper already before it was published. It was further discredited by the fact that their next two padding guesses turned out to be way wrong.

Another “point” you keep repeating:

“Name any symmetric, time-invariant smoothing filter and any extrapolation procedure, and I can write down the asymmetric time-varying filters that they generate. They will be normal smoothing filters using only known data.”

So what? “Normal” (smoothing) filters are time-invariant. Now you make a fuss about the trivial fact that padding plus a time-invariant filter can be described, without padding, as a time-varying filter. In other words, you hide the explicit prediction (the padding) as an implicit prediction (the changing filter coefficients). How is that going to change anything? The end of the “smooth” is not comparable to the middle, and usually this should be coupled with larger uncertainties. Failing to do so, and even making inferences from the smooth, is a scientific oversight at best. I do not understand how you can take it so lightheartedly.

Jean,

I’m sorry if you find what I’ve said confusing. But I have offered some solid statistical references to back it up, which will hopefully add explanation. The only reference on the other side of the argument seems to be to the “American Thinker”.

I agree that the legitimate and accepted procedures carry uncertainties. In fact I’ve said over and over that near the ends something has to give, and here it’s the degree of smoothing, which you could relate to some measure of variability. What I take issue with is statements like “Rightly or wrongly, padding is synonymous with fabrication in most people’s eyes”. That’s the legitimacy issue, and when I point out the math justification and the statistical history, people seem to change the subject.

Changing filter coefficients is not implicit prediction. It is making use of new data. Before you didn’t know (and didn’t predict); now you know.

But OK, here’s a riff on why people smooth (and I have sympathy with those who say that it may be overdone). It’s done in the belief that a fluctuating data set hides a signal (reflecting some physical reality) overlaid with noise. We’d like to separate the signal, and seek to do it based on frequency separation – the belief that the signal will be low frequency, and the noise high frequency. A low pass filter gives a partial separation. The width of the filter determines the upper range of the low pass filter. The actual frequency which is assumed to separate signal from noise is rather arbitrary.

But you want to remove the noise without unduly corrupting the signal. That is the point of the zero-lag requirement. You can get better noise identification by using a whole lot of past data, but then you only get an estimate of what the signal would have been in the past.

So if you want to estimate what the signal was just a little while ago, you have a problem. You can only use a limited amount of past data, else you will create too much lag. So your filter has to narrow. And in fact, if you want to estimate the signal now, you can’t use any past data at all. You can’t remove any noise.

This changes when the next data point comes in. Your previous most recent data point has neighbors on both sides, and you can start removing noise. That doesn’t mean that you made a prediction and found it was wrong. It just means that you know something that you didn’t know before.

Anyway, the bottom line, if you don’t like my explanations, is that what they’ve done is endorsed by statistical authorities, starting from 1877, and including the ABS. I could dig up lots more, but what’s the point? How about dealing with the actual statistical reasoning?
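The lag being traded away in Nick’s riff above is easy to exhibit with a toy example (mine, not from any of the cited papers): a trailing average of a pure trend runs behind it by (k-1)/2 steps, which is exactly what centering the window, and hence the end-point problem, buys back.

```python
import numpy as np

# A k-point trailing (causal) average of a pure trend runs behind it by
# (k - 1) / 2 steps; a centered window removes that lag, which is exactly
# why something has to give at the end of the series.
y = np.arange(20, dtype=float)      # pure linear trend, slope 1 per step
k = 5
trailing = np.convolve(y, np.ones(k) / k, mode="valid")
# trailing[i] is the mean of y[i:i + k]; as an estimate of the series at
# the newest point in its window, i + k - 1, it is low by (k - 1) / 2 = 2.
lag = y[k - 1:] - trailing
```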

Nick, the fact is that if you wish to smooth your data, or estimate a trend, or remove noise as you want to call it, you need to model it somehow. In the case of a (centered) MA filter, you cannot filter the whole series. Then you can pad your data and continue to use the same filter, or you can model your filter coefficients to be time-variant. Either way, you have infinitely many possibilities to choose from. And more importantly, it is an additional model, and therefore you are making predictions. Wave your arms as much as you wish, but it is not going to change these basic facts. Nor is it going to bluff people who actually understand these things.

And as for references, I suggest you read a good book on time series analysis. One worth recommending is W.A. Fuller, Introduction to Statistical Time Series, 2nd edition, Wiley, 1996.

Well, Jean, what do you think of David’s proposition that these widely used statistical methods involve “fabrication of data”? Or Lucia’s “bogus guesses”?

Nick, I didn’t say that, and you know that.

All you need to do is make sure the trends have the adequate uncertainty. In this case, ‘model uncertainty’ would capture concerns people have with the arbitrary choice of end treatments, with a flaring at the end of the series reflecting the possible choices of end treatment. The problem is choosing a method arbitrarily and then not estimating the uncertainty.
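David’s “flaring” could be computed directly: run several arbitrary end treatments and report the spread of the smoothed endpoint as a crude model-uncertainty band. A sketch under assumed treatments (persistence, a one-step linear slope, and point reflection; none of these is claimed to be Rahmstorf’s actual method):

```python
import numpy as np

def smoothed_endpoint(y, pad, k=5):
    # Centered k-point moving average evaluated at the last observation,
    # after appending k // 2 pad values chosen by some end treatment.
    h = k // 2
    padded = np.concatenate([y, pad])
    return padded[len(y) - 1 - h:len(y) + h].mean()

# A mildly trending, wiggly series (deterministic, for reproducibility).
y = 0.02 * np.arange(30) + 0.1 * np.sin(np.arange(30.0))
h = 2
pads = {
    "persist": np.full(h, y[-1]),                             # repeat last value
    "linear": y[-1] + (y[-1] - y[-2]) * np.arange(1, h + 1),  # one-step slope
    "reflect": 2 * y[-1] - y[-2:-2 - h:-1],                   # point reflection
}
ends = {name: smoothed_endpoint(y, p) for name, p in pads.items()}
spread = max(ends.values()) - min(ends.values())  # crude 'flaring' band width
```

The spread across equally defensible end treatments is one concrete way to report the extra uncertainty at the series end instead of silently committing to a single pad.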

Sorry David, a slight mis-quote. What you said was “Rightly or wrongly, padding is synonymous with fabrication in most peoples eyes.” And you put up a quote describing it as a deceptive practice, and left it there. Do you now want to resolve that issue of “rightly or wrongly”?

The nuance I was getting at was that there are some things that you might not want to do because they can be misinterpreted or misused easily – like cab charges or receiving gifts.

And ultimately it’s the uncertainty limits and not the trend that matters.

I wouldn’t use such strong words, but these end-point conditions are predictions (guesses) anyhow. In this case, it is easy to say what the model was: temperatures follow the linear trend of the last m years. This is quite hard to justify if your aim is to compare recent observations to other predictions, especially if you don’t attach any uncertainties.

Jean, I’ll respond to this and your most recent post here (with more width). You don’t need a model to smooth data, or perform a regression, say. It’s just arithmetic. It’s true that you need a model to explain what it means. One such is the spectral model that I described, based on frequency discrimination. Another involves the interpretation of the smooth, or its derivative, as some sort of underlying mean.

I don’t agree that the model here is “temperatures follow the linear trend of the last m years”. Steve M had fun with the phrase “nonlinear trend line”, although I think nonlinear referred to the SSA dependence on data. I presume “trend line” is meant to be the tangent; clearly the smooth is not a line. Nor is there any explicit polynomial fitting. As far as I can see, the model is indeed that the underlying signal is the low frequency component, found by applying a low pass filter (eg Steve’s triangle).

Nick, there is really no need for you to respond, you seem to be far out of your expertise already. I don’t have much more time for these “discussions” with you.

“You don’t need a model to smooth data, or perform a regression, say. It’s just arithmetic. It’s true that you need a model to explain what it means.”

Oh really? In science we usually want to explain things; we are not doing arithmetic just for the fun of it.

“I don’t agree that the model here is ‘temperatures follow the linear trend of the last m years’.”

That’s exactly what the padding is doing. Did you even bother to look at the code?

Jean S:

Thank you for coming.

Jean, you are a very impatient fellow. You don’t need to be condescending about my time series expertise. I have been a research scientist for over thirty years in the Division of Mathematics and Statistics (recently renamed) of Australia’s major research organisation. Although my speciality is the numerical solution of differential equations, I have along the way done a great deal of applied time series analysis. I believe your understanding is quite shallow.

The padding does not mean that the model is temperatures following the linear trend. That is just a device to construct a filter with appropriate properties, most notably zero lag. Mann’s MRC method, which gives very similar results, uses reflection. What model do you think that implies?

Smoothing in the interior is done using a weighted moving average. That is purely a low pass filter. No assumption of linear behaviour is involved. And nothing done at the edge can create it.

Nick, this is the last time I’m going to respond to you.

I’m not interested in your CV. Your problem is not that you cannot understand these things; your problem seems to be that you, for some reason beyond my comprehension, do not want to understand.

The padding IS a model for your time series, and there is no way around it, even if you call it a “device”. As for Mann’s MRC, it is, according to Mann (04), an “approximation” of Park’s roughness criterion. Now, at this point, you may want to consult Park (1992) to see if he has a data model for which he is proposing a minimization criterion. To save you some time, see around Equation (10.9).

Nick’s current definition of something being based on arithmetic, if applied to AOGCMs, would translate to this:

AOGCMs do not use conservation of mass, momentum and energy. They are simply a set of arithmetic operations used to compute numbers.

Polynomial least squares smoothing, preferably cubic, always ends up the best option for extracting the signal from the noise. You can even reconstruct surfaces from points, or a continuum from particles, using moving least squares: try that with spectrum analysis and see how far you get. It seems they only deviated from standard practice to reduce the “error”. A somewhat dubious idea. Anyway, sometimes a supposed error is really a separate, overlaid signal, as indeed it apparently is here.

Padding by linear trend on a known non-linear system for a non-linear smooth is what Nick is suggesting as being OK. Well, as long as you point it out in your paper as a strong source of potential error due to its inherent bias, then it would be OK; especially if you showed the effects of different guesses. Otherwise you are using circular logic and misleading the reader.

I’m reminded of the padding of solar data which Damon & Laut so condemned, even though it was actually pointed out in the text by Svensmark. Always the double standards!

James,

I don’t agree that the use of extrapolation in constructing smoothing filters is a source of bias. In fact, by eliminating lag, it behaves like an unbiased estimator. The associated faults are that the smoothing is reduced, and the unbiased estimator will change when new data comes in.

Nick,

“In fact, by eliminating lag, it behaves like an unbiased estimator.”

Do you agree that this estimation can, at times, be wrong to the extent of having the wrong sign and large magnitude differences??

Would you agree that it requires the SELECTION, by the user, from a range of values that can cause not only bias but totally wrong answers when later compared to reality??

K,

All estimates turn out to be wrong. The paper that I cited by Gray and Thompson uses what they call a minimum revision criterion, which minimises the extent to which you expect to have to revise in the face of new information. They say it is closely related to the extrapolation technique. I think it is also very like MRC.

The methods cited here minimise user selection. In fact, with both MRC and Grinsted, the treatment of the end effect is totally predetermined. The complaints have been about revising a parameter associated with the time-invariant smoother which prevails in the interior.

Even that is unjustified. On CA I cited this other 2007 paper by Rahmstorf in which he discusses various values of M, and settles on 15. This is what turned out to be used in the Copenhagen report, and my guess is that he used the programs from that second paper to update the curve, and forgot to tell whoever wrote the caption. In any case, he was using M=15 before it turned cold in 2008.

It only minimizes the extent to which you have to revise in the face of new information if

a) the first derivative of the trend you used is real and not spurious,

b) your assumption (i.e. guess) that the second derivative of the time series is zero turns out to be correct, and

c) by some miracle you selected the value of “M” that maximizes signal to noise in your data.

The fact that the user assumes (i.e. guesses) (a)-(c) makes the additional constraint imposed by minimizing the changes, in the event that he guessed correctly, a modest improvement over doing (a)-(c) and using Mannian roughness, which relies on (a)-(c) and then does not minimize the revisions even if assumptions (a)-(c) turn out to be correct.

No, MRC minimises the second derivative. It doesn’t set it to zero.

The first derivative of the trend won’t be spurious. It will certainly change with new data.

You don’t select M to maximise signal to noise. Noise is minimised by maximum M. You use smaller M values to ensure that frequencies that you believe are signal are not unduly discarded as noise.

Here’s what statisticians Gray and Thompson say about their method:

For any given central moving-average trend filter, a family of end filters is constructed using a minimum revisions criterion and a local dynamic model operating within the span of the central filter. These end filters are equivalent to evaluating the central filter with unknown observations replaced by constrained optimal linear predictors.

Despite that equivalence, their criterion is just a minimisation (of an expected value statistic) over curves and data within the interval.

These end filters are equivalent to evaluating the central filter with unknown observations replaced by constrained optimal linear predictors.

In other words: you guess the constraints used in the optimization. The second derivative is minimized (i.e. approaches zero as you reach the end point). If your guess (i.e. assumption) about the constraints is wrong, the changes are not minimized when the data are updated.

I don’t know why you aren’t getting this. What would happen if you used MRC on a signal that was a perfect sine wave? Depending on the phase at the end point, MRC could be good, or absolutely wretched.
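This sine-wave experiment is easy to run (a sketch; the frequency, window length, and horizon are arbitrary choices of mine): pad with the least-squares line through the last m points, in the spirit of MRC-style padding, and measure the error against the true continuation at different end phases.

```python
import numpy as np

def linear_pad_error(phase, m=12, h=6):
    # Pad a sine with the least-squares line through its last m points
    # (the linear-trend padding discussed above) and return the largest
    # absolute gap between the padded values and the true continuation.
    t = np.arange(60)
    y = np.sin(0.3 * t + phase)
    slope, intercept = np.polyfit(np.arange(m), y[-m:], 1)
    pad = intercept + slope * np.arange(m, m + h)
    truth = np.sin(0.3 * (t[-1] + 1 + np.arange(h)) + phase)
    return np.max(np.abs(pad - truth))

errs = [linear_pad_error(p) for p in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
# How badly the linear guess misses depends on the phase at the end point.
```

The worst phases, where the series turns just past the end point, produce large padding errors, which is exactly the cyclic-data objection raised earlier in the thread.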

No matter how much algebra you do, the accuracy of any end point treatment depends on an analyst’s guess (i.e. assumption) about what will happen in the future.

“and forgot to tell whoever wrote the caption.”

Why would “whoever wrote the caption” assume M=11 in the first place?

Didn’t Rahmstorf tell him what he used when he supplied the graph?

“The complaints have been about revising a parameter associated with the time-invariant smoother which prevails in the interior.”

I think the heading of this thread specifically mentions 2007. The change happened in another, later paper; the blunder only served to highlight the problem with smoothing techniques that require the “guessing” of data beyond the end point in order to centre the running average.

In the best known R paper of 2007, M=11 was used. In another one about the same time, M=15 was used, with an investigation of different values. And yes, I just said I suspected R forgot to tell him, so M=11 was assumed.

Nick, it seems an extraordinary assumption that R would send a graph to his co-authors for inclusion, letting them guess what the caption ought to be.

R was not a co-author of the Copenhagen report.

“R was not a co-author of the Copenhagen report.”

So he gave them a picture without telling them what it was?

That’s even worse.

Polynomial least squares smoothing, preferably cubic, always ends up the best option for extracting the signal from the noise. You can even reconstruct surfaces from points, or continuum from particles using moving least squares: Try that with spectrum analysis and see how far you get. It seems they only deviated from standard practice to reduce the “error”. A somewhat dubious idea. Anyway, sometimes a supposed error is really a separate, overlaid signal, as indeed it apparently is here.Padding by linear trend on a known non-linear system for a non-linear smooth is what Nick is suggesting as being ok. Well as long as you point it out in your paper as being a strong source of potential error due to it's inherent bias then it would be ok; especially if you showed the effects of different guesses. Otherwise you are using circular logic and misleading the reader.I'm reminded of the padding of solar data which Damon & Laut so condemned, even though it was actually pointed out in the text by Svensmark. Always the double standards!

Lucia, as you might expect, I don't accept that correction. Let's take one problem at a time. If you have a given data interval, then no external guesses appear in the actual smoothed values. In the interior all the filters are the same, but in the end fringe region (of width m/2) there is a sequence of varying filters, culminating in a zero-width filter at the endpoint. If you then take on another data point, what happens is that the filter sequence all moves along one. Each point in the fringe makes use of the new data point, so the smoothed values change in the fringe.

It's true that you can interpret that as replacement of an extrapolated value by new data if you want, and that works, quantitatively, because it relates to the way the filter was constructed. But it doesn't change the fact that each fixed interval uses just internal information. I agree that if Rahmstorf changed his smoothing interval when new data came in to make the results look better for some purpose, then this should be criticised. But that is a different issue from the use of extrapolation in constructing smoothing filters, which is a legitimate and accepted procedure, and does not discredit his paper.
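The revision behaviour described here can be seen in a small sketch: a centered moving average whose ends are extended by the straight line through the last m points (one reading of the padding under discussion; the series and parameters below are made up for illustration). Adding one new data point changes only the end-fringe values, not the interior:

```python
import numpy as np

def smooth_linear_pad(y, m):
    """Centered m-point moving average (m odd), with the series extended
    past each end by the straight line fitted to the nearest m points --
    one reading of the linear-trend padding discussed in this thread."""
    half = (m - 1) // 2
    n = len(y)
    t = np.arange(n)
    b_end, a_end = np.polyfit(t[-m:], y[-m:], 1)   # slope/intercept, last m points
    b0, a0 = np.polyfit(t[:m], y[:m], 1)           # same treatment at the start
    ext = np.concatenate([a0 + b0 * np.arange(-half, 0),
                          y,
                          a_end + b_end * (n + np.arange(half))])
    return np.convolve(ext, np.full(m, 1.0 / m), mode="valid")

y = np.sin(np.arange(30) / 5.0)          # stand-in 'temperature' series
m = 7
s_short = smooth_linear_pad(y[:-1], m)   # smooth before the new point arrives
s_full = smooth_linear_pad(y, m)         # smooth after it arrives

# Interior values are untouched; only the end fringe is revised.
changed = np.where(~np.isclose(s_short, s_full[:-1]))[0]
print(changed)   # indices confined to the fringe near the end
```

Whether you call that fringe revision "replacing a prediction" or "using new data" is exactly the dispute in this thread; the arithmetic is the same either way.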

James, I don't agree that the use of extrapolation in constructing smoothing filters is a source of bias. In fact, by eliminating lag, it behaves like an unbiased estimator. The associated faults are that the smoothing is reduced, and that the unbiased estimator will change when new data comes in.

Hi, the paper is discredited by Rahmstorf's own admission here http://www.realclimate.org/index.php/archives/2…, that it was only due to weather that things looked worse than expected:

In hindsight, the averaging period of 11 years that we used in the 2007 Science paper was too short to determine a robust climate trend. The 2-sigma error of an 11-year trend is about +/- 0.2 ºC, i.e. as large as the trend itself. Therefore, an 11-year trend is still strongly affected by interannual variability (i.e. weather).

Of course, he then makes it worse in the Copenhagen Synthesis Report by appearing to try to prop up the result by changing the method surreptitiously. It's not necessarily discredited by the smoothing method, which involves padding the data series, although padding data series creates a bad impression in many people's eyes, which is the reference to American Thinker.
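For what it's worth, the quoted 2-sigma figure is roughly what ordinary least-squares theory gives if interannual “weather” noise around the trend is assumed to be about 0.1 ºC (that noise level, and taking the trend as slope times 11 years, are assumptions here, not from the paper):

```python
import numpy as np

sigma = 0.1                       # assumed interannual noise std dev, in C
t = np.arange(11.0)               # 11 annual data points
# OLS standard error of the fitted slope under iid noise
slope_se = sigma / np.sqrt(np.sum((t - t.mean()) ** 2))
# 2-sigma uncertainty of the trend change across the 11-year window
two_sigma_over_11yr = 2 * slope_se * 11
print(round(two_sigma_over_11yr, 2))   # -> 0.21
```

So an 11-year trend of roughly 0.2 ºC does indeed carry a 2-sigma uncertainty of the same size under this simple model, which is the substance of the quoted admission.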

The authors of the paper you cite use the word “prediction” to describe the estimated points they use to smooth the end points. If you wish to re-interpret what they do as not being based on a prediction of future values simply because the prediction of the future values is computed from the past data, OK. But you appear to be in a fairly small group. Moreover, your interpretation seems at variance with that expressed in the papers you cite.

Not a small group, Lucia. The ABS is a heavyweight. And the second paper I cited said:

But my assertion is again this. Name any symmetric, time-invariant smoothing filter and an extrapolation procedure, and I can write down the asymmetric time-varying filters that they generate. They will be normal smoothing filters using only known data.
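This claim is easy to check numerically: because a linear-extrapolation padding rule is linear in the data, the whole pad-then-smooth operation is a linear map, and its matrix rows are exactly the asymmetric end filters over known data. A sketch (the series, window and padding rule are assumed for illustration; start-of-series handling is left out to keep it short):

```python
import numpy as np

def pad_smooth(y, m):
    """m-point centered moving average (m odd) with linear-trend padding
    at the far end; the first (m-1)/2 outputs at the start are dropped."""
    half = (m - 1) // 2
    n = len(y)
    t = np.arange(n)
    b, a = np.polyfit(t[-m:], y[-m:], 1)            # line through last m points
    ext = np.concatenate([y, a + b * (n + np.arange(half))])
    return np.convolve(ext, np.full(m, 1.0 / m), mode="valid")

n, m = 40, 7
y = np.cumsum(np.random.default_rng(0).normal(size=n))  # toy random-walk series

# The smoother is linear in y, so feeding in unit vectors recovers its matrix.
W = np.column_stack([pad_smooth(e, m) for e in np.eye(n)])

interior_row = W[10]   # symmetric 1/7 weights on observed points 10..16
end_row = W[-1]        # asymmetric weights on the last 7 *observed* points only
print(np.round(end_row[-m:], 3))
```

The interior rows are the symmetric kernel; the final rows are asymmetric, sum to one, and touch only observed data, which is the equivalence being asserted. Whether one then calls the padded values "predictions" is the terminological dispute above.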

Nick, I'm not suggesting the ABS is small. I read their paper, and they clearly indicate the methods described involve using forecast/predicted/guessed values to smooth. For example, in the bit you quote:

“A common and natural approach involves forecasting the missing values”

Yes. A common method is to forecast/predict/guess. You are claiming they don't use predicted/forecast/guessed values. I have never suggested the method of forecasting/guessing/predicting the data is not based on known data. Every specific example I (or anyone on any of the blogs where you are posting) gave said the method of forecasting/predicting/guessing future values is based on past data. The issue is: the smoothing at endpoints is based on forecast/guessed/predicted values. The fact that the authors did not read tea leaves to forecast the points does not make them any less forecast/predicted/guessed. It only makes the guess an educated guess.

Dang. I need an editing feature. I didn't suggest the ABS is small.

I’m glad to see the ABS described as a heavyweight. In an email to the Chair of the IPCC in December 2002, I informed Dr Pachauri that the Head of the Bureau (Dennis Trewin) had advised me that the ABS would be “very willing” to contribute to the work of the IPCC, and that I had “no doubt that many other national statistical offices would also welcome the opportunity to contribute their expertise to relevant aspects of the work of the IPCC.”

The expertise in question was primarily in the area of economic statistics, but the IPCC would also have benefited from the maths/stats skills available in national statistical offices. In fact, David Stockwell quoted the above extracts from my email in his submission of 12 April 2008 to the Garnaut Climate Change Review, in which he (David) urged Professor Garnaut “to seek independent verification of the flaws with Rahmstorf et al (2007)” and offered to “provide contacts to qualified statisticians if required.”

The Australian Government had effectively supported my arguments in its submission to the IPCC on the scoping of AR4 in March 2003. Specifically, they had said that “the second scoping meeting of the AR4 should engage a broader range of experts and ensure representative participation is sought at a national level from the economics, statistical, and economic history professions, and include representation from key international bodies such as the United Nations Statistical Commission.”

But it was not to be. Dennis Trewin was accepted as an Expert Reviewer of an AR4 chapter, but so far as I can see his highly pertinent comments were entirely disregarded. The IPCC is strongly averse to taking the advice of statistical experts.

Nick, I do not understand your constant attempts to confuse people here and elsewhere. Now, it seems that your central point comes down to this: “the use of extrapolation in constructing smoothing filters, which is a legitimate and accepted procedure, and does not discredit his paper.” You do not seem to understand that those “legitimate and accepted” procedures should (and do) also carry uncertainties for the smoothed values. If you consider those, there was nothing unusual in the end (“recent observations”) of the original graph to even be discussed. However, Rahmstorf et al did not even realize that the phenomenon they speculated about might be a simple result of “weather noise” combined with their filter. So the failure to understand the uncertainties associated with the smoothing discredited the paper already before it was published. It was further discredited by the fact that their next two padding guesses turned out to be way wrong.

Another “point” you keep repeating: “Name any symmetric, time-invariant smoothing filter and an extrapolation procedure, and I can write down the asymmetric time-varying filters that they generate. They will be normal smoothing filters using only known data.” So what? “Normal” (smoothing) filters are time-invariant. Now you make a fuss about the trivial fact that padding plus a time-invariant filter can be described as a time-varying filter without padding. In other words, you hide the explicit prediction (padding) as an implicit prediction (changing filter coefficients). How is that going to change anything? The end of the “smooth” is not comparable to the middle, and this should be coupled with larger uncertainties. Failure to do so, and even making inferences from the smooth, is a scientific oversight at best. I do not understand how you can take it so lightly.

Jean, I'm sorry if you find what I've said confusing. But I have offered some solid statistical references to back it up, which will hopefully add explanation. The only reference on the other side of the argument seems to be to the “American Thinker”. I agree that the legitimate and accepted procedures carry uncertainties. In fact I've said over and over that near the ends something has to give, and here it's the degree of smoothing, which you could relate to some measure of variability. What I take issue with is statements like “Rightly or wrongly, padding is synonymous with fabrication in most peoples eyes”. That's the legitimacy issue, and when I point out the math justification and the statistical history, people seem to change the subject.

Changing filter coefficients is not implicit prediction. It is making use of new data. Before, you didn't know (and didn't predict); now you know. But OK, here's a riff on why people smooth (and I have sympathy with those who say that it may be overdone). It's done in the belief that a fluctuating data set hides a signal (reflecting some physical reality) overlaid with noise. We'd like to separate out the signal, and we seek to do it based on frequency separation – the belief that the signal will be low frequency, and the noise high frequency. A low pass filter gives a partial separation. The width of the filter determines the upper range of the low pass filter. The actual frequency which is assumed to separate signal from noise is rather arbitrary.

But you want to remove the noise without unduly corrupting the signal. That is the point of the zero-lag requirement. You can get better noise identification by using a whole lot of past data, but then you only get an estimate of what the signal would have been in the past. So if you want to estimate what the signal was just a little while ago, you have a problem. You can only use a limited amount of past data, else you will create too much lag. So your filter has to narrow. And in fact, if you want to estimate the signal now, you can't use any past data at all. You can't remove any noise.

This changes when the next data point comes in. Your previous most recent data point has neighbours on both sides, and you can start removing noise. That doesn't mean that you made a prediction and found it was wrong. It just means that you know something that you didn't know before. Anyway, the bottom line, if you don't like my explanations, is that what they've done is endorsed by statistical authorities, starting from 1877, and including the ABS. I could dig up lots more, but what's the point? How about dealing with the actual statistical reasoning?
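The zero-lag point in the riff above can be illustrated on an idealized, noiseless linear trend (a deliberately artificial series): the very same moving-average numbers are exact when centered, but lag by half a window when aligned with the latest point.

```python
import numpy as np

t = np.arange(100, dtype=float)
y = 0.5 * t                        # a pure linear trend, no noise at all
m = 11
half = (m - 1) // 2
avg = np.convolve(y, np.full(m, 1.0 / m), mode="valid")

# Centered alignment: value i estimates the signal at t[i + half].
# On a linear signal there is no lag -- the estimate is exact (unbiased).
centered_err = avg - y[half:len(y) - half]

# Trailing alignment: value i is assigned to t[i + m - 1], the latest time.
# The identical numbers now lag the trend by half a window: 5 * 0.5 = 2.5.
trailing_err = avg - y[m - 1:]
```

The end-point problem is precisely that near the boundary you cannot have both the full window and the centered alignment, which is where the padding-versus-narrowing argument in this thread begins.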

Nick, the fact is that if you wish to smooth your data, or estimate a trend, or remove noise as you call it, you need to model it somehow. In the case of a (centered) MA filter, you cannot filter the whole series. Then you can pad your data and continue to use the same filter, or you can model your filter coefficients to be time-variant. Either way, you have infinitely many possibilities to choose from. And more importantly, it is an additional model, and therefore you are making predictions. Wave your arms as much as you wish, but it is not going to change these basic facts. Nor is it going to bluff people who actually understand these things.

As for references, I suggest you read a good book on time series analysis. One worth recommending is W.A. Fuller, Introduction to Statistical Time Series, 2nd edition, Wiley, 1996.

Well, Jean, what do you think of David's proposition that these widely used statistical methods involve “fabrication of data”? Or Lucia's “bogus guesses”?

Nick, I didn't say that, and you know that. All you need to do is make sure the trends carry adequate uncertainties. In this case, 'model uncertainty' would capture the concerns people have with the arbitrary choice of end treatments, with a flaring at the end of the series reflecting the possible choices of end treatment. The problem is choosing a method arbitrarily and then not estimating the uncertainty.

I wouldn't use such strong words, but these end-point conditions are predictions (guesses) anyhow. In this case, it is easy to say what the model was: temperatures follow the linear trend of the last m years. This is quite hard to justify if your aim is to compare recent observations to other predictions, especially if you don't attach any uncertainties.

Sorry David, a slight mis-quote. What you said was “Rightly or wrongly, padding is synonymous with fabrication in most peoples eyes.” And you put up a quote describing it as a deceptive practice, and left it there. Do you now want to resolve that issue of “rightly or wrongly”?

The nuance I was getting at was that there are some things that you might not want to do because they can be misinterpreted or misused easily – like cab charges or receiving gifts. And ultimately it's the uncertainty limits and not the trend that matters.

Jean, I'll respond to this and your most recent post here (with more width). You don't need a model to smooth data, or to perform a regression, say. It's just arithmetic. It's true that you need a model to explain what it means. One such is the spectral model that I described, based on frequency discrimination. Another involves the interpretation of the smooth, or its derivative, as some sort of underlying mean.

I don't agree that the model here is “temperatures follow the linear trend of the last m years”. Steve M had fun with the phrase “nonlinear trend line”, although I think nonlinear referred to the SSA dependence on data. I presume “trend line” is meant to be the tangent; clearly the smooth is not a line. Nor is there any explicit polynomial fitting. As far as I can see, the model is indeed that the underlying signal is the low frequency component, found by applying a low pass filter (e.g. Steve's triangle).

Nick, there is really no need for you to respond; you seem to be far out of your expertise already. I don't have much more time for these “discussions” with you.

“You don't need a model to smooth data, or perform a regression, say. It's just arithmetic. It's true that you need a model to explain what it means.”

Oh really? In science we usually want to explain things; we are not doing arithmetic just for the fun of it.

“I don't agree that the model here is ‘temperatures follow the linear trend of the last m years’.”

That's exactly what the padding is doing. Did you even bother to look at the code?

Jean S: Thank you for coming.

Jean, you are a very impatient fellow. You don't need to be condescending about my time series expertise. I have been for over thirty years a research scientist in the Division of Mathematics and Statistics (recently renamed) of Australia's major research organisation. Although my speciality is the numerical solution of differential equations, I have along the way done a great deal of applied time series analysis. I believe your understanding is quite shallow.

The padding does not mean that the model is temperatures following the linear trend. That is just a device to construct a filter with appropriate properties, most notably zero lag. Mann's MRC method, which gives very similar results, uses reflection. What model do you think that implies? Smoothing in the interior is done using a weighted moving average. That is purely a low pass filter. No assumption of linear behaviour is involved. And nothing done at the edge can create it.

Nick, this is the last time I'm going to respond to you. I'm not interested in your CV. Your problem is not that you cannot understand these things; your problem seems to be that you, for some reason beyond my comprehension, do not want to understand. The padding IS a model for your time series, and there is no way around it, even if you call it a “device”. As for Mann's MRC, it is, according to Mann (04), an “approximation” of Park's roughness criterion. Now, at this point, you may want to consult Park (1992) to see if he has a data model for which he is proposing a minimization criterion. To save you some time, see around Equation (10.9).

Nick's current definition of something being based on arithmetic, if applied to AOGCMs, would translate to this: AOGCMs do not use conservation of mass, momentum and energy. They are simply a set of arithmetic operations used to compute numbers.

Nick, “In fact, by eliminating lag, it behaves like an unbiased estimator.”

Do you agree that this estimate can, at times, be wrong to the extent of having the wrong sign and large magnitude differences? Would you agree that it requires the SELECTION, by the user, from a range of values that can cause not only bias but totally wrong answers when later compared to reality?

K, all estimates turn out to be wrong. The paper that I cited by Gray and Thompson uses what they call a minimum revision criterion, which minimises the extent to which you expect to have to revise in the face of new information. They say it is closely related to the extrapolation technique. I think it is also very like MRC. The methods cited here minimise user selection. In fact, with both MRC and Grinsted, the treatment of the end effect is totally predetermined. The complaints have been about revising a parameter associated with the time-invariant smoother which prevails in the interior.

Even that is unjustified. On CA I cited this other 2007 paper by Rahmstorf in which he discusses various values of M, and settles on 15. This is what turned out to be used in the Copenhagen report, and my guess is that he used the programs from that second paper to update the curve, and forgot to tell whoever wrote the caption. In any case, he was using M=15 before it turned cold in 2008.

It only minimizes the extent to which you have to revise in the face of new information if a) the first derivative of the trend you used is real and not spurious, b) your assumption (i.e. guess) that the second derivative of the time series is zero turns out to be correct, and c) by some miracle you selected the value of “M” that maximizes signal to noise in your data. The fact that the user assumes (i.e. guesses) (a)-(c) makes the additional constraint imposed by minimizing the changes, in the event that he guessed correctly, a modest improvement over doing (a)-(c) and using Mannian roughness, which relies on (a)-(c) and then does not minimize the revisions even if assumptions (a)-(c) turn out to be correct.

“and forgot to tell whoever wrote the caption.”

Why would “whoever wrote the caption” assume M=11 in the first place? Didn't Rahmstorf tell him what he used when he supplied the graph?

“The complaints have been about revising a parameter associated with the time-invariant smoother which prevails in the interior.”

I think the heading of this thread specifically mentions 2007. The change happened in another paper, a later one; the blunder only served to highlight the problem with smoothing techniques that require the “guessing” of data beyond the end point in order to centre the running average.

In the best known R paper of 2007, M=11 was used. In another one about the same time, M=15 was used, with an investigation of different values. And yes, I just said I suspected R forgot to tell him, so M=11 was assumed.

No, MRC minimises the second derivative; it doesn't set it to zero. The first derivative of the trend won't be spurious. It will certainly change with new data. You don't select M to maximise signal to noise. Noise is minimised by maximum M. You use smaller M values to ensure that frequencies that you believe are signal are not unduly discarded as noise. Here's what statisticians Gray and Thompson say about their method:

For any given central moving-average trend filter, a family of end filters is constructed using a minimum revisions criterion and a local dynamic model operating within the span of the central filter. These end filters are equivalent to evaluating the central filter with unknown observations replaced by constrained optimal linear predictors.

Despite that equivalence, their criterion is just a minimisation (of an expected value statistic) over curves and data within the interval.

“These end filters are equivalent to evaluating the central filter with unknown observations replaced by constrained optimal linear predictors.”

In other words: you guess the constraints used in the optimization. The second derivative is minimized (i.e. approaches zero as you reach the end point). If your guess (i.e. assumption) about the constraints is wrong, the changes are not minimized when data are updated. I don't know why you aren't getting this. What would happen if you used MRC on a signal that was a perfect sine wave? Depending on the phase at the end point, MRC could be good, or absolutely wretched. No matter how much algebra you do, the accuracy of any end point treatment depends on an analyst's guess (i.e. assumption) about what will happen in the future.

Nick, it seems an extraordinary assumption that R would send a graph to his co-authors for inclusion and let them guess what the caption ought to be.
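The sine-wave point is easy to check numerically: the quality of a straight-line extension of the series depends entirely on where in the cycle the data happen to end. A sketch (the sample spacing, window and horizon below are illustrative choices, not anything from the papers under discussion):

```python
import numpy as np

def linear_pad_error(phase, m=11, npts=120, ahead=5):
    """Extend a pure sine with the straight line through its last m samples
    (the linear-trend padding at issue) and return the errors of that
    extension over the next `ahead` samples."""
    t = np.arange(npts + ahead) * 0.1
    y = np.sin(t + phase)
    b, a = np.polyfit(t[npts - m:npts], y[npts - m:npts], 1)
    return (a + b * t[npts:]) - y[npts:]

t_last = 119 * 0.1
# Series ends mid-rise (zero crossing): the line tracks the sine well.
err_rising = np.abs(linear_pad_error(-t_last)).max()
# Series ends exactly at a peak: the line keeps climbing while the sine turns.
err_peak = np.abs(linear_pad_error(np.pi / 2 - t_last)).max()
print(err_rising, err_peak)
```

The peak-phase error comes out several times larger than the mid-rise error, which is the "good or absolutely wretched, depending on phase" behaviour being argued about.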

R was not a co-author of the Copenhagen report.

Well Nick, you are certainly better credentialled than me, at least in the statistical field; my experience is undergraduate quantitative psychology, where the numbers are always defeated by the exception which proves the rule.

I’ve always thought smoothing, which as I understand it is recursive, is a dumb way of ‘predicting’ climate/weather; and before I venture my opinion as to why, by way of digression, smoothing is why, rightly or wrongly, AGW is perceived and promoted as a linear upward trend [as Rahmstorf’s little tale shows]; the reason why smoothing is inappropriate to climate is that the predictive/extrapolated endpoint is stuck in a different climate phase to the base; regardless of whether the smoothing is a running mean or an exponentially weighted average, if the endpoint is in a new climate phase then the smoothing period will always be contradicted by the real data until the smoothing period is entirely within the new climate phase. With Rahmstorf a cynical interpretation would be, since we are moving from a +ve PDO to a -ve PDO, to maintain an upward trend as the smoothing series moves into the -ve PDO, more and more of the +ve PDO will have to be included; going from an 11 year period to a 15 year period. IMO a new method of statistically dealing with climate data, other than assuming some sort of ‘continuity’, is needed.

Coho, no, the issue with R is the opposite. At the end of the range you’re stuck between either using a whole lot of past data, getting a smooth response to new info at the cost of the phase lag that you describe, or avoiding the phase lag, which you can, at the cost of reduced smoothing and a jittery response to incoming new data. R does the latter and is being criticised for it.

I’ve been describing this at CA, and also noting that some of the effect is due to the other end of the filter. You’re probably familiar with the following behaviour of a moving average. Say you have a running annual average of monthly sales data. If you have a good month, the average rises, but a year later it drops again. People often wrongly blame the current sales effort.

A lot of the recent difference between the M=11 and M=15 filters is in the different way they see the 90’s warming leading up to 1998. It is much more visible in the M=15 filter (for recent smooths), and shows up as a higher predicted current gradient.
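The sales analogy above is easy to reproduce with toy numbers (all made up): a single good month lifts a trailing annual average when it enters the window, and drops it again exactly twelve months later when it falls out.

```python
import numpy as np

sales = np.full(36, 100.0)   # three years of flat monthly sales
sales[18] = 160.0            # one unusually good month

window = 12
run_avg = np.convolve(sales, np.ones(window) / window, mode="valid")
# run_avg[i] is the trailing average over months i .. i+11

jumps = np.diff(run_avg)
up = int(np.argmax(jumps))    # average rises when the good month enters
down = int(np.argmin(jumps))  # ...and falls when it drops out of the window
print(up, down, jumps[up], jumps[down])
```

The rise and the fall are the same size (the spike divided by the window length) and are separated by exactly one window, even though nothing happened in the later month: the "other end of the filter" effect.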

“R was not a co-author of the Copenhagen report.”

So he gave them a picture without telling them what it was?

That’s even worse.

Fair enough Nick but the point is the prediction/projection was contradicted by the real data and the prediction/projection was used for the accelerated trend assertions; a “jittery response” is not good enough to go around alarming people; and your sales example is, I think, what I was saying; 2 distinct climate phases are the equivalent of your 2 distinct sales periods; it’s unreasonable to compare the statistics between them; the point is R is not a good salesman, or as the new show describes them, “Madmen”.

Coho, predictions are always contradicted by real data. Actually, M=11 turned out to be a “better” predictor than M=15. In that sense, the cynical view of R’s recent use of M=15 is misplaced, if prediction quality is the key. But the reason why M=11 predicted better is spurious. It turned down because it was shedding the pre-1998 data, not foreseeing 2008. Unless you think those two things are related.

Nick Stokes,

I’ve been reading you and the rest at CA also. I always thought that Steve McI. laid out one issue at a time due to a combination of reasons, including the time required to do a thorough job and keeping the issue focused and clear.

On Rahmstorf, I think I see another application, whether done consciously or accidentally. By dragging out the full critique, it allows apologists time to jump in and make excuses that do not address the full range of issues, and even contradict the facts later revealed. It can also allow them to “paint themselves into a corner!!”

Seen the latest two posts yet??

Keep up the Good Fight!! ;>)

Well Nick, are 1998 and 2008 related? Could be if you could prove that a new climate phase began just after 1998; watch this space.

It’s not just end effects that are guesses. Here is a piece about Australian temperature homogenisation procedures.

Ref:

Aust. Met. Mag. 53 (2004) 75-93

Updating Australia’s high-quality annual temperature dataset

Paul Della-Marta and Dean Collins

National Climate Centre, Bureau of Meteorology, Australia

and Karl Braganza

CSIRO Atmospheric Research, Aspendale, Australia

(Manuscript received June 2003; revised September 2003)

Quote – selected a little out of context to make a point – please read the whole paper for better understanding.

“This problem is not likely to have greatly affected this dataset, since only 22 per cent of adjustments made were based on the results of the objective test without supporting documentation. Some unsupported discontinuities must be accepted as real for it is known that many historical station changes have not been documented. ”

In other words, there were discontinuities in earlier data, which were corrected by guesswork in a quarter of these cases, but the score should be better than that because some were real but not written on the observer’s sheet.

What is the point of even trying to run a smoothing filter over data so full of guesswork? It’s hardly even worth going to the effort of discussing end problems when parts of the centre are artificial. So, when artificial data are close enough to the end to be enclosed by the filter, this will also incorrectly pad the end in an unknown manner.

All projections in this field are guesses. The sophistication of the guess is, alas, unknown to the climate system.
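To make the end-problem concrete, here is a minimal sketch (invented data, numpy only; nothing here is Rahmstorf’s actual code) showing that the last smoothed value of a centered moving average is set almost entirely by how you pad beyond the end of the series. Constant padding, linear extrapolation of the last m points (the minimum-roughness-style choice), and reflection all give different endpoint answers from the same data:

```python
import numpy as np

def smooth_with_padding(y, m, pad):
    """Centered moving average of width m, after padding each end
    with m // 2 synthetic points chosen by `pad`."""
    h = m // 2
    if pad == "constant":          # repeat the end values
        left = np.full(h, y[0])
        right = np.full(h, y[-1])
    elif pad == "linear":          # extend the trend of the last m points
        t = np.arange(m)
        bL = np.polyfit(t, y[:m], 1)
        bR = np.polyfit(t, y[-m:], 1)
        left = np.polyval(bL, np.arange(-h, 0))
        right = np.polyval(bR, np.arange(m, m + h))
    elif pad == "reflect":         # mirror the series about each end
        left = y[1:h + 1][::-1]
        right = y[-h - 1:-1][::-1]
    padded = np.concatenate([left, y, right])
    kernel = np.ones(m) / m
    return np.convolve(padded, kernel, mode="valid")

rng = np.random.default_rng(0)
y = 0.01 * np.arange(60) + rng.normal(0, 0.15, 60)  # trend plus noise

# Same data, same filter width: only the padding rule changes.
ends = {p: smooth_with_padding(y, 11, p)[-1]
        for p in ("constant", "linear", "reflect")}
print(ends)
```

The three endpoint estimates disagree, which is the whole complaint: near the ends the “smooth” is partly a statement about the padding assumption, not the data.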

Dr. Stockwell, I’d like confirmation that I understand the “Rahm-Centering” that Steve posted on recently. You are one of three I know of that has replicated R07 in full, so I thought I’d come here. I’m trying to wrap my head around it.

So: 1) the smooth of the model means and the smooth of the temps are computed. 2) the temp smooth is moved up to the model smooth so that the two are equal in 1990. 3) neither is re-normalized in any proportional way; the move up (or vice versa) is simply an arbitrary shift?

Presumably the modelers strove for the best fit over the *entire* calibration period but as it happens, in 1990 temperature was running cool, at about -1 s.d. down from the model mean.

After the arbitrary shift up, going forward from 1990 temp will always be 1 s.d. too high?

I am worried I have it wrong because, well, I can’t believe they did this.

TIA
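A toy sketch of the shift being asked about, with invented numbers (this is a reading of the procedure as described in the comment above, not the paper’s actual code). Anchoring the observations to the model mean at a single baseline year carries that year’s weather noise into every later comparison, whereas centering over the whole record does not:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1973, 2007)
model = 0.02 * (years - 1973)                    # hypothetical model-mean trend
temp = model + rng.normal(0, 0.1, len(years))    # obs = model + weather noise

# Suppose temperature happened to run cool at the 1990 baseline,
# about 1 s.d. below the model mean, as the comment supposes.
i90 = np.where(years == 1990)[0][0]
temp[i90] = model[i90] - 0.1

# Single-year anchoring ("Rahm-centering" as read above):
# shift temp so it equals the model mean in 1990 exactly.
shifted = temp - (temp[i90] - model[i90])

# Alternative: center by matching means over the whole record.
centered = temp - (temp.mean() - model.mean())

# The anchored series sits roughly 0.1 above the full-record-centered
# one at every point: 1990's noise has been baked in as an offset.
print((shifted - centered)[0])
```

So if the anchor year is cool relative to the models, everything after it is pushed up by that one year’s noise, which is exactly the worry raised in the question.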

The way I see it, there was uncertainty introduced in the “Rahm-centering”, and uncertainty in the end treatment, above the usual uncertainties associated with noisy data. These ‘special’ uncertainties were not appreciated or accounted for. The description you give is what we are assuming he did from the look of the graph.

I couldn’t make head or tail of what the centering issue was. It’s just a crappy explanation. I mean, how fucking hard is it to draw clear before-and-after pictures and put arrows pointing to exactly what changed? Same thing happened with the Briffa snip. I’m sure the cheering choir just laps it up…but then they are curs.
