Surface Temperatures – How significant is the January 2008 fall?

As in the previous post about the recent plummet in global temperatures, I want to look at the statistics of the drop and determine its significance. The question of interest is: how improbable is a fall in temperatures of that magnitude over a 12-month period? After all, it is irresponsible to report alarming results without demonstrating their statistical significance. Unfortunately this is common practice; see, for example, the reporting of record high temperatures from NASA.

The statistical setup is encoded in the question itself. As we are only looking at falls in temperature, this should be a one-tailed test. The data we need are the twelve-month changes in the global temperature anomaly, of which there are twelve every year, each month being compared with the same month of the previous year. We then need the area under the distribution of these changes, up to and including the value in question, -0.5906 in the case of the HadCRU data.

The raw monthly temperature anomalies can be found in the second column of the file from the UK Hadley Climate Research Unit.

Programming this in R is then a case of padding the anomaly series with NAs at the beginning and at the end, and subtracting one padded copy from the other (I am sure there is a more elegant way to do this, BTW):

d <- read.table("hadcrut-monthly.txt")   # monthly anomalies; the values are in column V2
d12f <- c(rep(NA, 12), d$V2)             # series padded with 12 NAs at the front
d12b <- c(d$V2, rep(NA, 12))             # series padded with 12 NAs at the end
d12d <- d12f - d12b                      # differences between values 12 months apart
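
For what it is worth, base R's diff() does the padding and subtracting in one step. A minimal sketch (note that diff() returns each value minus the value 12 months earlier, so its sign convention is the reverse of d12d above):

d12d_alt <- diff(d$V2, lag = 12)   # 12-month changes; equal to -d12d with the NAs dropped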

From the vector d12d of 12-month differences we calculate the mean and standard deviation below, remembering to omit the NAs.

> mean(na.omit(d12d))
[1] -0.005088594
> sd(d12d)
Error in var(x, na.rm = na.rm) : missing observations in cov/cor
> sd(na.omit(d12d))
[1] 0.201136
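
As an aside, in more recent versions of R the NAs can be handled with the na.rm argument rather than by wrapping in na.omit(); an equivalent sketch:

mean(d12d, na.rm = TRUE)   # ignore the NAs introduced by the padding
sd(d12d, na.rm = TRUE)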

Hmmm… As the mean is negative, it looks like the changes tend to be more negative than positive over the period (1850 to the present). Perhaps this is not a stationary process, and temperatures have been on a long-term downward trend since the 19th century? Anyway, we then get the probability of the present fall from the R pnorm function:

> pnorm(-0.595, mean = -0.005088594, sd = 0.201136)
[1] 0.00167907

This analysis, which assumes the 12-month differences come from a normal distribution, suggests a probability of 0.00168 of a fall larger than 0.595C in any given month. In annual terms this is roughly a one-in-50-year event! Call the press!
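
To spell out the one-in-50-years arithmetic, here is a back-of-envelope sketch (it treats the twelve overlapping monthly comparisons in a year as roughly independent):

p_month <- pnorm(-0.595, mean = -0.005088594, sd = 0.201136)
p_year <- 12 * p_month   # approximate chance of such a fall in any given year
1 / p_year               # expected recurrence interval, roughly 50 years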

However, being skeptical statisticians, we suspect that the temperature differences do not come from a normal distribution. The most direct way to check is to generate the actual histogram of the differences and calculate the probability directly from the number of falls in temperature greater than 0.595C. The R code below gives us a histogram of the 12-month changes in global temperature:

[Figure: histogram of the 12-month changes in global temperature (histo-temp.png)]

h <- hist(d12d, breaks = seq(-1.1, 1.3, 0.1))   # bin the 12-month differences into 0.1-degree classes
h[c("breaks", "counts")]                        # the class boundaries and counts shown below

$breaks
 [1] -1.1 -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1  0.0  0.1  0.2
[15]  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1.0  1.1  1.2  1.3

$counts
 [1]   0   0   2   2   4  18  30  71 146 281 391 413 291 122  61  31  16   5
[19]   0   0   0   0   1   0

Summing the counts in the tail, down to and including the class containing the present fall (0 + 0 + 2 + 2 + 4 + 18 = 26, out of 1885 observations in all), gives the probability directly:

> 26/1885
[1] 0.01379
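
Equivalently, the qualifying differences can be counted directly rather than by summing whole histogram classes. A minimal sketch (note that using the exact -0.595 cut-off gives a slightly smaller count than the 26 above, which includes everything in the -0.6 to -0.5 class):

n_tail <- sum(d12d <= -0.595, na.rm = TRUE)   # differences at or below -0.595
n_obs <- sum(!is.na(d12d))                    # 1885 valid 12-month differences
n_tail / n_obs                                # empirical monthly probability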

A monthly probability of 0.0138 translates into roughly a one-in-6-year event (12 × 0.0138 ≈ 0.166 per year, or once every 1/0.166 ≈ 6 years). The advantage of the histogram is that it prevents us from exaggerating the significance of the event when we do not have a normal distribution. A fall in temperatures of this magnitude is expected about once every 6 years, almost an order of magnitude more frequent than the one-in-50-years figure obtained under the false assumption of normality in the data.

Comments on “Surface Temperatures – How significant is the January 2008 fall?”

  1. Isn’t that 6/1885 or 0.00318? That would be once every 26.2 years. How many of these months back to back would be unusual? Remember we have no volcanic eruption to blame this cooling on. As we are into small numbers, would it be better to look at temps 364 days apart?
    Thanks

  2. John, summing the tail of the distribution to the -0.6 class gives 0+0+2+2+4+18=26 counts. This gives a monthly probability of 26/1885=0.0138, a probability of it occurring in a year of 12*0.0138=0.166 and an expected recurrence of 1/0.166=6 years. I have corrected the post.

    This should be about right for estimating the probability in the absence of an explanation such as a volcanic eruption. The probability would be higher (the event less unusual) if there had been eruptions, as the cooling would then be explained.

    These temperatures are 12 months (or 364 days) apart. I might look at the 13-month statistics when they come out. If you are looking for a one-in-one-hundred-year event (probability 0.01), my guess is that it would take another year of falling temperatures to get there.

    Sorry about the delay, but I am getting so much spam now I have to moderate.

  3. Thanks for the reply. I was looking at the wrong tail. So if the drop had been 0.01 more we’d be summing the tail of the -0.7 class: 0+0+2+2+4=8, and 8/1885=0.00424, or once every 19.6 years; yet if the drop were up to 0.09 less it would still be a one-in-6-year event.

    Somehow that doesn’t seem right even if we pretend HadCRU month data is accurate to 0.01 degree C.
    If we fitted a curve to the data and solved for the area of the tail to the left of -0.5906 would it still be a one in 6 year event? It has been 35 years since I’ve done these sort of things … About the number of years since the US has had so much snow cover :). Sorry if the answer is obvious.

  4. John, you are right, the estimate is less accurate because it is only a sample. To me, the distribution looks leptokurtic: simultaneously “peaked” and with “fat tails”. So you would have to test for these and fit an appropriate distribution.

    Actually it looks too frequent to me too. I think that the runs are longer in the earlier portion of the temperature record, probably because there were fewer sites then, so the variance is higher. It would be worth trying to estimate the actual variance for the recent period of data collection. Because the present-day data have lower variance and greater reliability, such runs may be less common. Just a guess at this stage.
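
    As a quick check (a sketch only, using the d12d vector from the post), the sample excess kurtosis can be computed directly; a value well above zero would support the fat-tails impression:

    x <- na.omit(d12d)
    m2 <- mean((x - mean(x))^2)   # second central moment
    m4 <- mean((x - mean(x))^4)   # fourth central moment
    m4 / m2^2 - 3                 # excess kurtosis; 0 for a normal distribution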
