Review of Antarctic Snowfall

A recent Nature paper we have been reviewing claims that the recent snowfall at Law Dome, Antarctica, and the drought in Western Australia "lies outside the range of variability for the record as a whole". Being about precipitation (often more important to us than temperature), and claiming evidence of AGW causing drought, it's interesting.

I finally succeeded in replicating the results, but only after resorting to viewing the code, due to omissions in the description of methods. I argue at the end that the precipitation at Law Dome (and therefore in Western Australia) is not unusual, finding a better than 5% chance of an anomaly that size occurring in a record of that length.

To his credit, Tas van Ommen has been incredibly helpful, open with his code and data, and patient with my WTF moments.

The core presentation of the ice core anomalies is Fig. 3 of the paper (shown below), around which the method is described:

[Figure 3 from the paper: Antarctic snowfall anomalies]

The focus of attention is on the size of the last anomaly, which starts in 1970 (red, far right), relative to the others. The supplementary information states:

The 56 anomalies in the period 1250-1969 have a mean precipitation of –0.0188 m and standard deviation, 0.719 m (ice-equivalent, i.e. for glacial ice density 917 kg m⁻³). The post 1970 anomaly is 2.42 m (i.e.), which is extremely improbable from the distribution of 1250-1969 anomalies (P = 3.4×10⁻⁴, normal z-score).

I finally got agreement with a mean of -0.0188 m and SD of 0.719 m, and a post-1970 anomaly of 2.42 m, but only after realizing the following:

1. The data starts in 1948, not 1950 as described.

2. The size of the anomalies is not the area of red and blue on the Figure, which are the smoothed anomalies, but the summed raw anomalies. The smoothed anomalies determine only the start and end points.

Strangely, the summed smoothed anomalies have a non-normal distribution, but the summed raw anomalies are approximately normally distributed. This led me down the garden path with a hypothesis of 'thick tails'. Granted, on a close reading the text is correct, but it doesn't say why:

We define decadal-scale anomalies as the difference between the ten-year Gaussian smoothed series and the long-term mean, and use the sign changes in this anomaly to define successive anomalous intervals. The size of the event is the integrated accumulation anomaly over each interval.

3. There are actually 57 anomalies in the period 1250-1969, not 56 as stated. However, the first, large negative anomaly was omitted, and this is not mentioned in the method description. Tas says the rationale was that it is 'partial'. Of course the last anomaly is partial too, but it is included, so again one wonders why.

The incompleteness does raise some interesting questions related to the smoothing of endpoints, which I raised here. The anomaly could get larger or smaller, depending on what data comes in next.

Anyway, once I had sorted out these difficulties, my R emulation of his Matlab code (in the Code section below) gives similar results, mindful that I haven't checked everything, as I started from the raw and smoothed data, not the actual core data.

The omission of the first anomaly makes little difference, and to Tas' credit, he approached the editor offering to correct the record, who said the modification was unnecessary. I would have thought it worthwhile, in the interests of accuracy and of future attempts to replicate the study, to clarify that the anomalies actually analyzed started in 1274, not 1250 as stated in the paper.

Statistics

Moving on. The claim is that the anomaly "lies outside the range of variability for the record as a whole". Here the warrant is given by a simple z-test:

The post 1970 anomaly is 2.42 m, which is extremely improbable from the distribution of 1250-1969 anomalies (P = 3.4×10⁻⁴, normal z-score).

I think this makes the same mistake as the much-criticised paper by Douglass et al. on model consistency with observations, subsequently excoriated by Santer et al. in 2008. Douglass responds to criticism here, and a 600-comment discussion follows. Or search on "douglass" for the saga, and see Lucia's posts and the Air Vent.

As Santer says of the Douglass test:

If the DCPS07 test were valid, a large value of d* would imply a significant difference between the means. However, the test is not valid. There are a number of reasons for this:

1. DCPS07 ignore the pronounced influence of interannual variability on the observed trend (see Figure 2A). They make the implicit (and incorrect) assumption that the externally forced component in the observations is perfectly known.

I suspect the test here suffers the same problem, by treating the size of the last anomaly as perfectly known, when it comes from a population with a similar variance to the previous 56 (albeit with a larger mean, which is what we are trying to test).

In addition, the uncertainty in the size of the last anomaly is even greater because it's incomplete. The anomaly could be larger, or smaller, because subsequent data could change the smoothing, and hence the size.

Santer proposes a difference of means test, where R is the set of 56 ‘natural’ anomalies and R1 is the last anomaly:

z = |mean(R) - R1| / sqrt(var(R)/n + E*var(R)/1)

I have included E for the additional uncertainty in R1 due to incompleteness. Whereas the z-score in the first case gives P = 0.00034, we now get P = 0.00038 (E = 1). If we set E = 1.2 to provide for 20% uncertainty in the final size of the anomaly, then P = 0.001.
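
As a sanity check, here is a minimal R sketch of both calculations, using only the summary statistics quoted above (the variable names are mine; one-sided normal probabilities throughout):

# Summary statistics quoted in the supplementary information
m  <- -0.0188    # mean of the 56 pre-1970 anomalies (m ice-equivalent)
s  <- 0.719      # standard deviation of the pre-1970 anomalies
R1 <- 2.42       # post-1970 anomaly
n  <- 56

# Simple z-score, as used in the paper: treats R1 as perfectly known
z1 <- abs(m - R1)/s
pnorm(-z1)       # ~3.4e-4, the paper's value

# Difference-of-means form, with E inflating the variance of R1
z2 <- function(E) abs(m - R1)/sqrt(s^2/n + E*s^2/1)
pnorm(-z2(1))    # ~0.00038
pnorm(-z2(1.2))  # ~0.001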

The paper's logic says that low probability implies it "lies outside the range of variability for the record as a whole". But technically, an anomaly with a probability of P = 0.001 is within the range of variability, just rare.

How can whether a record of Antarctic anomalies is “within the range of natural variability” be tested in a meaningful way? The paper says:

Using these probabilities, and given the mean duration of individual decadal scale anomalies (12.8 years), the post 1970 event might be expected almost once in ~38,000 years (for the preferred composite stack) or once in 5,400 years (for the alternate stack). In either case the event is clearly outside the envelope of natural variability.
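
As an aside, the ~38,000-year figure looks like the mean anomaly duration divided by the z-score probability; this is my reconstruction of the arithmetic, not a step given in the methods:

12.8/3.4e-4      # ~37,600 years, i.e. the "once in ~38,000 years" for the preferred stack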

All this shows is that the probability of an anomaly of that size is low (or that it is infrequent on average). The step showing that such an anomaly is unexpected in the ~750-year period of 57 or so anomalies is missing.

I point out here that the paper has chosen to describe precipitation as anomalies: discrete events that occur independently of one another. They are not continuous levels one could fit a trend line through. So the natural description of the anomalies is a Poisson process.

A Poisson process gives us the notion of ‘normal’ for a system that generates discrete anomalies. An ‘abnormal’ system throwing out larger than expected anomalies could be said to be operating ‘outside the range of natural variability’.

To show this we need to calculate the probability of all occurrences of 1, 2, 3, etc. events of that size in a run of 57 anomalies. The Poisson distribution for frequency of rare events does the trick, and we take one minus the probability of zero occurrences to get the probability of any number of occurrences greater than zero, i.e.

P = 1 - ppois(0, lambda = 57*0.001) = 0.055

That is, there is slightly greater than a 5% chance of seeing one or more anomalies this size in the process of generating 57 anomalies. Given that the bar in experimental science is a 95% CL, or a 5% chance, it's not clear that seeing one anomaly this size means the system is 'outside the range of variability for the record as a whole'. More data is required to establish significance.
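
As a cross-check, the exact binomial form of the same question (one or more successes in 57 independent trials, with the per-anomaly probability of 0.001 from the E = 1.2 case above) gives essentially the same answer as the Poisson approximation:

p <- 0.001                          # per-anomaly probability (E = 1.2 case)
1 - ppois(0, lambda = 57*p)         # 0.055, Poisson approximation
1 - pbinom(0, size = 57, prob = p)  # 0.055, exact binomial equivalent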

Unusual but not abnormal. Unlikely but not impossible.

IMHO one can't conclude that the excess snowfall at Law Dome, Antarctica, and similarly the drought in Western Australia, is due to AGW.

Code


# Y is assumed to be a matrix with columns: year, raw accumulation, 10-yr smoothed accumulation
tasblips <- function()
{
  allmean <- mean(Y[, 2])            # long-term mean of the raw accumulation
  n <- nrow(Y)

  # Get the sign of the decadal (smoothed) anomalies
  SgnDecAnom <- sign(Y[, 3] - allmean)

  # Find the sign switches in the decadal anomalies
  countAnom <- abs(SgnDecAnom[2:n] - SgnDecAnom[1:(n - 1)])/2

  # Assign each individual anomaly a sequence number
  # (leading 0 added by DRS so the numbering lines up with the first anomaly)
  x <- c(0, cumsum(countAnom))
  print(max(x))

  # Now loop over anomalies and tally up the raw accumulation anomaly for each
  Event <- NULL
  Eventlen <- NULL
  # Q1: was 1:max(x), now 0:max(x); the old range missed the first anomaly
  for (i in 0:max(x)) {
    # Q2 (small): ind = 1 + which(x == i) misses the first value of each anomaly
    ind <- which(x == i)
    Event <- c(Event, sum(Y[ind, 2] - allmean))
    Eventlen[i + 1] <- length(ind)   # i starts at 0, so offset the index
  }
  Event
}
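
For completeness, a hypothetical usage sketch. The column layout of Y (year, raw accumulation, ten-year smoothed series) is my assumption from the code above, and the file name is a placeholder:

# Hypothetical input: year, raw annual accumulation, 10-yr Gaussian smoothed accumulation
Y <- as.matrix(read.table("lawdome_accumulation.txt", header = TRUE))

Event   <- tasblips()
paper56 <- Event[2:(length(Event) - 1)]  # drop the first (partial) and the post-1970 anomalies
last    <- Event[length(Event)]          # the incomplete post-1970 anomaly

mean(paper56); sd(paper56); last         # compare with -0.0188 m, 0.719 m and 2.42 m from the SI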

Comments

  1. Are there not problems with distributions (apart from your observation that the recent event is incomplete) because of the possibility of zero precipitation? Perhaps not here, but there are plenty of places with zero annual precipitation. Also, sublimation is a process that enters the estimation of precipitation at Law Dome. I am unsure if it is accounted for (as by use of isotopes or dust density) but if not, then it would compound the problem because the remnant layer thickness would depend not only on how much was put down, but also on how much was wafted off – and the latter is hard to estimate and separate from the parameters sought.

  2. I would think that generally a log-normal distribution would be good for precipitation totals (with exponential tails, e.g. http://www.aetheling.com/MI/Volatility/images/Lognormal.gif), but in this case it's anomalies, so the mean is zero. The anomalies are normal by the tests, so I am OK with that. There is also the troubling issue of the anomalies alternating +ve and -ve, so the degrees of freedom have to be less. I just don't like smoothing. It introduces too many problems.

  3. Prof. Emeritus Will Alexander shows a 21-year solar periodicity in the hydrometeorological data in South Africa, e.g. A Critical Assessment of Current Climate Change Science, W.J.R. Alexander, 2006.

    See especially page 25 “Figure 2. Characteristics of the periodic sequences of river flow at representative dam sites. The double sunspot cycle is diagrammatically superimposed.”

    This periodicity might be evident in the Antarctic snowfall.

    • David,
      I really didn’t spend much time on how I would analyse it, and yes I think that using EMD and seeing if the upturn in snowfall was predicted would be the way I would go.
