The issue in question here is whether recent snowfall at Law Dome is unusually high relative to the 750-year-long record (and therefore, so the argument goes, probably due to AGW).
Below is the snowfall at Law Dome from the ice core. The upper panel shows the actual snowfall, and the lower panel shows the cumulative sum of the series minus its mean (using the R function cumsum), indicating where snowfall is above or below average.
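The cumulative-anomaly step is simple enough to sketch. The post's own code is in R and uses the actual ice-core data; this is an illustrative Python equivalent on a synthetic series:

```python
import numpy as np

# Synthetic stand-in for the 750-year annual snowfall series; the post's
# own analysis uses R and the actual Law Dome ice-core data.
rng = np.random.default_rng(0)
snow = rng.normal(loc=0.7, scale=0.1, size=750)  # m ice-equivalent per year

# Cumulative departure from the long-term mean, the Python equivalent of
# R's cumsum(snow - mean(snow)); positive runs mark above-average periods.
cum_anom = np.cumsum(snow - snow.mean())
```

Note that the cumulative sum of departures from the mean necessarily returns to zero at the end of the series, so it is the shape and height of the excursions, not the endpoint, that carry the information.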
This simple approach is not used in the paper. While the accumulation of snow at present (relative to the mean) is high, it was just as high earlier in the record around 1400. Figure 3 in the paper shows a 10 year Gaussian filter of these data, so I illustrate that below, with a similar result.
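The smoothing step can be sketched the same way. Whether the paper's "half-filter size" corresponds to the Gaussian sigma is my assumption about the terminology; the data here are again synthetic:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
snow = rng.normal(0.7, 0.1, 750)  # synthetic stand-in series, as above

# Ten-year Gaussian smooth of the anomaly; using sigma=5 for a
# "half-filter size of 5" is my assumption about the paper's terminology.
anom_smooth = gaussian_filter1d(snow - snow.mean(), sigma=5)
```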
The approach in the paper records the size of filtered accumulation ‘events’. These would be the (approx. 60) areas under discrete sections of curves in the upper panel of the figures, positive if above and negative if below the mean (red line). The event of most interest is the last one, where snow has accumulated since 1970, after filtering.
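The event-segmentation step, as I understand it, splits the smoothed anomaly at sign changes and integrates within each interval. A minimal Python sketch of that reading (the simple annual sum here stands in for the paper's "integrated accumulation anomaly"):

```python
import numpy as np

def event_sizes(anom):
    # Split the (already smoothed) anomaly series wherever it changes sign,
    # and sum the annual values within each interval -- a simple stand-in
    # for the paper's "integrated accumulation anomaly" per event.
    breaks = np.where(np.diff(np.sign(anom)) != 0)[0] + 1
    return np.array([seg.sum() for seg in np.split(anom, breaks)])

# Toy check: two positive years then three negative years give two events,
# of +3.0 and -2.5 respectively.
sizes = event_sizes(np.array([1.0, 2.0, -1.0, -1.0, -0.5]))
```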
I start to get concerned with an approach like this because it's unfamiliar to me, and I don't recall it being characterized anywhere in the statistics books. So I check the distribution. The figure below shows the distribution of these events, and compares them to a normal distribution.
The distribution of event sizes appears to be leptokurtic (peaked with fat tails), although the supplementary information states that the distribution at a Gaussian half-filter size of 5 did not differ from the normal distribution. I have to check whether this deviation is real or an artefact of the binning of the data.
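Excess kurtosis is the standard way to quantify this. A small illustration (a Student's t sample stands in for a leptokurtic distribution; this is not the Law Dome data):

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(2)
# Illustrative samples: a Student's t with 5 degrees of freedom is
# leptokurtic (positive excess kurtosis); a normal sample is not.
heavy = rng.standard_t(df=5, size=5000)
normalish = rng.normal(size=5000)

# scipy's kurtosis() reports excess kurtosis: about 0 for a normal
# distribution, positive for a peaked, fat-tailed one.
print(kurtosis(heavy), kurtosis(normalish))
```

If the event sizes really are leptokurtic, tail probabilities computed from a normal assumption will understate P, which matters for everything that follows.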
Finally I looked at the improbability of the final event by calculating the set of events and their standard deviations for a range of filter sizes:
The dashed red line is the 2-sigma level, and dotted red line is the 3-sigma level. The size of the final event becomes 3-sigma significant at around half-filter size 4 and declines thereafter. In other words, the final event is significant at scales between 8 and 40 years.
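The sweep over filter sizes can be sketched like this, combining the smoothing and segmentation steps above (synthetic data again, and the sigma-for-half-filter-size equivalence remains my assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def final_event_sigma(snow, half_width):
    # Smoothed anomaly; treating the "half-filter size" as the Gaussian
    # sigma is an assumption, not the paper's stated definition.
    anom = gaussian_filter1d(snow - snow.mean(), sigma=half_width)
    breaks = np.where(np.diff(np.sign(anom)) != 0)[0] + 1
    sizes = np.array([seg.sum() for seg in np.split(anom, breaks)])
    # Sigma level of the last event relative to all earlier events.
    earlier = sizes[:-1]
    return (sizes[-1] - earlier.mean()) / earlier.std()

rng = np.random.default_rng(3)
snow = rng.normal(0.7, 0.1, 750)  # synthetic stand-in for the ice-core series
levels = {w: final_event_sigma(snow, w) for w in range(2, 11)}
```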
Here is a link to the R code and data used to generate these plots.
Tas van Ommen has been a really good sport in providing data and information to allow me to check the figures. The relevant passage from the paper is as follows:
Long-term climate variability
In assessing the degree to which recent trends are natural or anthropogenic, it is useful to consider the size of the anomaly in the context of 750 years of data (Fig. 3). We define decadal-scale anomalies as the difference between the ten-year Gaussian smoothed series and the long-term mean, and use the sign changes in this anomaly to define successive anomalous intervals. The size of the event is the integrated accumulation anomaly over each interval. We find the event that commenced in 1970 is the largest in the record. It is significantly larger than the distribution of the 56 other events (P = 7×10⁻⁴, 1-tail t-test; Supplementary Information). Only a single such event is expected in around 38,000 years for a climate unchanged from that of the past 750 years. … were computed using one-sided t-tests.
In contrast, I get a significance of 99.7%, or P = 3×10⁻³, an expectation of one such event every 333 years, or twice in the record, a similar level to the accumulation plot I presented first up.
There is an inconsistency in the passage quoted above. By my calculation a P = 7×10⁻⁴ event is 1/0.0007, equalling an occurrence every 1428 years, so I am not sure where the figure of 38,000 years comes from.
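The return-period arithmetic is easy to check, if P is read as the probability of such an event in any given year:

```python
# If P is the probability of such an event per year, the expected waiting
# time is 1/P years.
p_paper = 7e-4   # quoted in the paper
p_mine = 3e-3    # my replication
print(1 / p_paper)  # ~1429 years, far short of the quoted 38,000
print(1 / p_mine)   # ~333 years, i.e. about twice in a 750-year record
```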
Another puzzle is that the statement “It is significantly larger than the distribution of the 56 other events (P = 7×10⁻⁴, 1-tail t-test; Supplementary Information)” indicates the test is based on the size of the final accumulation event. However, the relevant statement in the supplementary information suggests this figure comes from a test of difference of distributions.
Law Dome 750-Year Accumulation Statistics
We computed decadal scale anomalies for the LD precipitation series as described in the main text, and found that the largest anomaly was the most recent decadal event, which began in 1970. This event appears to belong to a population with a larger mean than the remaining 56 decadal events – we reject the null hypothesis (that it comes from the same distribution) using a 1-tailed t-test (P = 7×10⁻⁴).
The test of the size of the final event (comparable to the test I performed above) is described in the following paragraph. This is an order of magnitude different from my replication.
The 56 anomalies in the period 1250–1969 have a mean precipitation of −0.0188 m and standard deviation 0.719 m (ice-equivalent, i.e. for glacial ice density 917 kg m⁻³). The post-1970 anomaly is 2.42 m (i.e.), which is extremely improbable from the distribution of 1250–1969 anomalies (P = 3.4×10⁻⁴, normal z-score).
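The z-score arithmetic in this passage can be checked directly from the quoted figures:

```python
from scipy.stats import norm

# Figures quoted in the supplementary information
mean_anom = -0.0188   # m ice-equivalent, mean of the 1250-1969 events
sd_anom = 0.719       # m, standard deviation of those events
final_anom = 2.42     # m, post-1970 event

z = (final_anom - mean_anom) / sd_anom
p = norm.sf(z)  # one-sided normal tail probability
print(z, p)     # z ~ 3.39, p ~ 3.5e-4, close to the quoted 3.4e-4
```

So the quoted P = 3.4×10⁻⁴ is reproducible as a normal z-score from the stated mean and standard deviation; the question is whether those inputs are the right ones.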
It seems that the significance of the event might be based on the distribution of the (possibly filtered) annual snowfalls during that event. If so, put your statistician on danger money! I will try to replicate the stated values more closely tomorrow.
Even though this is an initial visual examination of the data, there are some questions that need to be looked at more closely:
Where do the test values come from and the 38,000 year figure?
Are the data really leptokurtic, meaning that even my initial estimate of P = 0.003 is understated?
The snowfall at Law Dome does not look particularly impressive here, given that the present accumulation of snow is not greater than at other times in the 750-year record.
van Ommen, T. D. and V. Morgan (2010). Snowfall increase in coastal East Antarctica linked with southwest Western Australian drought, Nature Geoscience, Advance Online Publication, doi:10.1038/NGEO761