The timing of the 24-hour observing window can affect the determination of daily extrema by mischaracterizing the diurnal minimum, and by extension can introduce errors into the daily mean temperature.
The complete UAH v6.0 data for January have been released. Ken presents all the graphs for various regions, as well as summaries for easier comparison. Ken also includes graphs for the North and South Temperate regions (20-60 North and South), estimated from Polar and Extra-Tropical data.
The Pause has ended globally…
Graham Lloyd, The Australian, June 19, 2015
Better data handling and statistical methods and the use of pre-1910 temperature records would improve the Bureau of Meteorology’s national temperature data set ACORN-SAT, an independent review has found.
A technical advisory panel, brought forward following public concerns that the bureau’s homogenisation process was exaggerating a warming trend, said it was “generally satisfied” with BoM’s performance.
But it said there was “scope for improvements that can boost the transparency of the data set”.
Scientists who queried BoM’s management of the national temperature data said they had been vindicated by the report.
The review panel made five recommendations and said it was “not currently possible to determine whether the improvements recommended by the forum will result in an increased or decreased warming trend as reflected in the ACORN-SAT dataset”.
The independent review panel was recommended by a peer review of the Australian Climate Observations Reference Network — Surface Air Temperature, but it was not acted upon until public concerns were raised.
BoM’s technical advisory forum said ACORN-SAT was a complex and well-maintained data set. Public submissions about BoM’s work “do not provide evidence or offer a justification for contesting the overall need for homogenisation and the scientific integrity of the bureau’s climate records.”
Nonetheless, the review report said it considered its recommendations for improving the bureau’s communications, statistical methods and data handling, and further regional analysis based on the pre-1910 data, would address the most important concerns.
David Stockwell, who raised concerns, said he was “very pleased with the recommendations”.
“They largely identify and address all of the concerns that I have had with the past BoM work,” Dr Stockwell said. “When implemented, it should lead to considerable improvements.
“The panel recommended strongly that the BoM communicate the limitations and it agreed that errors in the data need to be corrected and homogenisation is necessary, as I do, although it must be communicated clearly that the ACORN result is a relative index of change and not an observational series.”
The forum received 20 public submissions, which questioned:
• The 1910 starting date (although some pre-1910 records are available) and its potential effect on reported climate trends;
• The treatment of claimed cyclical warming and cooling periods in the adjustment process and its effect on reported warming trends;
• The potential effects of site selection and the later inclusion of stations in warmer regions;
• The treatment of statistical uncertainty associated with both raw and homogenised data sets;
• The ability of individuals to replicate or verify the data set; and
• The justification for adjusting historic temperature records.
Bob Baldwin, the parliamentary secretary responsible for BoM, said the bureau would work to adopt the recommendations.
“We believe the forum’s recommendations for improving the bureau’s overall communications, statistical methods and data handling, and further regional analysis based on the pre-1910 data, will help address the main concerns surrounding the dataset,” he said.
Mr Baldwin said the report was an important part of ensuring the bureau continued to provide world-class information on the climate trends affecting Australia.
The review panel said its recommendations predominantly addressed two key aspects of ACORN-SAT.
They were: to improve the clarity and accessibility of information provided, in particular explaining the uncertainty inherent to both raw and homogenised datasets; and refining some data-handling and statistical methods through appropriate statistical standardisation procedures, sensitivity analysis and alternative data filling approaches.
The climate sensitivity due to CO2 is expressed as the temperature change in °C associated with a doubling of the concentration of carbon dioxide in Earth’s atmosphere. The equilibrium climate sensitivity (ECS) refers to the equilibrium change in global mean near-surface air temperature that would result from a sustained doubling of the atmospheric carbon dioxide concentration. The transient climate response (TCR) is defined as the average temperature response over a twenty-year period centered at CO2 doubling in a transient simulation with CO2 increasing at 1% per year. The transient response is lower than the equilibrium sensitivity, due to the “inertia” of ocean heat uptake.
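As a quick check on the arithmetic behind these definitions (a sketch, not from the original text), the 1% per year compounding rate fixes when doubling occurs:

```python
import math

# At 1% per year compounding growth, CO2 doubles when 1.01**t == 2
doubling_year = math.log(2) / math.log(1.01)
print(round(doubling_year, 1))  # -> 69.7 years

# TCR is the average response over the 20-year window centred on doubling,
# i.e. roughly years 60-80 of the transient simulation
window = (doubling_year - 10, doubling_year + 10)
```

So TCR averages over roughly years 60 to 80 of the 1%-per-year run, well before the deep ocean has equilibrated, which is one way to see why TCR is below ECS.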
Scientists have made numerous estimates of climate sensitivity over the last few decades and have yet to converge on a definitive value. The figure shows the change in published climate sensitivity estimates over the past 15 years (from here). The ECS and TCR estimates have both declined over that period, with the ECS falling from around 6C to less than 2C. While one cannot simply extrapolate from past results, this pattern suggests the true figure is below 2C and may decline further. On this basis we should discount the earlier studies that exaggerated climate sensitivity, and remember that global warming is not the most serious issue facing the world today.
For computational, statistical or display reasons, daily data are often aggregated to a coarser time scale. This is done by splitting the sequence into subsets along a coarser index grid and calculating a summary statistic, such as the mean value, for each segment.
Missing values cause problems when calculating the mean. By default in R, the presence of a single NA returns NA for most arithmetic operations, including the mean; the option na.rm=TRUE calculates the mean after omitting the NAs. In the first case the calculated means are valid, but whole segments are lost whenever they contain an NA. In the second, no segments are lost, but the means can deviate wildly when the data come from a strongly cyclical series such as temperature.
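The post's analysis is in R; here is a minimal Python sketch of the same two behaviours (mean_na is a hypothetical stand-in for R's mean(x, na.rm=...), and the monthly values are made up):

```python
import math

def mean_na(xs, na_rm=False):
    """Mean mimicking R's mean(x, na.rm=...): NaN propagates unless removed."""
    if not na_rm and any(math.isnan(x) for x in xs):
        return math.nan
    vals = [x for x in xs if not math.isnan(x)]
    return sum(vals) / len(vals) if vals else math.nan

NA = math.nan
# A strongly cyclical "year" of monthly temperatures with two winter months missing
monthly = [25, 24, 20, 16, 12, NA, NA, 10, 14, 18, 22, 24]
print(mean_na(monthly))              # nan: the whole aggregate is lost
print(mean_na(monthly, na_rm=True))  # 18.5: biased warm, winter is under-weighted
```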
Figure 1 shows the reduction in monthly aggregate data when na.rm=F.
Figure 2 shows the reduction when na.rm=F, with almost total loss on annual aggregation. While data are not lost with the option na.rm=T, the outliers at the start and end of the Rutherglen minimum data series illustrate its unexpected biasing effect.
The figures illustrate that a heterogeneous sequence is not ‘invariant’ with respect to aggregation using a mean. The only way to ensure invariance, which confers a degree of reliability under aggregation, is for the missing data to be randomly distributed within each segment of the coarse index.
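A small illustration of this non-invariance (a Python sketch with made-up numbers, not the post's code): when missing values fall non-randomly, the overall mean and the mean of the segment means disagree.

```python
import math

NA = math.nan
daily = [10.0] * 30 + [20.0] * 30          # two "months": cool then warm
damaged = daily[:30] + [NA] * 15 + daily[45:]  # knock out half of the warm month

def seg_mean(xs):
    """Mean over the non-missing values of a segment."""
    vals = [x for x in xs if not math.isnan(x)]
    return sum(vals) / len(vals)

overall = seg_mean(damaged)                # weights surviving days equally
monthly = [seg_mean(damaged[:30]), seg_mean(damaged[30:])]
agg = sum(monthly) / 2                     # weights months equally
print(round(overall, 2), agg)              # 13.33 vs 15.0: aggregation changes the answer
```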
Most studies define rules about the number of allowable missing values, but these are either not clearly stated or do not guarantee invariance, such as allowing a set number of missing values (eg. CAWCR).
Because heterogeneous data are not invariant under aggregation, it is best to analyze data at their original resolution.
The recorded sequences of temperature and rainfall from weather stations are often strikingly heterogeneous, with many different formats, protocols, and disruptions in the records. The Australian temperature record we see is the product of smoothing algorithms used to produce graphic displays that hide this structure. The problem of parameter estimation must be approached using methods that approximate the ideal homogeneous case.
While a standard format for water data exists (WDF), there do not appear to be standards for temperature data. The sequences function reads and detects two types of data file downloaded from the Australian Bureau of Meteorology: Climate Data Online (CDO) and the ACORN-SAT reference data set. You can use a wild card to identify the files you want, or name a specific one.
The sequences function returns a ‘zoo’ series which is a particularly powerful time series structure in R. The zoo series can be combined on the union or intersection of their dates with the ‘merge’ command.
Zoo series can also represent time of day with the ‘YYYY-mm-dd hh:mm:ss’ format, allowing separate maximum and minimum temperature series to be effectively combined into a single daily temperature series.
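As an illustration of the idea (a hypothetical Python sketch, not the post's zoo/merge code, with made-up dates and temperatures), combining maximum and minimum series on the union of their dates:

```python
# Hypothetical daily max and min series keyed by date string
tmax = {"2014-01-01": 32.0, "2014-01-02": 35.1, "2014-01-03": 30.4}
tmin = {"2014-01-01": 15.2, "2014-01-02": 17.8}  # one day missing

# Union of dates, like merge(..., all=TRUE) on zoo series;
# a date missing its partner contributes no daily mean
dates = sorted(set(tmax) | set(tmin))
tmean = {d: (tmax[d] + tmin[d]) / 2 for d in dates if d in tmax and d in tmin}
print(tmean)
```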
Figure 1 plots the Rutherglen minimum daily temperature from the raw CDO and the homogenized ACORN network. The difference between the raw CDO data and the ACORN series is shown in blue; these are the adjustments to the Rutherglen minimum series.
All of the sequences that match criteria can also be loaded with a command such as the following.
The summary function returns descriptive statistics about the heterogeneity of single or multiple series, such as the dates of the first and last values and the number and proportion of NAs.
Figure 2 shows the number of active stations (non-NAs) among 176 stations in south-eastern Australia. Note the extremely uneven collection of weather data over time. Such extreme heterogeneity can easily bias any analysis that is sensitive to the number of missing values over time.
For example, the calculation of a mean temperature sequence could only be done reliably if the missing values were distributed uniformly. If the weather stations that recorded during the latter 20th century tended to be situated inland where the weather is hotter, this would bias a simple average warm over that period. If the sequences were standardized on a common time period, such as the 1960s, there would be a bias between the periods before and after the standard period.
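A toy illustration of this bias (a hypothetical Python sketch with made-up station temperatures): a naive network mean trends warm merely because hotter stations enter the record later, with no actual warming at any station.

```python
# Constant "true" temperatures, no warming at either kind of station
coastal, inland = 15.0, 25.0

network_mean = []
for year in range(1900, 2000):
    stations = [coastal] * 5
    if year >= 1950:            # inland stations only report after 1950
        stations += [inland] * 5
    network_mean.append(sum(stations) / len(stations))

print(network_mean[0], network_mean[-1])  # 15.0 -> 20.0: spurious "warming"
```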
Clearly, taking the mean, averaging, or any similar operation is fraught with danger with extremely heterogeneous sequences such as these.
Yesterday’s post noted the appearance of station summaries at the BoM adjustment page attempting to defend their adjustments to the temperature record at several stations. Some I have also examined. Today’s post compares and contrasts their approach with mine.
The figures below compare the minimum temperatures at Deniliquin with neighbouring stations. On the left, the BoM compares Deniliquin with minimum temperatures at Kerang (95 km west of Deniliquin) in the years around 1971. The figure on the right from my Deniliquin report shows the relative trend of daily temperature data from 26 neighbouring stations (ie ACORN-SAT – neighbour). The rising trends mean that the ACORN-SAT site is warming faster than the neighbours.
The BoM's caption:
Deniliquin is consistently warmer than Kerang prior to 1971, with similar or cooler temperatures after 1971. This, combined with similar results when Deniliquin’s data are compared with other sites in the region, provides a very clear demonstration of the need to adjust the temperature data.
Problems: Note the cherrypicking of a single site for comparison and the handwaving about “similar results” with other sites.
In my analysis, the ACORN-SAT version warms at 0.13C/decade faster than the neighbours. As the spread of temperature trends at weather stations in Australia is about 0.1C/decade at the 95% confidence level, this puts the ACORN-SAT version outside the limit. Therefore the adjustments have made the trend of the official long term series for Deniliquin significantly warmer than the regional neighbours. I find that the residual trend of the raw data (before adjustment) for Deniliquin is -0.02C/decade which is not significant and so consistent with its neighbours.
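The panel test itself is not reproduced here, but the relative-trend comparison can be sketched as follows (a Python sketch with synthetic data; an assumed simplification of the actual method): fit an OLS slope to the ACORN-minus-neighbour difference series and compare it with the roughly 0.1C/decade spread of Australian station trends.

```python
def trend_per_decade(years, diffs):
    """OLS slope of the difference series, converted from C/year to C/decade."""
    n = len(years)
    my, md = sum(years) / n, sum(diffs) / n
    slope = sum((y - my) * (d - md) for y, d in zip(years, diffs)) / \
            sum((y - my) ** 2 for y in years)   # C per year
    return slope * 10                           # C per decade

years = list(range(1950, 2000))
diffs = [0.013 * (y - 1950) for y in years]     # synthetic 0.13 C/decade drift
resid = trend_per_decade(years, diffs)
print(round(resid, 2), abs(resid) > 0.1)        # 0.13 True -> outside the 95% band
```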
Now look at the comparison of minimum temperatures for Rutherglen with neighbouring stations. On the left, the BoM compares Rutherglen with the adjusted data from three other ACORN-SAT stations in the region. The figure on the right from my Rutherglen report shows the relative trend of daily temperature in 24 neighbouring stations (ie ACORN-SAT – neighbour). As at Deniliquin, the rising trends mean that the ACORN-SAT site is warming faster than the neighbours.
The BoM's caption is:
While the situation is complicated by the large amount of missing data at Rutherglen in the 1960s, it is clear that, relative to the other sites, Rutherglen's raw minimum temperatures are very much cooler after 1974, whereas they were only slightly cooler before the 1960s.
Problems: Note the cherrypicking of only three sites, but more seriously, the versions chosen are from the adjusted ACORN-SAT. That is, the already adjusted data is used to justify an adjustment — a classic circularity! This is not stated in the other BoM reports, but probably applies to the other station comparisons. Loss of data due to aggregation to annual data is also clear.
In my analysis, the ACORN-SAT version warms at 0.14C/decade faster than the neighbours. As the spread of temperature trends at weather stations in Australia is about 0.1C/decade at the 95% confidence level, this puts the ACORN-SAT version outside the limit. Once again, the adjustments have made the trend of the official long term series for Rutherglen significantly warmer than the regional neighbours. As with Deniliquin, the residual trend of the raw data (before adjustment) is not significant and so consistent with its neighbours.
The raw data is not always more consistent, as Amberley shows. On the left, the BoM compares Amberley with Gatton (38 km west of Amberley) in
the years around 1980. On the right from my Amberley report is the relative trend of daily temperature to 19 neighbouring stations (ie ACORN-SAT – neighbour). In contrast to Rutherglen and Deniliquin, the mostly flat trends mean that the ACORN-SAT site is not warming faster than the raw neighbours.
The BoM's caption:
Amberley is consistently warmer than Gatton prior to 1980 and consistently cooler after 1980. This, combined with similar results when Amberley’s data are compared with other sites in the region, provides a very clear demonstration of the need to adjust the temperature data.
Problems: Note the cherrypicking and hand waving.
In my analysis, the ACORN-SAT version warms at 0.09C/decade faster than the neighbours. As the spread of temperature trends at weather stations in Australia is about 0.1C/decade at the 95% confidence level, I class the ACORN-SAT version as borderline. The residual trend of the raw data (before adjustment) is -0.32C/decade which is very significant and so there is clearly a problem with the raw station record.
More cherrypicking, circularity, and hand-waving from the BoM — excellent examples of the inadequacy of the adjusted ACORN-SAT reference network and justification for a full audit of the Bureau’s climate change division.
Last night George Christensen MP gave a speech accusing the Bureau of Meteorology of “fudging figures”. He waved a 28-page list of adjustments around and called for a review. These adjustments can be found here. While I don't agree that adjusting to account for station moves can necessarily be regarded as fudging figures, I am finding issues with the ACORN-SAT data set.
The problem is that most of the adjustments are not supported by known station moves, and many may be wrong or exaggerated. It also means that if the adjustment decreases temperatures in the past, claims of current record temperatures become tenuous. A maximum daily temperature of 50C written in 1890 in black and white is higher than a temperature of 48C in 2014, regardless of any post-hoc statistical manipulation.
But I do take issue with the set of summaries that has been released, which amounts to blatant “cherry-picking”.
Scroll down to the bottom of the BoM adjustment page. Listed are station summaries justifying the adjustments to Amberley, Deniliquin, Mackay, Orbost, Rutherglen and Thargomindah. The overlaps with the ones I have evaluated are Deniliquin, Rutherglen and Amberley (see previous posts). While the BoM finds the adjustments to these stations justified, my quality control check finds problems with the minimum temperature at Deniliquin and Rutherglen. I think the Amberley raw data may have needed adjusting.
WRT Rutherglen, BoM defends the adjustments with Chart 3 (my emphasis):
Chart 3 shows a comparison of the raw minimum temperatures at Rutherglen with the adjusted data from three other ACORN-SAT stations in the region. While the situation is complicated by the large amount of missing data at Rutherglen in the 1960s, it is clear that, relative to the other sites, Rutherglen’s raw minimum temperatures are very much cooler after 1974, whereas they were only slightly cooler before the 1960s.
WRT Deniliquin, BoM defends the adjustments on Chart 3 (my emphasis):
Chart 3 shows a comparison of minimum temperatures at Kerang (95 km west of Deniliquin) and Deniliquin in the years around 1971. Deniliquin is consistently warmer than Kerang prior to 1971, with similar or cooler temperatures after 1971. This, combined with similar results when Deniliquin’s data are compared with other sites in the region, provides a very clear demonstration of the need to adjust the temperature data.
My analysis is superior to the BoM's flawed analysis in three important ways:
1. I compare the trend at Rutherglen and Deniliquin with 23 and 27 stations respectively, not with 3 and 1 neighbouring stations (aka cherry-picking).
2. I also use a rigorous statistical panel test to show that the trend of the Rutherglen minimum exceeds the neighbouring group by 0.1C per decade, which is outside the 95% confidence interval for Australian station trends — not a visual assessment of a chart (aka eyeballing).
3. I use the trends of daily data and not annual aggregates, which are very sensitive to missing data.
The increase in global temperature required to match the Intergovernmental Panel on Climate Change (IPCC) projections is becoming increasingly unlikely. A shift to the mean projected pathway of a 3 degree increase by the end of the century would require an immediate, large, and sustained acceleration in warming, which seems physically impossible.
Global surface temperatures have not increased at all in the last 18 years, and the trend over the last few years is even slightly negative.
Global temperatures continue to track at the low end of the range of global warming scenarios, expanding a significant gap between current trends and the course needed to be consistent with IPCC projections.
On-going international climate negotiations fail to recognise the growing gap between the model projections based on global greenhouse emissions and the increasingly unlikely chance of those models being correct.
Research led by Ben Santer compared temperatures under the emission scenarios used by the IPCC to project climate change with satellite temperature observations at all latitudes.
“The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes,” their paper in PNAS reports. Moreover, it is not known why the models are failing.
“The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive.”
Explaining why they are failing will require a commitment to skeptical inquiry and an increased reliance on the scientific method.
The unquestioning acceptance of IPCC climate model projections by the CSIRO, the Australian Climate Change Science Program, and many other traditional scientific bodies, which has informed policies and decisions on energy use and associated costs, must be called into question. So too must the long-term warming scenarios based on the link between emissions and increases in temperature.
“How much do I fail thee. Let me count the ways”
Ben Santer’s latest model/observation comparison paper demonstrates that climate realists were right and climate models exaggerate warming:
The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes.
Where do the models fail?
1. Significantly warmer than reality (95% CI) in the lower troposphere at all latitudes, except for the Arctic.
2. Significantly warmer than reality (95% CI) in the mid-troposphere at all latitudes, except possibly the polar regions.
3. Significantly warmer than reality (95% CI) in the lower stratosphere at all latitudes, except possibly the polar regions.
Answer: Everywhere except for polar regions where uncertainty is greater.