Opinion: Can chess rescue Australia’s education system?

According to the results of the OECD Programme for International Student Assessment (PISA), the academic results of Australian school students in mathematics, English and science have been in decline relative to our international peers since 2000.

Moreover, Australia has one of the largest gaps in the OECD between the academic achievement of its highest- and lowest-performing students.

From an education perspective, these worrying trends should be a concern for all Australians. They also have long-term implications for the Australian economy.

Whether Australia’s education system is effective, and whether it best meets the requirements of a 21st-century economy that is more globalised, more knowledge-intensive, more high-tech and more complex, is an issue that needs to be at the centre of the national policy debate.

Governments around the world are considering interventions in their education systems and societies designed to give their students, and their economies, a sustainable competitive advantage over rival nations in the long term.

During this century, sustainable competitive economic advantage will be defined not only by the attainment of knowledge and technical skills, but increasingly by the analytical and cognitive capacity to solve complex industrial problems and to process greater volumes of detailed information.

Within this context, governments have turned to investigating whether chess can deliver benefits to children both academically and behaviourally, with some governments already taking action (e.g. the European Parliament, in 2011, called on all member states to introduce chess into their schooling systems).

A growing body of international research over the past four decades indicates that chess can be used as a tool to lift academic performance in maths and reading as well as improve student concentration and engagement with learning.

Given the growing international interest in chess and the worrying trends in Australia’s education system, I recently announced that I was launching an Australian research project investigating the public policy benefits of chess across a range of policy fields, with a primary emphasis on education.

Given these international developments and our existing challenges, it is logical for Australia to also investigate whether chess can deliver public policy benefits to Australians.

The research project will draw on existing Australian and international evidence on chess to produce a series of findings and recommendations for federal and state policy makers to consider.

The final research report is due in 2016.

John Adams is the current Government Relations Director at the Australian Chess Federation and can be directly contacted by e-mail at jadams1796@gmail.com. Reprinted from the Bundaberg Chess Newsletter.

Book: Niche Modeling

http://books.google.com.au/books?id=aTrp__h61pEC&lpg=PR19&ots=20nqK-udyG&lr&pg=PP1&output=embed

Using theory, applications, and examples of inferences, Niche Modeling: Predictions from Statistical Distributions demonstrates how to conduct and evaluate niche modeling projects in any area of application. It features a series of theoretical and practical exercises for developing and evaluating niche models.

BoM copies me, inadequately

Yesterday’s post noted the appearance of station summaries at the BoM adjustment page attempting to defend their adjustments to the temperature record at several stations. Some I have also examined. Today’s post compares and contrasts their approach with mine.

Deniliquin

The figures below compare the minimum temperatures at Deniliquin with neighbouring stations. On the left, the BoM compares Deniliquin with minimum temperatures at Kerang (95 km west of Deniliquin) in the years around 1971. The figure on the right, from my Deniliquin report, shows the relative trend of daily temperature data from 26 neighbouring stations (i.e. ACORN-SAT minus neighbour). The rising trends mean that the ACORN-SAT site is warming faster than the neighbours.

[Figures: BoM’s Deniliquin vs Kerang comparison (left); relative trends of Deniliquin against 26 neighbours (right)]

The BoM’s caption:

Deniliquin is consistently warmer than Kerang prior to 1971, with similar or cooler temperatures after 1971. This, combined with similar results when Deniliquin’s data are compared with other sites in the region, provides a very clear demonstration of the need to adjust the temperature data.

Problems: Note the cherry-picking of a single site for comparison and the hand-waving about “similar results” with other sites.

In my analysis, the ACORN-SAT version warms at 0.13C/decade faster than the neighbours. As the spread of temperature trends at weather stations in Australia is about 0.1C/decade at the 95% confidence level, this puts the ACORN-SAT version outside the limit. Therefore the adjustments have made the trend of the official long term series for Deniliquin significantly warmer than the regional neighbours. I find that the residual trend of the raw data (before adjustment) for Deniliquin is -0.02C/decade which is not significant and so consistent with its neighbours.
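
As a rough illustration of this kind of check (a sketch only, not the code behind these reports), the following computes the trend of the daily difference series between an ACORN-SAT record and a neighbouring raw record, and compares its size with the 0.1C/decade benchmark quoted above. The series are assumed to be pandas Series of daily minima indexed by date; the function names are my own.

import numpy as np
import pandas as pd

BENCHMARK = 0.10  # approximate 95% spread of Australian station trends, C/decade (from the text)

def trend_per_decade(series: pd.Series) -> float:
    """Ordinary least-squares slope of a daily series, in degrees C per decade."""
    s = series.dropna()
    decades = (s.index - s.index[0]).days / 3652.5
    return np.polyfit(decades, s.values, 1)[0]

def residual_trend(acorn: pd.Series, neighbour: pd.Series) -> float:
    """Trend of the difference series (ACORN-SAT minus neighbour) on overlapping days."""
    return trend_per_decade((acorn - neighbour).dropna())

# A residual trend whose magnitude exceeds BENCHMARK means the adjusted series is
# warming (or cooling) significantly faster than the neighbour, e.g.
# abs(residual_trend(deniliquin_acorn_min, neighbour_raw_min)) > BENCHMARK
# (the variable names above are hypothetical).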

Rutherglen

Now look at the comparison of minimum temperatures for Rutherglen with neighbouring stations. On the left, the BoM compares Rutherglen with the adjusted data from three other ACORN-SAT stations in the region. The figure on the right, from my Rutherglen report, shows the relative trend of daily temperature for 24 neighbouring stations (i.e. ACORN-SAT minus neighbour). As at Deniliquin, the rising trends mean that the ACORN-SAT site is warming faster than the neighbours.

[Figures: BoM’s comparison of Rutherglen with three adjusted ACORN-SAT stations (left); relative trends of Rutherglen against 24 neighbours (right)]

The BoM’s caption:

While the situation is complicated by the large amount of missing data at Rutherglen in the 1960s, it is clear that, relative to the other sites, Rutherglen’s raw minimum temperatures are very much cooler after 1974, whereas they were only slightly cooler before the 1960s.

Problems: Note the cherrypicking of only three sites, but more seriously, the versions chosen are from the adjusted ACORN-SAT. That is, the already adjusted data is used to justify an adjustment — a classic circularity! This is not stated in the other BoM reports, but probably applies to the other station comparisons. Loss of data due to aggregation to annual data is also clear.

In my analysis, the ACORN-SAT version warms at 0.14C/decade faster than the neighbours. As the spread of temperature trends at weather stations in Australia is about 0.1C/decade at the 95% confidence level, this puts the ACORN-SAT version outside the limit. Once again, the adjustments have made the trend of the official long-term series for Rutherglen significantly warmer than the regional neighbours. As with Deniliquin, the residual trend of the raw data (before adjustment) is not significant and so is consistent with its neighbours.

Amberley

The raw data is not always more consistent, as Amberley shows. On the left, the BoM compares Amberley with Gatton (38 km west of Amberley) in the years around 1980. On the right, from my Amberley report, is the relative trend of daily temperature against 19 neighbouring stations (i.e. ACORN-SAT minus neighbour). In contrast to Rutherglen and Deniliquin, the mostly flat trends mean that the ACORN-SAT site is not warming faster than the raw neighbours.

[Figures: BoM’s Amberley vs Gatton comparison (left); relative trends of Amberley against 19 neighbours (right)]

The BoM’s caption:

Amberley is consistently warmer than Gatton prior to 1980 and consistently cooler after 1980. This, combined with similar results when Amberley’s data are compared with other sites in the region, provides a very clear demonstration of the need to adjust the temperature data.

Problems: Note the cherry-picking and hand-waving.

In my analysis, the ACORN-SAT version warms at 0.09C/decade faster than the neighbours. As the spread of temperature trends at weather stations in Australia is about 0.1C/decade at the 95% confidence level, I class the ACORN-SAT version as borderline. The residual trend of the raw data (before adjustment) is -0.32C/decade which is very significant and so there is clearly a problem with the raw station record.

Conclusions

More cherrypicking, circularity, and hand-waving from the BoM — excellent examples of the inadequacy of the adjusted ACORN-SAT reference network and justification for a full audit of the Bureau’s climate change division.

BoM publishing station summaries justifying adjustments

Last night George Christensen MP gave a speech accusing the Bureau of Meteorology of “fudging figures”. He waved a 28-page document of adjustments around and called for a review. These adjustments can be found here. While I don’t agree that adjusting to account for station moves can necessarily be regarded as fudging figures, I am finding issues with the ACORN-SAT data set.

The problem is that most of the adjustments are not supported by known station moves, and many may be wrong or exaggerated. It also means that if the adjustment decreases temperatures in the past, claims of current record temperatures become tenuous. A maximum daily temperature of 50C written in 1890 in black and white is higher than a temperature of 48C in 2014, regardless of any post-hoc statistical manipulation.

But I do take issue with the set of summaries that has been released, which amounts to blatant cherry-picking.

Scroll down to the bottom of the BoM adjustment page. Listed are station summaries justifying the adjustments to Amberley, Deniliquin, Mackay, Orbost, Rutherglen and Thargomindah. The overlaps with the ones I have evaluated are Deniliquin, Rutherglen and Amberley (see previous posts). While the BoM finds the adjustments to these stations justified, my quality control check finds problems with the minimum temperature at Deniliquin and Rutherglen. I think the Amberley raw data may have needed adjusting.

WRT Rutherglen, BoM defends the adjustments with Chart 3 (my emphasis):

Chart 3 shows a comparison of the raw minimum temperatures at Rutherglen with the adjusted data from three other ACORN-SAT stations in the region. While the situation is complicated by the large amount of missing data at Rutherglen in the 1960s, it is clear that, relative to the other sites, Rutherglen’s raw minimum temperatures are very much cooler after 1974, whereas they were only slightly cooler before the 1960s.

WRT Deniliquin, BoM defends the adjustments on Chart 3 (my emphasis):

Chart 3 shows a comparison of minimum temperatures at Kerang (95 km west of Deniliquin) and Deniliquin in the years around 1971. Deniliquin is consistently warmer than Kerang prior to 1971, with similar or cooler temperatures after 1971. This, combined with similar results when Deniliquin’s data are compared with other sites in the region, provides a very clear demonstration of the need to adjust the temperature data.

My analysis is superior to the BoM’s flawed analysis in three important ways:
1. I compare the trend at Rutherglen and Deniliquin with 23 and 27 neighbouring stations respectively, not with 3 and 1 stations respectively (aka cherry-picking).
2. I also use a rigorous statistical panel test to show that the trend of the Rutherglen minimum exceeds the neighbouring group by 0.1C per decade, which is outside the 95% confidence interval for Australian station trends, not a visual assessment of a chart (aka eyeballing); a rough sketch of this kind of test follows the list.
3. I use the trends of daily data and not annual aggregates, which are very sensitive to missing data.
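
As a rough sketch of the kind of panel comparison described in point 2 (illustrative only, not the code used for these reports), the daily difference series against each neighbour can be pooled and the common residual trend estimated with a fixed effect per neighbour. The function and variable names are my own, and a proper test would also have to allow for serial correlation in the daily differences, which this sketch ignores.

import pandas as pd
import statsmodels.api as sm

def panel_residual_trend(target: pd.Series, neighbours: dict) -> float:
    """Pooled estimate of the residual trend of `target` against a dict of neighbouring
    daily series, in degrees C per decade, with neighbour offsets removed by a
    within-station (fixed effect) transformation."""
    rows = []
    for name, s in neighbours.items():
        diff = (target - s).dropna()
        rows.append(pd.DataFrame({
            "decade": (diff.index - diff.index.min()).days / 3652.5,
            "diff": diff.values,
            "station": name,
        }))
    panel = pd.concat(rows, ignore_index=True)
    # within transformation: subtract each station's own mean from both variables
    panel["diff_w"] = panel["diff"] - panel.groupby("station")["diff"].transform("mean")
    panel["decade_w"] = panel["decade"] - panel.groupby("station")["decade"].transform("mean")
    fit = sm.OLS(panel["diff_w"], sm.add_constant(panel["decade_w"])).fit()
    return fit.params["decade_w"]  # naive standard errors here would overstate significance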

Kerang: Where is the daily data?

Been looking forward to doing Kerang as I knew it was another dud series from ACORN-SAT. The report is here:

The first thing to notice when plotting the time series of the raw CDO and ACORN-SAT data is that, while the ACORN-SAT data goes back to 1910, the CDO daily data is truncated at 1962.

[Figure: Kerang raw CDO and ACORN-SAT time series]

The monthly data, however, goes back almost to 1900. This is inexplicable, as the monthly data is derived from the daily data! Here is proof that, contrary to some opinion pieces, not all of the data needed to check the record is available at the Bureau of Meteorology website, Climate Data Online.
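
For reference, the relationship relied on here is simply that a monthly mean is the average of that month’s daily observations. A minimal sketch, assuming a pandas Series of daily values (the 90 per cent completeness rule is an illustrative assumption, not the Bureau’s actual rule):

import pandas as pd

def monthly_means(daily: pd.Series, min_fraction: float = 0.9) -> pd.Series:
    """Average daily observations into calendar-month means, dropping months with
    too many missing days (the completeness threshold is illustrative)."""
    d = daily.dropna()
    by_month = d.groupby(d.index.to_period("M"))
    means = by_month.mean()
    complete = by_month.count() / means.index.days_in_month >= min_fraction
    return means[complete]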

The residual trends of the ACORN-SAT series are at the benchmark for the maximum and greatly exceed it for the minimum.

While on the subject of opinion pieces, consider this statement from “No, the Bureau of Meteorology is not fiddling its weather data”:

Anyone who thinks they have found fault with the Bureau’s methods should document them thoroughly and reproducibly in the peer-reviewed scientific literature. This allows others to test, evaluate, find errors or produce new methods.

So you think skeptics haven’t tried? A couple of peer-reviewed papers of mine on quality control problems in the Bureau of Meteorology’s use of models have not had a response from the Bureau in over two years. The sound of crickets chirping is all. Talk is cheap in climate science, I guess. Here they are:

Biases in the Australian High Quality Temperature Network

Critique of drought models in the Australian drought exceptional circumstances report (DECR)

Scorecard for ACORN-SAT Quality Assurance – the score so far.

Three more quality tests of stations in the ACORN-SAT series have been completed:

The test measures the deviation of the trend of the series from its neighbours since 1913 (the residual trend). A deviation of plus or minus 0.05 degrees per decade is within tolerance (green), 0.05 to 0.1 is borderline (amber), and greater than that is regarded as a fail (red) and should not be used.

Residual trends relative to neighbours, in degrees C per decade:

Station                   Max (CDO)   Max (ACORN-SAT)   Min (CDO)   Min (ACORN-SAT)
Rutherglen                  -0.02          -0.03          -0.05          0.14
Williamtown                  0.08          -0.02          -0.13         -0.00
Deniliquin                  -0.05           0.03          -0.02          0.13
Amberley                    -0.01          -0.01          -0.32          0.09
Cape Otway Lighthouse       -0.17           0.02          -0.05          0.01
Wilcannia                   -0.16          -0.06          -0.13          0.05
Kerang                       0.02           0.05          -0.01          0.13
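
As a small illustration of how the colour coding maps onto these numbers (a sketch using the thresholds quoted above; the function name is my own):

def scorecard(residual_trend: float) -> str:
    """Classify a residual trend (degrees C per decade) against the tolerance bands above."""
    magnitude = abs(residual_trend)
    if magnitude <= 0.05:
        return "green"   # within tolerance
    if magnitude <= 0.10:
        return "amber"   # borderline
    return "red"         # fail: inconsistent with neighbours

# The ACORN-SAT minimum residual trends from the table:
for station, trend in [("Rutherglen", 0.14), ("Deniliquin", 0.13), ("Amberley", 0.09), ("Kerang", 0.13)]:
    print(station, scorecard(trend))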

There are more inconsistent stations among the raw CDO data, as would be expected since it is “raw”. However, the standout problems in this small sample are in the ACORN-SAT minimums.

Results so far suggest the Bureau of Meteorology has a quality control problem with its minimum temperatures, with almost all borderline residual trends and very large deviations from the neighbours in Rutherglen and Deniliquin.

Williamtown RAAF – an interesting case

Williamtown RAAF is one case where the quality control test indicates that the adjustments were justified, even though the homogenization in ACORN-SAT produced a strong change in trend. My report is here and Ken has posted graphs for Williamtown, along with a number of sites with large changes in trend by ACORN-SAT.


This illustrates neatly that a series is not rejected just because ACORN-SAT increases the warming trend. An ACORN-SAT series should be rejected, however, if its trend is inconsistent with its raw neighbours.