NIPCC Report on Species Extinctions due to Climate Change

The NIPCC Interim Report 2011 updates the previous 2009 report, with an overview of the research on climate change that the IPCC did not see fit to print. It is published by the Heartland Institute, with lead authors Craig D. Idso, Australia’s Robert Carter, and S. Fred Singer, along with a number of other significant contributors.

I am grateful for the inclusion of some of my work in Chapter 6, on the uncertainty of the range-shift method for modeling biodiversity under climate change.

The controversy centered on a paper by Thomas et al. (2004) called “Extinction Risk from Climate Change”, which received exceptional worldwide media attention for its claims of potentially massive extinctions from global warming.

Briefly, the idea is to simulate the change in the range of a species under climate change by ‘shifting’ the range using a presumed climate change scenario.

Daniel Botkin said of the Thomas et al. (2004) study:

Yes, unfortunately, I do consider it to be the worst paper I have ever read in a major scientific journal. There are some close rivals, of course. I class this paper as I do for two reasons, which are explained more fully in the recent article in BioScience:

… written by 17 scientists from a range of fields and myself (here).

While there are many problems with this paper, the most amazing, as I see it, is the way they used changes in the size of species ranges to determine extinctions. It is generally believed that contraction of a species’ range increases its probability of extinction.

Consider the case of species that disperse freely under climate change. While the range sizes of individual species change, the average range size should stay the same, unless there is a major obstruction like an ocean or mountain range. Species whose range size decreases are balanced by species whose range size increases. Overall, the net rate of extinction should be unchanged.

However, Thomas et al. (2004) simply deleted all species whose range expanded. A massive increase in extinctions was therefore a foregone conclusion, even assuming free dispersal.
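The arithmetic of this bias is easy to see with a toy simulation (my own sketch, not Thomas et al.’s actual procedure): draw range-change factors symmetrically in log space, so contractions and expansions balance, then compare the mean range loss with and without discarding the expansions.

```python
import random

random.seed(1)

# Toy model: each species' range is multiplied by a factor drawn
# symmetrically in log space, so contractions and expansions balance
# on average (free dispersal, no obstructions).
factors = [2 ** random.uniform(-1, 1) for _ in range(10_000)]

def mean_range_loss(fs):
    # mean proportional range loss; expansions count as gains
    return sum(1 - f for f in fs) / len(fs)

def mean_range_loss_truncated(fs):
    # same, but expansions are treated as "no change", i.e. species
    # whose range expanded are deleted from the gain side
    return sum(max(0.0, 1 - f) for f in fs) / len(fs)

print(mean_range_loss(factors))            # slightly negative: a small net gain
print(mean_range_loss_truncated(factors))  # clearly positive: apparent net loss
```

Under the symmetric assumption the full tally shows no net range loss, but once the expansions are discarded the very same draws imply a substantial loss, and any extinction estimate built on range loss inherits that bias.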

There are a number of other ways a bias towards range reduction can be introduced, such as edge effects and over-fitting assumptions, which I show in my book “Niche Modeling”. In a normal science, this would have been a cautionary tale about the dangers of ad hoc methodologies.

It’s an example of the intellectual bankruptcy of the IPCC report that the uncertainties of Thomas et al. (2004) and other similar studies were ignored by Working Group II. For example, in Impacts, Adaptation and Vulnerability, Section 13.4.1, Natural ecosystems:

Modelling studies show that the ranges occupied by many species will become unsuitable for them as the climate changes (IUCN, 2004). Using modelling projections of species distributions for future climate scenarios, Thomas et al. (2004) show, for the year 2050 and for a mid-range climate change scenario, that species extinction in Mexico could sharply increase: mammals 8% or 26% loss of species (with or without dispersal), birds 5% or 8% loss of species (with or without dispersal), and butterflies 7% or 19% loss of species (with or without dispersal).

And in 19.3.4 Ecosystems and biodiversity:

… up to 30% of known species being committed to extinction (Chapter 4 Section 4.4.11 and Table 4.1; Thomas et al., 2004; …

And in other summaries, such as Table 4.1.

Clearly the major difficulty with all this work, something that turned me off it but few acknowledge, is that the lack of skill of simulations of climate change renders fraudulent any claim to skill at the species-habitat scale. Only now is the broader climate community finally starting to accept this about multi-decadal climate model predictions, such as those contained in the 2007 IPCC WG1 assessment. The NIPCC illustrates the broader range of opinion which should have been integral to the IPCC process from the beginning, IMHO.


NIWA's Station Temperature Adjustments – CCG Audit

The New Zealand Climate Conversation Group have released their report and reanalysis of the NIWA 7-Station Review. CCG claim NIWA misrepresented the statistical techniques it used, and exaggerated warming over the last hundred years.

The CCG results (Figure 20 above) prove there are real problems in the adjustments to temperature measurements for moves and equipment changes in NZ (also seen in Australia).

As any trained scientist or engineer knows, failure to follow a well-documented and justified method is a sign of pseudoscience. The New Zealand Climate Conversation Group is correct to examine whether Rhoades & Salinger (1993) has been followed, as advertised.

In 2010, NIWA published their review of their 7-station temperature series for New Zealand. The review was based upon the statistically-based adjustment method of Rhoades & Salinger (1993) for neighbouring stations. In this report, we examine the adjustments in detail, and show that NIWA did not follow the Rhoades & Salinger method correctly. We also show that had NIWA followed Rhoades & Salinger correctly, the resultant trend for the 7-station temperature series for New Zealand would have been significantly lower than the trend they obtained.

Despite searching, I cannot find a methodology section in NIWA’s report, which is a disjoint analysis of each of the seven sites, although it is clear in a number of places that they imply Rhoades and Salinger (1993) forms the basis. For example, page 145 on Dunedin:

In February 2010, NIWA documented the adjustments in use at that time (see web link above). These adjustments to the multiple sites comprising the ‘seven-station’ series were calculated by Salinger et al. (1992), using the methodology of Rhoades and Salinger (1993), which extended the early work on New Zealand temperatures by Salinger (1981). Subsequent to 1992, the time series have been updated regularly, taking account of further site changes as circumstances required.

The Climate Conversation Group summarize the differences between Rhoades and Salinger (1993) and the method actually used by NIWA. The R&S method for comparing a station with neighbouring stations involves the use of:

– Monthly data
– Symmetric interval centred on the shift
– A 1-2 year period before and after the shift
– Weighted averages based on correlations with neighbouring stations
– Adjustments only performed if results are significant at the 95% confidence level

In contrast, the NIWA method uses:

– Annual data
– Asymmetric intervals
– Varying periods of up to 11 years before and after the shift
– No weighted averages
– No evidence of significance tests – adjustments are always applied.
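The difference the significance gate makes can be sketched in a few lines (my own illustration of an R&S-style test, not CCG’s or NIWA’s actual code). Here `before` and `after` are assumed to be monthly candidate-minus-weighted-neighbour differences on symmetric windows either side of a site move, with the correlation weighting done upstream; an adjustment is applied only when the shift is significant at roughly the 95% level.

```python
from math import sqrt
from statistics import mean, stdev

def rs_style_adjustment(before, after, crit=1.96):
    """Return the adjustment for a site move, or 0.0 if insignificant.

    before/after: monthly (candidate - weighted neighbour) differences
    on symmetric windows either side of the move (weighting assumed
    done upstream); crit ~ 1.96 gives a rough 95% two-sided test."""
    shift = mean(after) - mean(before)
    se = sqrt(stdev(before) ** 2 / len(before) +
              stdev(after) ** 2 / len(after))
    return shift if abs(shift / se) > crit else 0.0

# Two years of monthly differences either side of a hypothetical move:
before = [0.1 * (-1) ** i for i in range(24)]   # noise around 0.0
clear  = [x + 1.0 for x in before]              # genuine 1.0 C step
slight = [x + 0.02 for x in before]             # negligible 0.02 C step

print(rs_style_adjustment(before, clear))   # ~1.0: adjustment applied
print(rs_style_adjustment(before, slight))  # 0.0: no adjustment
```

An always-adjust rule would apply the 0.02 °C shift as well; over several site moves, such insignificant adjustments can accumulate into a change in the station trend.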

Any of these methodological deviations could create substantial differences in the results, but neither the Climate Conversation Group nor I could find a rationale or discussion in the NIWA review reports for not implementing the R&S method as stated.

What are the details of the methods? The CCG report compares the two methods for a single station, Dunedin, in their Table 1. There were five site moves (1913, 1942, 1947, 1960, and 1997), giving five potential adjustments. The NIWA method adjusts at each of the moves, resulting in an increasing trend of 0.62 °C/century for Dunedin. The R&S method implements only two adjustments, resulting in an increasing trend of 0.24 °C/century.

The remaining stations are similar (Masterton, Wellington, Nelson, Hokitika, and Lincoln), with the NIWA method making generally more frequent and more negative adjustments, resulting in exaggerated trends, as shown in Figure 20 at the top of this post.

It would seem that significance tests and the weighting of neighboring sites are very important: they ensure the nearby sites used to calibrate the site moves actually provide information on the site in question. A comparison period as long as 11 years would probably confound short-term changes with the long-term warming trend, and may bias the adjustments so as to exaggerate the trend.

Ignoring significance tests and weightings, and modifying the method arbitrarily, whether through sloppiness or intent, is bad practice, and would not be favorable to NIWA in its upcoming court case, brought by the CCG.

Debt Wave Grows

Congratulations, Julia and Wayne, on your new milestone: Australia’s national debt has topped $200 billion, with Labor borrowing $100 million per day.

Australia now has its largest debt in history, after we borrowed $3.2 billion over the last week. On 11 March 2009, Treasurer Wayne Swan invoked “special circumstances” to increase the debt ceiling to a “temporary” level of $200 billion. In the last budget, the government increased the debt ceiling permanently to $250 billion.

See the Total Commonwealth Government Securities on Issue.

There are 12.3 million taxpayers in Australia, so that’s $16,260 of debt on behalf of each of us. Are you any better off?

h/t Senator Barnaby Joyce (LNP)

Global Warming Trends – Gimme Some Truth

Richard Treadgold from the New Zealand Climate Conversation Group reports on the Statistical Audit of the NIWA 7-Station Review, claiming that New Zealand’s National Climate Center, NIWA, misrepresented the statistical techniques it used (Rhoades & Salinger – Adjustment of temperature and rainfall records for site changes) in order to fabricate strong warming over the last hundred years.

NIWA shows 168% more warming than Rhoades & Salinger – the method NIWA betrayed. The blue dashed line shows the warming trend when the method is used correctly. The red line reveals NIWA’s outrageous fraud – it’s much stronger warming, but it’s empty of truth.

The results of this audit corroborate the results of Ken Stewart’s audit of the Australian temperature record.

As yet, Ken has received an apology from the Australian BoM for tardiness, but no explanation for the 140% exaggeration of warming trends in Australia.

I have been begging BOM- or anyone- to check my analysis but to no avail.

Are we getting value from our public-funded science?

Just Gimme Some Truth original and HD version.

No short-haired, yellow-bellied, son of Tricky Dicky; Is gonna mother hubbard soft soap me; With just a pocketful of hope; It’s money for dope; Money for rope

Best Business Presentation Ever

You have probably heard about Steve Jobs’ retirement as CEO of Apple. If, like me, you find him an inspiration, you might enjoy this video from the Apple Music Event in 2001, “The First Ever iPod Introduction”.

What I like is the steel-trap logic, the “quantum leap” vision, the love of speed, the sparse visuals, and the impeccable timing of the delivery.

Phase Lag of Global Temperature

Lag or phase relationships are to me one of the most convincing pieces of evidence for the accumulative theory.

The solar cycle varies roughly like a sine wave, with an average period of 11 years. This property can be used to probe the contribution of total solar irradiance (TSI) to global temperature.

Above is a plot of two linear regression models of the HadCRU global temperature series since 1950. The period since 1950 is chosen because it is the period over which the IPCC states that most of the warming has been caused by greenhouse gases (GHGs) such as CO2, and because the data are more accurate.

The red model is a linear regression using TSI and a straight line representing the contributions of GHGs; this could be called the conventional IPCC model. The green model uses accumulated TSI only, the model I am exploring. Accumulated TSI is calculated by integrating the deviations of TSI from its long-term mean.

You can see that the two models are practically indistinguishable by their R2 values (CumTSI is slightly better than GHG+TSI, at R2 = 0.73 versus 0.71).

You can also see a lag, or shift in phase, between the direct solar influence (the red model) and the accumulated TSI (the green model). This shift comes about because integration shifts a periodic signal such as a sine wave by 90 degrees.

While there is nothing to distinguish between the models on fit alone, the shift provides independent confirmation of the accumulative theory. Volcanic eruptions in the latter part of the century obscure the phase relation over this period somewhat, so I look at the phase relationships over the whole period of the data since 1850.

Above is the cross-correlation of HadCRU and TSI (ccf in R), showing the correlation at all shifts between -10 and +10 years. The red dashed line is at 2.75 years: a 90-degree shift of the solar cycle, or 11 years divided by 4. This is the shift expected if the relationship between global temperature and TSI is accumulative.

The peak of the cross-correlation lies at exactly 2.75 years!
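The quarter-cycle lag is easy to reproduce on synthetic data (an idealised sine-wave “TSI”, not the actual HadCRU and TSI series): integrate the deviations and find the lag that maximises the cross-correlation.

```python
import math

period = 11.0   # yr, nominal solar cycle
dt = 0.25       # yr per sample
n = 600         # 150 yr of synthetic data

# idealised TSI: pure sine with zero long-term mean
tsi = [math.sin(2 * math.pi * t * dt / period) for t in range(n)]

# temperature as the running integral of TSI deviations
temp, acc = [], 0.0
for x in tsi:
    acc += x * dt
    temp.append(acc)

def xcorr(a, b, lag):
    # Pearson correlation of a[t] with b[t + lag] (lag in samples)
    pairs = [(a[t], b[t + lag]) for t in range(len(a) - lag)]
    ma = sum(p[0] for p in pairs) / len(pairs)
    mb = sum(p[1] for p in pairs) / len(pairs)
    cov = sum((x - ma) * (y - mb) for x, y in pairs)
    va = sum((x - ma) ** 2 for x, _ in pairs)
    vb = sum((y - mb) ** 2 for _, y in pairs)
    return cov / math.sqrt(va * vb)

lags = range(int(period / dt))  # 0 .. 11 yr in samples
best = max(lags, key=lambda k: xcorr(tsi, temp, k))
print(best * dt)  # expected near 2.75 yr, a quarter cycle
```

The integral of a sine is (minus) a cosine, so the accumulated series lags the forcing by a quarter period, and the cross-correlation peak lands near 11/4 = 2.75 years.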

This is not a result I thought of when I started working on the accumulation theory. The situation reminds me of the famous talk by Richard Feynman on “Cargo Cult Science”.

When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

Direct solar irradiance is almost uncorrelated with global temperature, partly due to the phase lag and partly due to the accumulation dynamics. This is why previous studies have found little contribution from the Sun.

Accumulated solar irradiance, without recourse to GHGs, is highly correlated with global temperature, and recovers exactly the right phase lag.

Accumulation of TSI comes about simply from the accumulation of heat in the ocean, and also the land.

I think it is highly likely that previous studies have grossly underestimated the Sun’s contribution to climate change by incorrectly specifying the dynamic relationship between the Sun and global temperature.

Climate Sensitivity Reconsidered

The point of this post is to show a calculation by guest, Pochas, of the decay time that should be expected from the accumulation of heat in the mixed layer of the ocean.

I realized this prediction provides another test of the accumulation theory of climate change, which potentially explains high climate sensitivity to variations in solar forcing without recourse to feedbacks or greenhouse gases, described in more detail here and here.

The analysis is based on the most important parameter in all dynamic systems, called the time constant, Tau. Tau quantifies two aspects of the dynamics:

1. The decay time: the time taken for the response to an impulse forcing of the system, such as a sudden spike in solar radiation, to decay by 63% (that is, to 1/e, about 37%, of its original value).

2. The inherent gain, or amplification. If Tau = 10, the amplification of a step increase in forcing will be 10x. This is because, with Tau = 10, around one tenth of the excess above the equilibrium level is released per time period, so the new equilibrium level must be 10 times higher than the forcing before the energy output equals the energy input.
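The gain side of this can be checked with a minimal discrete sketch (illustrative only, not a climate model): a leaky accumulator that loses a fraction 1/Tau of its contents per step settles at Tau times a unit step forcing.

```python
# Leaky accumulator: each step it gains the forcing f and
# leaks x/tau back out; equilibrium is where f = x/tau.
tau = 10.0
f = 1.0     # unit step forcing
x = 0.0
for _ in range(200):   # run well past the transient
    x += f - x / tau

print(x)  # settles near f * tau = 10, i.e. a 10x amplification
```

The same loop also exhibits the decay-time property: starting from equilibrium with the forcing switched off, the level falls to roughly 1/e of its initial value after about Tau steps.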

I previously estimated Tau from global temperature series, simply from the correlation between successive temperature values, a. Tau is then given by:

Tau = 1/(1-a)
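A sketch of that estimate on synthetic data (my own illustration; the AR(1) series is invented, with annual time steps assumed, so `a` is the lag-1 autocorrelation of annual values):

```python
import random

random.seed(42)

def ar1_series(a, n, sd=0.1):
    # synthetic temperature-like series: T[t] = a*T[t-1] + noise
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + random.gauss(0.0, sd)
        out.append(x)
    return out

def lag1_autocorr(x):
    # correlation between successive values
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

series = ar1_series(a=0.9, n=5000)
a_hat = lag1_autocorr(series)
tau = 1.0 / (1.0 - a_hat)   # time constant, in years for annual data

print(a_hat)  # near the true 0.9
print(tau)    # near 1/(1 - 0.9) = 10
```

A true coefficient of 0.9 corresponds to Tau = 10, and Tau in the 6 to 10 range corresponds to a between about 0.83 and 0.9.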

Pochas posted the theoretical estimate of the time constant, Tau, below, which results from a reasonable assumed ocean mixed-zone depth of 100 m.

The input – output = accumulation equation is:

q sin ωt /4 – kT = nCp dT/dt

where:

q = input flux signal amplitude, W/m^2. The factor 4 corrects for the disk-to-sphere surface geometry.

k = relates thermal flux to temperature (see below), J/(sec m^2 ºK)

T = ocean temperature, ºK

n = mass of ocean, grams

Cp = ocean heat capacity, J/(g ºK)

t = time, sec or years

Rearranging to standard form (terms with T on the left side):

nCp dT/dt + kT = q sin ωt /4

Divide by k

nCp/k dT/dt + T = q sin ωt /(4k)

The factor nCp/k has units of time and is the time constant Tau in the solution via Laplace Transform of the above.

n = mass of water 100 m deep with 1 m^2 surface area = 1E8 grams.

Cp = heat capacity of water (energy to heat 1 gram by 1 ºK) = 4.187 J/(g ºK).

k = linearised thermal flux response at the blackbody temperature (see derivation below), J/(m^2 sec ºK).

Solution after inverse transform, after transients die out:

Amplitude Ratio = 1/(1 + ω²Tau²)^½

where ω = angular frequency, rad/yr.

Derivation of k from the Stefan-Boltzmann equation:

q = σT^4

k = dq/dT

Differentiating: dq/dT = 4σT^3

Evaluating at T = the blackbody temperature of the earth, -18 ºC = 255 ºK:

k = 4 (5.67E-8) 255^3 ≈ 3.8 J/(sec m^2 ºK)

Calculating Time Constant Tau

Tau = nCp/k = 1E8 (4.187) / 3.8 = 1.10E8 sec

Tau = 1.10E8 sec / 31,557,000 sec/yr ≈ 3.5 yr
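Pochas’s numbers are straightforward to recompute (a sketch assuming, as above, a 100 m mixed layer under 1 m^2 of surface and a blackbody temperature of 255 ºK); the last line also evaluates the amplitude ratio at the 11-year solar cycle:

```python
import math

sigma = 5.67e-8            # W/(m^2 K^4), Stefan-Boltzmann constant
T_bb = 255.0               # K, effective blackbody temperature (~ -18 C)
k = 4 * sigma * T_bb ** 3  # J/(sec m^2 K), linearised flux response

n_grams = 1e8              # 100 m x 1 m^2 column of water, in grams
Cp = 4.187                 # J/(g K), heat capacity of water

tau_sec = n_grams * Cp / k
tau_yr = tau_sec / 31_557_000          # seconds per year

omega = 2 * math.pi / 11.0             # rad/yr, 11-year solar cycle
amp_ratio = 1 / math.sqrt(1 + (omega * tau_yr) ** 2)

print(round(k, 2), round(tau_yr, 2), round(amp_ratio, 2))
# prints approximately: 3.76 3.53 0.44
```

The amplitude ratio of about 0.44 shows how strongly a mixed layer with Tau ≈ 3.5 yr damps the 11-year cycle itself, even as it amplifies slower forcings.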


The figure of Tau ≈ 3.5 yr is of the same order as the empirical figures of 6 to 10 obtained from the correlation of the actual global surface temperature data. The effective mixed zone may be closer to 150 m deep, which would explain part of the difference.

This confirms another prediction of the theory: that amplification of solar forcing can be explained entirely by the accumulation of heat, without recourse to feedbacks from changing concentrations of greenhouse gases.