Blogs on random temperature reconstruction

A new temperature reconstruction has certainly resonated with many people. Here is a summary of what some of the blogs have been saying, and my corrections of some small inaccuracies.

American Thinker wrote a very upbeat, if over-the-top, piece.

The scientific argument that humans have caused global warming – a major underpinning of the “Kyoto Protocols” – suffered a major blow last week, with the publication of a new study. The implications have not yet spread very far beyond the rarified circles of specialists, but the gospel of “anthropogenic” – human-caused – global warming has lost one of its intellectual foundations.

However, the article has not yet been through the rigors of peer-reviewed publication, though some preliminary results will appear in the Australian Institute of Geologists newsletter next month.

From on High responded to a Washington Post column by David Ignatius with:

In other words, with the criteria used, it is possible that any random data set can provide results identical to those obtained in the model that global warming theorists have been touting for several years. Meaning – the model is, in and of itself, worthless.

The method may not be entirely worthless, but its detection limits may be so high that it could not have detected any actual temperature changes.
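To make the “random data” point concrete, here is a minimal sketch of the kind of signal-free pseudo-proxy such a test feeds into a reconstruction method. It assumes numpy, and uses simple AR(1) “red noise” as a stand-in for the fractional Gaussian noise discussed in the comments below; the persistence parameter and series counts are illustrative only.

```python
import numpy as np

def red_noise(n_years, rho=0.5, seed=None):
    """AR(1) 'red noise': persistent but trendless, standing in for
    a proxy series that contains no temperature signal at all."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_years)
    e = rng.standard_normal(n_years)
    for t in range(1, n_years):
        x[t] = rho * x[t - 1] + e[t]
    return x

# 70 signal-free pseudo-proxies spanning AD 1000-1980
proxies = np.array([red_noise(981, seed=i) for i in range(70)])
```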

The post at YARGB – Flares into Darkness generated a lot of discussion.

As those of you following the debate about global warming will know, the National Academy of Sciences is currently running a symposium to evaluate the results of Mann et al, and the criticisms of McIntyre and McKitrick. David Stockwell has published preliminary results of his new study, applying the methods of Mann et al to the same surface temperature data, but using other associated data.

The study simulates the simple select, calibrate and average methods most researchers used prior to Mann’s more complex methods based on Principal Components Analysis. It is McIntyre and McKitrick who have been dismantling Mann’s methods.
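To show what “select, calibrate and average” means in practice, here is a sketch of the generic procedure; it is not Stockwell’s actual code. It reuses the `proxies` array from the sketch above, invents a toy instrumental record, and the 0.1 correlation threshold is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1000, 1981)
calib = years >= 1900                      # calibration period: 1900-1980

# Toy instrumental record: a warming trend plus observation noise
instrumental = (0.01 * (years[calib] - 1900)
                + 0.2 * rng.standard_normal(calib.sum()))

# SELECT: keep only noise series that happen to correlate with temperature
r = np.array([np.corrcoef(p[calib], instrumental)[0, 1] for p in proxies])
keep = np.abs(r) > 0.1
selected = proxies[keep] * np.sign(r[keep])[:, None]  # flip anticorrelated ones

# CALIBRATE: rescale each survivor to the instrumental mean and variance
calibrated = [(p - p[calib].mean()) / p[calib].std()
              * instrumental.std() + instrumental.mean() for p in selected]

# AVERAGE: before 1900 the unselected noise cancels toward a flat shaft;
# after 1900 the composite tracks the trend it was selected for -- a
# hockey stick from series containing no temperature signal whatsoever.
reconstruction = np.mean(calibrated, axis=0)
```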

At ClimateAudit the cheerful Steve McIntyre quipped:

Another excellent post by David Stockwell here. Everyone having fun?

Everyone?

The Great Satan saw it as grist to the mill.

Further evidence of Science becoming Political, to the point that “the science” is being manipulated to support an agenda instead of the facts.

At Ökologismus it was said (translated from the German):

The similarity to Mann’s so-called “hockey stick curve” is unmistakable. After calculating the reduction of error (RE) and the R2 statistic, David Stockwell does indeed obtain a pronounced hockey stick curve.

At Bad Astronomy and Universe Today the comments turned to hockey sticks:

David Stockwell did one here… doubt anyone will be referencing it, however… he used the same methodology, just input random noise. uh oh.

oh, Steve McIntyre did one as well… turns out he actually used tree rings, too! he just picked a different set, out of those that the hockey stick team throw out regularly, and managed something quite different… turns out they really do pick and choose the proxies that match their desired outcome. so much for statistics.

At Keisarin uudet vaatteet it was written (translated from the Finnish):

David Stockwell, in turn, demonstrates how Mann’s methods produce a hockey stick from random data. Worth a look.

Out of the Race made some very astute ‘man in the street’ deductions:

Now, I’m no scientist or mathematician, but when random data produce the same conclusion as real-world data when input into a statistical model, it seems to me that maybe, just maybe, the statistical model might have a built-in bias toward producing that result.

I’m not saying that the scientists who developed the statistical climate models engaged in intellectual fraud. I am saying that anything created by humans, especially a statistical model that purports to reconstruct and/or mimic a system as complex as the earth’s climate, may well contain errors.

At the very least, this development suggests that qualified scientists should conduct a detailed review of the structure and process of the climatic models that produce the “hockey stick” pattern to make sure that the pattern is not an artifact of the models themselves. Until that happens, maybe it would be better to suspend expensive government-funded projects aimed at stopping global warming.

Comments on “Blogs on random temperature reconstruction”

  1. David,

    Thank you for the “Discover more …” button. By and large I am not very interested in blogs on climate because most of the postings are of the form “You [the previous person posting] haven’t taken X, Y or Z [fill in according to your own beliefs] into account, and therefore you are at least wrong and more likely stupid.” However, having finished work for the day, I trawled through a couple of pages of links largely confirming my prejudices. Sorry, but just because someone agrees with you doesn’t mean the agreement is sound. Anyway, on the fifth page I found a link to Roger Pielke’s (Sr.) site (Climate Science) and clicked over to “Reflections of a Climate Skeptic by Henk Tennekes”
    http://climatesci.atmos.colostate.edu/2006/01/06/guest-weblog-reflections-of-a-climate-skeptic-henk-tennekes/

    Prof. Tennekes’s essay is a nice statement of his skepticism, which seems to be based on Popperian criteria and on Tennekes’s own specialty, turbulence. But I comment here not on Tennekes but rather on Pielke’s response (#18) to the query “Please define skillful.” Pielke gives the American Meteorological Society’s definition of skill:
    “Skill: A statistical evaluation of the accuracy of forecasts or the effectiveness of detection techniques.”

    That is bland enough for just about anyone. Pielke follows the bland definition with specifics, but after reading those specifics I find I have no idea as to whether he would prefer an R2 (correlation squared), RE (reduction of error), CE (coefficient of efficiency) or maybe one of Prof. Theil’s statistics (the standard definitions are set out just after this comment). Is this just non-technical commentary? Probably, but there is an implicit message here for all of us who investigate the empirical properties of various estimation and prediction schemes: the accuracy of a prediction [both forecasts and backcasts, as in reconstructions] lies in the nature of the forecast. An R2 of 0.2 and an RE of -0.1 might be pretty good if we are trying to predict a value 4 standard deviations outside our sample. The recent National Academy of Sciences conference on climate reconstruction asked the presenters whether reconstructions of 1000 years past were accurate within 0.5 degrees C. Let me ask: is this not a more interesting test than an R2 or RE?

    Of course, this test does not come with its own algorithm, which means a debate on its application, but why not add it to the mix? There is the complication of modeling the 1000 years. I believe that is addressed by dual verification testing on the verification period and the “unobserved” period. That is, simulate 1000 years, broken into three parts: 1 to N1 (the unobserved), N1+1 to N2 (the verification), and N2+1 to N = 1000 (the estimation). Then ask what skill (measured in R2, RE, mean absolute deviations, or whatever) is needed in the verification period to assure, say, 95% confidence of being within 0.5 degrees for, again say, a 30-year average 1000 years past?

    And yes, I have started, but I’m lazy and probably won’t finish, plus I don’t code as fast as you “kids” do. So while your reconstruction “bones” simulations are good, I am suggesting one way (there are presumably many more) to add some muscle to the model.
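For reference, here are the standard definitions of the verification statistics Marty mentions, as I understand them, with $y_t$ the observed temperatures, $\hat{y}_t$ the reconstruction, $\bar{y}_c$ the calibration-period mean, $\bar{y}_v$ the verification-period mean, and all sums taken over the verification period:

$$\mathrm{RE} = 1 - \frac{\sum_t (y_t - \hat{y}_t)^2}{\sum_t (y_t - \bar{y}_c)^2}, \qquad \mathrm{CE} = 1 - \frac{\sum_t (y_t - \hat{y}_t)^2}{\sum_t (y_t - \bar{y}_v)^2}, \qquad R^2 = \operatorname{corr}(y, \hat{y})^2.$$

RE and CE are 1 for a perfect reconstruction, 0 for one that does no better than the benchmark mean, and can be arbitrarily negative; R2 rewards correlation alone, ignoring bias and scale.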

  2. Hi Marty, Glad you like the discover button. It runs a search on the post title to bring up relevant pages, so it saves having to type anything into the box. I agree with your sentiments on climate discussion, to a degree. I so often meet people who think that AGW skepticism is just a right-wing think-tank plot that I believe a lot more discussion is needed. In general, I assume people don’t care about what I say, but if I can crank out some results and a figure or two we can all learn something. I just posted these comments out of interest. What would be good is a system that found them automatically and listed them under the post. Most blogs don’t seem to send out trackback pings.

    Absolutely agree that there are many more interesting issues, such as the detection limits of the methodology, than RE vs R2. Most interesting are the basic assumptions, of which stationary fractional Gaussian noise models are fascinating. Under these assumptions I have run the verification protocols of MBH98, which indicated apparent skill for random reconstructions by R2 but not by RE, and the protocol of just drawing random points for verification, where both RE and R2 indicated skill (wrongly). One could think up others, and try to identify protocols that do work in this case. I don’t know if I want to do that, but it would be helpful to someone.
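For concreteness, here is a minimal sketch of this kind of verification check; it is not the MBH98 protocol itself. It scores a skill-free “reconstruction” against equally skill-free withheld “observations” by both RE and R2, assuming numpy; the AR(1) noise and the pass thresholds are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def red_noise(n, rho=0.5):
    """AR(1) series, as in the earlier sketch."""
    x = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + e[t]
    return x

def verification_stats(obs, recon, calib_mean=0.0):
    """RE and R2 of a reconstruction over a verification period."""
    re = 1 - np.sum((obs - recon) ** 2) / np.sum((obs - calib_mean) ** 2)
    r2 = np.corrcoef(obs, recon)[0, 1] ** 2
    return re, r2

# Score 1000 reconstructions that by construction have no skill at all
stats = np.array([verification_stats(red_noise(50), red_noise(50))
                  for _ in range(1000)])
re_vals, r2_vals = stats.T

# The cutoffs below are purely illustrative; the point is that RE and
# R2 can disagree, so a protocol validated by one alone can mislead.
print("share passing RE > 0:  ", np.mean(re_vals > 0))
print("share passing R2 > 0.1:", np.mean(r2_vals > 0.1))
```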
