Detecting bias in research is not so difficult when you know what to look for: conclusions that are not justified by the data. Instead, the data may merely confirm, be consistent with, or simply not be inconsistent with the conclusions. Working against objectivity, however, are basic human motives on the part of the writer: to find novel and interesting approaches, to find significant results where nothing is there, to be accepted by colleagues, to win grants, and to be published.
According to Geoffrey Miller (author of The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature):
Geoffrey Miller: I think the interesting thing about human intelligence and capacities for abstract reasoning, and metaphor and analogy, is how very poor most people are at being evidence-based and sceptical. What we love to do is pick up little factoids and half-understood theories and repeat them to others to be interesting. Particularly on first dates. So we try to be interesting, we don’t really much care about the truth of what we’re saying, and scientists have to be extremely self-conscious about this: not just to be interesting but to be right. Most humans most of the time, though, adopt ideologies and beliefs that are there principally to make their minds attractive to others, not because those beliefs actually correspond to the world.
John P. A. Ioannidis provides proof of widespread research bias in
Why Most Published Research Findings Are False.
Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.
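Ioannidis's argument rests on the positive predictive value (PPV) of a claimed finding: the post-study probability that a positive result is actually true. A minimal sketch of his formula follows; the parameter values in the examples are illustrative assumptions, not figures from any particular field.

```python
def ppv(R, alpha=0.05, beta=0.2, u=0.0):
    """Post-study probability that a claimed finding is true (Ioannidis 2005).

    R     -- pre-study odds that a tested relationship is true
    alpha -- Type I error rate of the test
    beta  -- Type II error rate (so power = 1 - beta)
    u     -- proportion of analyses biased toward a positive claim
    """
    numerator = (1 - beta) * R + u * beta * R
    denominator = R + alpha - beta * R + u - u * alpha + u * beta * R
    return numerator / denominator

# Unbiased, well-powered study of a plausible hypothesis (1:10 pre-study odds):
print(round(ppv(0.1), 3))         # ≈ 0.615 -- better than even odds

# Same study with 30% of analyses biased toward a positive result:
print(round(ppv(0.1, u=0.3), 3))  # ≈ 0.204 -- most claims are now false
```

Even modest bias (u) drags the PPV below 0.5, which is the sense in which claimed findings become "accurate measures of the prevailing bias" rather than of reality.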
The factors Ioannidis identifies as contributing to bias include:
- when the studies conducted in a field are smaller;
- when effect sizes are smaller;
- when there is a greater number and lesser preselection of tested relationships;
- when there is greater flexibility in designs, definitions, outcomes, and analytical modes;
- when there is greater financial and other interest and prejudice;
- and when more teams are involved in a scientific field in chase of statistical significance.
All of these factors apply to global warming and global warming effects science: the small effects, the significance chasing, the ad hoc methodologies, the competition, and of course the financial and other interests. All add up to an increased probability of Type I error: accepting a difference where none actually exists. So the strategy of skeptics is invariably constrained to saying "hold on, you have inflated certainty here, or made this mistake there". But of the factors above, the only one that is really amenable to change, and that could counteract human bias, is the fourth: greater standardization of designs and analytical modes.
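The inflation of Type I error from significance chasing can be illustrated with a small Monte Carlo sketch. The figure of 20 tested relationships per study is an arbitrary assumption for illustration; under a true null hypothesis, p-values are uniform on [0, 1].

```python
import random

random.seed(1)
m = 20           # number of relationships tested per study (assumption)
alpha = 0.05     # nominal significance level
trials = 10000   # simulated studies, all with no real effect

# Count studies where at least one null test reaches "significance".
hits = sum(
    1 for _ in range(trials)
    if any(random.random() < alpha for _ in range(m))
)

print(round(hits / trials, 2))            # Monte Carlo estimate, ~0.64
print(round(1 - (1 - alpha) ** m, 4))     # analytic: 1 - 0.95**20 ≈ 0.6415
```

With no real effect anywhere, roughly two studies in three still produce a "significant" result to report, which is why flexibility in outcomes and analyses is so corrosive.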
This is where replication, checking, data access, and the services of the Accredited Statistician come to the fore. This view is promoted in the recent article by McCullough and McKitrick entitled Check the Numbers: The Case for Due Diligence in Policy Formation. This is something Steve McIntyre and I, Ian Castles, and others have been harping on for years, and its value should be a slam-dunk in the current spate of investment frauds. It's all about the numbers. Notably, their 44-page report has a section on Droughts in Australia (p. 27).
In July 2008, the Australian Bureau of Meteorology and the Commonwealth Science and Industrial Research Organization (CSIRO) released a report entitled An Assessment of the Impact of Climate Change on the Nature and Frequency of Exceptional Climatic Events. It received considerable media attention for what appeared to be predictions of a dramatic increase in drought. News coverage by the Australian Broadcasting Corporation began, “A new report is predicting a dramatic loss of soil moisture, increased evaporation and reduced ground water levels across much of Australia’s farming regions, as temperatures begin to rise exponentially” (ABC Rural, July 7, 2008).
Shortly after its release, David Stockwell, an ecological systems modeler and Australian expatriate living in San Diego, became doubtful about whether the models had any demonstrated ability to predict known past events and whether the forecast changes were statistically significant – i.e., distinguishable from random guesses. However, neither the data nor the methodology were sufficiently well described in the report to allow him to investigate. Stockwell emailed CSIRO to request the data used for the claims in the report. The request was promptly refused. He was told on July 15, 2008, that the data would not be sent to him “due to restrictions on Intellectual Property” (Niche Modeling, July 15, 2008). About a month after Stockwell’s requests began to get media and Internet attention, CSIRO changed course and released their data. Stockwell quickly found that the models were unable to replicate observed historical trends, typically generating patterns that were opposite to those in the data. Hence their predictions of future trends did not have the credibility CSIRO had claimed (Niche Modeling, August 28, 2008). By this time, however, media interest in the report had substantially died away, so the initial impression was not corrected.