# Dessler, Zhang and Yang fail significance tests

A concerned reader sent me the recent paper *Water-vapor climate feedback inferred from climate fluctuations, 2003–2008*, writing:

The following (ala Hansen) IMO should never have been accepted in a "peer reviewed" journal. "The existence of a strong and positive water-vapor feedback means that projected business-as-usual greenhouse gas emissions over the next century are virtually guaranteed to produce warming of several degrees Celsius. The only way that will not happen is if a strong, negative, and currently unknown feedback is discovered somewhere in our climate system."

After my previous post on detecting research bias by checking if the data justify the conclusions, this looked like a good candidate — no significance test, very few data points, idiosyncratic methodology, and bold claims. So I ran some significance tests (R script here).

Would someone like to read the paper and check the logic behind my choice of tests? It’s very short.

Using data on water vapor and temperature fluctuations between 2003 and 2008 collected by the NASA satellite AIRS, Dessler et al. 2008 claim independent confirmation of a strongly positive feedback in the specific humidity parameter λq of 2.04 W/m²/K. However, they did not calculate an uncertainty for this value, which raises the question of whether the result is distinguishable from the null hypothesis of no water-vapor feedback.

The values of the specific humidity feedback parameter λq for each of the years 2003–2007, relative to the year 2008, were 2.10, 2.69, 1.77, 2.69, and 0.94 W/m²/K respectively. Direct calculation of the standard deviation of the five data points gives sd = 0.73 with a standard error of 0.37, putting the lower limit of the 95% confidence interval at 1.32, well above zero. However, this approach does not consider the way λq was calculated relative to a single 2008 value. The uncertainty in the 2008 value must be taken into account when estimating the range of the difference between the anomalous year 2008 and the other years.
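The R script linked above is not reproduced here, but the arithmetic of this paragraph can be sketched in Python. The five λq values are from the paper's Table 1; the standard error of 0.37 quoted above appears to correspond to dividing the sd by √(n−1) rather than √n, so both are shown:

```python
import math
import statistics

# Five lambda_q estimates (2003-2007 relative to 2008), W/m^2/K
lam = [2.10, 2.69, 1.77, 2.69, 0.94]

mean = statistics.mean(lam)            # sample mean, ~2.04
sd = statistics.stdev(lam)             # sample sd (n-1 denominator), ~0.73
se = sd / math.sqrt(len(lam))          # conventional standard error, ~0.33
se_alt = sd / math.sqrt(len(lam) - 1)  # ~0.37, matching the value quoted above

# one-sided 95% lower limit using the t(4) critical value 2.132
lower = mean - 2.132 * se              # ~1.3, comfortably above zero
```

Small differences from the quoted 1.32 come down to which standard-error convention and which critical value (normal or t) are used; either way the lower limit sits well above zero.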

To incorporate the true uncertainty of the difference of two uncertain values, we use a t-test of the difference of means of unequal samples with equal variance. This incorporates both the variance of the sample of five feedback values and that of a sample of one value, zero, assumed to have the same variance as the sample of five. The test yields a t value of 2.54, which at 4 degrees of freedom gives a one-tailed p value of 0.033, larger than the p = 0.025 per-tail critical value required for a conventional two-tailed test to be significant at the 95% confidence level. Below are the p values for this test and for two non-parametric tests of the difference of means.
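A minimal Python sketch of this pooled two-sample t-test (the original analysis was in R; here the t-distribution tail probability is computed by direct numerical integration of the pdf rather than with a stats library):

```python
import math

lam = [2.10, 2.69, 1.77, 2.69, 0.94]
n, m = len(lam), 1               # five feedback values vs. one value of zero
mean = sum(lam) / n
s2 = sum((x - mean) ** 2 for x in lam) / (n - 1)

# pooled variance; with m = 1 the second sample contributes (m-1)*s2 = 0
sp2 = ((n - 1) * s2 + (m - 1) * s2) / (n + m - 2)
t = (mean - 0.0) / math.sqrt(sp2 * (1 / n + 1 / m))   # ~2.55
df = n + m - 2                                         # 4

def t_sf(x, df, steps=100_000, upper=60.0):
    """P(T > x) for Student's t, by midpoint-rule integration of the pdf."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    h = (upper - x) / steps
    return sum(c * (1 + (x + (i + 0.5) * h) ** 2 / df) ** (-(df + 1) / 2) * h
               for i in range(steps))

p_one = t_sf(t, df)   # one-tailed p, ~0.03
p_two = 2 * p_one     # two-tailed p, ~0.06: not significant at the 95% level
```

Note that with m = 1 the pooled variance reduces to the variance of the five estimates alone, since the second sample's variance is multiplied by (m − 1) = 0.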

Table 1: p values for tests of the difference of means.

| Test | p | Significant? |
|------|------|----|
| t-test | 0.03 | NS |
| Kruskal–Wallis | 0.14 | NS |
| Wilcoxon | 0.33 | NS |

All p values listed in Table 1 indicate that the alternative hypothesis, that the water-vapor feedback is significantly greater than zero, falls short of the conventional 95% confidence level. The probability that the results arose by chance is high, largely due to a methodology relying on one anomalous year as a reference point. Therefore Dessler et al. 2008 provides no credible justification for claims of the “existence of a strong and positive feedback” which is “virtually guaranteed to produce warming of several degrees Celsius” this century.
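The Wilcoxon entry in Table 1 can be checked by hand, assuming it was computed as an exact rank-sum of the five estimates against a single zero: with sample sizes of five and one, the p value depends only on the rank of the zero among the six values.

```python
lam = [2.10, 2.69, 1.77, 2.69, 0.94]
n = len(lam)

# rank of the single zero "observation" among all n + 1 values
rank = sum(1 for x in lam if x < 0) + 1   # zero is the smallest: rank 1

# under the null hypothesis that rank is uniform on 1..n+1
p_one = rank / (n + 1)                    # 1/6
p_two = min(1.0, 2 * p_one)               # ~0.33, matching the table
```

With these sample sizes 1/6 is the smallest attainable one-sided p value, so this particular test could never reach significance no matter what the data looked like.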

## Thoughts on “Dessler, Zhang and Yang fail significance tests”

1. Morgan says:

I haven’t had much time to review the paper, nor will I for a few days, but my first reaction is that there are more fundamental questions to answer before applying any significance test.

It isn’t entirely clear to me whether Dessler et al. based their conclusion on the data in the table, or whether the table represents a subset of the data. But if the conclusion *is* based on the data in the table (with everything calculated relative to January, 2008), it’s worth bearing in mind that January 2008 was a very atypical month (very much colder than usual, globally), and may not be appropriate to use as a baseline. It might be something like estimating the impact of house prices on the stock market based on October, 2008.

2. KuhnKat says:

The link to the paper has some garbage appended to the end.

3. jae says:

Very interesting that this came up. I just posted the following on the CA Message Board last evening:

“Proof that there can be no positive water vapor feedback related to any increased radiation from CO2

It is hypothesized that warming will occur as a result of increases in “greenhouse gases,” primarily CO2 from the combustion of fossil fuels. The hypothesis holds that, because these gases absorb and emit infra-red radiation, any increases in their concentrations will make the atmosphere more opaque to this radiation, slowing down the escape of energy to outer space, and thereby causing an increase in temperature. For a doubling of CO2 from 280 ppm to 560 ppm, calculations indicate that this mechanism can add about 2 degrees C.

Most climate scientists do not consider this amount of warming to be particularly troublesome. However, any increase in temperatures caused by the increases of greenhouse gases will also result in more evaporation of water vapor, via the Clausius–Clapeyron equation. Since water vapor is also a very powerful greenhouse gas, many climate scientists believe that this increased evaporation will lead to even more warming, through a “positive water vapor feedback” mechanism. Estimates of the total effect of the CO2 and water vapor feedback range between 3 and 6 degrees C, increases which many believe would cause serious problems, such as rising sea levels, disease, etc. The list of possible problems is so long it is actually quite comical. http://www.numberwatch.co.uk/warmlist.htm

Are they nuts or am I? 🙂

4. Anonymous says:

Does this mean what i think it means?

Quantifying Climate Feedbacks using Radiative Kernels
Brian J. Soden
http://www.gfdl.noaa.gov/~ih/papers/kernel_final.pdf

Abstract
The extent to which the climate will change due to an external forcing depends largely on radiative feedbacks, which act to amplify or damp the surface temperature response. There are a variety of issues that complicate the analysis of radiative feedbacks in global climate models, resulting in some confusion regarding their strengths and distributions. In this paper, we present a method for quantifying climate feedbacks based on “radiative kernels” which describe the differential response of the top-of-atmosphere radiative fluxes to incremental changes in the feedback variables. The use of radiative kernels enables one to decompose the feedback into one factor that depends on the radiative transfer algorithm and the unperturbed climate state, and a second factor that arises from the climate response of the feedback variables. Such decomposition facilitates an understanding of the spatial characteristics of the feedbacks and the causes of intermodel differences. This technique provides a simple and accurate way to compare feedbacks across different models using a consistent methodology. Cloud feedbacks cannot be evaluated directly from a cloud radiative kernel because of strong nonlinearities, but they can be estimated from the change in cloud forcing and the difference between the full-sky and clear-sky kernels. We construct maps to illustrate the regional structure of the feedbacks and compare results obtained using three different model kernels to demonstrate the robustness of the methodology. Our results confirm that models typically generate globally-averaged cloud feedbacks that are substantially positive or near neutral, unlike the change in cloud forcing itself which is as often negative as positive.

5. jae says:

Negative forcing causes positive feedback? I would like to see Roy Spencer’s comments on that paper!

6. Nick Stokes says:

David, I can’t see the basis for your t-test at all. The feedback factors are computed relative to Jan 2008, but that doesn’t mean that a “feedback” for 2008 has been subtracted. It means that the differencing in ∆q and ∆T in their Eq 1 was done relative to 2008. That goes into the ratio ∆q/∆T.

Since you’ve computed an empirical sd of 0.73 etc, this includes the 2008 values, and is all the variability that you can deduce from the results. Your original test showing the lower limit of 1.32 is the best that can be done.

To put this another way, if you did try to substitute Jan 2008 values in Eq 1, you wouldn’t get zero. The result is 0/0 – indeterminate. So there’s no basis for testing whether 0 could be a member of the population.

Incidentally there is a typo in your first entry in rt().

7. Anonymous says:

Hi Nick. The feedback parameter isn’t subtracted between two periods, but the temperatures and the q’s are, so the parameter value is composed of a difference of two uncertain values. The sd is only calculated to 2007, as supplied in Table 1, as 2008 is not available given the differencing methodology. Also, eq 1 is a sum; it can get to zero by summing over negative and positive terms. Thanks for the typo alert.

8. Nick Stokes says:

David,
To put it yet another way: in your first test you queried whether zero could be derived from the distribution of five feedback factors, and said that no, that was improbable. Then you tested whether zero, assigned the variance of the feedback factors, could be in the distribution, and said that yes, that was possible. But that test is only reasonable if zero could be treated as an estimate of a factor, presumably for 2008. There is no basis for that. There is no calculation which would yield zero as an estimate for a feedback factor for any year. So there is no basis for assigning it a variance and including it in a t-test.

9. Anonymous says:

“There is no calculation which would yield zero as an estimate for a feedback factor for any year.”

I think that zero is the no-vapor-feedback value. I can’t find that anywhere, as they don’t do tests, but it could be achieved through the summation of +ve and −ve variations in eqn 1.

The test I have done is in lieu of propagating the uncertainty through eqn 1. Eqn 1 uses ΔT, composed of a difference of a start value and an end value; same with Δq, each with its own uncertainty. The result, λ = ΣKΔq/ΔT, must incorporate the fact that 2008 is arbitrary, a single sample, and has high uncertainty because of that.

Imagine if the deltas were calculated for two periods, 1900–1950 and 1950–2000. You would then have two more precise estimates, but the uncertainty of each would still contribute to the uncertainty of the result. Same deal; only in this case it is a sample of one.
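The propagation being described can be illustrated with first-order error formulas for a single term of eqn 1. All numbers below are hypothetical, chosen only to show the mechanics; the K, Δq, ΔT values and their uncertainties are not from the paper:

```python
import math

# hypothetical values (illustration only, not from Dessler et al.)
K = 1.0              # radiative kernel weight
dq, s_q = 1.0, 0.2   # humidity difference and per-endpoint uncertainty
dT, s_T = 0.7, 0.1   # temperature difference and per-endpoint uncertainty

# each delta is a difference of two uncertain endpoint values,
# so its variance is the sum of the two endpoint variances
var_dq = 2 * s_q ** 2
var_dT = 2 * s_T ** 2

lam0 = K * dq / dT
# first-order propagation for a ratio: relative variances add
s_lam = abs(lam0) * math.sqrt(var_dq / dq ** 2 + var_dT / dT ** 2)
```

The point carries over to the paper's setup: because 2008 appears as the endpoint of every Δq and ΔT, its uncertainty enters every λq estimate, and the five values are not independent draws.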

10. Nick Stokes says:

David,
Yes, zero is the no vapor value, or more generally the “no change in vapor” value. But again, no year with that value was observed or implied, so the notion that there is a one-member population that you can assign a variance to and do a t-test to compare with has no basis. Incidentally, if you look at your R formula for snx you can see that the variance that you did assign is multiplied by (m-1), which is zero. In other words, your result is independent of the actual variance of the second population, which is very odd.

The empirical test you did first up includes subtracting off the 2008 values, so you can’t with the information we have, do any further correction. It’s true that the result is disproportionately sensitive to the possibility that 2008 was an unusual year, and that should somehow be reflected in the variance. However without getting into the actual q and T data, I don’t see how you could do that.

It seems to me that the best way of dealing with that is to include all the possible differences in the sum (eg 2006-2007), and I don’t know why they didn’t do that.

11. Anonymous says:

Nick, yes, I agree that treating 2008 as zero is kind of a kludge. You are right, it needs more justification. I don’t know why they didn’t run a regression; it would be a much better approach.

Come to think of it, it is not the 2008 value that is treated as zero. The issue is just, is the uncertainty of a ratio of differences, equal to the uncertainty of the difference of ratios? Hmm.

Also, as one mean is a sample of one, snmx is greater than snm, when typically it is less than snm. It doesn’t depend on the variance of the one point, though, which is assumed but, as you say, immaterial.

Thanks for the feedback. If I can work out how to justify it I’ll send it off to GRL.
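snm and snmx are names from the R script linked in the post (not reproduced here); the arithmetic being described, that the pooled standard error with a second "sample" of one exceeds the ordinary one-sample standard error, can be checked directly:

```python
import math
import statistics

lam = [2.10, 2.69, 1.77, 2.69, 0.94]
n, m = len(lam), 1
s = statistics.stdev(lam)

se_one = s / math.sqrt(n)                 # ordinary one-sample SE, ~0.33
se_pooled = s * math.sqrt(1 / n + 1 / m)  # two-sample SE with m = 1, ~0.80
```

The 1/m term dominates, so the pooled standard error is more than twice the one-sample value, while the second sample's own variance never enters, because it is multiplied by (m − 1) = 0.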

12. cohenite says:

This Dessler piece follows on from his previous effort;

http://gesa.tamu.edu/people/faculty/dessler/Dessler2008.pdf

His conclusions about SH and RH are problematic and contrary to NOAA data from 1949; while it is true SH is increasing at the near-surface this is consistent with Miskolczi not AGW theory. I find the whole notion that increasing water vapour is a +ve feedback counterintuitive and inconsistent with other research by Spencer and Braswell and Kump and Pollard’s work with cloud condensation nuclei. Generally, increased water vapour evaporatively cools the surface and creates clouds; if Dessler was right he would be on board with Santer and we would have a clear THS instead of the statistical prestidigitation we do get from a succession of pro-AGW ‘papers’. It has reached the stage where the ‘back-radiation’ paradigm is no more than a set of assumed serendipities as Philipona’s latest paper demonstrates [GRL vol 36, 2009]; in this his modelled observations find that there is a parity between decreased shortwave net radiation and more cloudiness and increased SH induced LDR and more cloudiness. A perfect world!

AGW depends on the enhanced greenhouse effect, but even if there is an increase in SH it is neither theoretically nor empirically the case that the increase in SH would be a +ve feedback.

13. DG says:

cohenite,
Notice Dessler does not cite Spencer. Also note there is not one mention of cloud feedbacks in the entire article. In fact, the word “cloud” only shows up twice.

My question would be, how does a paper like Dessler et al get published with such authoritative statements as “virtually guaranteed” when there is no such evidence to support it?

Yet, Spencer gets rejected and was told not to re-submit.

It appears so-called “peer review” is more like “pal review” advocacy than it is about presenting good science.

14. Anonymous says:

Yes indeed. How does GRL publish results without confidence limits? If it wasn’t strongly pro-AGW it would be out on its A. Dessler is quite active on the blogs, actually. http://gristmill.grist.org/user/Andrew%20Dessler

I am working on an approximate derivation of the λ uncertainty, by propagation of errors, and will post it up soon.

15. bill r says:

A t(4) with a value of 2.54 is significant for a one-sided 95% level test, but not for a two-sided test. Additionally, the gross difference between the Wilcoxon and the t-test suggests that the data are not very normal, so the significance is not very robust (i.e. it comes from the assumptions, not from the data).

16. cohenite says:

The thing about AGW and its proxies is that none of them have their null-hypothesis defeated by a 2-sided test; refutation is strictly one-sided; on the low side; results falling outside the confidence range on the high side are actually better fits for the hyperbole; thus temperature increases which exceed 3C are better ‘fits’; more frequent and intense hurricanes ditto; the shame for AGW is that all the results are on the low side. In respect of Dessler the SH record of NOAA is not only on the low side but negative;

http://wattsupwiththat.com/2008/06/21/a-window-on-water-vapor-and-planetary-temperature-part-2/

In bush parlance, that makes AGW lower than a snake’s belly.

17. Jan Pompe says:

Shifting the x-axis in Fig (1) of that paper gives a different visual impact so that if one is not careful to pay attention one might get the impression that the atmospheric temperature slightly leads the ENSO instead of lagging it by a bit over two months.

This leads to questions regarding causation of the temperature fluctuations, as well as the specific humidity and cloud cover fluctuations. If those three parameters are concurrently driven by the same ENSO signal then we do NOT have a case of feedback at all; it may look and sound like one, but it doesn’t walk like one.

We need to check the timing of those variations to see if we can actually identify the beast.

18. David A. Evans says:

“Our results confirm that models typically generate globally-averaged cloud feedbacks that are substantially positive or near neutral, unlike the change in cloud forcing itself which is as often negative as positive.”

Means models are spherical aberrations.

DaveE.

19. Anonymous says:

spherical aberrations?

20. David A. Evans says:

Balls 🙂

DaveE.

21. David A. Evans says:

I should probably explain.

I used to be in the RAF & worked extensively with servo systems.

Next door was the Laser Ranging & Marked Target Seeking workshop.

That was our polite way.

DaveE.

22. David A. Evans says:

I’m reading the 2000 paper now & to me, so far, it seems they weren’t convinced back then either, maybe too scared to come out of the sceptical closet.

DaveE.

23. Anonymous says:

Is that Held and Soden 2000 David?

24. David A. Evans says:

Yes, I’m a bit new to some of this so there’s a lot of catching up to do.

25. David A. Evans says:

I decided to skip to the conclusion before finishing. I didn’t realise until then that their scepticism was so clear back then.

I’ll get back to reading the body.

26. DavidLHagen says:

David S.
Suggest reviewing Paltridge on relative humidity:
Garth Paltridge, Albert Arking & Michael Pook, Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data, Theor. Appl. Climatol., DOI 10.1007/s00704-009-0117-x, published online 26 Feb. 2009.

See particularly the discussion at:
Steve McIntyre (2009), A Peek behind the Curtain, ClimateAudit.org, March 4th, 2009.

Particularly:
Ryan Maue, March 4th, 2009 at 3:44 pm:
“Re: Craig Loehle (#14), this link is a good place to start on the tropopause height issues. Science 2004 Pielke Sr. There have been three generations of reanalysis products. NCEP Reanalysis would belong to the first generation, ERA-40 to the second, and ERA-interim/JRA-25 as well as NASA’s MERRA in the third generation. 4DVar is utilized in the recent reanalysis projects, whereas NCEP Reanalysis does not employ the latest state-of-the-art data assimilation procedures. Accordingly, it does not advertise to do such. This is 1990s NWP.
Re: Steve McIntyre (#15), yes, that’s another example. The IPCC AR4 chapter written by Trenberth is an excellent summary of the pitfalls in using reanalysis data for climate trends. It is dangerous territory, indeed.
I echo Re: Gerald Browning (#21), but simply pointing out the caveats and the potential pitfalls is insufficient in my book. With the ERA-40 being freely downloadable (also the JRA-25) and available to the research community, a cross-comparison would be appropriate and fairly easy to achieve. This would add robustness and perhaps some indication of error in the upper-tropospheric humidity measurements. On a separate note, my paper on the collapse in Northern Hemisphere Tropical Cyclone Activity since 2007 was published by GRL today. The review process was very helpful and I did not experience the aforementioned bias or resistance to my manuscript either scientifically or politically.”

See links in original. It might help to comment on the level of evaluation required to address uncertainties in data and trends.
