The following is an approximate propagation of uncertainty through Dessler et al.'s equation for estimating the strength of the water vapor feedback λ. We have been looking at the error bars for his recent paper, "Water-vapor climate feedback inferred from climate fluctuations, 2003-2008", which were not calculated in the published paper. Assumptions made are noted. Refer to the Wikipedia article on propagation of uncertainty for the standard equations.

Here R is the top-of-atmosphere IR flux, q is the specific humidity and T is the temperature.

1. $$\lambda = \sum \frac{\partial R}{\partial q}\frac{\Delta q}{\Delta T} = K\frac{\Delta q}{\Delta T} $$

Rolling up the summation over the earth's surface into K.

2. $$\lambda = K\frac{q_1 - q_0 \pm \sqrt{2}\sigma_q}{T_1 - T_0 \pm \sqrt{2}\sigma_T} $$

Substituting the values of the two endpoint years used in calculating the differences, and their uncertainties, using propagation of errors for differences and assuming independence.

3. $$\left(\frac{\sigma_\lambda}{\lambda}\right)^2 = \left(\frac{\sqrt{2}\sigma_q}{\Delta q}\right)^2 + \left(\frac{\sqrt{2}\sigma_T}{\Delta T}\right)^2 $$

Substituting the uncertainties of q and T into the equation for propagation of errors through ratios, assuming independence.

4. $$\sigma_\lambda^2 = 2\sigma_q^2 + 2\sigma_T^2 $$

Assuming λ, q and T are the same magnitude. This is an underestimate if λ=2.

5. $$\sigma_\lambda = 2\sigma_{qT} $$

Assuming uncertainty of q and T are equal, and taking the square root.
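Steps 3-5 can be checked numerically. This is a minimal sketch, not code from the paper; the unit magnitudes and the value of σ are illustrative assumptions:

```python
import math

def ratio_rel_sigma(dq, sigma_q, dT, sigma_T):
    """Relative uncertainty of lambda = K * dq/dT for independent errors;
    each difference carries a sqrt(2)*sigma term from step 2."""
    return math.sqrt((math.sqrt(2) * sigma_q / dq) ** 2
                     + (math.sqrt(2) * sigma_T / dT) ** 2)

sigma = 0.37                  # illustrative common uncertainty of q and T
lam, dq, dT = 1.0, 1.0, 1.0   # step 4 assumption: equal magnitudes
sigma_lam = lam * ratio_rel_sigma(dq, sigma, dT, sigma)
print(sigma_lam, 2 * sigma)   # both come out at 2*sigma, matching step 5
```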

So according to these rough calculations, the actual uncertainty in λ could be roughly twice the uncertainty observed in the Dessler figures. This increase is due to the use of a single year, 2008, as the reference point for calculating the change in humidity and temperature relative to other years. The uncertainty in the arbitrary choice of this reference point increases the uncertainty when propagated through the calculations for water vapor feedback.

Our calculated standard deviation of the mean was 0.37 W/m^2/K. The confidence half-width of the mean is then 1.96*2*0.37, or 1.45, giving a lower limit of 0.59 W/m^2/K on the estimated water vapor feedback value of 2.04 W/m^2/K.
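The arithmetic is easy to reproduce (a sketch; the factor of 2 comes from step 5, and z = 1.96 is the two-sided 95% value):

```python
sigma_mean = 0.37           # standard deviation of the mean, W/m^2/K
factor = 2                  # amplification from step 5
half_width = 1.96 * factor * sigma_mean
lower = 2.04 - half_width   # lower 95% limit on the feedback estimate
print(round(half_width, 2), round(lower, 2))  # 1.45 0.59
```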

If we substitute λ=2, Δq=2, ΔT=1 into step 3, we get an even higher uncertainty, reflecting the effect on the ratio of dividing by a smaller number.

6. $$\sigma_\lambda^2 = 2\sigma_q^2 + 8\sigma_T^2 $$

7. $$\sigma_\lambda = \sqrt{10}\,\sigma_{qT} $$

The confidence half-width of the mean is then 1.96*3.16*0.37, or 2.29, giving a lower limit of -0.25 W/m^2/K on the estimated 2.04 W/m^2/K value of the water vapor feedback. Being less than zero, this indicates that zero feedback is within the limits of uncertainty. This is very similar to the CI obtained by a t-test of the difference of means in the previous post.
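The same arithmetic with the √10 amplification from step 7 (again a sketch, with λ=2, Δq=2, ΔT=1 substituted into step 3):

```python
import math

# lambda = 2, dq = 2, dT = 1 in step 3 gives sigma_lambda = sqrt(10)*sigma
sigma = 0.37
sigma_lam = 2 * math.sqrt((math.sqrt(2) * sigma / 2) ** 2
                          + (math.sqrt(2) * sigma / 1) ** 2)
half_width = 1.96 * sigma_lam   # confidence half-width, about 2.29
lower = 2.04 - half_width       # about -0.25 W/m^2/K: zero is inside the CI
print(round(half_width, 2), round(lower, 2))
```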

Dessler rambles on about the large influence temperature has on the uncertainty of the feedback here.

[20] Figure 4 also helps explain the large year-to-year variability in our calculated values of λq in Table 1. It is tropical q that primarily determines the size of the water vapor feedback, and tropical q is primarily regulated by the tropical surface temperature [e.g., Minschwaner and Dessler, 2004]. The definition of λq, however, uses changes in global-average surface temperature. While changes in global and tropical temperatures are related, there are often variations in the global average that are not reflected in the tropical average and vice versa. Such variations lead to large variations in λq.

[21] Consider, for example, the small feedback λq inferred between January 2007 and January 2008. The difference in the global average surface temperature ΔTs between these two months was 0.60 K. Much of this, however, was due to extreme changes in the northern hemisphere mid- and high latitudes. The tropical average surface temperature difference ΔTtropics was a milder 0.37 K. The relatively small change in tropical surface temperature leads to a relatively small change in q, and therefore a relatively small value of (∂R/∂q)Δq of 0.57 W/m^2. Dividing that by the large ΔTs leads to the small value of 0.94 W/m^2/K inferred for λq over that period.

[22] The months with the largest inferred values of λq, on the other hand, are the months where ΔTs is smaller than ΔTtropics. For example, ΔTs between January 2008 and January of 2006 was 0.28 K, while ΔTtropics between these months was 0.33 K. This arrangement contributes to a large value for the inferred λq between these months. Given enough data, such variations should average out. In a short data set such as the one analyzed here, however, such variations can be significant.

You got that right. It would seem that there are three almost equal contributions to overall uncertainty:

Total uncertainty = measurements + reference point + ratio amplification

“Being less than zero, this indicates that zero feedback is within the limits of uncertainty. ”

Yup.


“Assuming λ, q and T are the same magnitude.” following Eq 4

You can’t really do that. They are different units.

“Assuming uncertainty of q and T are equal”

Same issue. Actually it's better if you leave the denominators in, so you're comparing dimensionless quantities. Or use the coefficient of variation.

You need more justification for just multiplying 1.37 by 2. The 1.37 was an observed variance, which on the face of it included the effects of differencing. You can argue for an increase, based on the fact that the 2008 value appears repeatedly, and the repeated occurrences are not independent variables.


Spencer’s recent blog post is relevant: http://www.drroyspencer.com/


Nick: “You can argue for an increase, based on the fact that the 2008 value appears repeatedly, and the repeated occurrences are not independent variables.” Then that goes back to treating the 2008 value ‘as if’ it had variance, as I did by using a difference of means test, doesn’t it?

I know there are some major assumptions, but you have to do that when reverse engineering from a few values. I like to get things in the right ballpark before I refine them, if possible. If I use CVs, that eliminates the magnitude assumption, but if I then assume CVq and CVT are similar, I still get a factor of 2 by which the variance due to q and T needs to be multiplied.


David,

You can argue that the 2009 value has variance. But its value isn't zero, as you had argued.

And you really can’t argue that quantities with different dimensions have the same “magnitude”.


Oops, the 2008 value.

Thanks Nick. I thought of a simpler argument. You could say that the real standard deviation should be increased in the proportion of the original variance to a variance composed of two equal variances, i.e. increased by a factor of 1.41 (√2), to supply the variance lost by using the same year, 2008, five times.

$$\sigma_\lambda = \sqrt{\sigma^2 + \sigma^2} = \sqrt{2}\,\sigma $$


Email supposedly from Roy Spencer, at Jennifer Marohasy.


Showing my ignorance again. They mention "year-to-year variability" and do error calculations based on that. But years are composed of days, and we often have daily readings. Should they not be calculating errors at the daily scale and carrying error propagation through their annual work?

To be extreme, you can artificially decrease your errors as you go from daily to monthly to annual to decadal averages, simply because of the noise reduction that averaging can give. But if you wanted to do model projections with realistic error terms, would you not have to place yourself above this averaging effect and use errors applicable to the highest resolution available?

Rough analogy. Would you discover the mechanism of the classic banking dodge of rounding down fractions of a cent and putting the surplus accumulation in your own account, if you averaged everything up to large, rounded, annual figures?

