The value of tau

Admin: Posted up for Steve, with an initial response by Miklos. The slides Steve referred to are here. My bad for not telling Miklos that.

Link to TF&K08

Miskolczi's theory proposes a tau (Ta, if you will) significantly different from that found by at least a dozen other studies published in the peer-reviewed literature over more than a decade, as well as a number of other new relations (A_A = E_D, f = 2/3, etc.).

In the scientific processes in which I have myself been involved a number of times, to resolve why one or more studies get one or more critical parameter values significantly different from most other studies, a process is entered into whereby those who are getting the significantly different values have to demonstrate why that should be so. If their explanation proves good enough, and is verified/validated, it usually results in a shift in the accepted values. This is just part of the normal process of the advancement of scientific knowledge.

If this is not the case with Miskolczi then this is no better than the IPCC. I propose the following questions:

(1) Explain why neither Slide 68 nor Slide 69 explicitly states that clear sky and all sky global means (respectively) are being dealt with, as Nick has suggested is the case (and which I concur seems the most likely interpretation).

(2) Confirm he is indeed referring to a global clear sky mean when he shows a slide (Slide 68) claiming that S_T = 90.7 W/m^2, and then explain why he simultaneously claims in the very same slide that K&T97 (known clear sky S_T ~100 W/m^2) is in error by 22.5 W/m^2!

(3) Identify the peer-reviewed publication in which the interpretation given in Slide 70 (claiming the Miskolczi HARTCODE interpretation of the NOAA 60-year average gives S_T = 60.9 W/m^2) appears, when the AGW consensus is for a global all sky S_T of ~40 W/m^2 even as recently as TF&K08 (which reviews/summarizes the findings of other radiative codes).

The bottom line is that Miskolczi is saying there is a ‘magic tau’ of magnitude 1.87. He has consistently got this by using a B anywhere from about 396 down to about 380 W/m^2, yet somehow the S_T values he gets at the same time always stay in the range of about 63 down to 58.5 W/m^2, resulting in a tau in the range 1.84 – 1.87 (say).
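Taking tau to be the flux optical depth implied by the transmitted fraction, tau = -ln(S_T/B), with B the surface upward flux (my assumption about the definition in use throughout this thread), the quoted ranges are mutually consistent, as a two-line check shows:

```python
import math

def tau(s_t: float, s_u: float) -> float:
    """Flux optical depth implied by the transmitted fraction S_T / S_U."""
    return -math.log(s_t / s_u)

# End points of the ranges quoted above (all fluxes in W/m^2).
print(round(tau(63.0, 396.0), 2))   # low end of the tau range, ~1.84
print(round(tau(58.5, 380.0), 2))   # high end of the tau range, ~1.87
```

So the narrow tau range follows automatically once B and S_T move together in roughly fixed ratio.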

To accept Miskolczi Theory as viable, we need to be technically very clear on why there is always this discrepancy whereby the Miskolczi S_T is significantly greater, by about 20 – 25 W/m^2, than the accepted literature range of values – from numerous studies, most using good radiative codes and putting great effort into correctly weighting the land and oceanic all sky values in order to derive a global mean.

We also need to understand why the Miskolczi S_T always appears to be much closer to the accepted literature range of values for net LW up, i.e. the sum of S_T and the LW IR emitted upwards by clouds. Sheer coincidence? I would hope so.

Miklos Zagoni responds:

Here are some answers to Steve.

1. I do not know which of my presentations you are talking about, but as you all know, the clear-sky g is 1/3 while the all-sky global average is about 0.4. From the numbers one is always able to figure out whether a clear sky or all sky calculation is displayed.
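If g here is the normalized greenhouse factor, g = (S_U - OLR)/S_U (an assumption on my part, though it is the standard definition), the two quoted values are easy to reproduce from, for example, K&T97-style global means (S_U ~390, all-sky OLR ~235 W/m^2, clear-sky OLR ~260 W/m^2, the last being approximate):

```python
def g_factor(s_u: float, olr: float) -> float:
    """Normalized greenhouse factor: fraction of surface emission not reaching TOA."""
    return (s_u - olr) / s_u

# Illustrative K&T97-style global means (W/m^2); clear-sky OLR is approximate.
print(round(g_factor(390.0, 235.0), 2))  # all-sky, ~0.4
print(round(g_factor(390.0, 260.0), 2))  # clear-sky, ~1/3
```

This is presumably what is meant by being able to tell a clear sky from an all sky calculation "from the numbers".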

2. The slide that gives you the K&T97 22.5 W/m^2 error is calculated (as is written there) at their (reduced) H2O content. With those amounts of GHGs in the air, HARTCODE says that their S_T should be 90.7. But they (as they admitted) regarded only the WIN (833-1250 cm^-1) region.

3. The AGW consensus of S_T = 40 rests on the mentioned K&T97. Please point to the locus in K&T97, or TFK08, where they give the details beyond it.

Please give me also any indication how TFK08, or anyone else, measured the atmospheric window radiation.

Thanks,
Miklos

Comments on “The value of tau”

  1. Not true. The AGW consensus of S_T = 40 rests on numerous papers and reviews since K&T97.

    Requested loci in T,F&K08 are as follows:

    (1) Page 6, referring to the findings of Trenberth et al. 2001; Trenberth et al. 2002; Trenberth and Stepaniak 2003a,b, 2004; Zhang et al. 2004, 2006, 2007; Gupta et al. 1999; Smith et al. 2002; Wilber et al. 2006; Wild et al. 2006.

    (2) Pages 10-11, referring to the findings of Rossow and Duenas 2004, Zhang et al. 2004; CERES Loeb et al. 2000; 2007; 2008; Wielicki et al. 2006; Kim and Ramanathan 2008.

    (3) Table 1b

    (4) Table 2b

    (5) Figure 1.

    “Please give me also any indication how TFK08, or anyone else, measured the atmospheric window radiation.”

    TF&K08 page 6

    “The radiative aspects have been explored in several studies by Zhang et al. (2004, 2006, 2007) based on International Satellite Cloud Climatology Project (ISCCP) cloud data and other data in an advanced radiative code. In addition, estimates of surface radiation budgets have been given by Gupta et al. (1999) and used by Smith et al. (2002) and Wilber et al. (2006)…. Many new measurements have now been made from space, notably from Clouds and the Earth’s Radiant Energy System (CERES) instruments on several platforms (Wielicki et al. 1996; 2006). Moreover there are a number of new estimates of the atmospheric energy budget possible from new atmospheric reanalyses,….”

    1. Hi Steve,

      While I regret to say that I haven’t been following this debate very closely lately, I do note your reference above to Wielicki et al. 1996; 2006 cited in TFK08 apparently on direct measurements of the atmospheric window radiation.

      As most probably know, there has been a debate recently between Richard Lindzen, who had an informal essay published at WUWT, and young Chris Colose, who rejected Lindzen’s argument on the basis of corrections to the raw data given in Wielicki et al. 2006. Lindzen then responded briefly, saying that (a) he was skeptical of the Wielicki et al. 2006 corrections and (b) a negative feedback was implied whether or not you accept the corrections.

      Anyhow, suppose that Lindzen is right and the corrections in W et al 2006 are bogus; would this help Miskolczi’s argument at all?

      1. Hi Alex

        I don’t think this Wielicki et al. 2006 business helps Miskolczi’s Theory at all. Lindzen is a brilliant man and his comments regarding the Wielicki et al 2006 corrections to CERES data may well be technically justified. But I have never heard of Lindzen endorsing even any small part of Miskolczi Theory.

        From the TF&K08 review Table 2b, the Net LW IR up at TOA from the CERES period March 2000 – May 2004 body of work (4 different data analysis groups) ranges from 48.5 – 72.8 W/m^2, i.e. a mean of 61.2±10.0 W/m^2 at the one standard deviation level. Similarly, the best estimate of S_U is a very tight 395.9±1.4 W/m^2.

        From the TF&K08 review Table 1b, the Net LW IR up at TOA from the ERBE period February 1985 – April 1989 body of work (5 different data analysis groups) ranges from 51.1 – 71.3 W/m^2, i.e. a mean of 60.9±8.3 W/m^2 at the one standard deviation level. Similarly, the best estimate of S_U is a very tight 394.6±2.6 W/m^2.

        These two sets of estimates for Net LW IR up at TOA over two time windows (one preceding, one following the K&T97/99 period) are clearly the same within error. These two sets of estimates for S_U over the same two time windows are also clearly the same within error.

        In addition, we know these are best estimates for global all sky Net LW IR up at TOA, NOT for S_T, as Net LW IR at TOA includes a TOA-leaving component of LW IR emitted by clouds (following release of latent heat from evapotranspiration (ET – part of the Miskolczi K term)).

        Net LW IR up at TOA is therefore by definition greater than S_T.

        The K&T97 and TF&K08 reviews both imply that LW IR emitted by clouds is ~30 W/m^2. This means that both the CERES and ERBE period best estimates of all sky S_T should be about 41 W/m^2, and that the clear sky (cloud free) estimates of S_T for both these periods should not exceed about 41±10 W/m^2, i.e. there is less than about one chance in 40 (2.5%) (assuming a normal distribution) of a clear sky S_T exceeding 61 W/m^2. [It also means that both the CERES and ERBE period best estimates of S_U should be about 395 W/m^2. Thus the best estimate of mean tau should be about 2.27 +0.27/-0.22, also indicating a mean global all sky tau is hardly likely to be as low as 1.87.]

        These data are significantly at odds with the values presented in Zagoni’s recent presentation, particularly where he showed slides for Miskolczi HARTCODE estimates of global clear sky S_T of 90.7 (Slide 68), 58.7 (Slide 69) and 60.9 (Slide 70) W/m^2 respectively. Miskolczi HARTCODE estimates of even clear sky S_T are thus at or even above the statistical upper limit of the mainstream science values (typically <5% probability of agreement with them).

        We are therefore entitled to ask: what is so different about Miskolczi's HARTCODE major parameter estimations from the mainstream findings, and why should we accept Miskolczi's values against the weight of the findings of so many other study groups over some two decades?

      2. Correction:

        The K&T97 and TF&K08 reviews both imply that LW IR emitted by clouds is ~30 W/m^2. This means that both the CERES and ERBE period best estimates of all sky S_T should be about 31 W/m^2, and that the clear sky (cloud free) estimates of S_T for both these periods should not exceed about 61±10 W/m^2, i.e. there is less than about one chance in 40 (2.5%) (assuming a normal distribution) of a clear sky S_T exceeding 81 W/m^2. [It also means that both the CERES and ERBE period best estimates of S_U should be about 395 W/m^2. Thus the best estimate of clear sky mean tau should be about 1.87 +0.18/-0.15, but also indicating a mean global all sky tau is hardly likely to be as low as 1.87.]
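        Assuming tau = -ln(S_T/S_U), the flux optical depth implied by the transmitted fraction (my reading of how tau is computed in this thread), the asymmetric error bars quoted above follow directly from propagating the ±10 W/m^2 spread in clear sky S_T through the logarithm:

```python
import math

S_U = 395.0           # best-estimate surface upward flux (W/m^2)
S_T, dS = 61.0, 10.0  # clear-sky window flux and its 1-sigma spread (W/m^2)

tau_mid = -math.log(S_T / S_U)
tau_hi = -math.log((S_T - dS) / S_U)  # smaller S_T -> larger tau
tau_lo = -math.log((S_T + dS) / S_U)  # larger S_T -> smaller tau

print(round(tau_mid, 2))           # central value, ~1.87
print(round(tau_hi - tau_mid, 2))  # upper error, ~+0.18
print(round(tau_mid - tau_lo, 2))  # lower error, ~-0.15
```

        The asymmetry (+0.18 vs -0.15) is just the curvature of the logarithm, not an extra assumption.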

        These data are consistent with the values presented in Zagoni’s recent presentation, particularly where he showed slides for Miskolczi HARTCODE estimates of global clear sky S_T of 90.7 (Slide 68), 58.7 (Slide 69) and 60.9 (Slide 70) W/m^2 respectively. Miskolczi HARTCODE estimates of clear sky S_T are thus probably consistent with mainstream science values.

        But this still does not address the problem with the Miskolczi value for global all sky mean S_T and tau.

  2. I think Miklos is saying that light intensity measurements are made by different parties between different low and high wavelength boundaries. One implication might be that ultraviolet light was included by one party but not by another, or far IR at the other end.

    Steve, it would save a large amount of reference perusal if you were able to confirm that the authors you quote all use the same window, and the same as Ferenc. Would that be a large job? Your resources are far better than mine.

    Another complication is that various instruments have various sensitivities of measurement at a given wavelength. That is, each has to be calibrated to give similar results to the others at each wavelength subset. I would assume that not all measurements were made with the same design of equipment; indeed at times different operational principles would be involved.

    I’m sorry that this is not a positive post providing spectroscopic information to settle the question decisively. It is just too long since I was last a spectroscopist.

  3. Steve, like Alex I too am interested in the Wong, Wielicki revision, and I note RL’s response to Watts which Alex posted at Colose’s blog. It may interest you to know that RL was well aware of the revision before he wrote the Watts thread, which I think doesn’t reflect well on Colose. In respect of Miskolczi, and the fact that despite the revision the OLR figures are still capable of sustaining a -ve feedback conclusion, is it not crucial to have some access to accurate upper water vapor data? I mentioned the NCEP and NOAA data to Nick recently and he was dismissive of it on the basis that it was patchy, preferring the modelled conclusions of Dessler and Soden etc.; but if upper-level water is declining then that must be a feather in M’s cap. Speaking of feathers, you and anyone else for that matter may be interested in this:

    “I await the verdict of science. In the meantime – I will pay $100 to the first person who can balance Miskolczi’s Equation No. 7 to any reasonable accuracy using any published Earth radiative budget.

    S_U – (F_0 + P_0) + E_D – E_U = OLR

    Where: S_U is the surface upward radiative flux (= S_g in Miskolczi’s Figure 1)

    F_0 is the net incoming short wave radiation (incoming solar radiation less the reflected component)

    P_0 is the friction heat from wind and waves and heat from the centre of the earth (hint – very minor)

    E_D is the downward radiative flux from the atmosphere

    E_U is the upward radiative flux from the atmosphere and

    OLR is the outgoing long wave radiation”

    1. I don’t have much enthusiasm for these WW&W type haggles, but I’m puzzled by:

      “RL was well aware of the revision before he wrote the Watts thread, which I think doesn’t reflect well on Colose”

      Surely, if anything, it reflects badly on RL – the complaint is that it is misleading, and you’re defending him by saying that he knew it was misleading. The defence that he doesn’t like the correction is lame. He quoted the results based on W&W’s authority. If they’ve retracted, then the results don’t have that authority – in fact, they don’t have anyone’s.

      On Eq 7, I’ve always thought it was a complete shambles. Not only has there never been a shred of justification offered, but as an apparent energy balance equation, the directions are wrong. E_D and E_U are fluxes at the opposite sides of the atmosphere and both outward. As flux summations on the atmosphere itself, they should be added, and I can’t see what other balance they could be part of. I too would be interested to see any attempt to balance it with any real figures.
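      As a concrete illustration of the difficulty, plugging TF&K08-style global annual means into Eq 7 (taking E_U = OLR - S_T, i.e. the non-window part of OLR, which is an assumption about the intended flux split, and neglecting the minor P_0 term) leaves a residual of roughly 50 W/m^2:

```python
# TF&K08-style global annual means (W/m^2); the E_U split is an assumption.
S_U = 396.0          # surface upward LW flux
F_0 = 341.3 - 101.9  # net incoming SW (incoming minus reflected)
P_0 = 0.0            # friction/geothermal heat, negligible
OLR = 238.5          # outgoing LW at TOA
S_T = 40.0           # atmospheric window flux
E_D = 333.0          # back radiation
E_U = OLR - S_T      # upward atmospheric emission (assumed split)

lhs = S_U - (F_0 + P_0) + E_D - E_U  # left-hand side of Eq 7
print(round(lhs, 1))              # ~291
print(round(lhs - OLR, 1))        # residual vs OLR, ~+53
```

      With these particular values, at least, the equation is nowhere near balancing; a different reading of the terms would be needed to claim the $100.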

  4. I should have put that this largesse is being offered by one Robert Indigo Ellison, not me; quite frankly I would be prepared to put up a lot more, but don’t quote me.

  5. Nick, I’m not sure RL has been duplicitous; here is RL’s follow-up from WUWT:

    UPDATE3: I received this email today (4/10) from Dr. Lindzen. My sincere thanks for his response.

    Dear Anthony,

    The paper was sent out for comments, and the comments (even those from “realclimate”) are appreciated. In fact, the reduction of the difference in OLR between the 80’s and 90’s due to orbital decay seems to me to be largely correct. However, the reduction in Wong, Wielicki et al (2006) of the difference in the spikes of OLR between observations and models cannot be attributed to orbital decay, and seem to me to be questionable. Nevertheless, the differences that remain still imply negative feedbacks. We are proceeding to redo the analysis of satellite data in order to better understand what went into these analyses. The matter of net differences between the 80’s and 90’s is an interesting question. Given enough time, the radiative balance is reestablished and the anomalies can be wiped out. The time it takes for this to happen depends on climate sensitivity with adjustments occurring more rapidly when sensitivity is less. However, for the spikes, the time scales are short enough to preclude adjustment except for very low sensitivity.

    That said, it has become standard in climate science that data in contradiction to alarmism is inevitably ‘corrected’ to bring it closer to alarming models. None of us would argue that this data is perfect, and the corrections are often plausible. What is implausible is that the ‘corrections’ should always bring the data closer to models.

    Best wishes,

    Dick

    Now, RL is aware of the amendments in 2007 but uses the originals in the WUWT post. The reason he does this is that he appears to disagree with the basis for the amendments, namely orbital decay; if, as he states, the amendments are not well grounded, then he is justified in continuing to use the originals. I don’t know; the OLR is a crucial one and would assist M’s 1.87; now back to work on eqn 7 and that vast prize of $100. What’s the exchange rate again?

  6. Published paper:
    EARTH’S GLOBAL ENERGY BUDGET
    by Kevin E. Trenberth, John T. Fasullo, and Jeffrey Kiehl
    Bulletin of the American Meteorological Society, March 2009, pp. 311-324

    Miklos
    Would welcome your comments on the changes from 1997 to 2009 relative to Miskolczi’s theory.

    1. David Hagen:

      “Miklos
      Would welcome your comments on the changes from 1997 to 2009 relative to Miskolczi’s theory.”

      Well, given that:

      (1) T,F&K, now 2009, has been out in draft form since early 2008 (repeatedly referred to above in this thread as T,F&K08) and may be taken as providing a reasonably comprehensive review of the body of modern literature on the ‘consensual’ all sky global atmospheric energy balance over the last 20 years; and

      (2) The whole purpose of this thread, which David established, was to obtain a frank response from Miklos Zagoni to discrepancies Nick and I had identified between Miskolczi’s all sky global S_T and all sky global tau (not to be confused with the confusing range of ‘clear sky’ equivalents presented by Miskolczi and Zagoni in recent years) and the ‘consensual’ values; and

      (3) Since 26 April, when David had an initial response from Zagoni which (a) avoided more questions than it answered, (b) made the erroneous claim that all current global energy balance data rested only on K&T97 (!), and (c) asked for specific locations in T,F&K08 where the accepted ‘consensual’ S_T = 40 W/m^2 could be identified – to which I replied giving precise details – we have had no further response from Zagoni, despite him issuing a YouTube video which repeated the same old claims which appeared in his Newcastle presentation,

      I don’t fancy your chances.

      Despite the terrible attractiveness of aspects of Miskolczi Theory to me (and evidently to some other climate change sceptics), I have to admit that to this day it still appears to contain significant aspects of old-fashioned ‘smoke and mirrors’ obfuscation, and those who cannot or will not recognise that are themselves in denial…

      The proof of a pudding is in the eating. If this theory had the ability to fly, it would have been re-presented at the 2nd Heartland Conference and would to this day be rising and rising – not sinking and sinking, perpetuated only by Zagoni’s promotion (noting Miskolczi himself has retired back into petulant silence, as is his wont).

      1. “noting Miskolczi himself has retired back into petulant silence, as is his wont”

        That is why I gave up on M’s stuff, until such time as the author comes to his OWN rescue.

    2. David,
      In the Trenberth et al. paper, the net radiative imbalance is listed as 0.9 W/m^2, which is 0.05 more than Hansen et al. 2005, based on 1993-2003 data.

      From what I can tell, the bone of contention is OHC, which has much uncertainty even with the ARGO system in place. OHC has more variability than SST!! There is not much agreement in this field. Levitus 2009 confirms this.

      Wouldn’t a time period from 2003-2008 be more informative for understanding the current state of Earth’s energy budget, given that, regardless of the uncertainty, there arguably has been a reduction in OHC since 2003? Why 2000-2004? It doesn’t make sense.

  7. Steve, given the “terrible attractiveness” of certain aspects of Miskolczi, which parts are salvageable? And by that I mean not only those parts which have theoretical coherence with [any] data, but those parts which could be verified by further empirical studies?

    1. Hi Anthony

      What I think may be salvageable from Miskolczi is the so-called virial rule S_U = 2E_U. I can find no refutation of that empirical relation in the modern literature. Clearly this fact (?) can be verified by further empirical studies. If so, this may save a small bit of the edifice Miskolczi constructed.

      Other than that I am now pessimistic as I agree with Nick that Eqn 7 is a complete shambles.

      If only Miskolczi had had the brains to follow through with the consequences that the real all sky global S_T is down around 30 – 40 W/m^2 (giving a tau around 2.5), i.e. not the ~60 W/m^2 he (for some utterly obscure reason) erroneously insists it is, and that his K parameter is not just something there to pay lip service to latent heat transfer to the clouds, and hence by definition MUST lead to some LW IR departing at TOA that is distinctly separate from S_T, then something might have come of all this.
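
      The arithmetic behind the tau figures quoted in this thread is just tau = -ln(S_T/S_U). A quick sketch, assuming S_U ≈ 396 W/m^2 (the TF&K-style surface emission) and the S_T values discussed above:

```python
import math

S_U = 396.0  # global mean surface LW emission, W/m^2 (TF&K-style value)

# Effective LW optical depth implied by a given window/transmitted flux S_T
for S_T in (30.0, 35.0, 40.0, 60.0):
    tau = -math.log(S_T / S_U)
    print(f"S_T = {S_T:4.0f} W/m^2  ->  tau = {tau:.2f}")
```

      An all sky S_T in the 30 – 40 W/m^2 range gives tau ≈ 2.3 – 2.6 (hence “around 2.5”), while ~60 W/m^2 gives tau ≈ 1.9 (Miskolczi’s quoted 1.87 corresponds to S_T ≈ 61 W/m^2).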

      1. I take M’s “empirical” results with a big grain of salt, because he just doesn’t describe properly what he does.

        But even so, his empirical correlation of S_U and E_U is surprisingly poor. The points are quite scattered. And the regression line that he has obtained is not S_U=2E_U. It doesn’t pass through the origin.

  19. David, in the Trenberth et al paper the net radiative imbalance is listed as 0.9 W/m^2, which is 0.05 more than Hansen et al 2005, based on 1993-2003 data.

  20. Sorry I have been off air finishing a paper that I just sent off. It seems to me that the G=0.33 value that is linked to tau etc would be a very strange coincidence if it had that value purely by chance. If this is what the standard theory says, and what M says is no, it’s not that by chance, there are constraint(s), then well and good. Eqn 7 has always been my major concern, as it’s a conditional equality where the conditions for equality are not specified. Z says it’s where G is maximally developed, but for that to be true you have to accept the optimality calculations that assume a constant ground temperature.

    I like Steve’s approach as the devil is in the details, but I am afraid I don’t have the time to go into them in detail.

  23. Thanks Steve; a couple of queries; firstly, is eqn 7 valid in any way for clear sky values? The reason I ask this is your earlier comment that M’s clear sky mean tau of 1.87 is reasonable. The problem is with the all-sky values; that is, clouds. In this respect I note you say the K value must contribute to the OLR in a way which is separate from the window, S_T. Mike Hammer has a paper which looks at this; he states:

    “The implication is that thermal energy from the surface can escape to space in only two ways. First, by surface emission escaping directly to space at wavelengths which the greenhouse gases do not absorb. Second, by emission from the tropopause at wavelengths corresponding to the water vapour absorption/emission lines”

    If high water/cloud is declining (a controversial point, with Dessler and crew all modeling that it isn’t while casting aspersions on the NCEP data, especially the pre-1973 data [and ignoring the recent Paltridge et al paper]), then that addition to S_T will be a result of the decline in high cloud. If S_T plus the extra OLR at water wavelengths can reach a figure of ~60 W/m^2, would that then give the all-sky mean tau a fighting chance of being 1.87 as well?

    1. Hi Anthony. I hope my last two posts adequately provide my response to your questions to me? Do you have the references-for (or pdf copies-of) the recent Mike Hammer and Paltridge et al. papers? Thanks.

  25. I’d like to comment on what I believe M has accomplished. He has identified the proper boundary conditions for IR radiative calculations. The “Atmospheric Kirchoff’s Law” equivalence of Aa and Ed is new and it eliminates the surface temperature discontinuity that could directly result in modeling errors if not included. This is, or should be considered, an advance.

    To include the effect of clouds, he models them as though they are all at 2 km altitude and effectively transparent to IR. “Clouds at around 2 km altitude have minimal effect on the LW energy balance and seem to regulate the SW absorption of the system by adjusting the effective cloud cover beta.” (M, p.19) That is, more surface OLR -> more low level clouds -> cooling, i.e., negative feedback. This is certainly not unheard of in the annals of climatology.

    1. What are “the proper boundary conditions for IR radiative calculations”?

      “it eliminates the surface temperature discontinuity” What discontinuity? How?

    2. “I’d like to comment on what I believe M has accomplished. He has identified the proper boundary conditions for IR radiative calculations. The “Atmospheric Kirchoff’s Law” equivalence of Aa and Ed is new and it eliminates the surface temperature discontinuity that could directly result in modeling errors if not included. This is, or should be considered, an advance.”

      This is, of course, complete and utter drivel. None of these assertions has been proven anywhere in the total literature, including in M&M04 and M07, because the global all sky mean S_T has never been proven to be ~60 W/m^2. If anything, in excess of about 20 papers show the global all sky mean S_T to be about 30 – 40 W/m^2, as I have comprehensively shown above (e.g. ERBE and CERES averages around 31±10 W/m^2).

      What Miskolczi (and Zagoni) has done is to try to fool people who don’t read the literature into thinking that data from various clear sky studies can somehow be generalized to the all sky situation, by completely wrapping into his S_T term the fraction of LW IR emission (at the water emission lines) from the tops of clouds (deriving originally from surface evapotranspiration (ET) and the resulting release of latent heat) which escapes the tropopause and contributes to OLR. Put simply, this was a mistake.

      In the average (global all sky; cloud cover ~60%) this emission is ~30 W/m^2, about 30/80 = 0.375 of the total latent heat release – a proportion which is governed by relatively simple geometric considerations and hence remains a relatively constant fraction of the (variable) ET.

      In the clear sky situation there are no clouds and negligible ET, and hence the poor neglected Miskolczi K term reduces from ~97 W/m^2 (17 for dry thermals + 80 for ET) to 17 W/m^2. There are no clouds to block IR transmission, so S_T rises from ~31±10 W/m^2 (ERBE and CERES studies) to the clear sky ~61±10 W/m^2. The error bars I quote here are my approximations to one standard deviation.

      Obviously it is possible to imagine a continuum of cloud covers from 100% through 60% (global average) to 0% (clear sky) because, hey, that is what happens!

      I fail to see why we should imagine only the cloud cover range from 0% to 60%, as Miskolczi would have us do, while he even fudges values of S_T near the global mean cloud cover of 60% as well!

      In my view, at 100% cloud cover ET rises to about (100/60) x 80 = 133 W/m^2 and hence the emission from clouds which escapes the tropopause and contributes to OLR should be about 0.375 x 133 = 50 W/m^2.

      Under these circumstances S_T should decline to ~31 – 10 – 10 = 11 W/m^2 and hence tau ~ -ln(11/396) = 3.58!

      However, in compensation for the much reduced S_T, the sum of the emission from clouds which contributes to OLR PLUS the S_T which contributes to OLR thus = 50 + 11 = 61 W/m^2 i.e. it remains constant!

      So, I now give you not Miskolczi Theory but basic Short Theory which suggests the following:

      (1) The LW IR tau is NOT constant (and there is no reason in the wide world why it should be) but varies from a 100% cloud cover situation value of ~ 3.58 all the way down to a 0% cloud cover situation value of ~ – ln(61/396) = 1.87;

      (2) As S_T reduces with increasing cloud cover, the fraction of ET which is emitted from the tops of clouds and escapes the tropopause (let’s call that fraction ET_U) remains relatively constant @ ~0.375 = 3/8; and hence

      (3) the sum of the LW IR contributing to OLR which is emitted both from the surface (S_T) and from the tops of the clouds (ET_U) remains constant @ ~61 W/m^2 and hence remains a constant ~25% of OLR.

      (4) There is no reason why A_A should = E_D but the remainder of OLR – (ET_U + S_T) = ‘the real E_U’ clearly also remains ~75% of OLR. In the next instalment (we all gotta eat) I’ll dissect this ‘real E_U’.

      1. Something very roughly like the following (assuming constant F, S_U and OLR across all cloud covers – which is not strictly true of course):

        Assumptions:
        S_U = 396 = constant
        ET = Evapotranspiration
        ET_U = LW IR emitted to OLR by clouds (as above) = 0.375ET
        OLR – (ET_U + S_T) = ‘the real E_U’ denoted rE_U
        Old E_U = ET_U + rE_U by definition
        A_A = S_U – S_T (by definition and as per Miskolczi)
        DT = Dry Thermals ~ 17 @ 60% cloud cover
        K = ET + DT (by definition and as per Miskolczi)
        F = absorbed SW radiation (as per Miskolczi)
        E_D ~ 0.625(ET + DT) + 0.5F + 0.625A_A on the grounds that: (a) SW is absorbed throughout the entire atmosphere but (b) LW IR is absorbed below the clouds.

        Cloud%, Tau, S_T, ET, ET_U, DT, rE_U, A_A, E_D, Old E_U
        100, 3.58, 11, 133, 50, 0, 178, 385, 362, 228
        80, 2.94, 21, 107, 40, 8, 178, 375, 345, 218
        60, 2.55, 31, 80, 30, 17, 178, 365, 328, 208
        40, 2.27, 41, 53, 20, 23, 178, 355, 288, 198
        20, 2.05, 51, 27, 10, 29, 178, 345, 302, 188
        0, 1.87, 61, 0, 0, 34, 178, 335, 270, 178

        Note: S_U does not = 2 x old E_U except around 40% cloud cover.

        A_A does not = E_D (not required).

        LW IR Tau is not constant (not required).

        LW IR homeostasis arises through the constancy of the sum (ET_U + S_T), the constancy of rE_U, and the proportionality of OLR to the incoming F_o.

        This approach unifies the roles of the important K and F terms (which Miskolczi essentially ignored) into the all sky global energy balance/framework.

        It shows how and why they are critical to a concept of low CO2 sensitivity ‘homeostasis’ applying across the full range of cloud covers (which of course applies naturally), in line with Lindzen’s concept of negative cloud forcing (iris).

        At least for LW IR and the role of ET and clouds, this is the core issue which IMO Miskolczi singularly failed to address (despite a massive smoke screen).

        Miskolczi’s so-called ‘constant tau’, so-called elimination of the surface/atmosphere temperature discontinuity and the so-called Atmospheric Kirchoff Law were all irrelevant distractions.
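
        For anyone wanting to play with the numbers, the main columns of the table above follow from a handful of linear rules. This is a sketch of my reading of those rules, assuming S_U = 396 W/m^2 and OLR = 239 W/m^2; the DT and E_D columns are left out since they do not follow a single stated formula:

```python
import math

S_U = 396.0  # assumed constant surface LW emission, W/m^2
OLR = 239.0  # implied by rE_U = 178 with ET_U + S_T = 61 W/m^2

def row(cloud_pct):
    # linear scalings read off the table above
    ET   = 80.0 * cloud_pct / 60.0   # evapotranspiration, anchored at 80 @ 60% cloud
    ET_U = 0.375 * ET                # cloud-top emission escaping to OLR (3/8 of ET)
    S_T  = 61.0 - 0.5 * cloud_pct    # window emission, 61 (clear) down to 11 (overcast)
    tau  = -math.log(S_T / S_U)      # effective LW optical depth
    A_A  = S_U - S_T                 # absorbed surface emission
    rE_U = OLR - (ET_U + S_T)        # 'the real E_U'
    old_E_U = ET_U + rE_U
    return (cloud_pct, round(tau, 2), round(S_T), round(ET),
            round(ET_U), round(A_A), round(rE_U), round(old_E_U))

print("Cloud%, Tau, S_T, ET, ET_U, A_A, rE_U, Old E_U")
for c in (100, 80, 60, 40, 20, 0):
    print(row(c))
```

        Note that rE_U comes out at 178 for every row, and old E_U hits 198 = S_U/2 at 40% cloud cover, matching the notes above.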

    3. Hi Pochas,

      If you were able to support this claim with reference to the historical literature of radiative transfer it would greatly help Ferenc’s case.

      Unfortunately I have reviewed the literature in the hope that I could bolster this case but found instead that all the evidence goes against the Miskolczi storyline.

      The issue as I see it is that Ferenc has written (M07, p. 13):

      “…There were several attempts to resolve the above deficiencies by developing simple semi-empirical spectral models, see for example Weaver and Ramanathan (1995), but the fundamental theoretical problem was never resolved. The source of this inconsistency can be traced back to several decades ago, when the semi-infinite solution was first used to solve bounded atmosphere problems. About 80 years ago Milne stated: “Assumption of infinite thickness involves little or no loss of generality”, and later, in the same paper, he created the concept of a secondary (internal) boundary (Milne, 1922). He did not realize that the classic Eddington solution is not the general solution of the bounded atmosphere problem and he did not re-compute the appropriate integration constant. This is the reason why scientists have problems with a mysterious surface temperature discontinuity and unphysical solutions, as in Lorenz and McKay (2003)…”

      The problem with all this is that the temperature discontinuity originated earlier in the work of Robert Emden in 1913 so it obviously can’t really be the result of any error that either Milne or Eddington made later.

      Emden, R. 1913: Über Strahlungsgleichgewicht und atmosphärische Strahlung. Sitz. d. Bayerischen Akad. d. Wiss., Math. Phys. Klasse, 55.

      A partial translation and early commentary for this can be found in Bateman 1916 here: http://docs.lib.noaa.gov/rescue/mwr/044/mwr-044-08-0450.pdf

      Now throughout history it seems that temperature discontinuity has been derived by many people in many situations but in all instances that I have been able to find it is always derived from first principles with an acknowledgement to Emden as the first to discover it.

      One example is the astrophysicist Jeremiah Ostriker, who studied the problem in the 1960s.

      Ostriker, J. P. [1963], Radiative Transfer in a Finite Gray Atmosphere, Astrophysical Journal, 138, 281-290, here: http://adsabs.harvard.edu/full/1963ApJ...138..281O

      On p. 284 he concludes:

      “…we see that there will be a discontinuity in the temperature between the lowest layer of the atmosphere (T1) and the ground (Tb). This discontinuity, though somewhat reduced, persists in higher approximations and in the exact solution; in an early paper on a similar problem, Emden (1913) found the same type of discontinuity.”

      Now it seems perfectly clear that when Ostriker discusses the work of Eddington he knows perfectly well that he’s dealing with a semi-infinite approximation — yet he still derives the temperature discontinuity by another route.

      So the question is what has Ostriker done wrong then? He clearly knows the atmosphere is bounded, he’s clearly not following Milne at all (he doesn’t even cite Milne), his work follows from first principles, he seems to know to keep that exponential term, and yet he derives a temperature discontinuity.

      If people still want to defend the Miskolczi theory, someone has got to explain this to us.

      1. Well that certainly answered Nick Stokes question about “what discontinuity” at least. I trust he’ll stop repeating it now.

      2. No such luck. I jumped on Pochas because he said “the surface temperature discontinuity that could directly result in modeling errors” and I’ve been saying over and over – this has nothing to do with modelling. No-one uses it. People here have very fuzzy ideas on this. The first-order ODE approximation that leads to this “discontinuity” is elementary. It’s now derived as a reduction from more exact theory, but it’s likely that people thought of it independently in the 19th C. I’m not surprised at Alex’s discovery. It’s just what you get if you assume the transport is a constant energy stream and entirely radiative. There’s nothing semi-infinite about it. It just works from the top down, until it doesn’t.

        As I’ve said over and over, the “discontinuity” arises because you only have one free parameter, eg the constant flux, and you have to match a boundary condition at the top of the atmosphere, where the assumption really is true. Thereafter, the approx works as long as the condition holds. It breaks down when other fluxes become important.

        FM’s variant is nonsense. You can’t force it to satisfy another condition without sacrificing the one that is really needed, at TOA. Which he does.
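
        For readers trying to follow this exchange: the “discontinuity” can be reproduced in a few lines from the standard two-stream gray atmosphere in radiative equilibrium (the Schwarzschild/Emden/Milne setup Nick describes). The numbers below are purely illustrative: F is taken as roughly Earth’s OLR and tau_s as the 1.87 discussed in this thread.

```python
# Two-stream gray atmosphere in radiative equilibrium, worked top-down:
# U - D = F everywhere, and applying the TOA condition D = 0 at tau = 0 gives
#   U + D = F*(tau + 1)  =>  sigma*T_air^4 = F*(tau + 1)/2,
# while the ground must supply U(tau_s) = F*(tau_s + 2)/2.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
F     = 239.0     # constant net upward LW flux, ~Earth's OLR (illustrative)
tau_s = 1.87      # total LW optical depth of the column (illustrative)

def temperature(sigma_T4):
    return (sigma_T4 / SIGMA) ** 0.25

T_air_at_ground = temperature(F * (tau_s + 1) / 2)   # air in contact with the ground
T_ground        = temperature(F * (tau_s + 2) / 2)   # the ground itself

print(f"air just above surface: {T_air_at_ground:.1f} K")
print(f"ground:                 {T_ground:.1f} K")
print(f"jump:                   {T_ground - T_air_at_ground:.1f} K")
```

        The flux jump is exactly F/2 regardless of tau_s; this is the finite surface/air temperature difference Emden found in 1913 and Ostriker recovered in 1963, and it appears even though the TOA boundary condition is satisfied.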

      3. Nick,

        I am not sure what I’ve discovered to be honest.

        I should have added that Milne 1922 contains a discussion of Emden 1913, and Milne claims in there that he only became aware of Emden’s paper after he’d already finished a draft of his own. However Milne doesn’t seem to discuss Emden’s temperature discontinuity. So it is not impossible that Milne merely carried forward an earlier mistake made by Emden or even Schwarzschild.

        You may be interested to read this post here:

        http://complexclimate.blogspot.com/2008/09/even-more-complex-answer.html

        Quote: “But why did their simple model not work? The reason is that a mistake was made by a giant of astrophysics Robert Emden. In 1913 he applied the equations for internal solar radiation derived by his brother in law, another giant of astrophysics Karl Schwarzschild, to the earth’s atmosphere. The radiation scheme is based on Schwarzschild’s equation. However, we have a planetary atmosphere not a stellar one!”

        Sounds like the familiar Miskolczi storyline with Milne replaced by Emden?

        So it’s not impossible that I’ve discovered nothing other than a minor historical inaccuracy. I suppose it’s not impossible that Ostriker is likewise following Emden in making an error.

        If so, someone who really understands the theory would be able to respond to this and say what Ostriker did wrong, or why it is not relevant.

      4. Well, I guess you have two storylines here, and need to decide which is more believable. One is that three giants of astrophysics somehow made a gross error in an elementary approximation, which was then taught to thousands of students over the years, and remained unnoticed until genius FM came along in 2007 and explained it all with his trademark clarity. The other is that the elementary approximation is just what it is, as explained so long ago, and was misunderstood by FM in 2007, as he has misunderstood so much other physics in his paper; and that a few bloggers in 2009, who don’t understand the alleged error at all, just won’t let it go.

      5. Hi Alex – Nice job of tracing back the literature (I didn’t know that Emden had worked this problem – he was famous for many other things).

        Now, having gone through these papers, you don’t really have to decide between Nick’s storylines; you can see for yourself if Miskolczi’s “solution” satisfies the TOA boundary conditions. [To do this, you need the relation between B and the upward and downward fluxes at tau = 0, in the Eddington approximation (upward flux = OLR; downward = 0). I imagine that these relations are given in Milne, Emden or Ostriker, or certainly in standard textbooks.] You will find that Nick is right; M’s solution fails.

        I once posted here something to the effect that nothing is ever settled on blogs. Perhaps, thanks to Nick, Steve Short and yourself, I will be proved wrong. I would be delighted if that were the case. (Thanks, of course, to our host, also.)

      6. Pat, what are the boundary conditions for a boundless integral?

        What are the boundary conditions for a (semi-infinite) Laplace Transform?

        Then kindly tell us why climatologists have found three: one for TOA and one for the surface.

      7. Correction.

        that’s two for the surface.

        Also tell us how, as you seem to imply, M’s boundary condition at TOA for tau=0 differs from the rest?

  26. I'd like to comment on what I believe M has accomplished. He has identified the proper boundary conditions for IR radiative calculations. The “Atmospheric Kirchoff's Law” eqivalence of Aa and Ed is new and it eliminates the surface temperature discontinuity that could directly result in modeling errors if not included. This is, or should be considered, an advance. To include the effect of clouds, he models them as though they are all at 2 km altitude and effectively transparent to IR. “Clouds at around 2 km altitude have minimal effect on the LW energy balance and seem to regulate the SW absorption of the system by adjusting the effective cloud cover beta.” (M, p.19) That is, more surface OLR -> more low level clouds -> cooling, i.e., negative feedback. This is certainly not unheard of in the annals of climatology.

  27. I take M's “empirical” results with a big grain of salt, because he just doesn't describe properly what he does.But even so, his empirical correlation of S_U and E_U is surprisingly poor. The points are quite scattered. And the regression line that he has obtained is not S_U=2E_U. It doesn't pass through the origin.

  28. What are “the proper boundary conditions for IR radiative calculations”?”it eliminates the surface temperature discontinuity” What discontinuity? How?

  29. “I'd like to comment on what I believe M has accomplished. He has identified the proper boundary conditions for IR radiative calculations. The “Atmospheric Kirchoff's Law” eqivalence of Aa and Ed is new and it eliminates the surface temperature discontinuity that could directly result in modeling errors if not included. This is, or should be considered, an advance.”This, of course, complete and utter drivel. None of these assertions have been proven anywhere in the total literature, including in M&M04 and M07 because the global all sky mean S_T has never been proven to be ~60 W/m^2. If anything, in excess of about 20 papers show the global all sky mean S_T to be about 30 – 40 W/m^2 as I have comprehensively shown above (e.g. ERBE and CERES averages around 31±10 W/m^2).. What Miskolczi (and Zagoni) has done is to try to fool people who don't read the literature that data from various clear sky studies can somehow be generalized to the all sky situation by completely wrapping the fraction of LW IR emission (@ water emission lines) from the tops of clouds (deriving originally from surface evapotranspiration (ET) and resulting from release of latent heat) which escapes the tropopause and contributes to OLR into his S_T term. Put simply this was a mistake. In the average (global all sky; cloud cover ~60%) ) this emission is ~30 W/m^2 and is about 30/80 = 0.375 of the total latent heat release – a proportion which is governed by relatively simple geometric considerations and hence remains a relatively constant fraction of the (variable) ET.In the clear sky situation there is no clouds, negligible ET and hence the poor neglected Miskolczi K term reduces from ~97 W/m^2 (17 for dry thermals + 80 for ET) to 17 W/m^2. There are no clouds to block IR transmission so S_T rises from ~31±10 W/m^2 (ERBE and CERES studies) to the clear sky ~61±10 W/m^2. 
The error bars I quote here are my approximations to the one standard deviations.Obviously it is possible to imagine a continuum of cloud covers from 100% through 60% (global average) to 0% (clear sky) because, hey, that is what happens!I fail to see why we should only imagine the cloud cover range from 0% to 60% which Miskolczi would both have us do and even fudge values of S_T near the global mean cloud cover of 60% as well! In my view, at 100% cloud cover ET rises to about (100/60) x 80 = 133 W/m^2 and hence the emission from clouds which escapes the tropopause and contributes to OLR should be about 0.375 x 133 = 50 W/m^2. Under these circumstances S_T should decline to ~31 – 10 – 10 = 11 W/m^2 and hence tau ~ -ln(11/396) = 3.58! However, in compensation for the much reduced S_T, the sum of the emission from clouds which contributes to OLR PLUS the S_T which contributes to OLR thus = 50 + 11 = 61 W/m^2 i.e. it remains constant!So, I now give you not Miskolczi Theory but basic Short Theory which suggests the following: (1) The LW IR tau is NOT constant (and there is no reason in the wide world why it should be) but varies from a 100% cloud cover situation value of ~ 3.58 all the way down to a 0% cloud cover situation value of ~ – ln(61/396) = 1.87;(2) As S_T reduces with increasing cloud cover, the fraction of ET which is emitted from the tops of clouds and escapes the tropopause (let's call that fraction ET_U) remains relatively constant @ ~0.375 = 3/8; and hence(3) the sum of amount of LW IR contributing to OLR which is emitted both from the surface (S_T) and from the tops of the clouds (ET_U) remains constant @ ~61 W/m^2 and hence remains a constant ~25% of OLR.(4) There is no reason why A_A should = E_D but the remainder of OLR – (ET_U + S_T) = 'the real E_U' clearly also remains ~75% of OLR. In the next instalment (we all gotta eat) I'll dissect this 'real E_U'.

  30. Something very roughly like the following (assuming constant F, S_U and OLR across all cloud covers – which is not strictly true of course):Assumptions:S_U = 396 = constantET = EvapotranspirationET_U = LW IR emitted to OLR by clouds (as above) = 0.375ETOLR – (ET_U + S_T) = 'the real E_U' denoted rE_UOld E_U = ET_U + rE-U by definitionA_A = S_U – S_T (by definition and as per Miskolczi)DT = Dry Thermals ~ 17 @ 60% cloud coverK = ET + DT (by definition and as per Miskolczi)F = absorbed SW IR (as per Miskolczi)E_D ~ 0.625(ET + DT) + 0.5F + 0.625A_A on the grounds: (a) SW IR absorbed throughout entire atmosphere but(b) LW IR absorbed below the clouds.Cloud%, Tau, S_T, ET, ET_U, DT, rE_U, A_A, E_D, Old E_U100, 3.58, 11, 133, 50, 0, 178, 385, 362, 22880, 2.94, 21, 107, 40, 8, 178, 375, 345, 218 60 2.55, 31, 80, 30, 17, 178, 365, 328, 208 40, 2.27, 41, 53, 20, 23, 178, 355, 288, 198 20, 2.05, 51, 27, 10, 29, 178, 345, 302, 1880, 1.87, 61, 0, 0, 34, 178, 335, 270, 178,Note: S_U does not = 2 x old E_U except around 40% cloud cover. A_A does not = E_D (not required).LW IR Tau is not constant (not required).LW IR homeostasis arises through constancy of sum of (ET_U + S_T) and constancy of rE_U and proportionality of OLR to incoming FoThis approach unifies the roles of the important K and F terms (which Miskolczi essentially ignored) into the all sky global energy balance/framework. It shows how and why they are critical to a concept of low CO2 sensitivity 'homeostasis' applying across the full range of cloud covers (which of course applies naturally), in line with Lindzen's concept of negative cloud forcing (iris). At least for LW IR and the role of ET and clouds, this is the core issue which IMO Miskolczi singularly failed to address (despite a massive smoke screen). Miskolczi's so-called 'constant tau', so-called elimination of the surface/atmosphere temperature discontinuity and the so-called Atmospheric Kirchoff Law were all irrelevant distractions.

  31. Hi Anthony. I hope my last two posts adequately provide my response to your questions to me? Do you have the references-for (or pdf copies-of) the recent Mike Hammer and Paltridge et al. papers? Thanks.

  32. Hi Pochas, if you were able to support this claim with reference to the historical literature of radiative transfer it would greatly help Ferenc's case. Unfortunately I have reviewed the literature in the hope that I could bolster this case, but found instead that all the evidence goes against the Miskolczi storyline.

    The issue as I see it is that Ferenc has written (M07, p. 13):

    "…There were several attempts to resolve the above deficiencies by developing simple semi-empirical spectral models, see for example Weaver and Ramanathan (1995), but the fundamental theoretical problem was never resolved. The source of this inconsistency can be traced back to several decades ago, when the semi-infinite solution was first used to solve bounded atmosphere problems. About 80 years ago Milne stated: "Assumption of infinite thickness involves little or no loss of generality", and later, in the same paper, he created the concept of a secondary (internal) boundary (Milne, 1922). He did not realize that the classic Eddington solution is not the general solution of the bounded atmosphere problem and he did not re-compute the appropriate integration constant. This is the reason why scientists have problems with a mysterious surface temperature discontinuity and unphysical solutions, as in Lorenz and McKay (2003)…"

    The problem with all this is that the temperature discontinuity originated earlier, in the work of Robert Emden in 1913, so it obviously can't really be the result of any error that either Milne or Eddington made later.

    Emden, R 1913: Über Strahlungsgleichgewicht und atmosphärische Strahlung. Sitz. d. Bayerische Akad. d. Wiss., Math. Phys. Klasse, 55.

    A partial translation and early commentary for this can be found in Bateman 1916 here: http://docs.lib.noaa.gov/rescue/mwr/044/mwr-044…

    Now throughout history it seems that the temperature discontinuity has been derived by many people in many situations, but in all instances that I have been able to find it is derived from first principles, with an acknowledgement to Emden as the first to discover it. One example is the astrophysicist Jeremiah Ostriker, who studied the problem in the 1960s.

    Ostriker, J. P. [1963], Radiative Transfer in a Finite Gray Atmosphere, Astrophysical Journal, 138, 281-290, here: http://adsabs.harvard.edu/full/1963ApJ…138..281O

    On p. 284 he concludes:

    "…we see that there will be a discontinuity in the temperature between the lowest layer of the atmosphere (T1) and the ground (Tb). This discontinuity, though somewhat reduced, persists in higher approximations and in the exact solution; in an early paper on a similar problem, Emden (1913) found the same type of discontinuity."

    Now it seems perfectly clear that when Ostriker discusses the work of Eddington he knows perfectly well that he's dealing with a semi-infinite approximation, yet he still derives the temperature discontinuity by another route.

    So the question is: what has Ostriker done wrong, then? He clearly knows the atmosphere is bounded, he's clearly not following Milne at all (he doesn't even cite Milne), his work follows from first principles, he seems to know to keep that exponential term, and yet he derives a temperature discontinuity.

    If people still want to defend the Miskolczi theory, someone has got to explain this to us.
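
    The classical discontinuity that Emden and Ostriker derive can be illustrated in a few lines. This is my own minimal sketch of the textbook gray, purely radiative-equilibrium two-stream model (hemispheric closure, no diffusivity factor), not a reproduction of any cited paper's calculation; it shows that the jump in sigma*T^4 at the ground is OLR/2 for every finite optical depth.

```python
# Minimal sketch (mine, not from any of the cited papers) of the classical
# surface discontinuity in a gray atmosphere in pure radiative equilibrium.
# Hemispheric two-stream assumptions: up/down fluxes U, D with constant net
# flux F = U - D = OLR, and D = 0 at tau = 0 (no LW from space). These give
# pi*B(tau) = F * (1 + tau) / 2 and D(tau) = F * tau / 2, so a ground in
# balance emits sigma*Tg^4 = F + D(tau*) = F * (1 + tau*/2). The jump
# sigma*Tg^4 - pi*B(tau*) = F/2 for every finite tau*. (Conventions with a
# diffusivity factor give 1 + 3*tau/2 instead, but the same F/2 jump.)
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_jump(F, tau_star):
    piB_air = F * (1.0 + tau_star) / 2.0       # emission of air at the ground
    sigT4_ground = F * (1.0 + tau_star / 2.0)  # ground energy balance
    T_air = (piB_air / SIGMA) ** 0.25
    T_ground = (sigT4_ground / SIGMA) ** 0.25
    return T_ground, T_air, sigT4_ground - piB_air

for tau in (0.5, 1.87, 4.0):
    Tg, Ta, jump = surface_jump(239.0, tau)
    print(f"tau*={tau}: ground={Tg:.1f} K, air={Ta:.1f} K, jump={jump:.1f} W/m^2")
```

    With OLR = 239 W/m^2 the jump is always 119.5 W/m^2 in sigma*T^4 terms (roughly 20 K at tau* ~ 1.9), independent of the total optical depth, which is exactly why it cannot be blamed on the semi-infinite assumption.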

  33. Well, that certainly answered Nick Stokes' question about “what discontinuity”, at least. I trust he'll stop repeating it now.

  34. No such luck. I jumped on Pochas because he said “the surface temperature discontinuity that could directly result in modeling errors”, and I've been saying over and over: this has nothing to do with modelling. No-one uses it. People here have very fuzzy ideas on this.

    The first order ODE approximation that leads to this “discontinuity” is elementary. It's now derived as a reduction from more exact theory, but it's likely that people thought of it independently in the 19th C. I'm not surprised at Alex's discovery. It's just what you get if you assume the transport is a constant energy stream and entirely radiative. There's nothing semi-infinite about it. It just works from the top down, until it doesn't.

    As I've said over and over, the “discontinuity” arises because you only have one free parameter, e.g. the constant flux, and you have to match a boundary condition at the top of the atmosphere, where the assumption really is true. Thereafter, the approximation works as long as the condition holds. It breaks down when other fluxes become important.

    FM's variant is nonsense. You can't force it to satisfy another condition without sacrificing the one that is really needed, at TOA. Which he does.

  35. Nick, I am not sure what I've discovered, to be honest.

    I should have added that Milne 1922 contains a discussion of Emden 1913, and Milne claims in there that he only became aware of Emden's paper after he'd already finished a draft of his own. However, Milne doesn't seem to discuss Emden's temperature discontinuity. So it is not impossible that Milne merely carried forward an earlier mistake made by Emden or even Schwarzschild.

    You may be interested to read this post here: http://complexclimate.blogspot.com/2008/09/even…

    Quote: “But why did their simple model not work? The reason is that a mistake was made by a giant of astrophysics Robert Emden. In 1913 he applied the equations for internal solar radiation derived by his brother in law, another giant of astrophysics Karl Schwarzschild, to the earth's atmosphere. The radiation scheme is based on Schwarzschild's equation. However, we have a planetary atmosphere not a stellar one!”

    Sounds like the familiar Miskolczi storyline with Milne replaced by Emden?

    So it's not impossible that I've discovered nothing other than a minor historical inaccuracy. I suppose it's not impossible that Ostriker is likewise following Emden in making an error. If so, someone who really understands the theory would be able to respond to this and say what Ostriker did wrong, or why it is not relevant.

  36. Well, I guess you have two storylines here, and need to decide which is more believable. One is that three giants of astrophysics somehow made a gross error in an elementary approximation, which was then taught to thousands of students over the years, and remained unnoticed until genius FM came along in 2007 and explained it all with his trademark clarity. Or: the elementary approximation is just what it is, as explained so long ago, and misunderstood by FM in 2007, as he has misunderstood so much other physics in his paper, and a few bloggers in 2009, who don't understand the alleged error at all, just won't let it go.

  37. “noting Miskolczi himself has retired back into petulant silence as is his wont.” That is why I gave up on M's stuff, until such time as the author comes to his OWN rescue.

  38. I don’t think you should read anything into people not commenting on a blog, particularly when they have work to do. Just appreciate it when they do put in the effort to explain and engage. For me, I look forward to the next installment from M, especially if it’s about Venus.


  40. Hi Alex – Nice job of tracing back the literature (I didn’t know that Emden had worked this problem – he was famous for many other things).Now, having gone through these papers, you don’t really have to decide between Nick’s storylines; you can see for yourself if Miskolczi’s “solution” satisfies the TOA boundary conditions. [To do this, you need the relation between B and the upward and downward fluxes at tau = 0, in the Eddington approximation (upward flux = OLR; downward = 0). I imagine that these relations are given in Milne, Emden or Ostriker, or certainly in standard textbooks.] You will find that Nick is right; M's solution fails.I once posted here something to the effect that nothing is ever settled on blogs. Perhaps, thanks to Nick, Steve Short and yourself, I will be proved wrong. I would be delighted if that were the case. (Thanks, of course, to our host, also.)

  41. Pat, what are the boundary conditions for a boundless integral? What are the boundary conditions for a (semi-infinite) Laplace Transform? Then kindly tell us why climatologists have found three: one for TOA and one for the surface.

  42. Correction: that's two for the surface. Also tell us how, as you seem to imply, M's boundary condition at TOA for tau = 0 differs from the rest?

  43. Steve has posted a more comprehensive theory for maintenance of LW IR homoeostasis than offered by M; in Steve's theory clouds are the deus ex machina; when clouds increase, S_T decreases but the total OLR remains constant because the low level clouds emit more through ET_U, thus bringing the non-radiative thermal energy K and the incoming SW F into the picture; in this way TAU may vary but the greenhouse effect doesn't. The key is still high water; if that increases then the low-level cloud effect, ET_U, will be blocked, so we are still at the cutting-edge Dessler/IPCC vs NCEP/Paltridge dispute.

    As to the boundary situation: Nick says only the TOA boundary is essential and indeed extant; I still don't see that that changes the necessity for the AGW model to be a semi-infinite one. AGW predicts a THS; a THS is a raising of the tropopause whereby the cooler stratosphere air is replaced by warmer upper troposphere air. As AGW claims, with more CO2 the higher the THS is, and the higher the CO2 has to go and the longer the CO2 has to wait before it can strike cold air to emit to space. Without wanting to go into the problematic existence of a THS, the point is that AGW is based on a semi-infinite atmospheric model.

    1. I’ve cross posted this from Jennifer Marohasy’s blog for those who don’t go there.

      Here is another run of the basic spreadsheet (slightly expanded to improve self-explanation) for my little model. This time I once again kept S_U = 396 W/m^2, OLR = 239 W/m^2, F=78 W/m^2 again all as per the T,F&K09 review (see the cartoon therein).

      However, I also forced rE_U (i.e. the real E_U) to be 169 as per T,F&K09 and I set S_T to average 40 at 60% cloud cover again as per T,F&K09 (rather than the 31±10 of the CERES and ERBE averages).

      All other assumptions were as listed previously including that again I assumed E_D~0.625(ET+DT)+0.5F+0.625A_A on the grounds I have previously explained above.

      Once again I set Dry Thermals (DT; convective sensible heat) to be 17 W/m^2 at 60% cloud cover but as before scaled DT to be 0 at 100% cloud cover and 34 at 0% cloud cover. This is a conservative assumption in that it tends to force my estimate of E_D towards A_A i.e. in the direction of Miskolczi’s so-called Atmospheric Kirchoff Law. One could just as easily run this assuming DT = 17 at all cloud covers (probably unlikely physically) and the outcomes would only be trivially different.

      Here are the results:

      %Cloud, Tau, S_T, ET, ET_U, DT, rE_U, A_A, E_D, oE_U, S_U/oE_U, A_A/E_D, S_U, OLR, F
      100 2.99 20 133 50 0 169 376 357 219 1.81 1.05 396 239 78
      80 2.58 30 107 40 8 169 366 340 209 1.89 1.08 396 239 78
      60 2.29 40 80 30 17 169 356 322 199 1.99 1.11 396 239 78
      40 2.07 50 53 20 23 169 346 303 189 2.10 1.14 396 239 78
      20 1.89 60 27 10 29 169 336 284 179 2.21 1.18 396 239 78
      0 1.73 70 0 0 34 169 326 264 169 2.34 1.23 396 239 78

      It can be seen that the real (LW IR) Tau ranges from 1.73 at full clear sky to 2.99 at full 100% cloud cover, and is 1.89, i.e. ~1.87, only at 20% cloud cover.

      The Miskolczi ‘Kirchoff Law’ test ratio A_A/E_D ranges from 1.05 at 100% cloud cover to 1.23 at clear sky.

      The Miskolczi ‘Virial Rule’ test ratio S_U/oE_U (i.e. S_U/old E_U) ranges from 1.81 at 100% cloud cover to 2.34 at clear sky. It is 1.99, i.e. ~2.00, only at 60% cloud cover, i.e. at the global all sky % cloud cover.

      I would be happy to send back to anyone, if they send an email to me, a copy of this little Excel spreadsheet so they can play around with it themselves. Your email address would not be recorded.

      You can then make up your own minds what this simple exercise tells you about:

      (1) the likely validity of the major tenets of Miskolczi Theory; and

      (2) the significance of LW IR by release of latent heat in clouds (water emission lines) which typically escapes to contribute to OLR (as a simple function of % cloud cover).
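
      As a quick check, the two test ratios quoted above can be recomputed directly from the tabulated rows (a minimal arithmetic sketch of mine, using only numbers copied from the table):

```python
# Quick arithmetic check (my sketch) of the two Miskolczi test ratios, using
# only numbers copied from the table above; S_U = 396 throughout.
S_U = 396.0
rows = {  # %cloud: (A_A, E_D, oE_U)
    100: (376.0, 357.0, 219.0),
    60:  (356.0, 322.0, 199.0),
    0:   (326.0, 264.0, 169.0),
}
for pct, (A_A, E_D, oE_U) in rows.items():
    # 'Kirchoff' test A_A/E_D and 'Virial' test S_U/oE_U
    print(f"{pct:3d}% cloud: A_A/E_D = {A_A / E_D:.2f}, S_U/oE_U = {S_U / oE_U:.2f}")
```

      Neither ratio is constant across cloud cover; A_A/E_D only approaches 1 at full cloud cover, and S_U/oE_U only approaches 2 near the observed ~60% global cloud cover.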

      1. Jan/Pat/Nick/David/Anthony

        I know this is not what Jan/Pat & Nick are currently (+ with déjà vu ;-) scrapping about (BTW ‘whacko’ is one of my favourite slang words) but I wonder if you guys would feel like having a look at my little Miskolczi-modifying (refuting?) spreadsheet above in the light of Table 1 (page 444) in Ozawa and Ohmura (1996), a paper I presume you guys are familiar with.

        I’m well on the way to modifying my spreadsheet model to have slightly different S_U values (and surface temperatures) for different (true) LW IR (and hence tau) values as per O&O96 Table 1 and intriguingly seem to be getting close to a situation which does actually maximize MEP along the lines of the (relatively simple) approach well-described in O&O96.

        Unfortunately I am in the middle of a really big work project at the moment (designing a hydromet plant for a magnesium production facility) and am having great difficulty concentrating on this stuff.

        Maybe none of you are interested in MEP, in which case please ignore this message. But I seem to be on the verge of something rather interesting, i.e. an atmospheric box model even simpler than Miskolczi's which gets around the dodgy Eqn 7, doesn’t require ‘Kirchoff’ or ‘Virial’, doesn’t need a constant tau, involves M’s K & F terms and provides an MEP-based basis for an inferred homeostasis.

        As it intimately involves a variable tau, convection and a surface S_U = S-B sigma*T^4 perhaps your interest might be piqued?

        Regards

      2. Steve, Interesting O&O as you say. I didn’t have a chance to follow-up on your references before. The proportionality of long-wave and short-wave optical depth assumption (between equations 2 and 3) would imply that an increase in optical depth due to increased GHGs would also increase SW optical depth, presumably due to increased water vapor and hence cloudiness.

        What would you suggest for a more recent follow-up to this model?

      3. Pauluis, OM and Held IM (2002a) Entropy budget of an atmosphere in radiative-convective equilibrium. Part I: maximum work and frictional dissipation. J. Atmos. Sci. 59: 125-139

        This is interesting because it concludes that moist convection (ET) behaves more as an atmospheric dehumidifier than as a heat engine.

        Pauluis OM, Held IM (2002b) Entropy budget of an a atmosphere in radiative-convective equilibrium. Part II: Latent heat transport and moist processes. J. Atmos. Sci. 59: 140-149

        Conclusion: frictional dissipation of atmospheric motions accounts for ~30% of total entropy production, frictional dissipation of falling rain ~12%, phase changes and diffusion of water vapor ~40%, with the remaining ~20% being uncertainties in the above.

      4. Hmmm, interesting, a 6 W/m^2 increase in OLR going to 100% cloud cover, a 9 W/m^2 decrease in OLR going to zero cloud cover.

        %Cloud, Tau, S_T, ET, ET_U, DT, rE_U, A_A, E_D, oE_U, S_U/oE_U, A_A/E_D, S_U, OLR, F, (ET+DT)/S_U
        100, 2.72, 26, 133, 50, 7, 169, 370, 371, 219, 1.81, 1.00, 396, 245, 78, 0.354
        80, 2.48, 33, 107, 40, 12, 169, 363, 353, 209, 1.89, 1.03, 396, 242, 78, 0.300
        60, 2.29, 40, 80, 30, 17, 169, 356, 335, 199, 1.99, 1.06, 396, 239, 78, 0.245
        40, 2.13, 47, 53, 20, 23, 169, 349, 317, 189, 2.10, 1.10, 396, 236, 78, 0.193
        20, 1.99, 54, 27, 10, 29, 169, 342, 300, 179, 2.21, 1.14, 396, 233, 78, 0.141
        0, 1.87, 61, 0, 0, 34, 169, 335, 281, 169, 2.34, 1.19, 396, 230, 78, 0.086

        This suggests a key issue to resolve is whether with increasing cloud cover the release in latent heat (in all directions) rises proportionately. Intuitively one would think so. After all, over large areas of cloud there is (presumably) always about the same probability that about the same proportion are condensing into rain.

        After O&O96 and P&H02 this also suggests that entropy production (EP) is a function of cloud cover, due to the fact that, as P&H02 suggest, moist convection accounts for ~40% of EP.
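
        In this run, unlike the earlier ones, OLR is the derived quantity: rE_U is held at 169 W/m^2 and OLR = rE_U + ET_U + S_T, which is why OLR rises with cloud cover here. A minimal sketch (my reading of the table, not the author's spreadsheet):

```python
# Sketch of the closure I read off the table above (not the author's
# spreadsheet): here rE_U is held at 169 W/m^2 and OLR is the derived
# quantity, OLR = rE_U + ET_U + S_T, so OLR varies with cloud cover.
rE_U = 169.0
for pct, S_T, ET_U in [(100, 26.0, 50.0), (60, 40.0, 30.0), (0, 61.0, 0.0)]:
    OLR = rE_U + ET_U + S_T
    print(f"{pct:3d}% cloud: OLR = {OLR:.0f} W/m^2")
```

        This reproduces the +6 W/m^2 at full cloud cover and the 230 W/m^2 at clear sky noted above.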


  47. Hi Jan –
    M’s solution fails to meet the TOA boundary conditions in the following sense:
    If one imposes the condition that the downward LW flux = 0 at tau = 0, as it must, the upward flux does not equal OLR. If one imposes the condition that the upward flux equals OLR, as it must, then the downward LW flux does not equal zero.

    The classical solution of Emden, Milne, et al. does satisfy these boundary conditions. (The two conditions are used to determine the two coefficients that are introduced by the Eddington approximation, which M is using.)

    I don’t understand your other questions.

    1. “If one imposes the condition that the upward flux equals OLR, as it must, then the downward LW flux does not equal zero.”

      I do not see why. If tau at TOA is zero there is no absorption and no re-emission, only transmission. I see no reason to assume that because transmission at TOA is non-zero there must also be a downward flux. I certainly see no such assumption made in M-07.

      “I don’t understand your other questions.”

      I didn’t think so.

      Put simply, semi-infinite means unbounded in one direction.
      If an integral is unbounded in one direction there is NO boundary, so why are there two in the classical theory?

      1. Jan says: “I see no reason to assume that because transmission at TOA is non zero that there must also be a downward flux”

        Right. The point is that there really is no downward LW flux at TOA, but M’s solution (incorrectly) gives a non-zero downward flux, if you demand that the upward flux is OLR, as it must be.

        The Eddington approximation, which M uses to derive eqn. 11 and the following, introduces coefficients that specify the radiation field in terms of the angular distribution of the specific intensity. Boundary conditions at tau = 0 are used to determine these coefficients. M doesn’t talk about any of this, but he cannot legitimately ignore these boundary conditions. A nice feature of eqn 11, etc., is that the spatial variable (height) is effectively replaced by the optical depth tau, which is well-behaved (goes to zero) at “infinite” height (TOA). So, although there is “no boundary” in space, there is an optical depth boundary.
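
        Pat's last point, that the unbounded height coordinate maps onto a bounded optical-depth coordinate, can be illustrated with an assumed exponential absorber profile (a sketch of mine; the total depth of 1.87 and the 2 km scale height are purely illustrative values, not anyone's measured numbers):

```python
# Illustration of the point above: height z is unbounded, but optical depth
# measured down from TOA is bounded. Assumes an exponential absorber profile;
# the total depth (1.87) and the 2 km scale height are purely illustrative.
import math

TAU_TOTAL = 1.87   # illustrative total LW optical depth
H = 2000.0         # illustrative absorber scale height, metres

def tau_of_z(z_metres):
    # optical depth integrated from the top of the atmosphere down to height z
    return TAU_TOTAL * math.exp(-z_metres / H)

for z in (0.0, 2000.0, 20000.0, 100000.0):
    print(f"z = {z:8.0f} m -> tau = {tau_of_z(z):.6f}")
```

        So although z has no upper bound, tau lives on the bounded interval (0, 1.87], and a condition "at tau = 0" is a perfectly good boundary condition.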

      2. Pat “Right. The point is that there really is no downward LW flux at TOA, but M’s solution (incorrectly) gives a non-zero downward flux,”

        Where?

        “if you demand that the upward flux is OLR, as it must be.”

        Why?

        A non-zero OLR says nothing about downward LW at TOA.

        I asked you why; you haven’t answered.

        Your last paragraph seems confused in the extreme. M does not ignore the upper boundary, and the boundary conditions that I’m complaining about are not the one at the optical boundary where tau = 0 but the two where the equations are unbounded at tau = infinity.

  52. Jan –
    Where?
    Not in his paper; you have to calculate it yourself.

    Why?
    Why what? Why must the upward LW radiation be OLR? Conservation of energy, and the definition of OLR. Why does OLR at TOA demand (by M’s solution) a non-zero downward flux at TOA? Because that’s what his incorrect “solution” gives. Why do I know this? Because I calculated it. (As Nick implies above, it’s obvious.)

    “a non zero OLR says nothing about downward LW at TOA”
    Right. Downward LW at TOA is zero (in a proper solution) because there’s no LW radiation from space.

    “M does not ignore the upper boundary”
      You’re right, he claims to deal with it in Appendix B. He just gets it wrong.

    You have a problem with the lower boundary. I don’t.

    Jan, I’m sorry, there’s so much wrong with this paper I can’t believe we’re still talking about it. As I can’t recall a single instance when you agreed with me, and I expect more of the same, I’m just going to let you work it out for yourself. Google “Eddington approximation” + “plane parallel atmosphere”. Lots of good resources. Understand the derivation of the classical solution. Try putting M’s solution into the same context. See for yourself why it is whacko. Leave your preconceptions in your pocket, learn something new, and have fun.

    1. Pat Cassen:

      “Jan –
      Where?
      Not in his paper; you have to calculate it yourself.”

      What?

      You are saying

      “but M’s solution (incorrectly) gives a non-zero downward flux”

      but not in his paper and I have to calculate it myself?

      Do you think this makes any sense?

      If I have to invent a downward flux for myself then M’s solution does not give a downward flux.

      Makes about as much sense as your earlier complaint about three equations to solve 9 variables (better check, I may have miscounted).

      “See for yourself why it is whacko.”

      What is whacko is boundary conditions for differential equation where the solution is unbounded. What is whacko is using an unbounded solution where the system is bounded. What is whacko is two boundary conditions for a first order differential equation.

      All this whackiness comes about by using approximations suitable for optically thick stellar atmospheres on an optically thin one.

      1. Jan,
        As Pat says, it is obvious. I doubt if he’ll want to deal further with your wearisome barrage of incoherent objections, but verification is simple. Go to M’s Eq 15. As M says, that is the Eddington approximation. It is based on constant radiative flux H – see start of para above Eq 12. Substitute tau=0, and you regenerate the boundary condition as a value of B.

        Now go to M’s Eq 21. Substitute tau=0. You get a quite different value of B. M’s solution does not satisfy the bc at TOA.

        Whether you interpret that as implying a non-zero return flux, or an incorrect OLR doesn’t matter. It isn’t right, and something’s gotta give.

      2. Nick
        First you need to look up the meaning of “barrage”, and then explain how I’m putting up a barrier to understanding by asking questions that neither you nor Cassen will answer.

        Sorry but you have it back to front Cassen put up the wearisome barrage and you are helping.

        Kindly answer the question I asked instead of putting up red herrings.

        Equation 15 is the semi-infinite solution, otherwise there would be transcendental terms of tau in it; so where are they?

        Equations 20 and 21 are the bounded solution of the DE; they have the transcendental term (Ta = exp(-tau)). Of course you are going to get different boundary conditions; we expect that.

        The question that you need to answer is: what makes Eqn 15 right when it requires the two equations (16 & 17) to satisfy the boundary condition at the surface, where according to the solution (Eqn 15) tau is infinite, leading to a temperature discontinuity that is NOT OBSERVED, or, as Milne put it, a singularity at the surface?

        Please no more red herrings they are getting tiresome.

      3. Jan,

        Pat seems to be undeniably correct when he suggests we need to understand the classical solution first, rather than Miskolczi’s critique of it.

        I have shown these papers to you before but again how is it that Ostriker says that the temperature discontinuity persists not only in the Eddington approximation but in the exact solution? This seems to completely contradict the Miskolczi storyline. The Miskolczi storyline holds that there never even WAS an exact solution before his 2007 paper. Ostriker is referring I believe to the 1955 solution of Jean I.F. King, and King acknowledges a debt to Chandrasekhar who (I believe) derives the whole thing from first principles.

        This is really starting to get silly.

        Can you show us what Jean I.F. King did wrong? Can you show us what Chandrasekhar did wrong? Otherwise we still have Miskolczi’s theory correcting an error in a theory and there has never been a single piece of evidence produced that anyone ever made this error in the first place.

      4. Alex,
        You should also note this observation of Ostriker on p. 285:
        Thus, for example, this model could not be applied to the optically thin atmosphere of the earth, since (a) the surface discontinuity would be quite significant and (b) the approximate solution would be rather inaccurate.
        I think his caveat (b) applies to the thinness; it causes a local inaccuracy near the surface (O says for tau < 1/2), and this is true for all the solutions in FM as well. But he may also be referring to the effect of LH and convection in the lower atmosphere – as I've said many times, I think this is the major issue on earth, and makes worrying about a surface discontinuity irrelevant. The basic assumption of radiative equilibrium has failed well before you get to the surface.
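
As an aside, the “quite significant” surface discontinuity of Ostriker’s caveat (a) is easy to sketch numerically. The snippet below is a sketch only, assuming the standard textbook grey Eddington forms (a common convention, not M’s or Ostriker’s exact notation): sigma*T(tau)^4 = sigma*Te^4*(1 + 3*tau/2)/2 for the air and sigma*Tg^4 = sigma*Te^4*(2 + 3*tau_s/2)/2 for the ground, so the jump in sigma*T^4 at the surface is sigma*Te^4/2 regardless of total optical depth.

```python
# Sketch of the classical grey radiative-equilibrium surface discontinuity
# (Ostriker's caveat (a)).  Assumed convention (textbook grey Eddington,
# NOT M's notation):
#   sigma*T_air(tau)^4 = sigma*Te^4 * (1 + 1.5*tau)   / 2
#   sigma*T_gnd^4      = sigma*Te^4 * (2 + 1.5*tau_s) / 2
# so the jump in sigma*T^4 at the surface is sigma*Te^4/2 for any tau_s.

def air_T(tau, Te=255.0):
    """Radiative-equilibrium air temperature at optical depth tau (K)."""
    return Te * ((1.0 + 1.5 * tau) / 2.0) ** 0.25

def ground_T(tau_s, Te=255.0):
    """Ground temperature for total optical depth tau_s (K)."""
    return Te * ((2.0 + 1.5 * tau_s) / 2.0) ** 0.25

for tau_s in (0.1, 0.5, 2.0):
    jump = ground_T(tau_s) - air_T(tau_s)
    print(f"tau_s = {tau_s:3.1f}: T_ground = {ground_T(tau_s):5.1f} K, "
          f"T_air(sfc) = {air_T(tau_s):5.1f} K, jump = {jump:4.1f} K")
```

Note how the jump in temperature (as opposed to the fixed jump in T^4) is largest when the atmosphere is optically thin, which is exactly Ostriker’s point about applying the model to Earth.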

      5. “Pat seems to be undeniably correct when he suggests we need to understand the classical solution first, rather than Miskolczi’s critique of it.”
        Sure Alex and we also need to study flat earth theories before we can see the world as an oblate spheroid.

        Sorry, but I disagree with you, unless you can show me that it is reasonable to impose a boundary condition in an unbounded direction of integration, and then that it is reasonable to come up with two boundary conditions for a first-order differential equation.

        You may also throw this at some people: a remark from Nick that both I and M would agree with.

        But he may also be referring to the effect of LH and convection in the lower atmosphere – as I’ve said many times, I think this is the major issue on earth, and makes worrying about a surface discontinuity irrelevant.

        FM’s model, despite the fact that he doesn’t quantify convection or latent heat in the paper because it’s beyond the scope of the paper, is in fact a radiation/convection model.

      6. “FMs model despite the fact that he doesn’t quantify convection or latent heat in the paper because it’s beyond the scope of the paper is in fact a radiation/convection model.”

        Hi Jan. You must have extremely strong teeth (or a titanium denture)! To see you masticating that statement after all this time is quite a stunner, especially the little gem therein… “because it’s beyond the scope of the paper” indeed!

        What’s that you were saying about flat earths and the like? Choke!

      7. Steve “Hi Jan. You must have extremely strong teeth (or a titanium denture)!”

        Yes they are quite strong but I can assure you they are quite natural. On the other hand if yours are falling out you might be ready for a visit by ACAT.
        (Your partner will know what that is)

  53. Jan – Where? Not in his paper; you have to calculate it yourself. Why? Why what? Why must the upward LW radiation be OLR? Conservation of energy, and the definition of OLR. Why does OLR at TOA demand (by M’s solution) a non-zero downward flux at TOA? Because that’s what his incorrect “solution” gives. Why do I know this? Because I calculated it. (As Nick implies above, it’s obvious.)

    “a non zero OLR says nothing about downward LW at TOA” – Right. Downward LW at TOA is zero (in a proper solution) because there’s no LW radiation from space.

    “M does not ignore the upper boundary” – You’re right, he claims to deal with it in Appendix B. He just gets it wrong. You have a problem with the lower boundary. I don’t.

    Jan, I’m sorry, there’s so much wrong with this paper I can’t believe we’re still talking about it. As I can’t recall a single instance when you agreed with me, and I expect more of the same, I’m just going to let you work it out for yourself. Google “Eddington approximation” + “plane parallel atmosphere”. Lots of good resources. Understand the derivation of the classical solution. Try putting M’s solution into the same context. See for yourself why it is whacko. Leave your preconceptions in your pocket, learn something new, and have fun.

  54. Pat Cassen: “Jan – Where? Not in his paper; you have to calculate it yourself.”

    What? You are saying “but M’s solution (incorrectly) gives a non-zero downward flux”, but it’s not in his paper and I have to calculate it myself? Do you think this makes any sense? If I have to invent a downward flux for myself, then M’s solution does not give a downward flux. Makes about as much sense as your earlier complaint about three equations to solve nine variables (better check, I may have miscounted).

    “See for yourself why it is whacko.”

    What is whacko is boundary conditions for a differential equation where the solution is unbounded. What is whacko is using an unbounded solution where the system is bounded. What is whacko is two boundary conditions for a first-order differential equation. All this whackiness comes about by using approximations suitable for optically thick stellar atmospheres on an optically thin one.

  55. Jan/Pat/Nick/David/Anthony

    I know this is not what Jan/Pat & Nick are currently (+ with deja vu ;-) ) scrapping about (BTW ‘whacko’ is one of my favourite slang words), but I wonder if you guys would feel like having a look at my little Miskolczi-modifying (refuting?) spreadsheet above in the light of Table 1 (page 444) in Ozawa and Ohmura (1996), which paper I presume you guys are familiar with.

    I’m well on the way to modifying my spreadsheet model to have slightly different S_U values (and surface temperatures) for different (true) LW IR (and hence tau) values as per O&O96 Table 1, and intriguingly I seem to be getting close to a situation which does actually maximize MEP along the lines of the (relatively simple) approach well described in O&O96.

    Unfortunately I am in the middle of a really big work project at the moment (designing a hydromet plant for a magnesium production facility) and am having great difficulty concentrating on this stuff. Maybe none of you are interested in MEP, in which case please ignore this message. But I seem to be on the verge of something rather interesting, i.e. an atmospheric box model even simpler than Miskolczi’s which gets around the dodgy Eqn 7, doesn’t require ‘Kirchhoff’ or ‘Virial’, doesn’t need a constant tau, involves M’s K & F terms and provides an MEP-based basis for an inferred homeostasis.

    As it intimately involves a variable tau, convection and a surface S_U = S–B sigma*T^4, perhaps your interest might be piqued?

    Regards

  56. Jan, as Pat says, it is obvious. I doubt if he’ll want to deal further with your wearisome barrage of incoherent objections, but verification is simple.

    Go to M’s Eq 15. As M says, that is the Eddington approximation. It is based on constant radiative flux H – see the start of the paragraph above Eq 12. Substitute tau = 0, and you regenerate the boundary condition as a value of B. Now go to M’s Eq 21. Substitute tau = 0. You get a quite different value of B. M’s solution does not satisfy the bc at TOA.

    Whether you interpret that as implying a non-zero return flux or an incorrect OLR doesn’t matter. It isn’t right, and something’s gotta give.
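
For what it’s worth, Nick’s substitution check can be mechanized without committing to M’s exact coefficients. The sketch below uses only the generic grey-atmosphere form (constant net flux H, and the classical Milne TOA value B(0) = H/(2*pi)); it illustrates the one-constant argument, and is not a transcription of M’s Eq 15 or Eq 21.

```python
# Symbolic version of Nick's check, using generic grey-atmosphere forms
# only (not a transcription of M's equations).  With constant net flux H,
# the Eddington relation is the first-order ODE dB/dtau = 3*H/(4*pi):
# its general solution has exactly ONE free constant, so ONE boundary
# condition fixes it, and a second, independent condition at the surface
# is generically violated.
import sympy as sp

tau = sp.symbols('tau', nonnegative=True)
H = sp.symbols('H', positive=True)
B = sp.Function('B')

ode = sp.Eq(B(tau).diff(tau), 3 * H / (4 * sp.pi))
general = sp.dsolve(ode, B(tau))
print(general)  # B(tau) = C1 + 3*H*tau/(4*pi): one constant, one bc

# Impose the classical Milne TOA condition B(0) = H/(2*pi):
fixed = sp.dsolve(ode, B(tau), ics={B(0): H / (2 * sp.pi)})
print(fixed)

# The fixed solution recovers its own TOA value at tau = 0, as it must.
# Any other candidate (e.g. one with an exp(-tau) term grafted on) either
# fails this constant-H ODE or gives a different B(0); it cannot do both
# jobs at once, which is the point of substituting tau = 0.
assert fixed.rhs.subs(tau, 0) == H / (2 * sp.pi)
```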

  57. Steve, interesting O&O, as you say. I didn’t have a chance to follow up on your references before. The proportionality of long-wave and short-wave optical depth assumption (between equations 2 and 3) would imply that an increase in optical depth due to increased GHGs would also increase SW optical depth, presumably due to increased water vapor and hence cloudiness. What would you suggest for a more recent follow-up to this model?

  61. Pauluis OM and Held IM (2002a) Entropy budget of an atmosphere in radiative-convective equilibrium. Part I: Maximum work and frictional dissipation. J. Atmos. Sci. 59: 125-139.

    This is interesting because it concludes that moist convection (ET) behaves more as an atmospheric dehumidifier than as a heat engine.

    Pauluis OM and Held IM (2002b) Entropy budget of an atmosphere in radiative-convective equilibrium. Part II: Latent heat transport and moist processes. J. Atmos. Sci. 59: 140-149.

    Conclusion: frictional dissipation of atmospheric motions accounts for ~30% of total entropy production, frictional dissipation of falling rain ~12%, phase changes and diffusion of water vapor ~40%, and the remaining ~20% is uncertainty in the above.

  63. Hmmm, interesting: a 6 W/m^2 increase in OLR going to 100% cloud cover, a 9 W/m^2 decrease in OLR going to zero cloud cover.

    %Cloud  Tau   S_T  ET   ET_U  DT  rE_U  A_A  E_D  oE_U  S_U/oE_U  A_A/E_D  S_U  OLR  F   (ET+DT)/S_U
    100     2.72  26   133  50    7   169   370  371  219   1.81      1.00     396  245  78  0.354
    80      2.48  33   107  40    12  169   363  353  209   1.89      1.03     396  242  78  0.300
    60      2.29  40   80   30    17  169   356  335  199   1.99      1.06     396  239  78  0.245
    40      2.13  47   53   20    23  169   349  317  189   2.10      1.10     396  236  78  0.193
    20      1.99  54   27   10    29  169   342  300  179   2.21      1.14     396  233  78  0.141
    0       1.87  61   0    0     34  169   335  281  169   2.34      1.19     396  230  78  0.086

    This suggests a key issue to resolve is whether, with increasing cloud cover, the release of latent heat (in all directions) rises proportionately. Intuitively one would think so. After all, over large areas of cloud there is (presumably) always about the same probability that about the same proportion are condensing into rain.

    After O&O96 and P&H02 this also suggests that entropy production (EP) is a function of cloud cover, due to the fact that, as P&H02 suggest, moist convection accounts for ~40% of EP.
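
Incidentally, the Tau column in the table above can be reproduced from the S_T column alone, assuming (as I read the table, not as stated by M) that tau is defined from the flux transmittance Ta = S_T/S_U with S_U = 396 W/m^2 throughout, and A_A = S_U − S_T:

```python
# Reproduce the Tau and A_A columns of the table above from S_T alone.
# Assumed definitions (my reading of the table, not stated by M):
#   Ta  = S_T / S_U    (flux transmittance, with S_U = 396 W/m^2)
#   tau = -ln(Ta)
#   A_A = S_U - S_T
import math

S_U = 396.0
S_T_by_cloud = {100: 26.0, 80: 33.0, 60: 40.0, 40: 47.0, 20: 54.0, 0: 61.0}

for cloud, S_T in S_T_by_cloud.items():
    tau = -math.log(S_T / S_U)
    A_A = S_U - S_T
    print(f"{cloud:3d}% cloud: S_T = {S_T:4.0f}, tau = {tau:4.2f}, A_A = {A_A:3.0f}")
# tau runs from 2.72 (100% cloud) down to 1.87 (clear sky) and A_A from
# 370 down to 335, matching the table's columns.
```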

  64. Alex,
    I haven’t read much of the King paper – it’s long, and the system of having to download one page at a time doesn’t help. But I’ve had more thoughts on where this all fits in.

    It goes back, in a way, to my first objection, that FM has led us down a blind alley. None of this theory is actually useful for, or used by, climate modelling. In fact, most climate people would be unfamiliar with it. And the Ostriker quote suggests that the planetary people too don’t think it is much use for Earth.

    The whole math theory in which H is related to B as a function of tau is a bit like the ray theory of light in microscopy. On a macro scale it works, when the geometry is on a scale many times the wavelength of light. You can see sharp edges. But as you get down to the wavelength of light, ray theory gradually breaks down, and microscopy does not work. Everything gets fuzzy. This is not sudden or absolute, but it is a problem.

    With this radiative theory, the corresponding length is unit optical density. On that scale, radiation travels significant distances before undergoing the interactions on which the theory is based.

    We have discussed the boundary condition at TOA. A natural question is, where is TOA, in this grey-body model? We know there is no sharp line. In fact it is a distributed region, over a distance of order of OD 1. That means that even the ground has some TOA connection. The B corresponding to H can’t be attributed to a point in space – it’s an average over a big region.

    In a way, this is the problem M is trying to address, by using an alternative bc. But that has the same problem. In fact, the problem is in the use of the equation itself, on this scale. There is no simple fix for the failure of a light microscope at submicron ranges, and it’s the same here.

    The problem is much exacerbated by the use of the grey-body approximation (which no climate modeller would use). This describes a range from the IR window, with no absorption at all, to the CO2 and water peaks, where IR is absorbed within a few metres. This is badly summarised by an average of a few km.

    If you take out the window, the remaining IR has a slightly more meaningful, and larger, average OD. And the “ray theory” – radiative transfer equations – would work better. So the concept may still have some use. But a grey-body approx makes it useless for Earth.

    I might add that this fuzziness also makes the notion of a “discontinuity” meaningless.
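
Nick’s window argument can be made quantitative with a toy two-band sketch. The window fraction and band optical depth below are illustrative assumptions only, not fitted values:

```python
# Toy two-band illustration of the grey-body objection.  The band
# fraction and optical depth are illustrative assumptions only.
import math

f_window = 0.20   # fraction of surface LW emission in the IR window (assumed)
tau_band = 10.0   # optical depth of the absorbing band (assumed large)

# Flux-weighted transmittance of the two-band atmosphere
# (window transmits fully; the absorbing band transmits exp(-tau_band)):
T_total = f_window * 1.0 + (1.0 - f_window) * math.exp(-tau_band)

# The single grey tau that reproduces the same total transmittance:
tau_grey = -math.log(T_total)
print(f"two-band transmittance = {T_total:.3f}, equivalent grey tau = {tau_grey:.2f}")
```

With these assumed numbers the equivalent grey tau comes out near 1.6, nowhere near either band’s true optical depth (0 in the window, 10 in the band), which is the sense in which the grey average misdescribes both.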

    1. Nice discussion Nick. I benefited from it. Thanks for that.

      We have an analogous state of ‘fuzziness’ when approaching the (charged) surface of an amphoteric solid in contact with an ionic solution. This has led to numerous theories/viewpoints of the chemical and electrostatic nature of the ‘discontinuity’ between solid and liquid (which is critical to issues of reactivity and (chemo-)thermodynamic equilibrium).

      PS: I slowly read through the King paper last night. Sheesh!

      I might add I always suspected M’s approach of hanging an entire ‘theory’ on a fixed tau (putting aside all the other sub-theories) was nonsense. Now the discovery that M’s ‘magic number’ tau is not even a true LW IR absorption tau anyway (except perhaps in a pure clear-sky Earth with no convection), but must surreptitiously include TOA-escaping emission from the tops of clouds as well, pretty much tipped M Theory into the dustbin for me.

      I find it hard to believe that M did that unconsciously i.e. as a genuine error, given his background and the mass of literature. What a strange person!

      1. Steve,
        Totally OT, but I was once very interested in the double layer theory you mention. Paper here.

      2. Also totally OT.

        Cripes Nick, 1976, talk about deja vu – you must be really, really old! I didn’t get into that stuff until going to ANSTO in ’82 (after a character-building, de-hippiefying decade in industry). I’m more your grubby experimentalist (chemisorption studies, EXAFS of wet surfaces etc). Jim Davis of Triple Layer fame is still a close personal friend – I was quaffing Aussie reds with him and Linda only about a month ago.

      3. Well, I prefer to think that then I was really really young. But gosh, it’s a small world. I used to work with those people at ANSTO too. Gary Pantelis, Paul Brown. Mostly on the early stages of Sulfidox.

    2. Agreed. With a simple equation like this, whether you apply a boundary condition at TOA or BOA seems to be a matter of assumptions. In M the TOA condition is met ‘somehow’, just as the BOA discontinuity is dealt with ‘somehow’. It may be, in M’s case, that the constraint is not actually at the solid surface but slightly above, where latent and convective transfer has already been effected.

      Interesting that you see M as a blind alley. O&O’s paper is similar in its gray-body approximation; though interesting and natural, it doesn’t seem to have been elaborated (incorporating latitudinal variations, empirical validation, etc.). I also see close similarities between the O&O maximum entropy condition and M’s problematic eqn. 7. They might be more similar than they look at first.

      1. Actually, David, O&O96 has been elaborated in spades!

        The body of work on MEP in climate theory (and in other fields) is booming!

        I strongly suggest you get the Red Book and then follow up on the work of Kleidon, Dewar, Pauluis et al. and the big groups at e.g. Max Planck Institute for Meteorology in Hamburg and the Max Planck Institute for Biogeochemistry in Jena where Axel Kleidon is now.

        David Catling at University of Washington also has a group looking at biotic EP.

        Graham Farquhar at ANU is into it (sometimes with Paltridge).

        Check out: http://www.bgc-jena.mpg.de/bgc-theory/index.php/Research/2009-Thermodynamics

      2. Here’s a real gem:

        A Kleidon, K Fraedrich, E Kirk and F Lunkeit, 2006. Maximum entropy production and the strength of boundary layer exchange in an atmospheric general circulation model. Geophysical Research Letters, 33, L06706, doi: 10.1029/2005GL025373.

        “The difference in climate sensitivities of tropical and polar regions is at a minimum at a climatic state of MEP.”

        Look carefully and you will see this means, in effect, that any negative forcing at the tropics is a ‘bonus’ which has to result in an MEP-maximized meridional energy shift and hence is missing from any averaged-out pseudo-vertical model like M Theory.

        This takes us all the way back to my “missing 20 – 25 W/m^2” from M Theory, which I posited occurred via lateral meridional energy/entropy flow, leading Jan to want to do a “burn him at the stake” job on me.

        That is of course before Nick and I realised M’s tau simply wasn’t a real LW IR tau at all (for a global all-sky model) and we could account for the missing 20 – 25 W/m^2 that way.

        Ah, what a tangled web we weave, when first we venture to deceive.

      3. David,
        By blind alley, I meant specifically the Milne type radiative transfer theory for optically thick atmospheres.

      4. David ” That may be in M’s case, that the constraint is not actually at the solid surface, but slightly above, where latent and convection transfer has been effected already. ”

        The fact that the surface temperature is measured 1.5 – 2 m above the surface, and that complete optical occlusion doesn’t occur for several metres more, makes that a given.

      5. I have big problems with this view. I think you are ignoring the demonstrable real world competition between viscosity and buoyancy of air.

        (1) The volume of warm rising air in a thermal passing through any arbitrary low-altitude datum point, e.g. 1000 feet (~300 m) or more, per unit time is quite substantial (having thermalled my way up in them in a hang glider for about 8 years). Just watch pelicans, hawks and eagles closely and you will get some idea of the typical diameters of these columns.

        To get such a volume and still ‘have latent heat and convective transfer effected already only slightly above the surface’ requires the notion be maintained that this volume originated as a relatively thin ‘skin’ of air spread out over quite a substantial area, which then all rushed into a much smaller area before heading off upwards due to buoyancy.

        I doubt it.

        (2) I have ‘triggered’ off masses of still, near-surface warm air on ground approach, i.e. thermals, from heights as high as about 30 feet, i.e. ~10 m. The pilots of light planes on final in training circuits, i.e. having flown through the same space a couple of minutes before, will frequently experience the same effect. This is one of the reasons you are forced to go round and round practising landings. Trigger a thermal while on final (it’s not just coincidence, you know) and not be prepared for it and you could dig a wing in, utterly pretzel the plane, yourself and your loudly expostulating instructor, all in just ‘the twinkling of an eye’.

        What is ‘given’ here?

      6. I guess what I am saying is that just as TOA is a fuzzy interface as Nick points out so is BOA. I don’t think anything is really ‘given’ about what happens at the BOA interface.

        Another example: I used to have to pump groundwater from shallow boreholes on an alluvial field in Switzerland in the middle of winter over extended periods (to pass the groundwater through special filtration and chemisorption equipment) such that I had to sleep in a uni van on the field through the night and camp there for about a week. The ground was covered in snow and it was bitterly cold by day and especially by night usually with strong katabatic winds down the valley. The only way I could warm my hands just enough just to work the valves, pumps etc (both day and night) was take my gloves off and frequently flush them with the actual groundwater. The groundwater was in thermal equilibrium with the ground which was significantly warmer than the air only centimetres above the ground. Strong discontinuity right down at ground level (or even slightly lower)!

        In other circumstances, as I note above, the actual discontinuity could be 10 m above ground level. This is simply because even though the ground is so warm it has heated a thicker layer of air for reasons of lack of wind and viscous drag that airmass is still ‘attached’ to the ground until some little disturbance e.g a dog, a sheep, a car, a hang glider or a landing plane causes just enough turbulence to lead to a chaotic breakdown of that attachment.

      7. Steve, Gliding does give one a greater appreciation of thermals. When paragliding in SoCal during high pressure periods, the thermals are so narrow and punchy you can hear them rushing nearby but not feel them, or one can hit on one wing, lean into it and catch the elevator , beep, beep, beep. So yes, the hot air punches through at a sharp rock or other trigger and drains a larger area. Then in a low pressure of course, the air rises in huge buoyant masses (and sinks over large areas too).

      8. Never flew there. Mainly Black Mt and La Jolla in San Diego, La Salinas in Baja, and up at Marshall’s in San Bernadino.

      9. Are you still paragliding? I’ve done the hills near the Salton Sea (forget their name), Torrey Pines, i.e. most of the wussy coastal Cal sites. But I’ve also done weird stuff like Popocatepetl (= smoking volcano) and other Mexico freak-out sites. I gave up hang gliding in the late 90s after a decade or so, but every time I see those sweet paragliders the itch to start again with them gets very strong (even though I’m now 60).

  65. Alex,I haven't read much of the King paper – it's long, and the system of having to download one page at a time doesn't help. But I've had more thoughts on where this all fits in.It goes back, in a way, to my first objection, that FM has led us down a blind alley. None of this theory is actual useful for, or used by, climate modelling. In fact, most climate people would be unfamiliar with it. And the Orstriker quote suggests that the planetary people too don't think it is much use for Earth.The whole math theory in which H is related to B as a function of tau is a bit like the ray theory of light in microscopy. On a macro scale it works, when the geometry is on a scale many times the wavelength of light. You can see sharp edges. But as you get down to the wavelength of light, ray theory gradually breaks down, and microscopy does not work. Everything gets fuzzy. This is not sudden or absolute, but it is a problem.With this radiative theory, the corresponding length is unit optical density. On that scale, radiation travels significant distances before the interaction, on which the theory is based.We have discussed the boundary condition at TOA. A natural question is, where is TOA, in this grey-body model? We know there is no sharp line. In fact it is a distributed region, over a distance of order of OD 1. That means that even the ground has some TOA connection. The B corresponding to H can't be attributed to a point in space – it's an average over a big region.In a way, this is the problem M is trying to address, by using an alternative bc. But that has the same problem. In fact, the problem is in the use of the equation itself, on this scale. There is no simple fix for the failure of a light microscope at submicron ranges, and it's the same here.The problem is much exacerbated by the use of grey-body approx (which no climate modeller would use). 
This describes a range from the IR window, with no absorption at all, to CO2 and water peaks, where IR is absorbed within a few metres. This is badly summarised by an average of a few KM.If you take out the window, the remaining IR has a slightly more meaningful, and larger, average OD. And the “ray theory” – radiative transfer equations – would work better. So the concept may still have some use. But a grey-body approx makes it useless for Earth.I might add that this fuzziness also makes the notion of a “discontinuity” meaningless.

  66. Nice discussion Nick. I benefited from it. Thanks for that. We have an analogous state of 'fuzziness' when approaching the (charged) surface of an amphoteric solid in contact with an ionic solution. This has led to numerous theories/viewpoints of the chemical and electrostatic nature of the 'discontinuity' between solid and liquid (which is critical to issues of reactivity and (chemo-)thermodynamic equilibrium).PS: I slowly read through the King paper last night. Sheesh!I might add I always suspected M's approach of hanging an entire 'theory' on a fixed tau (putting aside all the other sub-theories) was nonsense and now the discovery that M's 'magic number' tau is not even a true LW IR absorption tau anyway (except at perhaps in a pure clear sky Earth with no convection) but from then on must surreptiously include TOA-escaping emission from tops of clouds as well pretty much tipped M Theory into the dustbin for me. I find it hard to believe that M did that unconsciously i.e. as a genuine error, given his background and the mass of literature. What a strange person!

  67. Agreed. With a simple equation like this, whether you apply a boundary condition at TOA or BOA seems to be a matter of assumptions. In M the TOA conditon is met 'somehow' just as the BOA discontinuity is dealt with 'somehow'. That may be in M's case, that the constraint is not actually at the solid surface, but slightly above, where latent and convection transfer has been effected already. Interesting you see M as a blind alley. O&O's paper similar in gray body approximation, though interesting and natural doesn't seem to have been elaborated (incorporating lattitudinal variations, empirical validation, etc). I also see close similarities of the O&O maximum entropy condition, to M's problematic eqn. 7. They might be more similar that it looks at first.

  68. “FMs model despite the fact that he doesn't quantify convection or latent heat in the paper because it's beyond the scope of the paper is in fact a radiation/convection model.”Hi Jan. You must have extremely strong teeth (or a titanium denture)! To see you masticating that statement after all this time is quite a stunner, especially the little gem therein…. “because its beyond the scope of the paper” indeed!!!!What's that you were saying about flat earths and the like? Choke!

  69. Actually, David, O&O96 has been elaborated in spades! The body of MEP in climate theory (and in other fields) is literally booming! I strongly suggest you get the Red Book and then follow up on the work of Kleidon, Dewar, Pauluis et al. and the big groups at e.g. Max Planck Institute for Meteorology in Hamburg and the Max Planck Institute for Biogeochemistry in Jena where Axel Kleidon is now. David Catling at University of Washington also has a group looking at biotic EP. Graham Farquhar at ANU is into it (sometimes with Paltridge).Check out: http://www.bgc-jena.mpg.de/bgc-theory/index.php

  70. Here's a real gem:A Kleidon, K Fraedrich, E Kirk and F Lunkeit, 2006. Maximum entropy production and the strength of boundary layer exchange in an atmospheric general circulation model. Geophysical Research Letters, 33, L06706, doi: 10.1029/2005GL025373. “The difference in climate sensitivities of tropical and polar regions is at a minimum at a climatic state of MEP.”Look carefully and you will see this means, in effect, that any negative forcing at the tropics is a 'bonus' which has to result in an MEP-maximized meridional energy shift and hence is missing from any averaged-out pseudo-vertical model like M Theory.This takes all the way back to my “missing 20 – 25 W/m^2″ from M Theory which I posited occurred via lateral meridional energy/entropy flow leading Jan to want to do a “burn him at the stake” job on me. That is of course before Nick and I realised M's tau simply wasn't a real LW IR tau at all (for a global all sky model) and we could account for the missing 20 – 25 W/m^2 in that way.Ah, what a tangled web we weave, when first we venture to deceive.

  71. David,By blind alley, I meant specifically the Milne type radiative transfer theory for optically thick atmospheres.

  72. Also totally OT.Cripes Nick, 1976, talk about deja vu – you must be really, really old! I didn't get into that stuff until going to Ansto in '82 (after a character-building, de-hippiefying decade in industry). I'm more your grubby experimentalist (chemisorption studies, EXAFS of wet surfaces etc). Jim Davis of Triple Layer fame still a close personal friend – I was quaffing Aussie reds with him and Linda only about a month ago.

  73. David ” That may be in M's case, that the constraint is not actually at the solid surface, but slightly above, where latent and convection transfer has been effected already. “The fact that the surface temperature is measured 1.5 – 2 m above the surface and complete optical occlusion doesn't occur for several meters more makes that a given.

  74. Steve: "Hi Jan. You must have extremely strong teeth (or a titanium denture)!" Yes, they are quite strong, but I can assure you they are quite natural. On the other hand, if yours are falling out you might be ready for a visit by ACAT. (Your partner will know what that is.)

  75. Well, I prefer to think that then I was really really young. But gosh, it's a small world. I used to work with those people at ANSTO too. Gary Pantelis, Paul Brown. Mostly on the early stages of Sulfidox.

  76. I have big problems with this view. I think you are ignoring the demonstrable real world competition between viscosity and buoyancy of air. (1) The volume of warm rising air in a thermal passing through any arbitrary low altitude datum point, e.g. 1000 feet (~300 m) or more, per unit time is quite substantial (having thermalled my way up in them in a hang glider for about 8 years). Just watch pelicans, hawks and eagles closely and you will get some idea of the typical diameters of these columns. To get such a volume and still have 'latent heat and convective transfer effected already only slightly above the surface' requires the notion be maintained that this volume originated as a relatively thin 'skin' of air spread out over quite a substantial area, which then all rushed into a much smaller area before heading off upwards due to buoyancy. I doubt it. (2) I have 'triggered' off masses of still, near surface warm air on ground approach, i.e. thermals, from heights as high as about 30 feet, i.e. 10 m. The pilots of light planes on final in training circuits, i.e. having flown through the same space a couple of minutes before, will frequently experience the same effect. This is one of the reasons you are forced to go round and round practising landings. Trigger a thermal while on final (it's not just coincidence you know) and not be prepared for it and you could dig a wing in, utterly pretzelling the plane, yourself and your loudly expostulating instructor, all in just 'the twinkling of an eye'. What is 'given' here?

  77. I guess what I am saying is that just as TOA is a fuzzy interface, as Nick points out, so is BOA. I don't think anything is really 'given' about what happens at the BOA interface. Another example: I used to have to pump groundwater from shallow boreholes on an alluvial field in Switzerland in the middle of winter over extended periods (to pass the groundwater through special filtration and chemisorption equipment), such that I had to sleep in a uni van on the field through the night and camp there for about a week. The ground was covered in snow and it was bitterly cold by day and especially by night, usually with strong katabatic winds down the valley. The only way I could warm my hands just enough to work the valves, pumps etc (both day and night) was to take my gloves off and frequently flush them with the actual groundwater. The groundwater was in thermal equilibrium with the ground, which was significantly warmer than the air only centimetres above the ground. Strong discontinuity right down at ground level (or even slightly lower)! In other circumstances, as I note above, the actual discontinuity could be 10 m above ground level. This is simply because, even though the ground is so warm that it has heated a thicker layer of air, for lack of wind and through viscous drag that airmass remains 'attached' to the ground until some little disturbance, e.g. a dog, a sheep, a car, a hang glider or a landing plane, causes just enough turbulence to lead to a chaotic breakdown of that attachment.

  78. Steve, gliding does give one a greater appreciation of thermals. When paragliding in SoCal during high pressure periods, the thermals are so narrow and punchy you can hear them rushing nearby but not feel them, or one can hit one on a wing, lean into it and catch the elevator: beep, beep, beep. So yes, the hot air punches through at a sharp rock or other trigger and drains a larger area. Then in a low pressure, of course, the air rises in huge buoyant masses (and sinks over large areas too).

  79. Never flew there. Mainly Black Mt and La Jolla in San Diego, La Salinas in Baja, and up at Marshall's in San Bernardino.

  80. Are you still paragliding? I've done the hills near the Salton Sea (forget their name), Torrey Pines, i.e. most of the wussy coastal Cal sites. But I've also done weird stuff like Popocatépetl (= smoking volcano) and other Mexico freak out sites. I gave up the hang gliding late 90s after a decade or so, but every time I see those sweet paragliders an itch to start again with them gets very strong (even though I'm now 60).

  81. The Kirchhoff radiative emissivity-absorptivity law is not strictly exactly to the point here. We are talking near it, but not strictly exactly on it.

    But let us note the Kirchhoff law anyway, just for interest. It is often cavalierly taken without meticulous regard to its precise range of applicability. Originally Kirchhoff’s Law was stated for the condition that the energy supply maintains thermodynamic equilibrium. But it is now known that the law applies more widely. But just how much more widely?

    The absorptivity and emissivity of a medium are determined by its chemical constitution, by the physical geometrical arrangement of the chemical constituents, and by the way that energy is supplied to the medium to support the emission and govern the absorption. The physical geometrical arrangement of the chemical constituents matters because it affects how the supplied energy is transported and distributed within the medium, and how the chemical constituents are exposed to the contiguous medium. Thus the difference between a powdery or spicule-textured surface and a polished one.

    Einstein 1917 originated the idea that the absorptivity is the sum of an obvious kind of absorptivity, in which a chemical species is simply lifted to a higher energy level, and a very non-obvious kind of absorptivity, indeed negative absorptivity, in which an incoming ‘photon’ triggers the release of an identical copy of itself in the same direction. This is sometimes called stimulated emission, and Einstein needed it to make sense of the Planck law. It is governed not by the availability of the low energy form of the relevant molecules, but by the availability of their high energy form. On the other hand, empirically detectable, or spontaneous, emissivity is governed only by the availability of the high energy form of the relevant molecules. The absorptivity and emissivity are therefore governed by different dynamics and can thus be expected to behave differently depending on how the energy is supplied and distributed to the medium.
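    The sign logic of that two-level picture can be sketched numerically (all numbers here are my own illustrative assumptions, not tied to any particular gas): with equal degeneracies and equal Einstein B coefficients, net absorption per unit path is proportional to n_lower − n_upper, so a thermal Boltzmann population gives positive net absorption, while a population inversion gives negative net absorption, the lasing case.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def net_absorption(n_lower, n_upper):
    """Net absorption per unit path, up to a positive factor, for a
    two-level system with equal degeneracies and B coefficients
    (Einstein 1917): stimulated emission subtracts from ordinary
    absorption, so the sign follows n_lower - n_upper."""
    return n_lower - n_upper

# Thermal (Boltzmann) populations at T = 288 K for an infrared-scale
# level spacing (illustrative value ~2e-20 J): the upper level is less
# populated, so net absorption is positive and Kirchhoff-type
# reasoning can apply.
T, delta_E = 288.0, 2.0e-20
n_l = 1.0
n_u = math.exp(-delta_E / (K_B * T))
assert n_u < n_l
assert net_absorption(n_l, n_u) > 0

# Population inversion, as maintained in a laser by the energy supply:
# net absorption is negative (the beam is amplified), exactly the
# condition under which Kirchhoff's law fails.
assert net_absorption(0.2, 0.8) < 0
```

    The point of the sketch is only that the same formula changes sign depending on how the energy supply populates the levels, which is why the conditions of supply and distribution have to be specified before invoking the law.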

    The full meaning of this was not really widely and fully understood until conditions of energy supply to a medium were varied experimentally, and the negative kind of absorption was demonstrated in the laser. Under the conditions of energy supply that support lasing, the absorptivity is negative. The emissivity is always positive, and so we have a clear example in which Kirchhoff’s law does not apply because of the conditions of supply and distribution of energy to the medium. Kirchhoff (1858, 1860 in English) himself, in stating the conditions for his law to hold, was careful to exclude phosphorescence and fluorescence, but I feel sure he would have excluded lasing as well if he had known about it. But when we use his law we have the duty to specify the conditions of energy supply and distribution, to help justify our use of the law.

    Of course, many conditions of energy supply and distribution are near enough to those of thermodynamic equilibrium. This happens so often that people forget that they need to be specified, and that outside them, the law can be very far from applicability. People often implicitly assume that the thermodynamic equilibrium condition is sufficiently mimicked that they do not need to put on notice that it is being used. This is carefully explained in Mihalas and Mihalas 1984 at section 84 starting on page 386. Also Hottel and Sarofim 1967 gives a slight account of it. The Einstein A and B coefficient theory is set out for example in section 1.5 et seq. of R. Loudon 2000 ‘The Quantum Theory of Light’ 3rd edition, Oxford University Press.

    There are two things to think about in our present concern. One is the empirical data equality, Aa = Ed, regardless of how we might or might not explain it by some theory. The other is Miskolczi’s rather cavalier citation of the Kirchhoff law.

    I think that the empirical data equality deserves to be examined in its own right, without regard to any putative theoretical explanation. That is not the subject of this post.

    But here I would like to think a bit about the relevant physical theory. I will not directly focus on Kirchhoff’s Law, because I just want to focus on the presently relevant physics. The Aa = Ed formula is in question. It refers to non-window wavenumbers. This means that for the relevant wavenumbers, the atmosphere is entirely opaque. An opaque object has an emissivity, and for the atmosphere for these wavenumbers, the emissivity is very little less than 1, say for argumentative definiteness without prejudice, 0.98 if you like. The land-sea surface also at these wavenumbers has an emissivity not too far from 0.98. (The situation is of course entirely different for the window wavenumbers, for which the atmosphere is transparent, and the emissivity will be far far less than 1, indeed nearly zero. But that is not immediately here relevant to the non-window wavenumbers that presently concern us.)

    Then we have two opaque media in contact, the land-sea surface and the atmosphere. If they are at the same temperature at the contact interface (recalling that they have the same emissivity), then they will exchange thermal radiation with a net radiative flux density vector of zero at the contact interface; there may be a temperature gradient that crosses the interface with no discontinuity, and then there can still be heat transfer by conduction according to the Fourier heat diffusion law. The Fourier law admits that the conduction of heat cannot be measured with exclusion of the intrinsic thermal radiation of the conductive media: there is a fully Stefan’s-law thermal level of radiative specific intensity throughout the opaque media, but the radiative heat transport vector, found by integrating that radiative intensity over the sphere, is zero. The heat is then transferred only by diffusion according to the Fourier law, which depends on the temperature gradient, a quantity not given by the radiant intensity at a point but requiring a spatial interval for its definition.
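    A minimal numerical sketch of that interface picture (the emissivity, conductivity and gradient values are illustrative assumptions of mine): when the two opaque media share the interface temperature, the net radiative exchange vanishes, and any remaining heat transfer is conductive, following Fourier's law q = -k dT/dz.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative_exchange(eps, t_a_k, t_b_k):
    """Net radiative flux density between two opaque gray media in
    contact, with a common emissivity eps (simple gray-exchange form;
    the key point is that it vanishes for equal temperatures)."""
    return eps * SIGMA * (t_a_k**4 - t_b_k**4)

def fourier_flux(conductivity, dT_dz):
    """Conductive heat flux from Fourier's law, q = -k * dT/dz."""
    return -conductivity * dT_dz

# Equal interface temperatures: zero net radiative flux vector at the
# contact, even though both sides carry full Stefan's-law intensity.
assert net_radiative_exchange(0.98, 288.0, 288.0) == 0.0

# A temperature gradient crossing the interface still conducts heat:
# e.g. still air with k ~ 0.026 W m^-1 K^-1 and a 10 K/m fall with
# height gives a small upward conductive flux.
q_cond = fourier_flux(0.026, -10.0)
assert q_cond > 0.0
```

    The exchange formula here is deliberately crude; it is only meant to separate the radiative term (zero at temperature continuity) from the conductive term (set by the gradient, not by the intensity at a point).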

    The argument can then be put that the atmosphere in contact with and near the land-sea surface is in various states of motion. So, for that matter, is the sea, on the condensed medium side of the interface. According to the above physics, this will affect the emissivity and absorptivity: they are governed by the conditions of supply and distribution of energy to the media as noted above. Will these various states of motion be enough to take us away from the operation of the Kirchhoff law? Perhaps the atmosphere is acting like a laser? I think not. We have indeed already accepted that the emissivity is close to 0.98 on each side of the interface.

    What is really at stake here is the notion that the temperatures on either side of the interface are equal in the geometrically relevant ways. Mostly I think this is so because of the effects of strong convection in the lowest atmosphere and the presence of evaporation and condensation. The idea of coarse-graining is sometimes invoked. Presumably there may be some small departures occasionally when the conditions of supply and distribution of energy are so extreme that Kirchhoff’s law does not apply locally on those occasions. Even less often perhaps the conditions of supply and distribution of heat will be extreme enough to lead to large temperature discontinuities at the interface; but I think such occasions will be rare, and will likely very often be averaged away in the climate space-time scale of description. Very largely, the temperature continuity and adequate conduction-evaporation-condensation-convection condition will be satisfied and the practically-thermal condition of energy supply and distribution will apply and, for the relevant wavenumbers, the Kirchhoff law will apply.

    We do not actually need that Law in full generality. All we need is (a) temperature near-continuity and (b) emissivity near 0.98 in both media for the non-window wavenumbers.

    For window wavenumbers in clear skies, the opacity condition is not fulfilled and the situation is entirely different, but that is not immediately directly relevant to the question of explaining why Aa = Ed in clear skies.

    Downward values of the specific radiant intensity function for window wavenumbers will be very different between cloudy and clear skies, because opaque clouds radiate strongly also in the window wavenumbers, quite in contrast to the clear sky atmosphere. Under opaque 1.8 km clouds, there will be something like Kirchhoff’s hohlraum condition, and Aa = Ed will prevail. The opaque 1.8 km clouds will radiate upwards at window wavenumbers. Their temperature of upwards emission will be much lower than the land-sea surface temperature, but they are radiating upwards into a medium of much less optical density because the water vapour content of the air above them is low. The temperature and opacity reduction effects nearly cancel, but Miskolczi’s HARTCODE calculations indicate that the effective St will in the event actually be slightly greater from the cloud tops than it would be from the land-sea surface. In effect, it turns out that the St has as it were simply been translated nearly unchanged upwards from the land-sea surface to the tops of the clouds.
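    The claimed near-cancellation can be put in a toy gray-window form (all numbers below are my own illustrative assumptions, not Miskolczi's HARTCODE values): take the window-region flux reaching space as roughly f_w * eps * t * sigma * T^4, with f_w the window fraction of the Planck emission, t the transmittance of the column above the emitting level, and T its temperature. A cloud top is colder than the surface, but radiates up through a much drier, more transparent column.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def window_flux(window_fraction, eps, transmittance, temp_k):
    """Toy estimate of window-region flux escaping to space: the
    window share of eps*sigma*T^4, attenuated by the transmittance of
    the overlying column. All input values are assumptions."""
    return window_fraction * eps * transmittance * SIGMA * temp_k**4

# Clear-sky surface case: warm emitter (288 K), moist full-depth
# column above it (assumed window transmittance 0.80).
st_surface = window_flux(0.23, 0.98, 0.80, 288.0)

# Cloud-top case: colder emitter (~1.8 km up, taken as 276 K) under a
# much drier column (assumed window transmittance 0.95).
st_cloudtop = window_flux(0.23, 0.98, 0.95, 276.0)

# The lower temperature and the higher transmittance nearly cancel;
# with these assumed inputs the cloud-top value comes out slightly
# larger, the same tendency as the HARTCODE result quoted above.
assert st_cloudtop > st_surface
assert abs(st_cloudtop - st_surface) < 2.0  # both near 70 W m^-2 here
```

    With different assumed fractions and transmittances the absolute numbers move around; the offsetting tendency of the two effects is the only point of the sketch.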

    This is the explanation of the maintenance of the St at a bit more than 60 W m^-2 in all sky conditions. And the explanation of why Aa = Ed overall for Miskolczi’s TIGR data sample.

  82. The Kirchhoff radiative emissivity-absorptivity law is not strictly exactly to the point here. We are talking near it, but not strictly exactly on it.But let us note the Kirchhoff law anyway, just for interest. It is often cavalierly taken without meticulous regard to its precise range of applicability. Originally Kirchhoff's Law was stated for the condition that the energy supply maintains thermodynamic equilibrium. But it is now known that the law applies more widely. But just how much more widely?The absorptivity and emissivity of a medium are determined by its chemical constitution, by the physical geometrical arrangement of the chemical constituents, and by the way that energy is supplied to the medium to support the emission and govern the absorption. The physical geometrical arrangement of the chemical constituents matters because it affects how the supplied energy is transported and distributed within the medium, and how the chemical constituents are exposed to the contiguous medium. Thus the difference between a powdery or spicule-textured surface and a polished one.Einstein 1917 originated the idea that the absorptivity is the sum of an obvious kind of absorptivity in which a chemical species is simply lifted to a higher energy level, and a very non-obvious kind of absorptivity, indeed negative absorptivity, in which an incoming 'photon' triggers the release of identical copy of itself in the same direction. This is sometimes called stimulated emission and Einstein needed it to make sense of the Planck law. It is governed not by the availability of the low energy form of the relevant molecules, but by the availability of their high energy form. On the other hand, empirically detectable, or spontaneous, emissivity is governed only by the availability of the high energy form of the relevant molecules. 
The absorptivity and emissivity are therefore governed by different dynamics and can thus be expected to behave differently depending on how the energy is supplied and distributed to the medium.The full meaning of this was not really widely and fully understood until conditions of energy supply to a medium were varied experimentally, and the negative kind of absorption was demonstrated in the laser. Under the conditions of energy supply that support lasing, the absorptivity is negative. The emissivity is always positive, and so we have a clear example in which Kirchhoff's law does not apply because of the conditions of supply and distribution of energy to the medium. Kirchhoff (1858, 1860 in English) himself, in stating the conditions for his law to hold, was careful to exclude phosphorescence and fluorescence, but I feel sure he would have excluded lasing as well if he had known about it. But when we use his law we have the duty to specify the conditions of energy supply and distribution, to help justify our use of the law.Of course, many conditions of energy supply and distribution are near enough to those of thermodynamic equilibrium. This happens so often that people forget that they need to be specified, and that outside them, the law can be very far from applicability. People often implicitly assume that the thermodynamic equilibrium condition is sufficiently mimicked that they do not need to put on notice that it is being used. This is carefully explained in Mihalas and Mihalas 1984 at section 84 starting on page 386. Also Hottel and Sarofim 1967 gives a slight account of it. The Einstein A and B coefficient theory is set out for example in section 1.5 et seq. of R. Loudon 2000 'The Quantum Theory of Light' 3rd edition, Oxford University Press.There are two things to think about in our present concern. One is the empirical data equality, Aa = Ed, regardless of how we might or might not explain it by some theory. 
The other is Miskolczi's rather cavalier citation of the Kirchhoff law.I think that the empirical data equality deserves to be examined in its own right, without regard to any putative theoretical explanation. That is not the subject of this post.But here I would like to think a bit about the relevant physical theory. I will not directly focus on Kirchhoff's Law, because I just want to focus on the presently relevant physics. The Aa = Ed formula is in question. It refers to non-window wavenumbers. This means that for the relevant wavenumbers, the atmosphere is entirely opaque. An opaque object has an emissivity, and for the atmosphere for these wavenumbers, the emissivity is very little less than 1, say for argumentative definiteness without prejudice, 0.98 if you like. The land-sea surface also at these wavenumbers has an emissivity not too far from 0.98. (The situation is of course entirely different for the window wavenumbers, for which the atmosphere is transparent, and the emissivity will be far far less than 1, indeed nearly zero. But that is not immediately here relevant to the non-window wavenumbers that presently concern us.)Then we have two opaque media in contact, the land-sea surface and the atmosphere. If they are at the same temperatures at the contact interface (recalling that they have the same emissivity), then they will exchange thermal radiation with a net radiative flux density vector of zero at the contact interface; there may be a temperature gradient that crosses the interface with no discontinuity, and then there can still be heat transfer by conduction according to the Fourier heat diffusion law. 
The Fourier heat diffusion law admits that the conduction of heat cannot be measured with exclusion of the intrinsic thermal radiation of the conductive media, meaning that there is a fully Stefan's-law thermal level of radiative specific intensity throughout the opaque media, but the radiative heat transport vector, found by integrating that radiative intensity over the sphere, is zero, and the heat is transferred only by diffusion according to the Fourier law that depends on the temperature gradient, a quantity not given by the radiant intensity at a point, but requiring a spatial interval for its definition.The argument can then be put that the atmosphere in contact with and near the land-sea surface is in various states of motion. So, for that matter, is the sea, on the condensed medium side of the interface. According to the above physics, this will affect the emissivity and absorptivity: they are governed by the conditions of supply and distribution of energy to the media as noted above. Will these various states of motion be enough to take us away from the operation of the Kirchhoff law? Perhaps the atmosphere is acting like a laser? I think not. We have indeed already accepted that the emissivity is close to 0.98 on each side of the interface.What is really at stake here is the notion that the temperatures on either side of the interface are equal in the geometrically relevant ways. Mostly I think this is so because of the effects of strong convection in the lowest atmosphere and the presence of evaporation and condensation. The idea of coarse-graining is sometime invoked. Presumably there may be some small departures occasionally when the conditions of supply and distribution of energy are so extreme that Kirchhoff's law does not apply locally on those occasions. 
Even less often perhaps the conditions of supply and distribution of heat will be extreme enough to lead to large temperature discontinuities at the interface; but I think such occasions will be rare, and will likely very often be averaged away in the climate space-time scale of description. Very largely, the temperature continuity and adequate conduction-evaporation-condensation-convection condition will be satisfied and the practically-thermal condition of energy supply and distribution will apply and, for the relevant wavenumbers, the Kirchhoff law will apply.We do not actually need that Law in full generality. All we need is (a) temperature near-continuity and (b) emissivity near 0.98 in both media for the non-window wavenumbers.For window wavenumbers in
    clear skies, the opacity condition is not fulfilled and the situation is entirely different, but that is not immediately directly relevant to the question of explaining why Aa = Ed in clear skies.Downward values of the specific radiant intensity function for window wavenumbers will be very different between cloudy and clear skies, because opaque clouds radiate strongly also in the window wavenumbers, quite in contrast to the clear sky atmosphere. Under opaque 1.8 km clouds, there will be something like Kirchhoff's hohlraum condition, and Aa = Ed will prevail. The opaque 1.8 km clouds will radiate upwards at window wavenumbers. Their temperature of upwards emission will be much lower than the land-sea surface temperature, but they are radiating upwards into a medium of much less optical density because the water vapour content of the air above them is low. The temperature and opacity reduction effects nearly cancel, but Miskolczi's HARTCODE calculations indicate that the effective St will in the event actually be slightly greater from the cloud tops than it would be from the land-sea surface. In effect, it turns out that the St has as it were simply been translated nearly unchanged upwards from the land-sea surface to the tops of the clouds.This is the explanation of the maintenance of the St at a bit more than 60 W m^-2 in all sky conditions. And the explanation of why Aa = Ed overall for Miskolczi's TIGR data sample.

  83. The Kirchhoff radiative emissivity-absorptivity law is not strictly exactly to the point here. We are talking near it, but not strictly exactly on it.

    But let us note the Kirchhoff law anyway, just for interest. It is often cavalierly taken without meticulous regard to its precise range of applicability. Originally Kirchhoff’s Law was stated for the condition that the energy supply maintains thermodynamic equilibrium. But it is now known that the law applies more widely. But just how much more widely?

    The absorptivity and emissivity of a medium are determined by its chemical constitution, by the physical geometrical arrangement of the chemical constituents, and by the way that energy is supplied to the medium to support the emission and govern the absorption. The physical geometrical arrangement of the chemical constituents matters because it affects how the supplied energy is transported and distributed within the medium, and how the chemical constituents are exposed to the contiguous medium. Thus the difference between a powdery or spicule-textured surface and a polished one.

    Einstein 1917 originated the idea that the absorptivity is the sum of an obvious kind of absorptivity in which a chemical species is simply lifted to a higher energy level, and a very non-obvious kind of absorptivity, indeed negative absorptivity, in which an incoming ‘photon’ triggers the release of identical copy of itself in the same direction. This is sometimes called stimulated emission and Einstein needed it to make sense of the Planck law. It is governed not by the availability of the low energy form of the relevant molecules, but by the availability of their high energy form. On the other hand, empirically detectable, or spontaneous, emissivity is governed only by the availability of the high energy form of the relevant molecules. The absorptivity and emissivity are therefore governed by different dynamics and can thus be expected to behave differently depending on how the energy is supplied and distributed to the medium.

  84. The Kirchhoff radiative emissivity-absorptivity law is not strictly exactly to the point here. We are talking near it, but not strictly exactly on it.But let us note the Kirchhoff law anyway, just for interest. It is often cavalierly taken without meticulous regard to its precise range of applicability. Originally Kirchhoff's Law was stated for the condition that the energy supply maintains thermodynamic equilibrium. But it is now known that the law applies more widely. But just how much more widely?The absorptivity and emissivity of a medium are determined by its chemical constitution, by the physical geometrical arrangement of the chemical constituents, and by the way that energy is supplied to the medium to support the emission and govern the absorption. The physical geometrical arrangement of the chemical constituents matters because it affects how the supplied energy is transported and distributed within the medium, and how the chemical constituents are exposed to the contiguous medium. Thus the difference between a powdery or spicule-textured surface and a polished one.Einstein 1917 originated the idea that the absorptivity is the sum of an obvious kind of absorptivity in which a chemical species is simply lifted to a higher energy level, and a very non-obvious kind of absorptivity, indeed negative absorptivity, in which an incoming 'photon' triggers the release of identical copy of itself in the same direction. This is sometimes called stimulated emission and Einstein needed it to make sense of the Planck law. It is governed not by the availability of the low energy form of the relevant molecules, but by the availability of their high energy form. On the other hand, empirically detectable, or spontaneous, emissivity is governed only by the availability of the high energy form of the relevant molecules. 
The absorptivity and emissivity are therefore governed by different dynamics and can thus be expected to behave differently depending on how the energy is supplied and distributed to the medium.

  85. The full meaning of this was not really widely and fully understood until conditions of energy supply to a medium were varied experimentally, and the negative kind of absorption was demonstrated in the laser. Under the conditions of energy supply that support lasing, the absorptivity is negative. The emissivity is always positive, and so we have a clear example in which Kirchhoff’s law does not apply because of the conditions of supply and distribution of energy to the medium. Kirchhoff (1858, 1860 in English) himself, in stating the conditions for his law to hold, was careful to exclude phosphorescence and fluorescence, but I feel sure he would have excluded lasing as well if he had known about it. But when we use his law we have the duty to specify the conditions of energy supply and distribution, to help justify our use of the law.

    Of course, many conditions of energy supply and distribution are near enough to those of thermodynamic equilibrium. This happens so often that people forget that they need to be specified, and that outside them, the law can be very far from applicability. People often implicitly assume that the thermodynamic equilibrium condition is sufficiently mimicked that they do not need to put on notice that it is being used. This is carefully explained in Mihalas and Mihalas 1984 at section 84 starting on page 386. Also Hottel and Sarofim 1967 gives a slight account of it. The Einstein A and B coefficient theory is set out for example in section 1.5 et seq. of R. Loudon 2000 ‘The Quantum Theory of Light’ 3rd edition, Oxford University Press.

  86. The full meaning of this was not really widely and fully understood until conditions of energy supply to a medium were varied experimentally, and the negative kind of absorption was demonstrated in the laser. Under the conditions of energy supply that support lasing, the absorptivity is negative. The emissivity is always positive, and so we have a clear example in which Kirchhoff's law does not apply because of the conditions of supply and distribution of energy to the medium. Kirchhoff (1858, 1860 in English) himself, in stating the conditions for his law to hold, was careful to exclude phosphorescence and fluorescence, but I feel sure he would have excluded lasing as well if he had known about it. But when we use his law we have the duty to specify the conditions of energy supply and distribution, to help justify our use of the law.Of course, many conditions of energy supply and distribution are near enough to those of thermodynamic equilibrium. This happens so often that people forget that they need to be specified, and that outside them, the law can be very far from applicability. People often implicitly assume that the thermodynamic equilibrium condition is sufficiently mimicked that they do not need to put on notice that it is being used. This is carefully explained in Mihalas and Mihalas 1984 at section 84 starting on page 386. Also Hottel and Sarofim 1967 gives a slight account of it. The Einstein A and B coefficient theory is set out for example in section 1.5 et seq. of R. Loudon 2000 'The Quantum Theory of Light' 3rd edition, Oxford University Press.

  87. There are two things to think about in our present concern. One is the empirical data equality, Aa = Ed, regardless of how we might or might not explain it by some theory. The other is Miskolczi’s rather cavalier citation of the Kirchhoff law.

    I think that the empirical data equality deserves to be examined in its own right, without regard to any putative theoretical explanation. That is not the subject of this post.

    But here I would like to think a bit about the relevant physical theory. I will not directly focus on Kirchhoff’s Law, because I just want to focus on the presently relevant physics. The Aa = Ed formula is in question. It refers to non-window wavenumbers. This means that for the relevant wavenumbers, the atmosphere is entirely opaque. An opaque object has an emissivity, and for the atmosphere for these wavenumbers, the emissivity is very little less than 1, say for argumentative definiteness without prejudice, 0.98 if you like. The land-sea surface also at these wavenumbers has an emissivity not too far from 0.98. (The situation is of course entirely different for the window wavenumbers, for which the atmosphere is transparent, and the emissivity will be far far less than 1, indeed nearly zero. But that is not immediately here relevant to the non-window wavenumbers that presently concern us.)

    Then we have two opaque media in contact, the land-sea surface and the atmosphere. If they are at the same temperature at the contact interface (recalling that they have the same emissivity), then they will exchange thermal radiation with a net radiative flux density vector of zero at the contact interface. There may be a temperature gradient that crosses the interface with no discontinuity, and then there can still be heat transfer by conduction according to the Fourier heat diffusion law. The Fourier law admits that the conduction of heat cannot be measured with exclusion of the intrinsic thermal radiation of the conductive media: there is a fully Stefan's-law thermal level of radiative specific intensity throughout the opaque media, but the radiative heat transport vector, found by integrating that radiative intensity over the sphere, is zero. The heat is therefore transferred only by diffusion according to the Fourier law, which depends on the temperature gradient, a quantity not given by the radiant intensity at a point but requiring a spatial interval for its definition.
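
    The zero-net-flux point can be put as one line of arithmetic. A minimal sketch of my own, using a simplified gray-body exchange expression (exact exchange factors between two gray media differ, but every form carries the same T1^4 - T2^4 factor, so the conclusion at equal temperatures is unchanged):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_exchange(t_surface, t_air, emissivity=0.98):
    """Simplified gray-body net radiative exchange across the interface (W m^-2)."""
    return emissivity * SIGMA * (t_surface**4 - t_air**4)

print(net_exchange(288.0, 288.0))  # equal temperatures: exactly 0.0
print(net_exchange(288.0, 287.5))  # a small discontinuity: a few W m^-2
```

    Whatever the common emissivity, the net radiative flux vanishes when the two sides share a temperature; any remaining heat transfer is conductive, driven by the gradient.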

    The argument can then be put that the atmosphere in contact with and near the land-sea surface is in various states of motion. So, for that matter, is the sea, on the condensed medium side of the interface. According to the above physics, this will affect the emissivity and absorptivity: they are governed by the conditions of supply and distribution of energy to the media as noted above. Will these various states of motion be enough to take us away from the operation of the Kirchhoff law? Perhaps the atmosphere is acting like a laser? I think not. We have indeed already accepted that the emissivity is close to 0.98 on each side of the interface.

    1. Christopher,
      You are using an inadequate model here. It reduces the spectrum to a transparent window and a fully opaque region. And then it is true that Aa=Ed. But the real spectrum is well known, and it is different in important ways.
      I discussed real spectra here at CA. Scroll down a bit to see Figs 8.2a and 8.2b. You should study them carefully, comparing with the BB curves.
      Fig 8.2a shows the upgoing spectrum at 20km – TOA. The window section, near 11μ, emerges as if at ground temp (268K). But there is radiation at all frequencies, most at an apparent lower temp – down to about 220K. And this is an important part of outgoing power.
      Your simple model cannot explain this. The window is OK, but there would be no power available to radiate at other frequencies, because Aa=Ed implies no transmission.
      Fig 8.2b shows downwelling IR. Yes, the window near 11μ has almost zero (but not quite). And the region near 15μ has radiance as if from air at ground temp. But there is an important intermediate region, as if emitted from colder regions (which it is).
      Its importance relates to the above contradiction. That intermediate region conveys the power that is emitted at all non-window frequencies, including 15μ. It can do so because in that region, Ed < Aa. This discrepancy is a small part of the total but is vital and GHG-sensitive.
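      The "as if at" temperatures read off Figs 8.2a and 8.2b are brightness temperatures: the blackbody temperature that reproduces the measured radiance at that wavenumber. A sketch of my own (not Nick's code) of the round trip at a single wavenumber, 900 cm^-1, i.e. about 11μ, near the window:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def planck_radiance(wavenumber_cm, temp):
    """Planck spectral radiance at a wavenumber given in cm^-1."""
    nu = wavenumber_cm * 100.0 * C                 # cm^-1 -> Hz
    return (2.0 * H * nu**3 / C**2) / math.expm1(H * nu / (K_B * temp))

def brightness_temperature(wavenumber_cm, radiance):
    """Blackbody temperature reproducing the given radiance at this wavenumber."""
    nu = wavenumber_cm * 100.0 * C
    return (H * nu / K_B) / math.log1p(2.0 * H * nu**3 / (C**2 * radiance))

# Round trip: radiance emitted "as if at 268 K" near the 11 micron window
# maps back to a brightness temperature of 268 K.
r = planck_radiance(900.0, 268.0)
print(round(brightness_temperature(900.0, r), 1))  # 268.0
```

      Applying the inversion point by point across a measured spectrum is what yields the 268 K window section and the ~220 K wings described above.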

      1. Nick,

        Thank you for your kind and thoughtful reply. Sad to say, I am not familiar with the technique for this blog, and my post was mangled in transmission. I tried posting it in four parts, but only one made the distance. The part that I am now reading on my monitor, and that I suppose you are replying to, is as it were page two of my essay. Naturally, without pages 1, 3, and 4, my reasoning is not ready to be examined. I will now try to post page 1 in two parts, in case length is the problem. I think my argument is valid. Noting your comments, I will tidy up some of the original before posting, though I may not be able to do that till later this afternoon. I understand your comments, and I think they are addressed, or will be, in the tidied-up post. Christopher

  88. What is really at stake here is the notion that the temperatures on either side of the interface are equal in the geometrically relevant ways. Mostly I think this is so because of the effects of strong convection in the lowest atmosphere and the presence of evaporation and condensation. The idea of coarse-graining is sometimes invoked. Presumably there may be some small departures occasionally, when the conditions of supply and distribution of energy are so extreme that Kirchhoff's law does not apply locally on those occasions. Even less often, perhaps, the conditions of supply and distribution of heat will be extreme enough to lead to large temperature discontinuities at the interface; but I think such occasions will be rare, and will likely very often be averaged away in the climate space-time scale of description. Very largely, the temperature-continuity and adequate conduction-evaporation-condensation-convection condition will be satisfied, the practically-thermal condition of energy supply and distribution will apply, and, for the relevant wavenumbers, the Kirchhoff law will apply.

    We do not actually need that Law in full generality. All we need is (a) temperature near-continuity and (b) emissivity near 0.98 in both media for the non-window wavenumbers.

    For window wavenumbers in clear skies, the opacity condition is not fulfilled and the situation is entirely different, but that is not immediately directly relevant to the question of explaining why Aa = Ed in clear skies.

    Downward values of the specific radiant intensity function for window wavenumbers will be very different between cloudy and clear skies, because opaque clouds radiate strongly also in the window wavenumbers, quite in contrast to the clear sky atmosphere. Under opaque 1.8 km clouds, there will be something like Kirchhoff’s hohlraum condition, and Aa = Ed will prevail. The opaque 1.8 km clouds will radiate upwards at window wavenumbers. Their temperature of upwards emission will be much lower than the land-sea surface temperature, but they are radiating upwards into a medium of much less optical density because the water vapour content of the air above them is low. The temperature and opacity reduction effects nearly cancel, but Miskolczi’s HARTCODE calculations indicate that the effective St will in the event actually be slightly greater from the cloud tops than it would be from the land-sea surface. In effect, it turns out that the St has as it were simply been translated nearly unchanged upwards from the land-sea surface to the tops of the clouds.

    This is the explanation of the maintenance of the St at a bit more than 60 W m^-2 in all sky conditions. And the explanation of why Aa = Ed overall for Miskolczi’s TIGR data sample.
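
    For concreteness, the arithmetic behind the tau near 1.87 discussed in this thread can be sketched as follows. This is my sketch; it assumes the flux optical depth is defined from the flux transmittance T_A = S_T / S_U as tau = -ln(T_A), and uses the flux ranges quoted in the head post:

```python
import math

def flux_optical_depth(s_u, s_t):
    """tau = -ln(S_T / S_U): flux optical depth from the surface upward
    flux S_U and the transmitted flux S_T (both in W m^-2)."""
    return -math.log(s_t / s_u)

# The ranges quoted in the head post pin tau into a narrow band:
print(round(flux_optical_depth(396.0, 63.0), 2))   # 1.84
print(round(flux_optical_depth(380.0, 58.5), 2))   # 1.87
```

    The point of the head post is that St staying near 60 W m^-2 while S_U varies only between about 396 and 380 W m^-2 is exactly what holds tau in the 1.84 to 1.87 range.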


  91. Steve and Miklos,
    I came somewhat late to the discussion but enjoyed it. Reading through M's paper, it intrigued me that he seemed to be the first one who used finite semi-transparent boundary conditions to solve the Schwarzschild equation within the Earth's atmosphere.
    Is there any reason why nobody has ever done this? I did not find an argument yet that makes the generally applied semi-infinite boundary conditions preferable other than they are easier to solve.
    Reading this in all the books about climate change that deal with radiative transfer, I had the feeling of going through the motions without careful consideration. So I liked M's new approach.
    It seems to me that the semi-transparent boundary conditions indeed provide an avenue to avoid the temperature discontinuity at the ground obtained by applying the semi-infinite boundary conditions.
    Despite all the criticism I read on the internet, isn't that finding or application, together with an experimental test, a worthy contribution to the scientific community? I guess it would be in physics.
    By the way: reading through the books, I found the following quote about Kirchhoff’s law in Goody and Yung’s Atmospheric Radiation, Theoretical Basis 2nd edition on page 3:
    “Since clouds, ground, and atmosphere do not differ greatly in temperature, it follows from Kirchhoff’s law that emission and absorption are approximately equal to each other.”
    I thought I would cite that, since there are sites on the internet, and contributions, that like to discredit Dr. Miskolczi and his paper over a similar quote. But this seems to me typical of the AGW discussion: always looking for the small splinter in the other's eye.
    Best regards
    Guenter

    1. Guenter,
      I did not find an argument yet that makes the generally applied semi-infinite boundary conditions preferable other than they are easier to solve.
      There is a very simple argument that you have ignored. The solution of Milne and others (which is not semi-infinite) yields a flux which matches OLR at TOA. M’s does not.

    2. “By the way: reading through the books, I found the following quote about Kirchhoff’s law in Goody and Yung’s Atmospheric Radiation, Theoretical Basis 2nd edition on page 3:
      “Since clouds, ground, and atmosphere do not differ greatly in temperature, it follows from Kirchhoff’s law that emission and absorption are approximately equal to each other.”
      I thought I would cite that, since there are sites on the internet, and contributions, that like to discredit Dr. Miskolczi and his paper over a similar quote. But this seems to me typical of the AGW discussion: always looking for the small splinter in the other's eye."

      The great flaw in this argument of Guenter's, of course, is that the fraction of LW IR leaving at TOA which Miskolczi 'lumps' into his S_T that is emitted from the tops of clouds (due to release of latent heat during condensation) was never 'absorbed' by the clouds in the first place. That fraction came from evapotranspiration (ET) from the surface, i.e. its origin is non-radiative.

      The fact that Miskolczi 'chooses' to add that fraction of ET which is radiated upwards to TOA from the tops of the clouds to the true S_T which is transmitted from BOA to TOA (escaping absorption along the way) is purely an idiosyncrasy of Miskolczi's. Whether one considers the creation of S_T as a two-component 'lumped parameter' to be justified or not, the fact remains that, depending upon the %cloud present, a significant fraction of Miskolczi's S_T has a non-radiative, i.e. convective, origin. Thus Kirchhoff-type (or whatever) arguments are irrelevant to it.

      1. It isn’t just radiation from clouds. All of the 100 or so W/m2 making up LH and convection goes to heat the atmosphere, much of it at low levels. This heat adds to E_D and E_U, and as you say, a bit to S_T, but did not come from A_A.

      2. Agreed. I like your phrase ‘much of it at low levels’.

        That is why I (initially at least) set the fraction of A_A returning to BOA which contributes to E_D, and the fraction of DT returning to BOA which contributes to E_D to be 0.625 (62.5%) by analogy with the fraction of ET for a 1st pass estimate of E_D in my (crude) little spreadsheet model.

        If one fits the (60% cloud cover case) values for A_A and E_D as per T,F&K09, then the fraction actually works out to be 0.66 (66%) for A_A if the fraction of DT (a minor component anyway) stays at 0.625 (62.5%). To me this suggests slightly more of A_A returns to contribute to E_D than of ET, but it is close. This is to be expected, because most LW IR from BOA is absorbed below the mean cloud layer level.
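
        The fitting step described above can be written out generically. The flux numbers below are placeholders of my own, not the values in the spreadsheet or in T,F&K09; the point is only the algebra of the two-fraction bookkeeping:

```python
def fit_f_aa(e_d, a_a, dt, f_dt=0.625):
    """Solve E_D = f_aa*A_A + f_dt*DT for f_aa, holding f_dt fixed."""
    return (e_d - f_dt * dt) / a_a

def modelled_e_d(a_a, dt, f_aa, f_dt=0.625):
    """Reconstruct E_D from the two fitted fractions."""
    return f_aa * a_a + f_dt * dt

# Hypothetical fluxes in W m^-2 (placeholders, not the spreadsheet's values):
a_a, dt, e_d = 350.0, 100.0, 330.0
f_aa = fit_f_aa(e_d, a_a, dt)
print(abs(modelled_e_d(a_a, dt, f_aa) - e_d) < 1e-9)  # True: round trip closes
```

        With real A_A, DT and E_D values in place of the placeholders, fit_f_aa is the one-line calculation that yields the 0.66 quoted above.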

        What I find intriguing about this crude spreadsheet approach is that it is very hard to see how a reduction in OLR (positive forcing) can arise from the situation where %cloud is greater than the global average of ~60%.

        In this sense I see where Coho (Anthony) is coming from and tend to agree with him.

        The only possible conclusion if T&F09 are correct is that as %cloud cover rises above ~60% the fraction of latent heat which is radiated through TOA falls off dramatically in a non-linear way.

        I haven’t got T&F09 yet but if they can’t prove that point then their contention won’t get up with me.

        If we stop and think about where high %cloud cover commonly exists, it is in the equatorial band, over the gyres, and over places like the Amazon and Congo. These are all places where highly energetic cu-nim storms lift cloud right up to the tropopause, and which are characterized by high precipitation rates. I have spent a fair bit of time in the Torres, PNG, New Caledonia etc. and seen these storms for myself, both from the surface and from the air, numerous times.

        To assert that the fraction of ET which departs TOA as ET_U under such circumstances is proportionately lower than for the average cloud cover situation (temperate latitudes) is implausible to me. They are called 'temperate' for that very reason.

        So far, I’m with Lindzen and Spencer et al (and Coho) on this.


  95. Nick,
    could you educate me: which radiative transfer equation applies in this context, and how is it defined? What is the reason for choosing one boundary condition over the other?
    If you are right it would be easy to refute Miskolczi’s equation.
    I have not seen all the mathematical experts that are in climate science doing that.
    Milne’s equations I see usually used in the context of radiative transfer problems dealing with scattering.
    Especially two cases:
    Firstly the limit of a scattering problem with an embedded source and absorption that goes towards zero. The source term is zero in this case.
    Secondly, diffuse reflection problems with partly reflecting boundaries.
    Of course you are always free to choose your boundary conditions.
    I didn’t want to ignore Milne’s work and apologize.
    I just have not come across a paper or a book that used the semi-transparent boundary conditions to derive the relationship between surface air temperature, ground temperature and optical depth in the infrared region. In hindsight I found it interesting that Dr. Miskolczi did that and compared the results. This is why I was looking for an argument not to use them, since you can always apply different boundary conditions, get the solutions and compare afterwards. Moreover, it is good practice for an open-minded scientist to do so. That is what Miskolczi did. My feeling when I came across the blogs about his work was that he is being discredited for political reasons.
    But what he did is very valuable; he brought in a new perspective that is what keeps the science going.
    Can I ask a stupid question? Maybe I missed something in the discussion; perhaps you can repeat it for me: why is Miskolczi not matching OLR at TOA?
    What do you mean, the solution of Milne and others yields a flux that matches OLR at TOA? Isn't that a circular argument, since you prescribe the boundary condition matching OLR at TOA to get the solutions?

    Best regards
    Guenter

    1. Guenther,
      M has defined the equations. He's solving eq (12), which is a reduced version of (11). In fact, it is just an indefinite integral of a constant.

      The bc for TOA matching is stated and applied in the discussion preceding Eq 15. This eq includes the condition, and you can recover it by putting tau=0.

      As I said above, you can also put tau=0 in eq 21, M's eq, to see that it does not give the same condition. The first task of any model like this is to conserve energy. That means that the outgoing energy must match the energy generated. M's does not do that. Milne's does. M does not seem to have noticed; otherwise, at least a comment would be in order. He did not "compare afterwards".

      Refutation? This paper was firmly rejected by a large number of reputable physics and climate journals. The referees gave their reasons, some of which FM quoted.

    2. Guenter “Maybe I missed something in the discussion, perhaps you can repeat it for me: why is Miskolczi not matching OLR at TOA?”

      Actually he is. He is not "choosing" boundary conditions; they fall out of the solution as derived in Appendix B. If you evaluate eqn 20 for tau = 0, i.e. for a transparent (or even non-existent) atmosphere, you get Bo = Bg, as we would expect. BTW, I don't think anyone expects Bg to be the same for a transparent atmosphere as for an opaque or semi-transparent one, apart from being treated as such in the classical solution. Nick is not being quite honest (perhaps he genuinely does not know), but B(tau) really looks more like those integrals in Eqn B3 than a simple indefinite integral of a constant. From your first post I expect that you are already aware that to get from B3 to something nice and simple like Eqn 15 one needs to evaluate the integrals from 0 to infinity.
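
      For readers following this boundary-condition exchange, the conservation point can be checked against the textbook gray Eddington solution, sigma*T^4(tau) = (3/4)*OLR*(tau + 2/3). This is a generic sketch of my own of the classical result, not a reproduction of Miskolczi's Eq 20 or 21: in radiative equilibrium the net upward flux F = (4/3)*d(sigma*T^4)/d(tau) is constant with height and equal to the OLR, so the outgoing flux at TOA matches the energy the column sheds by construction.

```python
OLR = 239.0  # W m^-2, illustrative outgoing longwave radiation

def source_function(tau):
    """sigma*T^4 as a function of flux optical depth in the gray Eddington model."""
    return 0.75 * OLR * (tau + 2.0 / 3.0)

def net_flux(tau, d_tau=0.01):
    """F = (4/3)*d(sigma*T^4)/d(tau), by finite difference (exact here: linear)."""
    return (4.0 / 3.0) * (source_function(tau + d_tau) - source_function(tau)) / d_tau

# The net flux is the same at every level and equals the OLR:
for tau in (0.0, 1.0, 3.0):
    print(round(net_flux(tau), 6))  # 239.0 at each level
```

      Whether a given alternative solution satisfies the same flux constancy at tau = 0 is precisely what the eq 15 versus eq 21 dispute above is about.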

  97. Crikey Nick, I’m in seventh heaven reading you boffins go about your business; ignorance is bliss as they say; but a slight query; you say the Milne model incorporated into the K&T efforts has successfully matched OLR at TOA; that, as Lindzen would say, is problematic; as to the semi-infinite moniker which you object to; doesn’t the Milne model state that as extra CO2 is put into the atmosphere the tropopause rises as the stratosphere cools and contracts due to the rise in the final emission level; given this, as CO2 levels continue to rise isn’t it part of the Milne model that there is no upper limit as to how high the tropopause will rise; and given this, is it not appropriate for that to be called semi-infinite?

    On another tack, it is a bit unfair to pillory M theory because it only applies to clear-sky conditions; that is, after all, what his paper set out to do: to establish a non-semi-infinite description, or semi-transparent explanation, for clear-sky radiative flux. Steve's solution for all-sky conditions is worthy of further elaboration, but I can't help but feel it extends the M clear-sky explanation into the 'real world'; and if Steve is right then the Milne 'semi-infinite' [or whatever term you want to use, Nick] model is really in a pickle, because Steve's values show that, overall, water/clouds in the atmosphere are a negative feedback [or, as Spencer and Braswell note, cause]; this is at loggerheads with the orthodoxy, which relies on water/clouds being a positive feedback; see, for example, the new paper by Trenberth and Fasullo which asserts this revolutionary effect;

    “While there is a large increase in the greenhouse effect from increasing greenhouse gases and water vapor [as a feedback], this is offset to a large degree by decreasing greenhouse effect from reducing cloud cover and increasing radiative emissions from higher temperatures.”

    On the face of it that is rather amazing.

    1. Coho,
      The Milne model is in no way used for K&T, or any other climate study.

      The Milne model says nothing about CO2 or tropopauses. It’s a general planetary model, which actually has very limited applicability to Earth, for the reasons I’ve given above.

      And again, the Milne model says nothing about clouds or feedbacks. You just don’t see things for what they are.

  98. Crikey Nick, I'm in seventh heaven reading you boffins go about your business; ignorance is bliss as they say; but a slight query; you say the Milne model incorporated into the K&T efforts has successfully matched OLR at TOA; that, as Lindzen would say, is problematic; as to the semi-infinite moniker which you object to; doesn't the Milne model state that as extra CO2 is put into the atmosphere the tropopause rises as the stratosphere cools and contracts due to the rise in the final emission level; given this, as CO2 levels continue to rise isn't it part of the Milne model that there is no upper limit as to how high the tropopause will rise; and given this, is it not appropriate for that to be called semi-infinite?On another tack, it is a bit unfair to pillory M theory because it only applies to clear-sky conditions; that is, after all, what his paper set out to do; to establish a non-semi-infinite description or semi-transparent explanation for clear-sky radiative flux. Steve's solution for all-sky conditions is worthy of further elaboration but I can't help but feel it extends the M clear-sky explanation into the 'real-world'; and if Steve is right then the Milne 'semi-infinite' [or whatever term you want to use Nick] model is really in a pickle because Steve's values show that, overall, water/clouds in the atmosphere are a negative feedback [or as Spencer and Braswell note, cause]; this is at loggerheads with the orthodoxy which relies on water/clouds being a positive feedback; see, for example the new paper by Trenberth and Fasullo which asserts this revolutionary effect;”While there is a large increase in the greenhouse effect from increasing greenhouse gases and water vapor [as a feedback], this is offset to a large degree by decreasing greenhouse effect from reducing cloud cover and increasing radiative emissions from higher temperatures.”On the face of it that is rather amazing.

  99. Nick, Thank you for your kind and thoughtful reply. Sad to say I am not familiar with the technique for this blog and my post was mangled in transmission. I tried posting it in four parts, but only one made the distance. The part that I am now reading on my monitor, and that I suppose you are replying to, is as it were page two of my essay. Naturally without pages 1, 3, and 4, my reasoning is not ready to be examined. I will now try to post page 1 in two parts, in case length is the problem. I think my argument is valid. Noting your comments I will tidy up some of the original before posting, but I may not be able to do that till later this afternoon. I understand your comments and I think they are addressed, or will be in the tidied-up post. Christopher

  100. Coho, The Milne model is in no way used for K&T, or any other climate study. The Milne model says nothing about CO2 or tropopauses. It's a general planetary model, which actually has very limited applicability to Earth, for the reasons I've given above. And again, the Milne model says nothing about clouds or feedbacks. You just don't see things for what they are.

  101. “By the way: reading through the books, I found the following quote about Kirchhoff’s law in Goody and Yung’s Atmospheric Radiation, Theoretical Basis, 2nd edition, on page 3: ‘Since clouds, ground, and atmosphere do not differ greatly in temperature, it follows from Kirchhoff’s law that emission and absorption are approximately equal to each other.’ I thought I’d cite that, since there are sites on the Internet and contributions that like to discredit Dr. Miskolczi and his paper on a similar quote. But this seems to me typical for the AGW discussion, always looking for small splinters in the other’s eye.”
The great flaw in this argument of Guenter's, of course, is that the fraction of LW IR leaving at TOA which Miskolczi 'lumps' into his S_T that is emitted from the tops of clouds (due to release of latent heat during condensation) was never 'absorbed' by the clouds in the first place. That fraction came from evapotranspiration (ET) from the surface, i.e. its origin is non-radiative.
The fact that Miskolczi 'chooses' to add that fraction of ET which is radiated upwards to TOA from the tops of the clouds to the true S_T which is transmitted from BOA to TOA (escaping absorption along the way) is purely an idiosyncrasy of Miskolczi. Whether one considers the creation of S_T as a 2-component 'lumped parameter' to be justified or not, the fact remains that, depending upon the %cloud present, a significant fraction of Miskolczi's S_T has a non-radiative, i.e. convective, origin. Thus Kirchhoff (or whatever)-type arguments are irrelevant to it.

  102. Guenther, M has defined the equations. He's solving Eq (12), which is a reduced version of (11). In fact, it is just an indefinite integral of a constant. The bc for TOA matching is stated and applied in the discussion preceding Eq 15. This eq includes the condition, and you can recover it by putting tau=0. As I said above, you can also put tau=0 in Eq 21, M's eq, to see that it does not give the same condition. The first task of any model like this is to conserve energy. That means that the outgoing energy must match the energy generated. M's does not do that. Milne's does. M does not seem to have noticed. Otherwise, at least a comment would be in order. He did not “compare afterwards”. Refutation? This paper was firmly rejected by a large number of reputable physics and climate journals. The referees gave their reasons, some of which FM quoted.
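For reference, a textbook sketch of the solution Nick is invoking (standard Eddington gray-atmosphere form, in radiative equilibrium with constant net upward flux F = OLR; the notation here is mine, not necessarily Milne's or Miskolczi's):

```latex
% Eddington solution for a gray atmosphere in radiative equilibrium:
\pi B(\tau) \;=\; \frac{F}{2}\left(1 + \tfrac{3}{2}\,\tau\right), \qquad F = \mathrm{OLR}.
% At \tau = 0 this gives the TOA matching condition \pi B(0) = F/2,
% i.e. the "skin" relation \sigma T_0^4 = \mathrm{OLR}/2, and the net
% outgoing flux equals F at every level, so energy conservation is built in.
```

Putting tau = 0 here recovers the boundary condition referred to above; the claim at issue is that Eq 21 evaluated at tau = 0 does not give the same thing.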

  103. It isn't just radiation from clouds. All of the 100 or so W/m2 making up LH and convection goes to heat the atmosphere, much of it at low levels. This heat adds to E_D and E_U, and as you say, a bit to S_T, but did not come from A_A.
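Nick's tally can be cross-checked with the round global-mean numbers from T,F&K09 quoted elsewhere in this thread (all figures approximate; the variable names are mine). A minimal sketch:

```python
# Sanity check of the global-mean energy budget (approximate T,F&K09 values, W/m^2).
SW_surface = 161.0   # solar absorbed at the surface
E_D        = 333.0   # back radiation absorbed by the surface
S_U        = 396.0   # surface LW emission
SH         = 17.0    # sensible heat (thermals)
LH         = 80.0    # latent heat (evapotranspiration)
S_T        = 40.0    # atmospheric window (transmitted directly to space)

# Surface balance: energy in vs energy out (small positive residual = net surface heating).
surface_in  = SW_surface + E_D
surface_out = S_U + SH + LH
residual = surface_in - surface_out
print(f"surface residual: {residual:+.1f} W/m^2")

# Non-radiative heating of the atmosphere: never appears in A_A = S_U - S_T.
A_A = S_U - S_T
non_radiative = SH + LH
print(f"A_A = {A_A:.0f} W/m^2, non-radiative input = {non_radiative:.0f} W/m^2")
```

The ~97 W/m^2 of non-radiative input heats the atmosphere without ever passing through A_A, which is Nick's point.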

  104. Agreed. I like your phrase 'much of it at low levels'. That is why I (initially at least) set the fraction of A_A returning to BOA which contributes to E_D, and the fraction of DT returning to BOA which contributes to E_D, to be 0.625 (62.5%), by analogy with the fraction of ET, for a 1st-pass estimate of E_D in my (crude) little spreadsheet model. If one fits the 60% cloud cover case for A_A and E_D as per T,F&K09, then the fraction actually works out to be 0.66 (66%) for A_A if the fraction of DT (a minor component anyway) stays at 0.625 (62.5%). To me this suggests slightly more of A_A returns to contribute to E_D than of ET, but it is close. This is to be expected because most LW IR from BOA is absorbed below the mean cloud layer level. What I find intriguing about this crude spreadsheet approach is that it is very hard to see how a reduction in OLR (positive forcing) can arise from the situation where %cloud is greater than the global average of ~60%. In this sense I see where Coho (Anthony) is coming from and tend to agree with him. The only possible conclusion, if T&F09 are correct, is that as %cloud cover rises above ~60% the fraction of latent heat which is radiated through TOA falls off dramatically in a non-linear way. I haven't got T&F09 yet, but if they can't prove that point then their contention won't get up with me. If we stop and think about where high %cloud cover commonly exists, it is in the equatorial band, over the gyres, and over places like the Amazon and Congo. These are all places where highly energetic cu-nim storms lift cloud right up to the tropopause and are characterized by high precipitation rates. I have spent a fair bit of time in the Torres, PNG, New Caledonia etc. and seen these storms for myself both from the surface and from the air numerous times.
To assert that the fraction of ET which departs TOA as ET_U under such circumstances is proportionately lower than for the average cloud cover situation (temperate latitudes) is implausible to me. They are called 'temperate' for that very reason. So far, I'm with Lindzen and Spencer et al (and Coho) on this.
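Purely as an illustration of the fitting step Steve describes (this is not his actual spreadsheet; the two-term split and the helper name `fit_AA_fraction` are my assumptions), one can solve for the A_A fraction needed to close a given E_D:

```python
# Hypothetical two-term partition of back radiation:
#   E_D = f_AA * A_A + f_conv * (ET + DT)
# Given target fluxes and the convective fraction, solve for f_AA.
# Illustrative only -- Steve's spreadsheet evidently carries more terms.
def fit_AA_fraction(E_D, A_A, ET, DT, f_conv=0.625):
    """Return the A_A -> E_D fraction needed to close the balance."""
    return (E_D - f_conv * (ET + DT)) / A_A

# T,F&K09-like inputs (W/m^2): E_D = 333, A_A = 356, ET = 80, DT = 17.
f = fit_AA_fraction(E_D=333.0, A_A=356.0, ET=80.0, DT=17.0)
print(f"required A_A fraction: {f:.3f}")
```

This bare two-term version needs a fraction nearer 0.77 than Steve's fitted 0.66, which suggests his sheet distributes E_D over additional contributors as well.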

  105. Alright Nick, what atmospheric model does AGW rely on then? Does it have a discontinuity between the surface and the immediate atmosphere or BOA; and does it rely on a rising tropopause due to an increase in atmospheric CO2 levels? My point here is what atmospheric processes does AGW rely on? The goal-posts seem to be a bit fuzzy.

    Steve; wouldn’t ET_U continually be reducing [the] surface/BOA discontinuity until the feedback from the cloud build-up from increasing temperatures readjusted the equilibrium back to the pre-perturbation [through CO2 increase] state? Or am I confusing heat transfer with radiative fluxes? Even so, wouldn’t the result be the same at TOA and BOA?

    The abstract of that new Trenberth and Fasullo paper is here;

    http://www.agu.org/pubs/crossref/2009/2009GL037527.shtml

    1. “There is an increase in net radiation absorbed, but not in ways commonly assumed.”

      Damning, really. Love to read the paper. Now what came first, the clouds or the heat?

      1. The Abstract

        Global climate models used in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) are examined for the top‐of‐atmosphere radiation changes as carbon dioxide and other greenhouse gases build up from 1950 to 2100. There is an increase in net radiation absorbed, but not in ways commonly assumed. While there is a large increase in the greenhouse effect from increasing greenhouse gases and water vapor (as a feedback), this is offset to a large degree by a decreasing greenhouse effect from reducing cloud cover and increasing radiative emissions from higher temperatures. Instead the main warming from an energy budget standpoint comes from increases in absorbed solar radiation that stem directly from the decreasing cloud amounts. These findings underscore the need to ascertain the credibility of the model changes, especially insofar as changes in clouds are concerned.

        Well, this puts T&F09 firmly in the position that increasing Greenhouse should lead to a reduction in %cloud amount, i.e. cloud cover should be trending downwards. While there is some evidence, as noted previously, that that may be the case, it is controversial.

        Let me put in my 5 cents worth:

        Increased Greenhouse by MEP should increase the ~50% of EP processes, i.e. the hydrologic cycle (O&O96, many MEP papers since). Thus an increase in lower troposphere temperatures should increase ET, hence cloud cover, and also decrease upper troposphere humidity (i.e. Pauluis’ ‘dehumidifier’). The latter is observed but is also controversial. It should also increase the polewards heat/entropy flux, i.e. ocean winds increase. The latter is observed, but continental winds are also decreasing by about the same amount.

        Even if temperature at BOA initially stays the same, and hence S_U stays the same, increasing cloud above 60% should increase OLR and decreasing cloud below 60% should decrease OLR (see my little spreadsheet results above). This accords with Lindzen/Spencer/Braswell/Christy etc.

        David is correct the chicken and egg are clouds and heat.

        Cohenite:

        “Steve; wouldn’t ET_U continually be reducing [the] surface/BOA discontinuity until the feedback from the cloud build-up from increasing temperatures readjusted the equilibrium back to the pre-perturbation [through CO2 increase] state? Or am I confusing heat transfer with radiative fluxes? Even so, wouldn’t the result be the same at TOA and BOA?”

        I think I understand what you are saying (it is not really clear). If so the answer (I think) is roughly yes. But as well as the difference between heat content per se (not heat transfer) and LW IR radiative fluxes we need to consider heat diffusion into the ocean and heat diffusion into the (solid) ground.

        Clearly there is a lot of testing yet to be done, as follows:

        (1) Is oceanic cloud cover increasing or decreasing? yes/no

        (2) Is continental cloud cover increasing or decreasing? yes/no

        (3) Is the OHC increasing (yes but controversial?) yes/no

        (4) Is oceanic ET increasing? yes/no

        (5) Is global precipitation increasing? yes/no

        (6) Is continental heat content increasing? yes/no (noting evaporation from pans is decreasing, but supposedly due to decreasing winds!). Remember continental ET is driven by warming of both air and the ground down to about 5 m, due to its effect on plants.

        Seems to me that we either don’t have the answers to the above questions or, where we do, there are a significant number of contra-indicated trends.

        Ergo, we don’t really know comprehensively what we are talking about and/or we are looking at a system subtly adjusting itself in a number of directions i.e. there is massive homeostasis due to system dynamical complexity. I vote yes to both.

      2. “…..this is offset to a large degree by a decreasing greenhouse effect from reducing cloud cover…..”

        How to subtly turn a completely unproven effect into an integral part of your overall ‘paradigm’.

        Where is the (body of literature) consensual PROOF that increasing cloud cover increases the greenhouse effect?

    2. Coho,
      no, your goalposts are totally imagined. The reality is much simpler. The models are numerical. That is, they divide the atmosphere (and ocean, generally) up into a huge number of little boxes. There is turbulent flow and transport of heat, gas constituents etc. Radiation is modelled with full spectrum, and again with layers. Differential equations are thus solved.

      There is some use of global modelling of things where satisfactory de’s can’t be used. Clouds and rainfall are notable. But mostly the need of this kind of modelling is not for ready-made global de solutions of the Milne type, but for models of what happens on a small scale, where complexity can’t be represented by just box averages. Turbulence (eddies) is the most prominent, but there are others, especially near the land/ocean surfaces.


  111. Guenter: “Maybe I missed something in the discussion, perhaps you can repeat it for me: why is Miskolczi not matching OLR at TOA?” Actually he is. He is not “choosing” boundary conditions; they fall out of the solution as derived in Appendix B. If you evaluate Eqn 20 for tau = 0, i.e. for a transparent (or even non-existent) atmosphere, you get Bo = Bg, as we would expect. BTW I don't think anyone expects Bg to be the same for a transparent atmosphere as for an opaque or semi-transparent one, apart from being treated as such in the classical solution. Nick is not being quite honest (perhaps he genuinely does not know), but B(tau) really looks more like those integrals in Eqn B3 than a simple indefinite integral of a constant. From your first post I expect that you are already aware that to get from B3 to something nice and simple like Eqn 15 one needs to evaluate the integrals from 0 -> infinity.

    1. Coho,
      Are you referring to the Trenberth and Fasullo paper? Have you got that right? They say: “Instead the main warming from an energy budget standpoint comes from increases in absorbed solar radiation that stem directly from the decreasing cloud amounts.”

  113. Eh? Trenberth says;

    “this is offset to a large degree by a decreasing greenhouse effect from reducing cloud cover”

    You’re a hard man to nail down.

    1. Well, OK, but I don’t have to resolve the divergence. T&F do it. Their sentence I quoted is what Kump and Pollard are saying. T&F say that, yes, in the models that effect dominates the reduction in greenhouse effect (blocking IR) by clouds.

      1. No Nick. You do have to resolve a divergence.

        I posted the entire abstract of T&F09 just above. It seems like you missed that altogether, viz:

        “…While there is a large increase in the greenhouse effect from increasing greenhouse gases and water vapor (as a feedback), this is offset to a large degree by a decreasing greenhouse effect from reducing cloud cover and increasing radiative emissions from higher temperatures.”

        So T&F09 don’t say what you said (above)!

        They say: a decreasing greenhouse effect results from decreasing cloud cover.

        Ergo: an increasing greenhouse effect (would) result from increasing cloud cover.

        Let’s not descend into Pompe-speak. That way madness lies (or at least an inability to get to bed ;-).

      2. Thanks Anthony. I love K&P BTW – exactly what I have been on about for several years (even pre K&P) – CCN from biological productivity.

        Kump and Pollard:

        The extreme warmth of particular intervals of geologic history cannot be simulated with climate models, which are constrained by the geologic proxy record to relatively modest increases in atmospheric carbon dioxide levels. Recent recognition that biological productivity controls the abundance of cloud condensation nuclei (CCN) in the unpolluted atmosphere provides a solution to this problem. Our climate simulations show that reduced biological productivity (low CCN abundance) provides a substantial amplification of CO2-induced warming by reducing cloud lifetimes and reflectivity. If the stress of elevated temperatures did indeed suppress marine and terrestrial ecosystems during these times, this long-standing climate enigma may be solved.

        To repeat: “…..a substantial amplification of CO2-induced warming by reducing cloud lifetimes and reflectivity.”

        Coho gotcha there, Nick.

      3. Steve,
        I don’t think I missed anything. T&F have it covered. If clouds reduce, there are two effects:
        LW effect (your quote) – GE drops, more IR emitted – cooler
        SW effect – less SW reflected, more absorbed – warmer.
        The SW effect is the K&P one, and T&F say that it is bigger. Resolved.

      4. Well, Steve, I guess your Empiricism perfectly explains the cooling of the last fifty years.

      5. You said that. I didn’t. IMO it is a relatively facile comment which does your obvious intelligence no credit.

        I would only say that my ‘Empiricism’ certainly helps to explain the evidence for a relatively low CO2 sensitivity, which the observations of the last 50 years (e.g. the missing OHC, increasing oceanic cloud cover, decreasing continental pan evaporation, etc.) indicate is more likely to be the case.

        As I said before, even for the view that increasing cloud cover MUST increase positive forcing to stand, it also means BY DEFINITION that an increasingly smaller percentage of clouds MUST actually precipitate at higher cloud covers.

        Tell me: just how do your GCMs factor that required effect (=> reducing precipitation rate per unit cloud cover) in?

        Especially in the context that every single study shows continental plant biomass and oceanic cyanobacterial biomass (and hence lower troposphere CCN density) increases with increasing CO2.

        I know you’d really love to ‘have your cake and eat it too’, but so far, it is just not possible (as I see it).

      6. Steve,
        Yes, it’s facile, but draws attention to the fact that you’re just listing a few of the things (on one side) that determine temperature. They all point to cooling, but it has been warming. There’s a lot else (eg CO2).

        T&F rightly emphasise the different pulls. SW and clouds – warming. LW and clouds – cooling. But, as they say, it isn’t just clouds. There’s the gas GE. And then there’s your CCN story. You have to add it all up.

      7. Nick I haven’t got a copy of the whole T&F paper yet so I’ll have to suspend judgment.

        However, I’d make the following comments.

        If ALL clouds were removed, albedo would fall to about 15% and the amount of SW available to warm the atmosphere/surface would increase from 239 W/m^2 to about 288 W/m^2. However, the LW OLR would increase to about 266 W/m^2, compared to 238 – 239 at present. The net effect of complete cloud removal would therefore be an increase in net radiation of about 22 W/m^2.

        So the 60% cloud cover has a net cooling effect of about 22 W/m^2 even though (I acknowledge) the net effects of high altitude and low altitude clouds is essentially opposite.
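That back-of-envelope calculation can be written out explicitly (the inputs are the approximate values quoted in the comment above):

```python
# Back-of-envelope net radiative effect of removing all clouds,
# using the approximate values quoted above (W/m^2):
SW_abs_now, SW_abs_clear = 239.0, 288.0   # solar absorbed: all-sky vs cloud-free
OLR_now,    OLR_clear    = 238.5, 266.0   # outgoing longwave: all-sky vs cloud-free

dSW  = SW_abs_clear - SW_abs_now    # extra shortwave absorbed (warming)
dOLR = OLR_clear - OLR_now          # extra longwave escaping (cooling)
net  = dSW - dOLR                   # net change in absorbed radiation
print(f"SW +{dSW:.1f}, LW +{dOLR:.1f}, net {net:+.1f} W/m^2")
```

That is, the present ~60% cloud cover is a net ~22 W/m^2 cooling, as stated.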

        Again intuitively I would expect increased CO2 to increase the mean elevation of clouds (stronger convection and stronger meridional winds) hence increase the negative forcing effect (from higher clouds).

        As an incorrigible bookworm (to the despair of my good lady) it intrigues me that the annual peak in total moisture content right up through the atmosphere occurs in August/September. This is when NH ET is at its peak. Why NH only, why not two annual peaks?

        NH has more land, more anthropogenic CO2 sources, more agriculture and more N & P pollution.

      8. Hi Steve and all,

        I’ve seen a lot of discussion of GCMs here, what assumptions they contain, what assumptions they don’t contain, what they do, what they don’t do.

        For anyone interested the ECHAM5 model (and also its predecessors the ECHAM3 and ECHAM4) is documented freely and you can download a description of it here:

        http://www.mpimet.mpg.de/fileadmin/models/echam/mpi_report_349.pdf

        It makes very interesting reading, just to see all of its assumptions and approximations laid bare, and also to note how much of it is based on theory that is 10, 20, 30 years old (well it’s supposed to be “state of the art” whatever that means).

        By the way, I can’t actually see any evidence of a built-in temperature discontinuity although it’s possible you need to follow back through the references on the radiation scheme to see that.

        After reading through it I am afraid I just can’t bring myself anywhere near a place that I could take seriously the predictions of one of these programs.

      9. Nick,
        As I said, I can’t see anything about a temperature discontinuity in the paper, but one would have to follow it all the way back through the references to be sure.

        I suggest you read the introduction and the section on radiation — it’s only short.

        Mlawer, E. J., S. J. Taubman, P. D. Brown, M. J. Iacono, and S. A. Clough (1997), Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave, J. Geophys. Res., 102(D14), 16,663–16,682.

        The radiation scheme apparently is “new” (where new means 1997) and leads to this paper here:

        A rapid and accurate radiative transfer model (RRTM) for climate applications has been developed and the results extensively evaluated. The current version of RRTM calculates fluxes and cooling rates for the longwave spectral region (10–3000 cm−1) for an arbitrary clear atmosphere. The molecular species treated in the model are water vapor, carbon dioxide, ozone, methane, nitrous oxide, and the common halocarbons. The radiative transfer in RRTM is performed using the correlated-k method: the k distributions are attained directly from the LBLRTM line-by-line model, which connects the absorption coefficients used by RRTM to high-resolution radiance validations done with observations. Refined methods have been developed for treating bands containing gases with overlapping absorption, for the determination of values of the Planck function appropriate for use in the correlated-k approach, and for the inclusion of minor absorbing species in a band. The flux and cooling rate results of RRTM are linked to measurement through the use of LBLRTM, which has been substantially validated with observations. Validations of RRTM using LBLRTM have been performed for the midlatitude summer, tropical, midlatitude winter, subarctic winter, and four atmospheres from the Spectral Radiance Experiment campaign. On the basis of these validations the longwave accuracy of RRTM for any atmosphere is as follows: 0.6 W m−2 (relative to LBLRTM) for net flux in each band at all altitudes, with a total (10–3000 cm−1) error of less than 1.0 W m−2 at any altitude; 0.07 K d−1 for total cooling rate error in the troposphere and lower stratosphere, and 0.75 K d−1 in the upper stratosphere and above. Other comparisons have been performed on RRTM using LBLRTM to gauge its sensitivity to changes in the abundance of specific species, including the halocarbons and carbon dioxide. 
The radiative forcing due to doubling the concentration of carbon dioxide is attained with an accuracy of 0.24 W m−2, an error of less than 5%. The speed of execution of RRTM compares favorably with that of other rapid radiation models, indicating that the model is suitable for use in general circulation models.

      10. Yes, I was hoping the software would get cleverer and cleverer as we got to the edge of the screen here — but it’s not!

        Let’s see where this one goes.

    2. Well now. Silence. Time for a new thought?

      Since the “consensus” hypothesis on the GHE is certainly suspect, given all the relevant information (i.e., no temperature increases for 12 years and absolutely no other empirical or theoretical evidence to support said nonsense), and since the Miskolczi hypothesis has been discredited by the experts here, maybe we should go back to my simpleton hypothesis that the “greenhouse effect” is nothing more than the ability of the Planet to store heat from one day to the next, and the corollary that IR radiation doesn’t have a damn thing to do with it. It now looks like this is as good an hypothesis as any other. LOL.

  117. As I understand K&P, they say the unavailability of condensation nuclei produced a drastic reduction in clouds and, as a result of fewer clouds, super-greenhouse conditions.

    T&F say that decreased cloud both reduces greenhouse conditions and increases temperature as a result of increased absorbed solar radiation [ASR]; T&F also distinguish vapor as a +ve feedback from ASR; however both increased vapor and increased ASR increase OLR while increased low-level clouds do not; T&F note that observations disprove Lindzen’s Iris and that the increase in convective clouds has a +ve feedback [which would suggest that Lindzen’s Iris may hold true ‘if’ there were a decrease in high convective cloud], although an increase in low cloud through reduced SW cloud radiative forcing has a negative feedback as temperatures rise and also a decrease in optical depth. Oddly enough all the models studied by T&F still show warming even if there is no change in cloud cover.

    T&F don’t refer to Dessler at all.

    1. Yep, you can do practically anything you like with “a fully coupled GCM” e.g.:

      Lunt et al. 2008

      “….Although the popular conception is that geoengineering can re-establish a ‘natural’ pre-industrial climate, such a scheme would itself inevitably lead to climate change, due to the different temporal and spatial forcing of increased CO2 compared to reduced solar radiation. We investigate the magnitude and nature of this climate change for the first time within a fully coupled General Circulation Model. We find significant cooling of the tropics, warming of high latitudes and related sea ice reduction, a reduction in intensity of the hydrological cycle, reduced ENSO variability, and an increase in Atlantic overturning. However, the changes are small relative to those associated with an unmitigated rise in CO2 emissions.”

      In fact I’m so mightily impressed with GCMs I’m thinking of fitting one to my Hilux 4WD to boost the turbo….

    2. I have been hitting the literature hard on cloud effects. This is where I have got my little spreadsheet model to (see below).

      I have partly stuck with Miskolczi parameter terminology only because this is very, very familiar to most of us here (possibly not a lurking Christopher Game from over at JM though, haha ;-). The remaining terminology I’ve explained before and/or is self explanatory.

      Each (%cloud cover) row energy balances (I think) and all parameters are interlinked by relatively simple and empirically justifiable algorithms. Most major parameter values can be found somewhere (or a value very close) in the mainstream literature – going all the way back to Hartmann, 1994.

      The ‘Virial Rule’ does reasonably well but Kirchhoff falls over. S_U/OLR ranges from about 1.5 (clear sky) to about 1.75 (100% cloud) but seems to be close to 1.66 around 60% cloud cover. Interestingly, the Miskolczi so-called ‘tau’ decreases with increasing cloud! Miskolczi’s ‘magic tau’ (=ET_U + real Tau = ET_U + rTau) has a value of 1.87 somewhere around 10±10% cloud cover, i.e. it approximates clear sky (as seems obvious in retrospect) but is a useless parameter in every respect.

      I think a little spreadsheet like this has considerable value for getting our heads clear on what affects what, why and roughly by how much – particularly in respect of the all-important clouds.

      %Cloud,Albedo(A),Fo,Fo(1-A),F,rTau,S_T,ET,ET_U,DT,rE_U,A_A,E_D,oE_U,S_U/oE_U,A_A/E_D,S_U,OLR,S_U/OLR,M-Tau
      100,0.40,341,205,67,2.70,26,133,50,1,145,360,353,195,1.98,1.02,386,221,1.75,1.63
      80,0.35,341,222,72,2.47,33,107,40,9,157,358,343,197,1.98,1.04,391,230,1.70,1.68
      60,0.30,341,239,78,2.29,40,80,30,17,169,356,333,199,1.99,1.07,396,239,1.66,1.73
      40,0.25,341,255,83,2.14,47,53,20,24,181,354,322,201,2.00,1.10,401,248,1.62,1.79
      20,0.20,341,272,89,2.02,54,27,10,33,193,352,312,203,2.00,1.13,406,257,1.58,1.85
      0,0.15,341,288,94,1.91,61,0,0,41,205,349,301,205,2.00,1.16,410,266,1.54,1.91
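
      As a sanity check on the last two columns, a minimal Python sketch (assuming the column meanings used above: rTau = -ln(S_T/S_U) and M-Tau = -ln((ET_U + S_T)/S_U)) reproduces both tau columns from the S_T, ET_U and S_U columns row for row:

```python
import math

# Rows pulled from the spreadsheet above: (%cloud, S_T, ET_U, S_U, rTau, M-Tau).
rows = [
    (100, 26, 50, 386, 2.70, 1.63),
    (80,  33, 40, 391, 2.47, 1.68),
    (60,  40, 30, 396, 2.29, 1.73),
    (40,  47, 20, 401, 2.14, 1.79),
    (20,  54, 10, 406, 2.02, 1.85),
    (0,   61,  0, 410, 1.91, 1.91),
]

for cloud, s_t, et_u, s_u, r_tau, m_tau in rows:
    # "real" tau: attenuation of the directly transmitted surface flux S_T only
    calc_r = round(-math.log(s_t / s_u), 2)
    # Miskolczi-style tau: S_T plus the cloud-top latent heat term ET_U
    calc_m = round(-math.log((et_u + s_t) / s_u), 2)
    assert (calc_r, calc_m) == (r_tau, m_tau), (cloud, calc_r, calc_m)
    print(f"{cloud:3d}% cloud: rTau={calc_r:.2f}, M-Tau={calc_m:.2f}")
```

      Note how rTau rises with cloud cover while M-Tau falls, the two converging at 1.91 in the clear-sky row.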

  119. Thanks Anthony. I love K&P BTW – exactly what I have been on about for several years (even pre K&P) – CCN from biological productivity.

    Kump and Pollard: “The extreme warmth of particular intervals of geologic history cannot be simulated with climate models, which are constrained by the geologic proxy record to relatively modest increases in atmospheric carbon dioxide levels. Recent recognition that biological productivity controls the abundance of cloud condensation nuclei (CCN) in the unpolluted atmosphere provides a solution to this problem. Our climate simulations show that reduced biological productivity (low CCN abundance) provides a substantial amplification of CO2-induced warming by reducing cloud lifetimes and reflectivity. If the stress of elevated temperatures did indeed suppress marine and terrestrial ecosystems during these times, this long-standing climate enigma may be solved.”

    To repeat: “…..a substantial amplification of CO2-induced warming by reducing cloud lifetimes and reflectivity.”

    Coho gotcha there, Nick.

  121. Steve, I don't think I missed anything. T&F have it covered. If clouds reduce, there are two effects:

    LW effect (your quote) – GE drops, more IR emitted – cooler.
    SW effect – less SW reflected, more absorbed – warmer.

    The SW effect is the K&P one, and T&F say that it is bigger. Resolved.

  122. Phooey. More biomass, more CCN. More CCN, more cloud. It should be getting cooler due to increasing oceanic cloud cover (over 60%): http://bobtisdale.blogspot.com/2008/12/ocean-cl…

    SW effect – more SW reflected, less absorbed – cooler.

    More cloud = more precipitation.

    LW effect – more latent heat release, more LW escapes TOA – cooler.

    Resolved. A cool four of Empiricism beats a GCM hot flush any day.

  123. You said that. I didn't. IMO it is a relatively facile comment which does your obvious intelligence no credit.

    I would only say that my 'Empiricism' certainly helps to explain the evidence for a relatively low CO2 sensitivity, which the observations of the last 50 years – e.g. missing OHC, increasing oceanic cloud cover, decreasing continental pan evaporation etc. – indicate is more likely to be the case.

    As I said before, even for the view that increasing cloud cover MUST increase positive forcing to stand, it also means BY DEFINITION that an increasingly smaller percentage of clouds MUST actually precipitate at higher cloud covers.

    Tell me: just how do your GCMs factor that required effect (=> reducing precipitation rate per unit cloud cover) in? Especially in the context that every single study shows continental plant biomass and oceanic cyanobacterial biomass (and hence lower troposphere CCN density) increase with increasing CO2.

    I know you'd really love to 'have your cake and eat it too', but so far, it is just not possible (as I see it).

  124. Steve, yes, it's facile, but it draws attention to the fact that you're just listing a few of the things (on one side) that determine temperature. They all point to cooling, but it has been warming. There's a lot else (e.g. CO2).

    T&F rightly emphasise the different pulls. SW and clouds – warming. LW and clouds – cooling. But, as they say, it isn't just clouds. There's the gas GE. And then there's your CCN story. You have to add it all up.

  125. Nick, I haven't got a copy of the whole T&F paper yet so I'll have to suspend judgment. However, I'd make the following comments.

    If ALL clouds were removed, albedo would fall to about 15% and the amount of SW available to warm the atmosphere/surface would increase from 239 W/m^2 to about 288 W/m^2. However, the LW OLR would increase to about 266 W/m^2 compared to 238 – 239 at present. The net effect of complete cloud removal would therefore be an increase in net radiation of about 22 W/m^2.

    So the 60% cloud cover has a net cooling effect of about 22 W/m^2, even though (I acknowledge) the net effects of high altitude and low altitude clouds are essentially opposite.

    Again, intuitively I would expect increased CO2 to increase the mean elevation of clouds (stronger convection and stronger meridional winds) and hence increase the negative forcing effect (from higher clouds).

    As an incorrigible bookworm (to the despair of my good lady), it intrigues me that the annual peak in total moisture content right up through the atmosphere occurs in August/September. This is when NH ET is at its peak. Why NH only, why not two annual peaks? NH has more land, more anthropogenic CO2 sources, more agriculture and more N & P pollution.
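
    The back-of-envelope sum above can be written out explicitly (all W/m^2 figures are the ones quoted in the comment, not a radiative calculation):

```python
# Net radiation (absorbed SW minus OLR) today vs. with all cloud removed.
asr_now, asr_clear = 239, 288   # absorbed solar: ~60% cloud vs. no cloud
olr_now, olr_clear = 239, 266   # outgoing longwave for the same two cases

net_now = asr_now - olr_now        # ~0: roughly balanced at present
net_clear = asr_clear - olr_clear  # surplus once clouds are removed
print(net_clear - net_now)  # -> 22, i.e. clouds provide ~22 W/m^2 net cooling
```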

  126. Hi Steve and all, I've seen a lot of discussion of GCMs here: what assumptions they contain, what assumptions they don't contain, what they do, what they don't do.

    For anyone interested, the ECHAM5 model (and also its predecessors the ECHAM3 and ECHAM4) is documented freely and you can download a description of it here: http://www.mpimet.mpg.de/fileadmin/models/echam…

    It makes very interesting reading, just to see all of its assumptions and approximations laid bare, and also to note how much of it is based on theory that is 10, 20, 30 years old (well, it's supposed to be "state of the art", whatever that means).

    By the way, I can't actually see any evidence of a built-in temperature discontinuity, although it's possible you need to follow back through the references on the radiation scheme to see that.

    After reading through it, I am afraid I just can't bring myself anywhere near a place where I could take seriously the predictions of one of these programs.

  127. Nick, as I said, I can't see anything about a temperature discontinuity in the paper, but one would have to follow it all the way back through the references to be sure. I suggest you read the introduction and the section on radiation – it's only short.

    Mlawer, E. J., S. J. Taubman, P. D. Brown, M. J. Iacono, and S. A. Clough (1997), Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave, J. Geophys. Res., 102(D14), 16,663–16,682.

    The radiation scheme apparently is “new” (where new means 1997) and leads to this paper:

    “A rapid and accurate radiative transfer model (RRTM) for climate applications has been developed and the results extensively evaluated. The current version of RRTM calculates fluxes and cooling rates for the longwave spectral region (10–3000 cm−1) for an arbitrary clear atmosphere. The molecular species treated in the model are water vapor, carbon dioxide, ozone, methane, nitrous oxide, and the common halocarbons. The radiative transfer in RRTM is performed using the correlated-k method: the k distributions are attained directly from the LBLRTM line-by-line model, which connects the absorption coefficients used by RRTM to high-resolution radiance validations done with observations. Refined methods have been developed for treating bands containing gases with overlapping absorption, for the determination of values of the Planck function appropriate for use in the correlated-k approach, and for the inclusion of minor absorbing species in a band. The flux and cooling rate results of RRTM are linked to measurement through the use of LBLRTM, which has been substantially validated with observations. Validations of RRTM using LBLRTM have been performed for the midlatitude summer, tropical, midlatitude winter, subarctic winter, and four atmospheres from the Spectral Radiance Experiment campaign. On the basis of these validations the longwave accuracy of RRTM for any atmosphere is as follows: 0.6 W m−2 (relative to LBLRTM) for net flux in each band at all altitudes, with a total (10–3000 cm−1) error of less than 1.0 W m−2 at any altitude; 0.07 K d−1 for total cooling rate error in the troposphere and lower stratosphere, and 0.75 K d−1 in the upper stratosphere and above. Other comparisons have been performed on RRTM using LBLRTM to gauge its sensitivity to changes in the abundance of specific species, including the halocarbons and carbon dioxide. The radiative forcing due to doubling the concentration of carbon dioxide is attained with an accuracy of 0.24 W m−2, an error of less than 5%. The speed of execution of RRTM compares favorably with that of other rapid radiation models, indicating that the model is suitable for use in general circulation models.”

  128. Yes, I was hoping the software would get cleverer and cleverer as we got to the edge of the screen here – but it's not! Let's see where this one goes.

  130. I am going to stick my neck out and hope that it will be chopped off. Because, if it is chopped off, it will demonstrate that what I am about to say is already understood. That will be good.

    The lengthy posts about F.M. and his theory are confusing to a scientist like me, with a chemistry major and a spectroscopy background for part of my work. They are confusing because they seem to leap from one concept to another too often. Some of these concepts include: the use of temperature as a proxy for heat, when heat is the fundamental parameter; the confusion of statics and dynamics, equilibria and rates, and the shapes of equations and the powers within them; the mixing of conduction, convection and radiation; the immense complexity of heat flow through a heterogeneous atmosphere where light has a very complex absorption spectrum; the extrapolation of physics from known situations into relatively unknown ones (like spectral absorption of gas mixtures at very low pressures and temperatures); assumptions about the extension of optical density from lab measurements into regions that are opaque or transparent or unknown degrees between… and, overshadowing all this, the impression that much remains to be learned about the true action of water vapour, the most effective GHG.

    This is before we even start on the complex maths problems of DEs and bounded or unbounded cases, of using assumed maths to show that physical effects must exist – before their acceptance by measurement. Let alone the problems of grid cell scale for some effects, of unsolved equations like Navier Stokes, of a completely unreliable but much relied upon reconstruction of “average global temperature”.

    For example, what is the accepted figure for the heat generated from the friction of air sliding past a rotating rough earth? What assumptions are made to derive this figure?

    Illustration, not picking on anyone, just taking an example that happens to be from Steve Short above:

    “The only way I could warm my hands just enough just to work the valves, pumps etc (both day and night) was take my gloves off and frequently flush them with the actual groundwater. The groundwater was in thermal equilibrium with the ground which was significantly warmer than the air only centimetres above the ground. Strong discontinuity right down at ground level (or even slightly lower)! ”

    There was no pressing need to explain the science of this precisely on this blog, but you need to take into consideration factors such as: the velocity and relative humidity of the wind helping evaporate water from your hands; the thermal conductivity of your hands modelled from the skin to the depth where blood flow controls temperature; the rate of heat flow through this region, which might vary if you are fat or thin; the rate of warming/cooling of hands given a temperature step change through glove removal, by circulating body blood; the long-term ability of body blood to maintain its steadiness under these conditions; actual measurement to show that the ground water was in equilibrium with the rock around it; if not, then the same heat flow problems; and so on. You see, while you survived working with your body in the air, you might get hypothermia and die if you went to lie down in the “warmer” stream of groundwater.

    Then, IIRC, in many models the discontinuity has the air warmer than the ground, not the reverse as Steve describes.

    The discussion is too unstructured. You guys need to pick one quibble point at a time and thrash it to death. Please, as a favour, don’t include CO2 in the first half dozen quibbles, I’m sick to death of its magic, ephemeral trace gas properties. Maybe you could start with one like this: Is there indeed a heat discontinuity between the near earth atmosphere and the ground? (Under what assumptions, and of what magnitude? Whose measurements? If there is, is there a measurable heat flow from one to the other? Does it equilibrate? At what value?)

    So hack me to death. I’m a lousy typist too.

    1. (a) I think there usually is a discontinuity (somewhere).
      (b) It may be quite sharp or quite gradual (depending upon circumstances).

      I can suspend a pH/EC/temperature probe into shallow groundwater in winter and get a temperature significantly higher than the temperature the probe records in air just outside the borehole stem in zero wind conditions.

      I can suspend a pH/EC/temperature probe into shallow groundwater in summer and get a temperature significantly lower than the temperature the probe records in air just outside the borehole stem in zero wind conditions.

      This effect is well known to hydrogeologists.

      I can monitor the pH/EC/dissolved oxygen/temperature (etc.) profile of a lake or reservoir throughout the year, monitor its thermocline and watch it ‘turnover’ as slight changes in salinity (between top and bottom) overcome or are overcome by slight changes in temperature, as a consequence of the effects of these two parameters on water density.

      This effect is well known to limnologists.

      Put these in yer pipe and puff on ’em.

      1. Steve,

        you seem to think you have provided us with empirical evidence of a temperature discontinuity. How about the heat flow and gradient, if any, for this discontinuity??

        It would require a perfect insulator to maintain a discontinuity. Or, am I misunderstanding what is meant by a discontinuity here?? There would HAVE to be a flux if there is a difference in potential without this insulator.

        I believe what you are telling us is that the ground has low conductivity and huge capacity compared to the atmosphere's higher conductivity and low capacity, therefore maintaining a more even temperature below the immediate surface??

        Let’s not forget that water is a better conductor. It would also be thermally connected to water that is typically deeper in the ground where the temperature is “average” compared to the surface.

        So what is so special about this?? How does it show a discontinuity?????

        Please stop confusing a steep gradient due to known conditions with a discontinuity.

      2. “Please stop confusing a steep gradient due to known conditions with a discontinuity.”

        Huh? Are you kidding? How steep do you want it to be? LOL.

        (1) It is commonly accepted that shallow groundwater is in temperature equilibrium with the ground in which it resides. Remember shallow aquifers typically have an effective porosity of 1 – 20% (depending upon whether the lithology is consolidated or unconsolidated).

        (2) If the air temperature immediately above that ground is significantly different even under zero wind conditions then we have the closest thing to a discontinuity I can possibly imagine.

        Actually, the order of heat conductivity in the absence of convective heat transfer (which cannot typically occur in an aquifer) is typically solid rock>water>air.

        You are having yourself on here.

      3. I agree with kuhnkat here. The discussion about discontinuity is misconceived in its origin, because it was inspired by a math approximation extended beyond its region of applicability.
        But a true discontinuity in a fluid is impossible. The temp difference across a region is the heat flux times the width, divided by the conductivity, or heat transfer coefficient, or whatever you want to call it. So a finite difference over zero width means either infinite flux or zero conductivity.
        For a very big temp change over a small distance you need either a big flux or a good insulator. A big heat flux usually can’t be sustained, because you run out of heat. That leaves a good insulator.
        Now the air is generally moving, and turbulent diffusion is effective. It’s only when you get very still air that it becomes a fairly good insulator. That’s why you can get a near discontinuity on a frosty night. That means a gradient of a deg C or two per metre, total difference maybe 3-4C (frost on grass etc).
        You can’t have a steep gradient the other way (hot at the bottom). It’s convectively unstable.
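
        For a feel of the numbers in the relation above (temp difference = flux × width / conductivity), here is a sketch with assumed values: k ≈ 0.025 W/(m K) is the standard molecular conductivity of still air, while the flux and layer width are purely illustrative:

```python
k_air = 0.025   # W/(m K), molecular conductivity of still air
flux = 50.0     # W/m^2, an assumed modest surface heat flux
width = 0.01    # m, an assumed 1 cm layer of perfectly still air

# dT = flux * width / k: the temperature drop needed to push that flux
# through the layer by conduction alone (no turbulent mixing).
dT = flux * width / k_air
print(dT)  # -> 20.0 K across 1 cm, i.e. a 2000 K/m gradient
```

        Any turbulent stirring raises the effective conductivity by orders of magnitude and flattens the gradient, which is the point about very still air.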

      4. “You can’t have a steep gradient the other way (hot at the bottom). It’s convectively unstable.”

        Non-inverted lapse rate.

        Frosts occur when there is a temperature inversion, cold at the bottom. On frosty nights you can have gradients of 10,000 K/m over a few millimetres.

        Get out there with a thermometer some time Nick.

      5. Life is filled with profound ironies. I’m with Jan on his reply. The obvious reply to Nick’s naive comments was that one simple word : inversion.

        “For a very big temp change over a small distance you need either a big flux or a good insulator. A big heat flux usually can’t be sustained, because you run out of heat. That leaves a good insulator.
        Now the air is generally moving, and turbulent diffusion is effective.”

        Not invariably true.

        FYI, there is a popular phenomenon in hang gliding, particularly in the US, called ‘magic air’. What happens is that in dense pine forests in the bases of valleys the air heats up throughout the day and forms an inversion, sticking close to ground level due to a (yes) viscous attachment to the trees.

        Surprise, surprise, fluids do have viscosity and sometimes that is sufficient to overcome even temperature gradients.

        In other words an inversion forms. It is not until late afternoon, when cold katabatic air flow occurs down tributary gullies, thereby entering the valley floor basal forests and getting under the warm air, that the inversion starts ‘lifting off’.

        One can then hang glide for about 2–3 hours from late afternoon to early evening, 100–300 feet AGL, in air which is constantly and gently rising at about 2–4 ft/min, thus counteracting the sink rate of the glider. I have done this many times. The sensation is delicious.

        One of the thrilling side effects of this phenomenon was that I could cruise around over small lakes created by beaver dams, almost hands off the A frame, photographing beavers as they scudded backwards and forwards across their dams. Having never seen beavers in my life before this was a big, big thrill.

        Magic air indeed.

        We need to remember that solid ground is not a fluid. Groundwater within solid ground hardly behaves as a fluid. Air above the ground will not necessarily behave as a simple fluid, free to instantly convect.

        Having spent 12 years in an Australian Fed. Govt. research organisation and 3 in a Swiss one, I love the freedom and intellectual delights that come from doing pure research. I also know what it is like to live in academia. But such places are not the font of absolute truth.

        Next time, if you are out there, just for curiosity, try sticking a thermometer onto hot tarmac which a fat goanna has perhaps just vacated.

        Or watch carefully and this time notice that jackrabbit who ‘flicked off’ a dust devil as he scampered across a gibber plain.

        There is an even greater wisdom which comes from just getting out there and actually experiencing the fantastic variety and subtleties of what Mother Nature ‘has to throw at us’ than we will ever find inside a laboratory or an office.

        Call them very steep temperature gradients if you wish, rather than discontinuities, but don’t ever be so very, very foolish as to claim they don’t exist.

      6. No, you’re not with Jan. I am, though he doesn’t know it. The frost situation I was talking about is inversion, in a lapse rate sense. The bottom air is cold, and the temp rises as you go up.

        What you are talking about, though, is not inversion. It’s stability in the face of a super lapse rate, which would normally be convectively unstable. The undoing of this by the katabatic winds restores the instability, which creates the updrafts that you float on.

        So much for the formal argument. I hesitate to cause a distraction, but I don’t believe your viscosity story. And I spent some years in the fluids lab at Highett, where they experiment with natural convection, so it isn’t just math theory. But gas viscosity doesn’t work like that. Air is always free to convect, if the temp gradient is there.

      7. “It’s stability in the face of a super lapse rate, which would normally be convectively unstable. The undoing of this by the katabatic winds restores the instability, which creates the updrafts that you float on.”

        Huh?

        I couldn’t give a tinker’s cuss if you don’t believe my viscosity story, because there is clearly negligible convective rise from the bases of the valleys during the day. It’s been tried. It is simply not possible to thermal off the bottoms of the valleys during most of the day. The only significant thermals form off slopes higher up.

        So how come therefore this stability in the face of a super lapse rate builds up over the better part of the day? The evening ‘magic air’ even feels warm!

        Be careful you don’t ‘super lapse ‘ into instability of rationality in your argument.

        I also don’t care how many years you spent in how many labs experimenting with whatever. Been there, done that.

        Those who spend their lives in glass houses shouldn’t stow thrones – they can never successfully sit on ’em anyway.

      8. In the forest (along with Little Red Riding Hood, the wolf, woodpeckers etc – presumably we can forget the beavers – don’t want to blow my argument ‘wide open’).

        Have you perchance considered the properties of pine forests under daily irradiation?

      9. The temperature inversion in the lower part of the canopy is a typical feature of daytime temperature profiles in tall crop and forest canopies….

        Introduction to Micrometeorology
        S. Pal Arya 2001

      10. Here, we examine whether sub-canopy flow through a small gully in the vicinity of the flux tower was thermotopographically driven, and was linked to the flow divergence found above canopy. While flow in the gully was frequently aligned with the mean wind aloft, indicating dynamic coupling, there were periods when the wind in the gully appeared to be decoupled from the flow aloft and was consistent with thermotopographic flow forcings (including geometry, temperature gradient, and net radiation). During the leaf-off season, these episodes exhibited a classic thermotopographic pattern, with down-gully nighttime flow and up-gully daytime flow. However, during the leaf-on season, the pattern was reversed: during the daytime, flow was down-gully consistent with inversion conditions occurring below the dense leaf canopy; at night, flow was up-gully, consistent with below-canopy lapse conditions. The thermotopographic flow during the leaf-on season suggests horizontal flow convergence at night and divergence during the day, and is shown to be decoupled from the flow aloft. While this research focuses only on flow patterns and not explicitly on CO2 gradients or fluxes, these findings suggest that inferences about drainage flow/advection and corrections to flux measurements based on above-canopy conditions alone may be inappropriate.

        Froelich and Schmid, 2006.

      11. I think there is no discontinuity, but often a VERY steep gradient over a VERY thin layer. In humid warm areas (15-30 C), the air directly over the surface has an AVERAGE absolute humidity that is 70-80 percent of the saturation level. It seems to me that this could occur only if there were a layer of water at the surface that is very close in temperature to the air above. Maybe a very thin layer.

    2. “The discussion is too unstructured. You guys need to pick one quibble point at a time and thrash it to death. ”

      That’s a bit rich! What do you think we have been doing with Miskolczi’s (magic) LW IR ‘tau’?

      Miskolczi says his tau = -ln(S_T/S_U).

      I say Miskolczi has been having us on and it is actually tau = -ln((ET_U + S_T)/S_U), where ET_U = that portion of ET (or M’s K if you will) which radiates from the top of precipitating cloud (release of latent heat) as LW IR and escapes the TOA. Hence Miskolczi’s tau is not the ‘regular’ tau in the commonly accepted meaning of the word.
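
      The arithmetic behind the ‘magic tau’ range quoted in the head post is easy to check. The S_U/S_T pairings below are illustrative assumptions, taken from the ranges quoted there (B ~396 down to ~380, S_T ~63 down to ~58.5):

```python
import math

# tau = -ln(S_T / S_U) for the quoted extremes of S_U and S_T;
# high-with-high / low-with-low pairings assumed for illustration.
for s_u, s_t in [(396, 63), (390, 60), (380, 58.5)]:
    tau = -math.log(s_t / s_u)
    print(f"S_U={s_u}, S_T={s_t}: tau={tau:.2f}")
```

      With those pairings, tau lands in roughly 1.84–1.87, which is exactly the band quoted in the head post.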

      Both Zagoni and Miskolczi have had more than ample time to respond and have failed to do so.

      It is not enough to accuse the science establishment of bad science (as M&Z have done) but refuse to respond to those who point out a significant flaw in Miskolczi Theory – in this case just ONE SINGLE POINT, Geoff!

      I don’t like science which is conducted by globe trotting oratory to the largely unschooled in science, perpetually ignoring empirical, theoretical or mathematical difficulties. Isn’t this what we sceptics accuse the AGW bandwagon of doing?

      I have difficulty feeling solidarity with an Antipodean sceptical movement which persists in de facto endorsing Miskolczi in the face of quite genuine technical difficulties with his theory as evidenced e.g. by neither he nor Zagoni getting invited back to the 2nd Heartland Conference.

  131. I am going to stick my neck out and hope that it will be chopped off. Because, if it is chopped off, it will demonstrate that what I am about to say is already understood. That will be good.

The lengthy posts about F.M. and his theory are confusing to a scientist like me, with a chemistry major and a spectroscopy background for part of my work. They are confusing because they seem to leap from one concept to another too often. Some of these concepts include: the use of temperature as a proxy for heat, when heat is the fundamental parameter; the confusion of statics and dynamics, equilibria and rates, and the shapes of equations and the powers within them; the mixing of conduction, convection and radiation; the immense complexity of heat flow through a heterogeneous atmosphere in which light has a very complex absorption spectrum; the extrapolation of physics from known situations into relatively unknown ones (like the spectral absorption of gas mixtures at very low pressures and temperatures); assumptions about extending optical density from lab measurements into regions that are opaque, transparent, or some unknown degree in between; and, overshadowing all this, the impression that much remains to be learned about the true action of water vapour, the most effective GHG.

This is before we even start on the complex maths problems of DEs and bounded or unbounded cases, or of using assumed maths to show that physical effects must exist – before their acceptance by measurement. Let alone the problems of grid-cell scale for some effects, of unsolved equations like Navier–Stokes, and of a completely unreliable but much relied-upon reconstruction of "average global temperature". For example, what is the accepted figure for the heat generated by the friction of air sliding past a rotating rough earth? What assumptions are made to derive this figure?

An illustration – not picking on anyone, just taking an example that happens to be from Steve Short above: "The only way I could warm my hands just enough to work the valves, pumps etc (both day and night) was to take my gloves off and frequently flush them with the actual groundwater. The groundwater was in thermal equilibrium with the ground, which was significantly warmer than the air only centimetres above the ground. Strong discontinuity right down at ground level (or even slightly lower)!"

There was no pressing need to explain the science of this precisely on this blog, but you need to take into consideration factors such as: the velocity and relative humidity of the wind helping evaporate water from your hands; the thermal conductivity of your hands, modelled from the skin down to the depth where blood flow controls temperature; the rate of heat flow through this region, which might vary if you are fat or thin; the rate of warming/cooling of hands by circulating blood, given a temperature step change through glove removal; the long-term ability of body blood to maintain its steadiness under these conditions; actual measurement to show that the groundwater was in equilibrium with the rock around it (if not, then the same heat-flow problems); and so on. You see, while you survived working with your body in the air, you might get hypothermia and die if you went to lie down in the "warmer" stream of groundwater.

Then, IIRC, the many-model discontinuity has the air warmer than the ground, not the reverse as Steve describes.

The discussion is too unstructured. You guys need to pick one quibble point at a time and thrash it to death. Please, as a favour, don't include CO2 in the first half dozen quibbles; I'm sick to death of its magic, ephemeral trace-gas properties. Maybe you could start with one like this: Is there indeed a heat discontinuity between the near-earth atmosphere and the ground? (Under what assumptions, and of what magnitude? Whose measurements? If there is, is there a measurable heat flow from one to the other? Does it equilibrate? At what value?)

So hack me to death. I'm a lousy typist too.

  132. (a) I think there usually is a discontinuity (somewhere). (b) It may be quite sharp or quite gradual (depending upon circumstances).

I can suspend a pH/EC/temperature probe into shallow groundwater in winter and get a temperature significantly higher than the temperature the probe records in air just outside the borehole stem in zero-wind conditions. I can suspend the same probe into shallow groundwater in summer and get a temperature significantly lower than the temperature the probe records in air just outside the borehole stem in zero-wind conditions. This effect is well known to hydrogeologists.

I can monitor the pH/EC/dissolved oxygen/temperature (etc.) profile of a lake or reservoir throughout the year, monitor its thermocline, and watch it 'turn over' as slight changes in salinity (between top and bottom) overcome, or are overcome by, slight changes in temperature, as a consequence of the effects of these two parameters on water density. This effect is well known to limnologists.

Put these in yer pipe and puff on 'em.

  133. "The discussion is too unstructured. You guys need to pick one quibble point at a time and thrash it to death."

That's a bit rich! What do you think we have been doing with Miskolczi's (magic) LW IR 'tau'?

Miskolczi says it is tau = -ln(S_T/S_U).

I say Miskolczi has been having us on, and it is actually tau = -ln((ET_U + S_T)/S_U), where ET_U is that portion of ET (or M's K, if you will) which radiates from the top of precipitating cloud (release of latent heat) as LW IR and escapes the TOA. Hence Miskolczi's tau is not the 'regular' tau in the commonly accepted meaning of the word.

Both Zagoni and Miskolczi have had more than ample time to respond and have failed to do so. It is not enough to accuse the science establishment of bad science (as M&Z have done) but refuse to respond to those who point out a significant flaw in Miskolczi Theory – in this case just ONE SINGLE POINT, Geoff!

I don't like science which is conducted by globe-trotting oratory to the largely unschooled in science, perpetually ignoring empirical, theoretical or mathematical difficulties. Isn't this what we sceptics accuse the AGW bandwagon of doing?

I have difficulty feeling solidarity with an Antipodean sceptical movement which persists in de facto endorsing Miskolczi in the face of quite genuine technical difficulties with his theory, as evidenced e.g. by neither he nor Zagoni being invited back to the 2nd Heartland Conference.
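[Ed.: the two competing tau definitions traded in this thread can be put side by side numerically. A minimal sketch – the flux values are the ones quoted in this thread and in the post above, not new data, and the ET_U figure is an assumed illustrative value:]

```python
import math

def tau(transmitted, surface_up):
    """Optical depth in the classical Beer-Lambert sense: the negative log
    of the fraction of surface upward LW flux reaching space directly."""
    return -math.log(transmitted / surface_up)

# Flux values quoted in this thread (W/m^2)
S_U = 396.0   # surface upward LW flux, K&T97-style global mean

print(round(tau(61.0, S_U), 2))   # Miskolczi-style S_T ~61 -> tau ~1.87
print(round(tau(40.0, S_U), 2))   # consensus all-sky S_T ~40 -> tau ~2.29

# Steve's reconciliation: add an assumed ET_U ~21 W/m^2 of cloud-top LW
# escaping the TOA to the window flux, and the 'magic' 1.87 reappears.
print(round(tau(21.0 + 40.0, S_U), 2))   # -> 1.87 again
```

This is only arithmetic on the quoted numbers, but it shows why adding an extra LW component to S_T is enough to move a consensus-style tau of ~2.3 down to Miskolczi's ~1.87.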

  134. Steve, you seem to think you have provided us with empirical evidence of a temperature discontinuity. How about the heat flow and gradient, if any, for this discontinuity? It would require a perfect insulator to maintain a discontinuity. Or am I misunderstanding what is meant by a discontinuity here? There would HAVE to be a flux if there is a difference in potential without this insulator.

I believe what you are telling us is that the ground has low conductivity and huge capacity compared to the atmosphere's higher conductivity and low capacity, therefore maintaining a more even temperature below the immediate surface?

Let's not forget that water is a better conductor. It would also be thermally connected to water that is typically deeper in the ground, where the temperature is "average" compared to the surface. So what is so special about this? How does it show a discontinuity?

Please stop confusing a steep gradient due to known conditions with a discontinuity.

  135. "Please stop confusing a steep gradient due to known conditions with a discontinuity."

Huh? Are you kidding? How steep do you want it to be? LOL.

(1) It is commonly accepted that shallow groundwater is in temperature equilibrium with the ground in which it resides. Remember, shallow aquifers typically have an effective porosity of 1 – 20% (depending upon whether the lithology is consolidated or unconsolidated).

(2) If the air temperature immediately above that ground is significantly different, even under zero-wind conditions, then we have the closest thing to a discontinuity I can possibly imagine. Actually, the order of heat conductivity in the absence of convective heat transfer (which cannot typically occur in an aquifer) is typically solid rock > water > air.

You are having yourself on here.


  137. The much discussed Miskolczi equation (7)

    S_U – (F_0 + P_0) + E_D – E_U = OLR

    seems to be most easily conceptualised by a person standing near the Equator, where the direction of incoming sunlight is roughly perpendicular to the ground surface around noon. Even the terminology suggests this.

    Consider now that you are standing near the north pole in the middle of winter. Some of the incoming sunlight will pass through the atmosphere above you without ever going near the earth’s surface. So, the definition of “incoming” varies with latitude. One can start to correct for this with trig functions, but what about the trig functions that reach zero or infinity at some angles found in Nature? In my brief reading, I have only seen the spherical earth converted to a planar disc for sunlight-interception geometry, then halved to cope with night time.

    The next complication is that most of the earth’s atmospheric CO2 is close to the surface. Even on Mauna Loa the concentration at sea level can be several times that reported at the observatory. So what happens to equation (7) when incoming radiation near the north pole does not even get close to the ground in winter (hence misses the main CO2 there), unless it is scattered? Is there a correction that adjusts the atmospheric temperature when it is in high sunlight, and another for when there is low-altitude darkness?

    I find it hard to dissect an equation like (7) unless I know that such effects are in the solution or in the analysis that follows. Are they? This is the point of my post and my previous one.

    Re Steve Short, thanks for the reply about what limnologists know all about. But please don’t be hostile; I’m asking in an attempt to learn, not to criticise. However, your reply does underscore my point about the use of temperature as a proxy for heat.

    I carry no torch, either personal or national, for Miskolczi or Zagoni. But I do find it encouraging that such people are thinking laterally and not obediently. I’m not knocking you, but I think many others would agree that you guys are jumping all over the place and are hard to follow. To the extent that you are concentrating on tau, that is good. But do you have brackets around all the parameters that contribute to it, and good values to cover the globe? Seems to me that if you can drop off half the RHS of the original eqn and say “this is better”, there might be a loose definition of what is “better”.

    Structure it. Set your objectives, make your sub-hypotheses, design your tests, write down standards for pass/fail, then proceed. That’s often better than following many a forking trail.
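    [Ed.: the disc/sphere geometry mentioned above can be checked numerically. A sketch only, with an assumed nominal solar constant – the spherical earth intercepts S0·πR² and spreads it over 4πR², the familiar factor of 1/4:]

```python
import random

random.seed(0)
n = 200_000
total = 0.0
for _ in range(n):
    # Uniform point on the unit sphere: by Archimedes' hat-box theorem,
    # z is uniform on (-1, 1). With the sun along +z, cos(zenith) is
    # simply z, clipped to zero on the night side.
    z = random.uniform(-1.0, 1.0)
    total += max(z, 0.0)

mean_cos_zenith = total / n     # -> ~0.25, the disc/sphere factor of 1/4
S0 = 1361.0                     # nominal solar constant, W/m^2 (assumed)
print(round(mean_cos_zenith, 2))        # ~ 0.25
print(round(S0 * mean_cos_zenith))      # ~ 340 W/m^2 global-mean insolation
```

    [So the cos(zenith) weighting, night side included, already contains the latitude dependence Geoff asks about; the trig functions that go to zero at the terminator are harmless in the integral.]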

    1. Geoff “To the extent that you are concentrating on tau, that is good.”

      Consider this: tau = -ln(S_T/S_U) is pretty much the 150-plus-year-old definition of the optical depth (tau).

      This tau = -ln ((ET_U + S_T)/S_U) is not.

      As I am sure you are aware.

      Now try to imagine why some of us think it’s a waste of time discussing this with the person that makes the latter claim.

      That person also keeps quoting figures from F,T&K08 and K&T97, who claim to have obtained their numbers (especially OLR) from the CERES project. The person who calibrated those instruments and briefed NASA on their use does not understand how K&T, and later F,T&K, obtained their numbers from it, and until they give an explanation as to how, as requested, there is little point in continuing with it.

      1. When you say “The person who calibrated those instruments and briefed the NASA on their use does not understand how K&T and later FT&K obtained their numbers from it and until they give an explanation as to how as requested there is little point in continuing with it.” you are presumably referring to Miskolczi and a claim made by him and him alone!

        As far as I can find there is not one single independent competent person (including Gupta!) in the field of atmospheric radiation who supports Miskolczi’s claims.

        I have actually bothered to get most of the references quoted in F,T&K09, and it seems to me (as a non-expert) that it is a reasonable review and summary of the findings and data which can be found in a relatively large number of papers. We are not talking about numbers that are hard to understand here. This is not particularly obscure stuff.

        I’ll hazard a guess you have not read a single one of the many papers cited by F,T&K09 and tried to check these data out for yourself.

        To be specific:

        (1) Outside of M&M04 and M07 I cannot find anywhere data which independently supports a mean global all sky S_T of the order of 60 – 65 W/m^2 as claimed by Miskolczi and I’ve expended a lot of effort to find it.

        (2) I also cannot find anywhere data which independently supports a mean global all sky tau of the order of 1.87 as claimed by Miskolczi.

        Where is the independent verification that NONE of the numerous authors who wrote the papers which I cited at the start of this thread (and which appeared as references in F,T&K09) know what they are writing about (i.e. that they are incompetent)?

        Where is the independent evidence for Miskolczi’s (and, by proxy, your own) assertions that K&T97 and F,T&K09 (and hey, let’s not forget NASA too) are all wrong?

        So to hold up to ridicule my suggestion – that the only way a so-called global all-sky S_T of 60 – 65 W/m^2 could conceivably be claimed by Miskolczi is to add another LW IR component into S_T, e.g. the emission by clouds which escapes the TOA (there could possibly be something else, but I can’t find it) – on the basis of one lone maverick who has not found honest support for his claims anywhere, either when they were originally made or indeed since, is simply ridiculous.

        You have no bona fides to do that. It only shows you up as another lone, stubborn and idiosyncratic maverick like Miskolczi.

        But hey, you are not an authority or scientist in this field.

        As I pointed out above, IF Miskolczi did have serious claims AND they could be even partly verified by prominent sceptics such as Lindzen, Spencer, McIntyre etc., then surely Miskolczi would have reappeared at the 2nd Heartland Conference and his ‘Theory’ would today actually be going somewhere – rather than still just blowing out of your Antipodean backside?

        Or is there a conspiracy against Miskolczi in the ‘Sceptical Establishment ‘ as well…..?

      2. The latest report from Anthony Watts on the real state of the US Historical Climate Network (USHCN) contains the following text:

        “We found stations located next to the exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat.”

        “Check out the site survey photographs showing temperature stations next to brick and concrete walls, sited on or next to concrete, …..”

        “Check out the photographs from heat cameras showing concrete more than 10 degrees warmer than the air temperature.”

        What was all that Kuhnkat and Nick Stokes were telling us about the lack of near-surface temperature discontinuities?

        Clearly not a notion that Anthony Watts ‘warms to’!

      3. OK, I saw lots of infrared photos, indicating that there are big instantaneous temperature differences of different surfaces, measured by infrared thermometry. Especially in a built or somewhat industrial environment, which relates to the point that AW is trying to make. But that doesn’t imply a discontinuity. All it says is that you have surface heterogeneity, like hot pavement in the sun vs lawn. And that means you must have variation on a smaller spatial scale, over short periods of time.

        None of this relates to the original argument, which tried to stretch an approximate solution of the radiative transfer equations to claim a general temperature discontinuity between surface and atmosphere. Oddly, the original fallacious argument was advanced to try to discredit the Milne solution by saying that such a discontinuity is impossible, but now seems to have morphed into the credulous thinking that the inferred discontinuity is real.

      4. Steve Short,

        please state your definition of DISCONTINUITY in the physical world.

        Your examples have nothing to do with what I, and probably the others seem to think it is.

      5. If the temperature from the ground up actually passes through a significant maximum (or minimum) over height ranges which may be quite small – e.g. just within the elevations of crops or trees, or the lower parts of valleys, urban canyons, etc. – then that is mathematically equivalent to a discontinuity for modelling purposes, since, as must be obvious, the temperature then does NOT change monotonically moving upwards from the surface and cannot be modelled as such. If you need me to explain what monotonically means, you shouldn’t be trying to start an argument on this subject. All the Miskolczi math falls over if such low-level inversions occur. Even the conventional treatment ignores all such micrometeorological situations.

        As you can see I already proved Nick has no idea how/why/when such low level inversions occur as evidenced by his deathly silence after I quoted a 2001 micrometeorology textbook and a 2006 paper on the very situation I had raised (and he was getting all high and mighty about).

        If that is what he was trying to say, then Jan was correct that inversions are functionally equivalent to discontinuities for the mathematical purposes we were considering.

        It may be worthwhile talking to those who fly sailplanes, hang gliders, paragliders or even crop-dusting planes. Those communities tend to have real hands-on knowledge, based on often hair-raising experience of the realities of low-level atmospheric inversions and other sharp discontinuities, e.g. shears.

        As you well know, there’s just no substitute for going out and experiencing reality, no matter how nicely gold-plated or pre-heated the toilet seat from which one normally likes to pontificate (I’m quoting my long-deceased old Dad again – an amateur philosopher out there with the best and worst of ’em).

      6. Steve,
        A discontinuity is a cliff, not a hill. Certainly a deviation from monotonicity does not imply discontinuity.

        I didn’t reply earlier because if a local heat gradient under trees proves something, then I’ve lost track of what it is.

      7. This discussion has descended into semantic farce.

        It’s all a question of the scale at which you look at something. Magnify any cliff and you will find a series of steep slopes (not to mention all sorts of roughness). So, if you like, I may label as a ‘discontinuity’ a relatively steep gradient, depending on the scale one chooses to (or can) look at, and maybe deal with mathematically at bulk scale. For many problems that may well be a practical scale. Then we might even be able to make it/call it a boundary condition (right, Jan?).

        As an expert in hydrodynamics Nick knows full well that at the end of the day it is all a question of the scale you employ. Look in fine scale and you will find a host of steep gradients or discontinuities even right down to complete fractality.

        Nick knows all this full well – he just wants to play a silly game of ‘let’s catch the other guy out’. I don’t buy into it.

        For many practical purposes Nature is ubiquitously littered with steep gradients/discontinuities.

        For example, circle up in a good large thermal in a sailplane and you could easily be rising at (say) 10 foot/sec up. You may even be able to log the lapse rate as you do so. But just slip outside the margins of the thermal by about 10 or 20 foot and you will probably be in air descending at (say) 10 foot/sec. Sharp gradient? Discontinuity? I would say yes (for all practical purposes).

        A forester may have a tower in the middle of a forest logging vertical gradients of temperature, relative humidity, insolation penetration etc. Will he sometimes find some very steep gradients in there? You betcha!

        As for Kuhnkat – well, I doubt he could always find the right end of a monkey wrench, that big ol’ kuhnskin cap regularly slips down over his eyes and ears so much.

        Yep.



  140. I agree with kuhnkat here. The discussion about discontinuity is misconceived in its origin, because it was inspired by a math approximation extended beyond its region of applicability. But a true discontinuity in a fluid is impossible. The temperature difference across a region is the heat flux times the width, divided by the conductivity (or heat transfer coefficient, or whatever you want to call it). So a finite difference over zero width requires either infinite flux or zero conductivity.

For a very big temperature change over a small distance you need either a big flux or a good insulator. A big heat flux usually can't be sustained, because you run out of heat. That leaves a good insulator. Now, the air is generally moving, and turbulent diffusion is effective. It's only when you get very still air that it becomes a fairly good insulator. That's why you can get a near discontinuity on a frosty night. That means a gradient of a degree C or two per metre, total difference maybe 3 – 4 C (frost on grass etc.).

You can't have a steep gradient the other way (hot at the bottom). It's convectively unstable.
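[Ed.: Nick's flux/width/conductivity relation is just ΔT = q·d/k for steady one-dimensional conduction. A small illustrative calculation – the flux and layer thickness are assumed round numbers, not measurements; the conductivities are standard handbook values:]

```python
# Temperature drop across a thin layer under steady one-dimensional
# conduction: delta_T = q * d / k. All inputs below are illustrative.

def delta_T(q_flux, thickness, conductivity):
    """Steady-state temperature difference (K) across a slab."""
    return q_flux * thickness / conductivity

k_still_air = 0.026   # W/(m K), still air near 0 C
k_water     = 0.6     # W/(m K)
k_rock      = 2.0     # W/(m K), typical consolidated rock

q = 50.0              # W/m^2, an assumed modest surface heat flux
d = 0.01              # m, a 1 cm layer

for name, k in [("still air", k_still_air), ("water", k_water), ("rock", k_rock)]:
    print(f"{name}: {delta_T(q, d, k):.2f} K across 1 cm")
```

At this flux, still air sustains almost 20 K across a single centimetre, while water and rock sustain well under 1 K, which is exactly why a near-discontinuity shows up only where the air is very still.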

  141. "You can't have a steep gradient the other way (hot at the bottom). It's convectively unstable."

Non-inverted lapse rate. Frosts occur when there is a temperature inversion, cold at the bottom. On frosty nights you can have gradients of 10,000 K/m over a few millimetres.

Get out there with a thermometer some time, Nick.

  142. Life is filled with profound ironies. I'm with Jan on his reply. The obvious reply to Nick's naive comments was that one simple word: inversion.

"For a very big temp change over a small distance you need either a big flux or a good insulator. A big heat flux usually can't be sustained, because you run out of heat. That leaves a good insulator. Now the air is generally moving, and turbulent diffusion is effective."

Not invariably true.

FYI, there is a popular phenomenon in hang gliding, particularly in the US, called 'magic air'. What happens is that in dense pine forests in the bases of valleys, the air heats up throughout the day and forms an inversion, sticking close to ground level due to a (yes) viscous attachment to the trees. Surprise, surprise: fluids do have viscosity, and sometimes that is sufficient to overcome even temperature gradients. In other words, an inversion forms. It is not until late afternoon, when cold katabatic air flows down tributary gullies, entering the valley-floor basal forests and getting under the warm air, that the inversion starts 'lifting off'.

One can then hang glide for about 2 – 3 hours, late afternoon to early evening, 100 – 300 feet AGL, in air which is constantly and gently rising at about 2 – 4 foot/min, thus counteracting the sink rate of the glider. I have done this many times. The sensation is delicious. One of the thrilling side effects of this phenomenon was that I could cruise around over small lakes created by beaver dams, almost hands off the A-frame, photographing beavers as they scudded backwards and forwards across their dams. Having never seen beavers in my life before, this was a big, big thrill. Magic air indeed.

We need to remember that solid ground is not a fluid. Groundwater within solid ground hardly behaves as a fluid. Air above the ground will not necessarily behave as a simple fluid, free to instantly convect. Having spent 12 years in an Australian Fed. Govt. research organisation and 3 in a Swiss one, I love the freedom and intellectual delights that come from doing pure research. I also know what it is like to live in academia. But such places are not the font of absolute truth.

Next time, if you are out there, just for curiosity try sticking a thermometer onto hot tarmac which a fat goanna has just vacated. Or watch carefully, and this time notice that jackrabbit who 'flicked off' a dust devil as he scampered across a gibber plain. There is an even greater wisdom which comes from just getting out there and actually experiencing the fantastic variety and subtleties of what Mother Nature 'has to throw at us' than we will ever find inside a laboratory or an office.

Call them very steep temperature gradients if you wish, rather than discontinuities, but don't ever be so very, very foolish as to claim they don't exist.

  143. No, you're not with Jan. I am, though he doesn't know it. The frost situation I was talking about is inversion, in a lapse-rate sense. The bottom air is cold, and the temperature rises as you go up. What you are talking about, though, is not inversion. It's stability in the face of a super lapse rate, which would normally be convectively unstable. The undoing of this by the katabatic winds restores the instability, which creates the updrafts that you float on.

So much for the formal argument. I hesitate to cause a distraction, but I don't believe your viscosity story. And I spent some years in the fluids lab at Highett, where they experiment with natural convection, so it isn't just math theory. But gas viscosity doesn't work like that. Air is always free to convect, if the temperature gradient is there.

  144. "It's stability in the face of a super lapse rate, which would normally be convectively unstable. The undoing of this by the katabatic winds restores the instability, which creates the updrafts that you float on."

Huh?

I couldn't give a tinker's cuss if you don't believe my viscosity story, because there is clearly negligible convective rise from the bases of the valleys during the day. It's been tried. It is simply not possible to thermal off the bottoms of the valleys during most of the day. The only significant thermals form off slopes higher up. So how come this stability in the face of a super lapse rate builds up over the better part of the day? The evening 'magic air' even feels warm!

Be careful you don't 'super lapse' into instability of rationality in your argument. I also don't care how many years you spent in how many labs experimenting with whatever. Been there, done that. Those who spend their lives in glass houses shouldn't stow thrones – they can never successfully sit on 'em anyway.

  145. In the forest (along with Little Red Riding Hood, the wolf, woodpeckers etc. – presumably we can forget the beavers; don't want to blow my argument 'wide open').

Have you perchance considered the properties of pine forests under daily irradiation?

  146. "The temperature inversion in the lower part of the canopy is a typical feature of daytime temperature profiles in tall crop and forest canopies…."

– S. Pal Arya, Introduction to Micrometeorology, 2001

  147. "Here, we examine whether sub-canopy flow through a small gully in the vicinity of the flux tower was thermotopographically driven, and was linked to the flow divergence found above canopy. While flow in the gully was frequently aligned with the mean wind aloft, indicating dynamic coupling, there were periods when the wind in the gully appeared to be decoupled from the flow aloft and was consistent with thermotopographic flow forcings (including geometry, temperature gradient, and net radiation). During the leaf-off season, these episodes exhibited a classic thermotopographic pattern, with down-gully nighttime flow and up-gully daytime flow. However, during the leaf-on season, the pattern was reversed: during the daytime, flow was down-gully, consistent with inversion conditions occurring below the dense leaf canopy; at night, flow was up-gully, consistent with below-canopy lapse conditions. The thermotopographic flow during the leaf-on season suggests horizontal flow convergence at night and divergence during the day, and is shown to be decoupled from the flow aloft. While this research focuses only on flow patterns and not explicitly on CO2 gradients or fluxes, these findings suggest that inferences about drainage flow/advection and corrections to flux measurements based on above-canopy conditions alone may be inappropriate."

– Froelich and Schmid, 2006

  148. Well now. Silence. Time for a new thought? Since the “consensus” hypothesis on the GHE is certainly suspect, given all the relevant information – i.e., no temperature increases for 12 years and absolutely no other empirical or theoretical evidence to support said nonsense – and since the Miskolczi hypothesis has been discredited by the experts here, maybe we should go back to my simpleton hypothesis that the “greenhouse effect” is nothing more than the ability of the Planet to store heat from one day to the next. And the corollary that IR radiation doesn't have a damn thing to do with it. It now looks like this is as good an hypothesis as any other. LOL.

  149. The latest report from Anthony Watts on the real state of the US Historical Climate Network (USHCN) contains the following text: "We found stations located next to the exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat." "Check out the site survey photographs showing temperature stations next to brick and concrete walls, sited on or next to concrete, ….." "Check out the photographs from heat cameras showing concrete more than 10 degrees warmer than the air temperature." What's all this Kuhnkat and Nick Stokes were telling us about the lack of near-surface temperature discontinuities? Clearly not a notion that Anthony Watts 'warms to'!

  150. OK, I saw lots of infrared photos, indicating that there are big instantaneous temperature differences between different surfaces, measured by infrared thermometry. Especially in a built or somewhat industrial environment, which relates to the point that AW is trying to make. But that doesn't imply a discontinuity. All it says is that you have surface heterogeneity, like hot pavement in the sun vs lawn. And that means you must have variation on a smaller spatial scale, over short periods of time. None of this relates to the original argument, which tried to stretch an approximate solution of the radiative transfer equations to claim a general temperature discontinuity between surface and atmosphere. Oddly, the original fallacious argument was advanced to try to discredit the Milne solution by saying that such a discontinuity is impossible, but now seems to have morphed into the credulous thinking that the inferred discontinuity is real.

  151. Steve Short, please state your definition of DISCONTINUITY in the physical world. Your examples have nothing to do with what I, and probably the others, seem to think it is.

  152. If temperature from the ground up actually passes through a significant maximum (or a minimum) over height ranges which may be quite small – e.g. just within the elevations of crops or trees, or the lower parts of valleys, urban canyons etc. – then that is mathematically equivalent to a discontinuity since, as must be obvious, the temperature then does NOT change monotonically moving upwards from the surface and cannot be modeled as such. If you need me to explain what monotonically means you shouldn't be trying to start an argument on this subject. All the Miskolczi math falls over if such low level inversions occur. Even the conventional treatment ignores all such micrometeorological situations. As you can see I already proved Nick has no idea how/why/when such low level inversions occur, as evidenced by his deathly silence after I quoted a 2001 micrometeorology textbook and a 2006 paper on the very situation I had raised (and he was getting all high and mighty about). If that is what he was trying to say, then Jan was correct that inversions are functionally equivalent to discontinuities for the mathematical purposes we were considering. It may be worthwhile you talking to those who fly sailplanes, hang gliders, paragliders or even crop dusting planes. Those communities tend to have real good hands-on knowledge, based on often hair-raising experience, of the realities of low level atmospheric inversions and other sharp discontinuities, e.g. shears. As you well know there's just no substitute for going out and experiencing reality, no matter how nicely gold-plated or pre-heated the toilet seat from which one normally likes to pontificate (I'm quoting my long deceased old Dad again – an amateur philosopher out there with the best and worst of 'em).

  153. Steve, a discontinuity is a cliff, not a hill. Certainly a deviation from monotonicity does not imply discontinuity. I didn't reply earlier because if a local heat gradient under trees proves something, then I've lost track of what it is.

  154. This discussion has descended into semantic farce. It's all a question of the scale at which you look at something. Magnify any cliff and you will find a series of steep slopes (not to mention all sorts of roughness). So, if you like, I may label as a 'discontinuity' a relatively steep gradient, depending on the scale one chooses to (or can) look at, and maybe deal with mathematically at bulk scale. For many problems that may well be a practical scale. Then we might even be able to make it/call it a boundary condition (right Jan?). As an expert in hydrodynamics Nick knows full well that at the end of the day it is all a question of the scale you employ. Look at fine scale and you will find a host of steep gradients or discontinuities, even right down to complete fractality. Nick knows all this full well – he just wants to play a silly game of 'let's catch the other guy out'. I don't buy into it. For many practical purposes Nature is ubiquitously littered with steep gradients/discontinuities. For example, circle up in a good large thermal in a sailplane and you could easily be rising at (say) 10 feet/sec. You may even be able to log the lapse rate as you do so. But just slip outside the margins of the thermal by about 10 or 20 feet and you will probably be in air descending at (say) 10 feet/sec. Sharp gradient? Discontinuity? I would say yes (for all practical purposes). A forester may have a tower in the middle of a forest logging vertical gradients of temperature, relative humidity, insolation penetration etc. Will he sometimes find some very steep gradients in there? You betcha! As for Kuhnkat – well, I doubt he could always find the right end of a monkey wrench, that big ol' kuhnskin cap regularly slips down over his eyes and ears so much. Yep.
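For readers puzzled by the 'hill versus cliff' exchange above, the distinction is easy to make precise numerically: a profile with an inversion is non-monotonic but continuous (its adjacent-sample steps shrink as the grid is refined), whereas a true discontinuity keeps a finite jump at any resolution. A minimal sketch with made-up, purely illustrative profiles (the canopy-inversion shape and the 2 K jump are assumptions, not data):

```python
import numpy as np

z = np.linspace(0.0, 100.0, 10001)  # height above ground, m (grid spacing 0.01 m)

# Smooth illustrative profile: standard lapse plus a warm bump near a
# hypothetical canopy top at ~20 m, which creates an inversion below it.
T_inversion = 288.0 - 0.0065 * z + 2.0 * np.exp(-((z - 20.0) / 8.0) ** 2)

# Discontinuous illustrative profile: a genuine 2 K jump at z = 20 m.
T_jump = 288.0 - 0.0065 * z + np.where(z < 20.0, 0.0, 2.0)

def max_step(T):
    """Largest temperature change between adjacent grid points."""
    return float(np.max(np.abs(np.diff(T))))

# The inversion profile is non-monotonic (temperature rises, then falls)...
assert np.any(np.diff(T_inversion) > 0) and np.any(np.diff(T_inversion) < 0)

# ...but its largest adjacent-sample step is tiny and shrinks with the grid
# spacing (continuity), while the jump profile keeps a ~2 K step no matter
# how fine the grid is made.
print(max_step(T_inversion))  # small: goes to 0 as the grid is refined
print(max_step(T_jump))       # stays near 2 K under refinement
```

In other words, Steve's inversion is a "hill" (a sign change in the gradient) and Nick's discontinuity is a "cliff" (a step that survives any magnification); the two are genuinely different mathematical objects even if both defeat a monotonic model.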

  155. This whole discussion about whether a temperature discontinuity is observed at the Earth’s surface or not (and it seems trivial to know that there either is or there isn’t depending on what you actually mean by ‘temperature discontinuity’) seems to me to be entirely beside the point. The temperature discontinuity that Miskolczi is talking about is the ‘radiative equilibrium temperature discontinuity’ and it’s not meant to be observed at the Earth’s surface in either the classical theory OR the Miskolczi theory — because the assumption of radiative equilibrium is supposed to break down at the convective surface of Earth in the classical theory.

    By the way, I believe that the earliest reference to the temperature discontinuity in the English literature in connection with the modern era of GCM modelling is in the seminal paper by Manabe and Möller 1961, “On the radiative equilibrium and heat balance of the atmosphere”, Monthly Weather Review, 89, 12, 503-532.

    http://docs.lib.noaa.gov/rescue/mwr/089/mwr-089-12-0503.pdf

    After explaining how they assumed rather than calculating the Earth’s surface temperature they go on to describe their earlier result (only available in German). I quote from pp. 518-519:

    In the computation by the matrix method, we allowed the possibility of a temperature discontinuity at the earth’s surface as was first obtained by Emden 1913, whereas in the present computation we did not. The magnitude of temperature discontinuity was very small, about 0.06 C. This is much smaller than the discontinuity of about 20 C which Emden obtained in his computation of radiative equilibrium based upon the assumption of gray radiation. Accordingly, no large error would be introduced by the neglect of this temperature jump at the earth’s surface, which we did in the present computation.

    Later in Manabe and Strickler 1964 (“Thermal Equilibrium of the Atmosphere with a Convective Adjustment”, J. Atmos. Sci., 21(4), pp. 361–385, p. 362):

    http://ams.allenpress.com/perlserv/?request=res-loc&uri=urn%3Aap%3Apdf%3Adoi%3A10.1175%2F1520-0469%281964%29021%3C0361%3ATEOTAW%3E2.0.CO%3B2

    Section 2a. Pure Radiative Equilibrium. … In the course of the computation, the temperature jump which theoretically exists between the atmosphere and the earth’s surface is smoothed out by the vertical finite difference representation of the equations of radiative transfer. Fortunately, the magnitude of the theoretical temperature jump is much smaller than would be the case if a gray assumption were made for the absorption and emission of radiation. This is due partly to the very strong absorption near the line centers, and also to the upward radiation from the earth’s surface through the nearly transparent regions in the line wings and through the window region of water vapor, which compensates for most of the net downward solar radiation at the earth’s surface. Accordingly, the condition of no heat storage at the earth’s surface could be satisfied radiatively by a temperature jump which is much smaller than that for a gray absorber. In the previous study […the above-mentioned German language paper], this temperature jump at the surface turned out to be less than 1 C, depending on the amount of water vapor and other parameters. Thus, the neglect of the temperature jump would not produce a serious error in the results.

    Then in Manabe and Wetherald 1967 and more or less from then on it seems that it was assumed in GCM modelling that this radiative equilibrium temperature jump should be set to 0 and treated as empirically unobservable:

    Manabe and Wetherald 1967 “Thermal equilibrium of the atmosphere with a given distribution of relative humidity”, J. Atmos. Sci, 24(3), pp. 241-259:

    http://ams.allenpress.com/perlserv/?request=res-loc&uri=urn%3Aap%3Apdf%3Adoi%3A10.1175%2F1520-0469%281967%29024%3C0241%3ATEOTAW%3E2.0.CO%3B2

    …the radiative convective equilibrium of the atmosphere…should satisfy the following requirements: … (2) No temperature discontinuity should exist

    I should add that I am yet to find a single reference to Milne 1922 or indeed to anything to do with Milne in all of this literature; it’s widely known in the literature that the temperature discontinuity originated with Emden in 1913. It’s unclear that it can have had any effect in any GCM as all of them have assumed the temperature discontinuity would be negligible, even in the case of radiative equilibrium, which doesn’t seem to be the case of the earth’s surface.
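The gray-atmosphere surface jump discussed in the Manabe quotations above can be reproduced in outline from the standard two-stream gray radiative-equilibrium profile, sigma*T^4(tau) = (F/2)(1 + 3*tau/2), for which the jump at the ground is sigma*T_g^4 - sigma*T_air^4 = F/2. This is a textbook sketch, not Manabe and Möller's (or Emden's) actual calculation; T_e = 255 K and the sample optical depths are assumed illustrative values:

```python
SIGMA = 5.670e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
F = SIGMA * 255.0 ** 4      # net (effective) flux, assuming T_e = 255 K

def gray_profile_T4(tau):
    """sigma*T^4 of the air at optical depth tau in Eddington gray
    radiative equilibrium: sigma*T^4 = (F/2)*(1 + 3*tau/2)."""
    return 0.5 * F * (1.0 + 1.5 * tau)

def surface_jump(tau_s):
    """Return (T_ground, T_air_at_surface) for total optical depth tau_s.

    Surface balance: sigma*T_g^4 = absorbed solar + downward IR = F + D,
    and D = sigma*T_air^4 - F/2, so sigma*T_g^4 = sigma*T_air^4 + F/2.
    The sigma*T^4 jump is fixed at F/2, so the jump in kelvin shrinks
    as tau_s (and hence T_air) grows."""
    T4_air = gray_profile_T4(tau_s)
    T_air = (T4_air / SIGMA) ** 0.25
    T_g = ((T4_air + 0.5 * F) / SIGMA) ** 0.25
    return T_g, T_air

for tau_s in (0.5, 2.0, 4.0):
    T_g, T_air = surface_jump(tau_s)
    print(f"tau_s = {tau_s}: T_g = {T_g:.1f} K, "
          f"T_air = {T_air:.1f} K, jump = {T_g - T_air:.1f} K")
```

For moderate optical depths this gives a jump of order 10-30 K, which is the scale of the "about 20 C" gray-atmosphere discontinuity Emden found; the non-gray line-by-line physics described by Manabe and Strickler collapses it to under 1 C.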

  157. Doesn’t the surface discontinuity or not depend on time rather than cliffs and hills; for instance the air adjacent to the surface is heated by the surface but will be compressed by the air above it until it becomes warmer than that higher air; the most the surface can warm the adjacent air is to its own temperature at which time the adjacent air must be warmer than the higher air and will convectively ascend to be replaced by the cooler descending air; thus at ‘lift-off’ there is no discontinuity but at the other ‘refueling’ stages there is.

    1. Hi cohenite

      Nick might say some things that are, in my opinion, silly, but he's not quite that silly. Nick was being metaphorical: the cliff refers to a singularity or, if you prefer, an infinite gradient, which really does not lend itself to being divided into “hills” by altering the scale.

      1. Er, thank you Jan. Quite so. But the “hill” refers to Steve's odd belief that an inversion layer is a discontinuity (hill – hilltop – change of slope – inversion).

      2. No, I don’t think you are silly either Nick.

        But putting Jan’s uncharacteristic little bout of empathy aside (is this not the winter of our discontent….), you can get really silly when you pretend you know what goes on inside the canopies of forests.

        I strongly suggest you brief yourself on the relationships between aerodynamic resistance, canopy resistance, Vapor Pressure Deficit and Bowen Ratio etc. throughout the day for deciduous forests etc., viz:

        http://books.google.com.au/books?id=KaJHBv9FbYIC&pg=PA280&lpg=PA280&dq=dry+thermal+flux+magnitude&source=bl&ots=2vSmroL0nZ&sig=oXVdKWhAANxZRoUf2fLB7B-Ndt4&hl=en&ei=4aQjSruzGoyBkQW5kfWDBQ&sa=X&oi=book_result&ct=result&resnum=2#PPA281,M1

      3. Steve, I don’t think I’ve claimed knowledge of what goes on in canopies (although I could – in CSIRO my first job was a four year stint working for John Philip – known for the soil plant atmosphere continuum).
        I fully accept that during the day, when sunlight is intercepted by the canopy, heat must flow downwards from canopy to Earth. I just can’t see what it’s relevant to.
        My objection is just the meaning of words. An inversion (change of gradient) is not a discontinuity.

  159. Guys:

    The film of air in contact with a solid surface is always at the temperature of the solid surface – no exceptions. The temperature gradient next to the surface may be extreme (glowing electric hotplate) or zero (thermal equilibrium). There are ways of calculating this gradient, but all are empirical and not based on first principles.

    The temperature discontinuity arises in math-model-land when calculations divide the atmosphere up into layers and one calculates from the top down making no a priori assumptions about conditions at the surface. If you end up with Aa != Ed you have a problem, that is, an unphysical temperature discontinuity. What you should do when you discover this is go fix your model.

    There is no such thing as a temperature discontinuity in nature – only in mathematical models.

  163. Pochas,

    Very well, so whose mathematical model actually has a temperature discontinuity?

    Here is Lindzen in 1994:

    As was noted long ago by Emden 1913, radiative equilibrium profiles are intrinsically impossible since they lead to large decreases in temperature with height which are unstable with respect to buoyant convection.

    http://www-eaps.mit.edu/faculty/lindzen/191_ach.pdf

    Now the thing is, everyone knows and agrees that the convective adjustment was just a hack, and that even the newest parameterisations of convection are also hacks.

    If this is all Miskolczi is telling us, then the theory is not original.

    Or, if he is telling us something else, then what is it? Please tell me what the actual problem is that Miskolczi has solved in his paper? Miklos posted up quotations at his website from Milne and Eddington and we were all led to believe that Milne had misinterpreted Eddington 1916 and Schwarzschild 1906, and that all climatologists since have lacked the brains and initiative to go back and check over Milne’s assumptions.

    This is unquestionably wrong now, so then what is the error and who made it? Jan told me the other day that the theory now is that oh, well, every single astrophysicist and climatologist since Schwarzschild has independently made the same mistake. Now, that’s absurd. I know for a fact that none of us have read Emden 1913. So what, was it Schwarzschild who made the mistake?

    If no one knows the answer, can’t we all just let this go & start focusing on what the real errors are in AGW theory?

    1. “Jan told me the other day that the theory now is that oh, well, every single astrophysicist and climatologist since Schwarzschild has independently made the same mistake.”

      That’s another real hoot just like the one where I heard someone say we merely had to wait around for all of NASA to bow down, admit they had all been idiots, and that only Miskolczi was the ‘one true prophet’ of LBL radiation codes.

      Pigs WILL fly – and into space too!

    2. “Jan told me the other day that the theory now is that oh, well, every single astrophysicist and climatologist since Schwarzschild has independently made the same mistake. Now, that’s absurd.”

      Well Alex I didn’t realise that you had misunderstood what I said so completely, perhaps I should say nothing at all to you.

      People keep making the same mistake because the keep on kow-towing to those they think are giants instead of standing on their shoulders, and since you are so knowledgeable perhaps you can explain what both Pat Cassen and Nick Stokes have dodged and that is how we can get two boundary conditions for a first order differential equation where the variable in question is unbounded.

      I await your answer with bated breath.

      1. Jan, this is tiresome. I’ve spelt it out many times. Your turn. What is the ode with 2 bc? What are they? What variable is unbounded? Give a proper argument instead of muddled allusions.

      2. “Jan, this is tiresome.”

        I agree Nick.

        Here's the first definition:
        http://www.answers.com/topic/semi-infinite

        semi-infinite: unbounded in one direction or dimension.

        The second:
        http://en.wikipedia.org/wiki/Laplace_transform#Formal_definition

        You will no doubt notice that little '8' on its side – that is the symbol for infinity – which means that the integral is unbounded in one direction; in this case the variable is time, and that is unbounded, i.e. it goes on forever.

        You will no doubt notice that the bilateral Laplace transform is unbounded in both directions.

        In the radiation transport equations the variable is not time but tau, the optical depth. In the classical solution it is this variable that is unbounded, i.e. infinite.

        Now the two boundary conditions are the two values implied for the surface temperature for a finite tau, where tau was assumed infinite: equations 15 & 16 in Miskolczi's paper, or 1 & 2 in Lorenz and McKay, for the differential equations solved in the semi-infinite case.

        This is pretty elementary stuff and should not be a problem at all for someone who has a PhD in control mathematics.

      3. Jan,
        Again, you’re just not giving a connected argument. Yes, I know what semi-infinite means, and what a Laplace transform is. But where are they used in this theory?
        FM says he’s using a “semi-infinite” solution in Eq 15. But he isn’t. tau is there in the equation, and seems perfectly finite. He even, in the leadup to 16, defines a value of tau, tau_C, at the surface.

        L&M say nothing about their model being semi-infinite. Again tau is there in the equation, and they define tau_0 as the value at the surface. And they include it in their equations in the normal way.

        And you still haven’t addressed this nonsense about two boundary conditions.

      4. “Yes, I know what semi-infinite means, and what a Laplace transform is.”

        Which is what I would expect.

        “L&M say nothing about their model being semi-infinite.”

        I don’t expect they should have to for someone who knows “know what semi-infinite means, and what a Laplace transform is” to know that it is.

      5. Again, these are useless answers. You’ve said L&M assume tau semi-infinite. Back it up!

      6. “Again, these are useless answers. You’ve said L&M assume tau semi-infinite. Back it up!”

        I don’t understand Nick I thought you had a PhD in control mathematics I only did one semester of of control mathematics and I can see that the two L&M equations 1 & 2 as well as the two Goody and Yung ones Miskolczi quote (16 & 17) are solutions of equation 12 assuming tau varies between e>0 to infinity.

        How is it that you can’t? At the very least L&M eqn 1 & 2 and M 16 & 17 should have an exp(-tau) term if finite tau was assumed.

      7. I can’t see it (and don’t believe it) and you’re not helping. Where does this exp(-tau) come from? Both L&M and FM specify an explicit finite tau range – 0 to tau_0 for L&M and 0 to tau_C for FM.

      8. Tell me, Nick: did you notice what happened to the equation when Miskolczi specified a finite tau in Appendix B? Do you notice the difference between his transfer function and, say, either of those in eqns 16 & 17 of the same paper? Anyone who knows 'what semi-infinite means, and what a Laplace transform is' should be able to tell us straight away. Also, you can use your superior PhD-in-control-mathematics training to tell us what equations B4, B5, B6, B7, B8, B9, B10 and, last but not least, B11 would look like had he specified tau -> 0 to tau -> infinity.

      9. Again, in Eq 16 he has a finite value tau_A at ground. The difference in Appendix B is that he's applying his one BC at the ground rather than at TOA. There's no issue of finiteness there.
        Of course, his problem then is that you can't get it right in both places. Right at ground – wrong at TOA. Big problem.

      10. That is how it has seemed to me too.
        BTW for the statement: “At the very least L&M eqn 1 & 2 and M 16 & 17 should have an exp(-tau) term if finite tau was assumed.” –
        I can’t see that bunging in exp(-tau) has any math logic to it. What am I missing?
        Please explain. Just the math.

  171. Jan,

    Sorry, I may have misunderstood you, but if not to suppose that all astrophysicists and climatologists alike have made the same error, then how else could you answer this? That one great astrophysicist (Milne) made a mistake, and that others took his word for it simply because no one ever imagined he’d make such a basic error — that was a believable story. So what is the story now?

    Pochas obviously doesn’t think it matters who really made the temperature discontinuity mistake, and maybe he is right? But for heaven’s sake, doesn’t this deserve some kind of explanation?

    1. Alex,

      Milne made no error. He was working with Solar type atmospheres and the computational shortcut worked fine there. The fact that early atmospheric types made the mistake of ASSUMING that this shortcut would be close enough for gubmint work here on earth and other planetary atmospheres is another issue entirely.

    2. Hi Alex,

      You have the story more or less straight now; what I objected to was “has independently”. I didn’t think there was anything independent about it, but more a case of people going along with it because it was convenient. How come it could go on so long?

      It’s what people wanted to hear. As Miskolczi quoted from Milne’s paper:
      “Assumption of infinite thickness involves little or no loss of generality”. Milne then goes on to provide the two-stream fudge, leaving no room for the IR window (transmission of IR through the atmosphere), which was in fact discovered some time later as more work was done on absorption coefficients.

      1. Jan,

        You can’t have it both ways.

        Either astrophysicists/climatologists have independently made the same mistake (and Milne 1922 does seem to be largely independent of Emden 1913) or one person made the mistake first (Schwarzschild??) and others copied (i.e. there was some kind of dependency).

        So which was it: who went along with whom because it was convenient? My bet is, you don’t know the answer to this question, which proves that you, like everyone else here, have taken this whole thing on faith because it says so in M’s 2007 paper.

      2. Alex Excuse me?

        Where does Emden or his brother-in-law say:

        “Assumption of infinite thickness involves little or no loss of generality”

        or anything like it?

        Do you honestly believe that anyone could read through that 1922 paper of Milne’s, find that quote and the fudge that everyone subsequently has used, without having heard of Emden, Schwarzschild and Gold, as you suggested to me earlier?

        Now kindly look at the equations from which Emden deduced his temperature discontinuity here, take a close look at what Milne comes up with, and then compare it with the one quote from Goody and Yung, and you’ll soon see (I still have a vain hope) why Milne and not Emden was cited as the source of the error.
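For readers following the Emden/Milne dispute, the discontinuity being argued over can be illustrated numerically. This is a minimal sketch of the standard Eddington two-stream grey-atmosphere result (not Emden's or Milne's exact equations; the OLR and tau values are illustrative assumptions, not taken from either paper):

```python
# Classic semi-infinite (Eddington) grey-atmosphere radiative equilibrium:
#   sigma*T(tau)^4 = (1/2) * OLR * (1 + 3*tau/2)
# The air just above the ground (tau = tau0) and the radiating ground itself
# differ by a fixed amount OLR/2 -- the "temperature discontinuity" at issue.

def air_T4(olr, tau):
    """sigma*T^4 of the air at optical depth tau (semi-infinite solution)."""
    return 0.5 * olr * (1.0 + 1.5 * tau)

def ground_T4(olr, tau0):
    """sigma*T^4 of the ground beneath an atmosphere of total depth tau0."""
    return olr * (1.0 + 0.75 * tau0)

olr = 240.0   # illustrative OLR in W/m^2 (assumed value)
tau0 = 1.87   # the whole-atmosphere tau debated in this thread

jump = ground_T4(olr, tau0) - air_T4(olr, tau0)
print(jump)   # algebraically OLR/2, independent of tau0
```

The point of the sketch is that in the semi-infinite solution the air–ground jump is OLR/2 no matter what the column optical depth is.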

  173. Jan:
    “Pochas obviously doesn’t think it matters who really made the temperature discontinuity mistake, and maybe he is right? But for heaven’s sake, doesn’t this deserve some kind of explanation?”

    pochas:
    Another mistake that appears to have a life of its own is the constant relative humidity assumption that is behind this whole AGW scare. Spencer, Lindzen and Miskolczi have all argued against this, Lindzen and Spencer with data and analysis, Miskolczi with the hypothesis being discussed here. Will this get into the models? Not while they are funded with US government AGW study grants. (I know, RH is now allowed to vary a little.) This mistake puts the temperature discontinuity mistake in the shade.

    These mistakes have wasted billions of dollars, but because of funding considerations they are bullet proof.

    Alex:
    “Now the thing is, everyone knows and agrees that the convective adjustment was just a hack, and that even the newest parameterisations of convection are also hacks.

    If this is all Miskolczi is telling us, then the theory is not original.

    Or, if he is telling us something else, then what is it?”

    pochas:
    It seems as though “this is nothing new” is often heard as the culprit covers his tracks. What M has done is to write a paper that presents a method, with constant tau and surface temperature equilibrium (let’s put Kirchhoff to bed), which, if it stands, is a strong refutation of alarmist AGW theory.

    I don’t think it’s necessary to debate eq (7) or the Virial rule any further. They don’t really matter.

    1. pochas:
      Another mistake that appears to have a life of its own is the constant relative humidity assumption that is behind this whole AGW scare. Spencer, Lindzen and Miskolczi have all argued against this, Lindzen and Spencer with data and analysis, Miskolczi with the hypothesis being discussed here. Will this get into the models?

      This silly furphy certainly seems to have a life of its own. There is no constant relative humidity assumption behind AGW. I know of no Lindzen/Spencer argument on this. Miskolczi doesn’t mention relative humidity anywhere.
      Please provide some evidence before propagating this nonsense.

      1. Hey, Nicko: From NASA:

        “In climate modeling, scientists have assumed that the relative humidity of the atmosphere will stay the same regardless of how the climate changes. In other words, they assume that even though air will be able to hold more moisture as the temperature goes up, proportionally more water vapor will be evaporated from the ocean surface and carried through the atmosphere so that the percentage of water in the air remains constant. Climate models that assume that future relative humidity will remain constant predict greater increases in the Earth’s temperature in response to increased carbon dioxide than models that allow relative humidity to change. The constant-relative-humidity assumption places extra water in the equation, which increases the heating.”

        http://earthobservatory.nasa.gov/Features/WaterVapor/water_vapor3.php

        Now, it’s your turn to provide a linky.

      2. OK, Jae
        We’ve seen this one before. It’s a science ed site, and they just got it wrong. Here is my linky. It is GISS Model E output. It’s interactive, so you have to set map type to trend, and quantity to relative humidity. You can play with different time periods. The results are interesting, but RH is definitely not constant.

        You can of course go here and look at the code. I’ve done that. The transport model for water is quite conventional. No sign of any constant RH assumption.

      3. Nick,

        you “proved” that the GISS model E does not assume, or output, a constant humidity.

        I DID notice that the only place it showed decreasing humidity was in the troposphere from 1980–2008, excluding the high latitudes and the equator.

        Since I think most people agree that the humidity went UP in the troposphere, and definitely did not go DOWN, the model is still WRONG!!!!

      4. Well, the goalposts are moving. But you’d better sort out which kind of humidity you are talking about. Check out specific humidity.

      5. If I could move the earth, I would!!!

        HAHAHAHAHAHAHAHAHA

        Unfortunately for your model the Specific Humidity is rising in the strat. Again, doesn’t match the earth.

        How about some arm waving for the reduction of water vapor by increase in CO2?? I ran across this guy looking for other things:

        http://www.geocities.com/profadrian/ScienceOfGlobalWarming.html

        Scan down to the Forcing Concept section. Sounds almost too simple to be real!!

        The way I understand what he is saying is that increase in ANY gas to the atmosphere would tend to reduce water vapor!!

        Cheers!!!

      6. Nick,
        C’mon…. do some reading…. the assumption of constant relative humidity, unlike temperature discontinuity, is all throughout the modern literature on GCM modelling, starting with the Manabe & Wetherald paper I pasted above. Lindzen’s arguments against it are in the last Lindzen paper I posted. If you then look in the ECHAM5 manual I posted, and followed that to Tiedtke 1989 cited therein, you can see for yourself that there is still a very unphysical hack in models for dealing with convection that goes back at least as far as 1989 (ironically, Lindzen himself seems to have had a significant role in creating the new hack so he’s eminently qualified to comment on its shortcomings).

      7. Alex,
        Yes, early models like M&W did make assumptions of that kind. But from the mid 70’s, models used a world grid and solved the transport equations directly. Then assumptions about RH were not only unnecessary, but unfeasible, since they would override conservation of mass.

        I saw nothing in the Lindzen 1999 chapter about RH in modern GCM’s. Convection adjustments are something different. Again his reference here seems to be to 60’s papers.

      8. Moved up to avoid too much thinning

        “wrong at TOA. Big problem.”

        For mathematicians, maybe. That the equations don’t hold where there are no absorbers, or air for that matter, is no problem at all: just exclude the point at tau = 0, like they do here.

        Now you have evaded the question: what would equations B4, B5, B6, B7, B8, B9, B10 and, last but not least, B11 look like had he specified tau -> 0 to tau -> infinity?

        I don’t know about anyone else, but I don’t see any problem with a temperature discontinuity between the finite source and an infinite sink at the TOA.

      9. It’s not a temperature equation, it’s a flux equation. And getting the outgoing flux wrong is a problem.
        In App B he’s applying conditions at ground. If tau is infinite, there is no ground. But you’re dodging the questions – L&M and FM both specify tau at ground (and for FM, not just in App B). Where’s the semi-infinite assumption?

      10. “And getting the outgoing flux wrong is a problem.”

        OLR = f * S_U, i.e. OLR = 2*S_U/(1 + tau + exp(-tau)) = 2*S_U * 1/2 when tau = 0,

        so when tau is 0, OLR = S_U. What a surprise.

        Now again: what would equations B4, B5, B6, B7, B8, B9, B10 and, last but not least, B11 look like had he specified tau -> 0 to tau -> infinity?
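For anyone checking the arithmetic above, here is a minimal sketch of the transfer function as quoted in this thread, f(tau) = 2/(1 + tau + exp(-tau)), with OLR = f * S_U taken as given (the value tau = 1.87 is the one under debate, used here purely as an input):

```python
import math

def f(tau):
    """Transfer function as quoted in this thread: OLR = f(tau) * S_U."""
    return 2.0 / (1.0 + tau + math.exp(-tau))

# At tau = 0 the denominator is 1 + 0 + 1 = 2, so OLR = S_U, as the comment says.
print(f(0.0))    # 1.0

# At the disputed tau ~ 1.87 the function comes out close to 2/3.
print(f(1.87))
```

Note that f(1.87) lands within about 0.01 of 2/3, which is presumably where the f = 2/3 figure mentioned in the head post comes from.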

      11. By the bye, Nick, have you by any chance noticed that when tau = 0 the TOA IS the ground, and roughly the same conditions apply as on the moon?

      12. Nonsense. tau is just an altitude measure, like pressure. It starts at tau=0 at TOA – any atmosphere.
        And (prev comment) at OLR, no, the equation you’ve written involves tau_A, the tau at ground. You can’t set that to 0. What you have to do is put tau=0 in eq 21.
        And again, the B eqs would be nonsense if he set tau to infinity. He didn’t, and for this purpose (BC at ground) he can’t.

      13. “tau is just an altitude measure, like pressure.”

        You can take it that way if you want, but you will be wrong. tau is the optical depth, which does vary with height, but tilde-tau is the average whole-atmosphere tau: tilde-tau_A is the optical depth of the entire column of the atmosphere, not just a layer of it. Regardless of what you think the mathematical meaning of the integrals is, I have just given you the physical meaning. You did notice that he had a different dummy variable in the integral, I hope.

        Equation 20 and Figure 3 are an equation and a graph of the effect of the average tau = tilde-tau_A on OLR and Bg.

        I had thought you had realised this by now.

      14. Quite wrong. Look at what FM says after eq 15:
        “where tilde-τ is the flux optical depth”
        “At the upper boundary tilde-τ = 0″
        Nothing whole atmosphere about that. He subscripts A to show ground values, which then means whole atmosphere.

      15. “And again, the B eqs would be nonsense if he set tau to infinity.”

        It didn’t stop Milne. Page 897 of his 1922 paper

        “Assumption of infinite thickness involves little or no loss of generality; we could if we liked, consider a mass of finite thickness with an inner boundary consisting of a black radiating surface, but since our results will only involve the optical thickness, we need only suppose the absorption coefficient or the density to become very suddenly large at the assigned depth in order to deduce the case of an inner boundary from the solution for an infinitely thick slab of material”

      16. There’s no indication this quote is relevant. I don’t think I have Milne’s paper, and Zagoni isn’t pushing it any more, but as I recall, he was treating radiation incident on a planet, not coming from the surface. You’ll have to do better than that.

      17. “I don’t think I have Milne’s paper,”

        Then get it; Google is your friend.

        “You’ll have to do better than that.”

        No Nick, you first: read the paper, and then answer what you have been evading.

        What would equations B4, B5, B6, B7, B8, B9, B10 and, last but not least, B11 look like had he specified tau -> 0 to tau -> infinity?

        Answer that, and the problem with Milne will become clear to you. It’s really immaterial whether we are talking about inbound or outbound: the atmosphere of the earth is nowhere infinitely thick and cannot be sensibly modelled as an “infinitely thick slab of material”.
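One way to see what a bounded atmosphere buys you: the directly transmitted part of the surface flux is S_T = S_U * exp(-tau_A), which is exactly the IR window an infinitely thick slab cannot have. A minimal sketch, with S_U and S_T values assumed for illustration from the numbers debated around this theory (not computed from any radiative code here):

```python
import math

def tau_from_fluxes(s_u, s_t):
    """Whole-column flux optical depth implied by S_T = S_U * exp(-tau)."""
    return math.log(s_u / s_t)

# Illustrative values: surface upward flux and transmitted (window) flux.
s_u = 396.0   # W/m^2, assumed for illustration
s_t = 61.0    # W/m^2, assumed for illustration

print(round(tau_from_fluxes(s_u, s_t), 2))   # 1.87
```

With these assumed fluxes the implied whole-column tau lands at about 1.87; an infinitely thick slab would instead force S_T to zero.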

    2. “I don’t think its necessary to debate eq (7) or the Virial rule any further. They don’t really matter.”

      I agree. I think it is all about the effects of rising CO2 and nutrient pollution on biota, on humidity, on aerosols, on CCN, on clouds, on albedo, Bowen Ratio (B), on Evaporative Fraction (EF), on latent heat (LH), on latent heat escaping TOA (LH_U), and on sensible heat (SH) fluxes etc., etc.

      Have some fun:

      https://download.yousendit.com/U0d4K2VqMGN1YlBIRGc9PQ

  175. I think there is no discontinuity, but often a VERY steep gradient over a VERY thin layer. In humid warm areas (15–30 C), the air directly over the surface has an AVERAGE absolute humidity that is 70–80 percent of the saturation level. It seems to me that this could occur only if there were a layer of water at the surface that is very close in temperature to the air above. Maybe a very thin layer.

  177. Jan, this is tiresome. I've spelt it out many times. Your turn. What is the ODE with two BCs? What are they? What variable is unbounded? Give a proper argument instead of muddled allusions.

  182. “Jan, this is tiresome.”

    I agree Nick.

    Here is the first definition: http://www.answers.com/topic/semi-infinite
    semi-infinite: unbounded in one direction or dimension.

    Second one: http://en.wikipedia.org/wiki/Laplace_transform#…

    You will no doubt notice that little ‘8’ on its side: that is the symbol for infinity, which means that the integral is unbounded in one direction. In this case the variable is time, and that is unbounded, i.e. it goes on forever. You will no doubt notice that the bilateral Laplace transform is unbounded in both directions.

    In the radiation transport equations the variable is not time but tau, the optical depth. In the classical solution it is this variable that is unbounded, i.e. infinite. Now the two boundary conditions are the two values implied for the surface temperature for a finite tau where tau was assumed infinite: in equations 15 & 16 in Miskolczi's paper, or 1 & 2 in Lorenz and McKay, for differential equations solved for the semi-infinite case.

    This is pretty elementary stuff and should not be a problem at all for someone who has a PhD in control mathematics.

  185. Jan,

    Again, you're just not giving a connected argument. Yes, I know what semi-infinite means, and what a Laplace transform is. But where are they used in this theory?

    FM says he's using a “semi-infinite” solution in Eq 15. But he isn't. tau is there in the equation, and seems perfectly finite. He even, in the lead-up to 16, defines a value of tau, tau_C, at the surface.

    L&M say nothing about their model being semi-infinite. Again tau is there in the equation, and they define tau_0 as the value at the surface. And they include it in their equations in the normal way.

    And you still haven't addressed this nonsense about two boundary conditions.

  187. “Yes, I know what semi-infinite means, and what a Laplace transform is.”

    Which is what I would expect.

    “L&M say nothing about their model being semi-infinite.”

    I don't expect they should have to, for someone who knows “what semi-infinite means, and what a Laplace transform is”, to see that it is.

  189. “Again, these are useless answers. You've said L&M assume tau semi-infinite. Back it up!”

    I don't understand, Nick. I thought you had a PhD in control mathematics; I only did one semester of control mathematics, and I can see that the two L&M equations 1 & 2, as well as the two Goody and Yung ones Miskolczi quotes (16 & 17), are solutions of equation 12 assuming tau varies between some ε > 0 and infinity. How is it that you can't? At the very least, L&M eqns 1 & 2 and M's 16 & 17 should have an exp(-tau) term if finite tau was assumed.
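The exp(-tau) point can be made concrete by putting two OLR/S_U relations side by side: a standard semi-infinite Eddington-type relation (no exponential term) against the bounded-atmosphere form quoted earlier in this thread (which carries the exp(-tau) window term). A minimal illustrative sketch, not a derivation:

```python
import math

def f_semi_infinite(tau0):
    """OLR/S_U from the classic Eddington grey solution: no exp(-tau) term."""
    return 1.0 / (1.0 + 0.75 * tau0)

def f_bounded(tau0):
    """OLR/S_U as quoted in this thread: note the exp(-tau) window term."""
    return 2.0 / (1.0 + tau0 + math.exp(-tau0))

# Both agree at tau0 = 0 (transparent atmosphere: OLR = S_U)...
print(f_semi_infinite(0.0), f_bounded(0.0))   # 1.0 1.0

# ...but diverge for any finite optical depth.
print(f_semi_infinite(2.0), f_bounded(2.0))
```

Both reduce to OLR = S_U for a transparent atmosphere, but they disagree at any finite optical depth precisely because of the exponential term.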

  192. I can't see it (and don't believe it) and you're not helping. Where does this exp(-tau) come from? Both L&M and FM specify an explicit finite tau range – 0 to tau_0 for L&M and 0 to tau_C for FM.

  194. Tell me, Nick, did you notice what happened to the equation when Miskolczi specified a finite tau in Appendix B? Do you notice the difference between his transfer function and either of those in eqns 16 & 17 of the same paper? Anyone who knows “what semi-infinite means, and what a Laplace transform is” should be able to tell us straight away. Also, you can use your superior PhD-in-control-mathematics training to tell us what equations B4, B5, B6, B7, B8, B9, B10 and, last but not least, B11 would look like had he specified tau -> 0 to tau -> infinity.

  195. Again, in Eq 16 he has a finite value tau_A at ground. The difference in Appendix B is that he's applying his one BC at the ground rather than at TOA. There's no issue of finiteness there. Of course, his problem then is that you can't get it right in both places. Right at ground, wrong at TOA. Big problem.

  196. Jan,

    That is absurd.

    1.

    Once again:

    M07: “…About 80 years ago Milne stated: “Assumption of
    infinite thickness involves little or no loss of generality”, and later, in the same paper, he created the concept of a secondary (internal) boundary (Milne, 1922). He did not realize that the classic Eddington solution [Alex — a reference to Eddington 1916?] is not the general solution of the bounded atmosphere problem and he did not re-compute the appropriate integration constant. This is the reason why scientists have problems with a mysterious surface temperature discontinuity and unphysical solutions, as in Lorenz and McKay (2003)…”

    Okay, do we agree that these are M’s actual words? Do we agree that 1913 came before 1922? Can you accept that words written in 1922 cannot have influenced words written in 1913? And finally, do you admit that it is well documented (anyone can see this from the very Bateman link you just posted) that the “mysterious” temperature discontinuity originated in Emden 1913? So, therefore, Miskolczi was WRONG on this point. Will you please concede this so that there is some sanity in the conversation?

    2.

    Can you list all the Schwarzschild papers you have read and where I can find them?
    Can you tell me how you can know what is written in Emden since you can’t read German?

    1. Alex “? Do we agree that 1913 came before 1922? Can you accept that words written in 1922 can not have influenced words written in 1913?”

      Yes, I agree, but look at the equations. Yes, Emden did get a temperature discontinuity, but the equations derived by Emden are not the same as those derived by Milne. Milne was the one who was followed, so what Emden might have thought or done is quite irrelevant.

      I might even go so far as to say that even though Emden preceded Milne, and Milne did look at his paper, and Emden did find a temperature discontinuity, the influence of Emden on Milne was in fact minimal.

    2. Alex, you’ve got to stop this hero-worship of Emden.

      “Can you tell me how you can know what is written in Emden since you can’t read German?”

      I don’t read German well, but I get by. It’s not necessary anyway: unless Bateman miscopied the equations, what Emden did is of no interest to us. I have no trouble reading the equations; they are the same in all languages.

  197. Jan,That is absurd.1.Once again:M07: “…About 80 years ago Milne stated: “Assumption ofinfinite thickness involves little or no loss of generality”, and later, in the same paper, he created the concept of a secondary (internal) boundary (Milne, 1922). He did not realize that the classic Eddington solution [Alex — a reference to Eddington 1916?] is not the general solution of the bounded atmosphere problem and he did not re-compute the appropriate integration constant. This is the reason why scientists have problems with a mysterious surface temperature discontinuity and unphysical solutions, as in Lorenz and McKay (2003)…”Okay, do we agree that these are M's actual words? Do we agree that 1913 came before 1922? Can you accept that words written in 1922 can not have influenced words written in 1913? And finally, do you admit that it is well-documented (anyone can see this from the very Bateman link you just posted) that the “mysterious” temperature discontinuity originated in Emden 1913? So, therefore, Miskolczi was WRONG on this point. Will you please concede this so that there is some sanity in the conversation?2.Can you list all the Schwarzschild papers you have read and where I can find them?Can you tell me how you can know what is written in Emden since you can't read German?

  198. Moved up to avoid too much thinning”wrong at TOA. Big problem.”Fo mathematicians maybe. That the equations don't hold where there are no absorbers or air fro that matter are no problem at all just exclude to point at tau=0 like they do hereNow you have evaded the question: What will equations B4 B5 B6 B7 B8 B9 B10 and last but not least B11 would look like had he specified tau ->0 to tau -> infinity.i don't know about anyone else but I don't see any problem with a temperature discontinuity between the finite source and an infinite sink at the TOA.

  199. It's not a temperature equation, it's a flux equation. And getting the outgoing flux wrong is a problem.In App B he's applying conditions at ground. If tau is infinite, there is no ground. But you're dodging the questions – L&M and FM both specify tau at ground (and for FM, not just in App B). Where's the semi-infinite assumption?

  200. “And getting the outgoing flux wrong is a problem.”OLR = f * Sg i.e. OLR = 2Sg/(1+tau + exp(-tau)) = 2/Sg*1/2 when tau = 0 so when tau is 0 OLR = Su what a surprise.Now again: What will equations B4 B5 B6 B7 B8 B9 B10 and last but not least B11 would look like had he specified tau ->0 to tau -> infinity?

  201. By the bye Nick have you by any chance noticed that when tau = 0 the TOA IS the ground and roughly the same conditions apply as on the moon?

  202. Alex “? Do we agree that 1913 came before 1922? Can you accept that words written in 1922 can not have influenced words written in 1913?”Yes I agree but look at the equations. Yes Emden did get a temperature discontinuity but the equations derived by Emden are not the same as those derived by Milne, Milne was the one follows so what Emden might have thought or done is quite irrelevent. I might even go so far as to say that even though Emden preceded Milne and Milne did look at his paper that the influence of Emden on Milne was minimal.

  203. Nonsense. Tau is just an altitude measure, like pressure. It starts at tau = 0 at TOA in any atmosphere.

    And (prev comment) at OLR, no: the equation you’ve written involves tau_A, the tau at ground. You can’t set that to 0. What you have to do is put tau = 0 in eq 21.

    And again, the B eqs would be nonsense if he set tau to infinity. He didn’t, and for this purpose (BC at ground) he can’t.

  204. Nick,

    “early models like M&W did make assumptions of that kind. But from the mid 70’s, models used a world grid and solved the transport equations directly.”

    I believe the truth of the matter is that in the mid 1970s they ‘sorta’ started doing what you’re saying, but still haven’t really got it right. You might like to find that Tiedtke 1989 paper and read it, because it’s actually quite influential, inasmuch as this convective scheme seems to be in use even today.

    Quoting Ozawa & Ohmura 1997:

    “…Thermal convection has long been examined since an early investigation of the atmosphere by Hadley (1735), early laboratory experiments by Bénard (1901), and a numerical solution of nonlinear equations by Lorenz (1963). Yet there is no solid physical theory that is capable of expressing the complete process of thermal convection. Understanding of the convection process may be of urgent necessity since all living creatures, including human beings, are distributed at the earth’s surface where convective transport of sensible heat and latent heat is largest. If convection were more active (inactive), surface temperature would decrease (increase). Yet we have no solid understanding how convection would change with a future increase of, for instance, carbon dioxide.

    One may expect general circulation models to represent the convective process. However, GCMs contain an artificial device for convection. The convective adjustment was first introduced by Manabe and Strickler (1964) in order to adjust the vertical temperature profile to observations. The adjustment was necessary for GCMs since it was not possible to treat vertical instability of the atmosphere by a grid-scale dynamic motion; thus the calculation diverged during time integration (Manabe et al. 1965). Even current versions of GCMs contain a sort of convective adjustment whose parameters are tuned to reproduce observations (e.g., Tiedtke 1988).”

    1. Alex,
        Again this has nothing to do with assuming relative humidity constant. But yes, GCMs do have trouble with thermal convection. The reason is that much occurs during tropical storms etc., on a scale too small for the grid. It’s a familiar situation: in CFD, eddies (turbulence) invariably get down to a scale below what you can resolve, and the subgrid scale has to be modelled. With turbulence it is fairly random, but thermal convection has structure, which adds to the problem. Fortunately, there is a lot of observational data that can be used.

      1. Nick,

        Whether constant relative humidity is assumed or output is beside the point. The modellers certainly believe that relative humidity should remain constant, and we’ve also seen that they regard their data as hopelessly uncertain whenever it doesn’t agree with what they already believe (witness, Santer+16 on the tropospheric temperature trends, or everyone on the NOAA radiosonde data that shows atmospheric humidity declining). We know for a fact that thermal convection affects the relative humidity, as do the circulation patterns of the atmosphere. We also know that the models can’t actually get either of these right. Thus, the models are tuned to all this data you mention, and it seems just too hard to believe that they haven’t been tuned in such a way as to give the result the modellers expected.

        The bottom line is, how can you model something when you know you don’t know the underlying physical theory?

        Incidentally, have a look at this Jeffrey Kiehl (2007) paper:

        https://www.atmos.washington.edu/twiki/pub/Main/ClimateModelingClass/kiehl_2007GL031383.pdf

        It’s only short, and I’ve read it four times now. It’s an incredibly eye-opening piece coming from an IPCC author. Correct me if I’m wrong, but Kiehl is as good as admitting here that the model hindcasts of the simulated 20th century temperature records must have been fudges. He doesn’t say so explicitly, but it is implied. Read his concluding paragraph. He finally just shrugs it off and says, “well, it may be fudging to say we simulated the 20th century record, but who cares? Aerosols won’t matter in the future, and this fudging of the 20th century is kinda like tuning a NWP model which is known to improve its accuracy anyway.”

      2. Alex I don’t agree. But I’ll have to leave it there, cos I’m away for a couple of weeks. Hope you’ve sorted it out when I return.

    2. ps I did find the Tiedtke paper and yes, it is a subgrid scale model like those used in turbulence, but incorporating knowledge of cumulus cloud behaviour. His conclusion summarises the strengths and weaknesses well.

    3. The refrain from Gavin S and others is that specific humidity and hence relative humidity are model output from the calculations – definitely not an input.

      But the calculation seems to be thus:
      Specific humidity, q=e.eps/p, where p is the atmospheric pressure output from the model, eps is the constant 0.622, e comes from e=RH*es, where RH is relative humidity and es comes from the Clausius-Clapeyron relation, which depends on the temperature output. This leaves you to either use a measured, calculated or assumed value for RH. The number of calculated outputs (or degrees of freedom) from a model is very limited and increasing these outputs comes from coupling combined with gross assumptions. Pressure & temperature ok but I don’t immediately see how relative humidity can be another separate output from the numerical analysis of the grid. It surely must be either constrained or the result of a constraint.
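      James’s chain of formulas can be written out directly. A minimal sketch; the Magnus approximation used below for the saturation vapour pressure is my stand-in for the Clausius-Clapeyron relation, not something specified in the comment:

      ```python
      import math

      EPS = 0.622  # ratio of molar masses of water vapour and dry air

      def es_hPa(T_c):
          # Saturation vapour pressure over water (hPa); Magnus approximation
          # standing in for the Clausius-Clapeyron relation.
          return 6.112 * math.exp(17.62 * T_c / (243.12 + T_c))

      def specific_humidity(T_c, RH, p_hPa):
          # The chain given above: e = RH * es, then q = eps * e / p.
          e = RH * es_hPa(T_c)
          return EPS * e / p_hPa

      # 15 C, 70% relative humidity, sea-level pressure: a few g/kg.
      print(round(specific_humidity(15.0, 0.70, 1013.25) * 1000, 1), "g/kg")
      ```

      Note the direction matters to the dispute: this sketch takes RH as an input and produces q, whereas Nick’s claim below is that GCMs advect q directly and derive RH from it by inverting the same relations.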

      By comparison, the same “it’s not an assumption, it’s an output” argument has been used for the CO2 sensitivity, yet as Steve’s link above showed, researchers can vary the sensitivity to produce different scenarios. Regardless of which parameter is used to actually nudge the sensitivity in the desired direction, it is not a true output. It leads me to believe that modelers are just being tricky with their definitions. Of course documentation on this is somewhat sparse – both implementation and validations.

      Part of the trouble of course is that all models are not the same. While some are clearly rubbish, others – the coupled models – are not too bad. Someone though can say that the models do this or that when they really mean that just one particular model does it. And then there is a blanket assumption to lump together all models, good or bad, and blithely consider without a shred of physicality that the ensemble has some merit. But then surely only the more simplistic models can be used to produce these projections 100 years into the future.

      1. James,
        Where do you get this “calculation” from? I believe it is the other way around. Specific humidity is computed, and your formula or similar used to derive RH.
        Alex linked above this good documentation for the GCM Echam5. Eq 2.4 is the equation for water advection. It is a conventional flux-form mass-conserving advection equation for water content. That could only yield SH directly.

      2. “Climatology” by Rohli and Vega, chapter 5 “Energy matter and momentum exchanges near the surface.”

        Thanks though, I’ll check that ref out. I’ve been fobbed off in the past by people claiming it all came out of the C-C equation which of course was nonsense. Nice to see where it really does come from. I’m a bit suspicious of assumption led conclusions I admit since I’ve seen so many of them. However I don’t even think it’s an odd assumption that water vapour would increase with temperature, since surely that is what is needed to cause the Amazon to green up and the Sahara to shrink.

  206. “tau is just an altitude measure, like pressure.”

    You can take it that way if you want, but you will be wrong. Tau is optical depth, which does vary with height, but tilde-tau is the average whole-atmosphere tau: tilde-tau_A is the optical depth of the entire column of the atmosphere, not just a layer of it. Regardless of what you think the mathematical meaning of the integrals is, I have just given you the physical meaning. You did notice that he had a different dummy variable in the integral, I hope. Equation 20 and figure 3 are an equation and a graph of the effect on OLR and Bg of the average tau = tilde-tau_A. I had thought you had realised this by now.

  207. That is how it has seemed to me too. BTW, for the statement “At the very least L&M eqn 1 & 2 and M 16 & 17 should have an exp(-tau) term if finite tau was assumed,” I can’t see that bunging in exp(-tau) has any math logic to it. What am I missing? Please explain. Just the math.

  209. Quite wrong. Look at what FM says after eq 15: “where tilde-tau is the flux optical depth”; “At the upper boundary tilde-tau = 0”. Nothing whole-atmosphere about that. He subscripts A to show ground values, which then means whole atmosphere.

  211. “And again, the B eqs would be nonsense if he set tau to infinity.”

    It didn’t stop Milne. Page 897 of his 1922 paper:

    “Assumption of infinite thickness involves little or no loss of generality; we could if we liked, consider a mass of finite thickness with an inner boundary consisting of a black radiating surface, but since our results will only involve the optical thickness, we need only suppose the absorption coefficient or the density to become very suddenly large at the assigned depth in order to deduce the case of an inner boundary from the solution for an infinitely thick slab of material”

  212. There's no indication this quote is relevant. I don't think I have Milne's paper, and Zagoni isn't pushing it any more, but as I recall, he was treating radiation incident on a planet, not coming from the surface. You'll have to do better than that.

  214. “I don’t think I have Milne’s paper,”

    Then get it; Google is your friend.

    “You’ll have to do better than that.”

    No, Nick, you first read the paper and then answer what you have been evading: what would equations B4, B5, B6, B7, B8, B9, B10 and, last but not least, B11 look like had he specified tau -> 0 to tau -> infinity?

    Answer that and the problem with Milne will become clear to you. It’s really immaterial whether we are talking about inbound or outbound; the atmosphere of the earth is nowhere infinitely thick and cannot be sensibly modelled as an “infinitely thick slab of material”.

  217. Alex, you’ve got to stop this hero worship of Emden.

    “Can you tell me how you can know what is written in Emden since you can’t read German?”

    I don’t read German well, but get by. It’s not necessary: unless Bateman miscopied the equations, what Emden did is of no interest to us. I have no trouble reading the equations; they are the same in all languages.

  218. If I could move the earth, I would!!! HAHAHA.

    Unfortunately for your model, the specific humidity is rising in the strat. Again, doesn’t match the earth.

    How about some arm waving for the reduction of water vapor by an increase in CO2?? I ran across this guy looking for other things:

    http://www.geocities.com/profadrian/ScienceOfGl…

    Scan down to the Forcing Concept section. Sounds almost too simple to be real!! The way I understand what he is saying is that an increase in ANY gas in the atmosphere would tend to reduce water vapor!!

    Cheers!!!

  219. Jan,

    “Yes I agree but look at the equations. Yes Emden did get a temperature discontinuity but the equations derived by Emden are not the same as those derived by Milne. Milne was the one followed, so what Emden might have thought or done is quite irrelevant. I might even go so far as to say that even though Emden preceded Milne and Milne did look at his paper, the influence of Emden on Milne was minimal.”

    Okay, so you admit that Emden’s temperature discontinuity has nothing to do with any “semi-infinite” assumption that Milne made in 1922.

    Does that mean you agree that Emden’s temperature discontinuity is real even if Milne’s (and I assume for the moment that Milne actually had one since I can’t actually see one in the 1922 paper) isn’t?

    Finally, you say that Milne is the guy that everyone follows. How do you explain then that none of Manabe, Strickler, Moller, Wetherald, King, Ostriker or Lindzen actually cite Milne at all yet they all cite Emden?

    1. “Does that mean you agree that Emden’s temperature discontinuity is real”

      No, and if Bateman’s paraphrase is true to the source, neither does Emden.

      “Finally, you say that Milne is the guy that everyone follows. How do you explain then that none of Manabe, Strickler, Moller, Wetherald, King, Ostriker or Lindzen actually cite Milne at all yet they all cite Emden?”

      I don’t think “Wow he found one (TD) too” means they use his methods.

      Look at the equations. You obviously haven’t done that.

      So how do we know by looking at the equations that Emden didn’t use Milne’s approximation (time is not a factor) and for example Manabe and Wetherald did despite their brief mention of Emden?

      1. Jan,

        Okay can you please cite page, line & equation number in Bateman’s (10 page) summary of Emden’s 150 page monograph as evidence that “Emden didn’t believe in his own temperature discontinuity.”

        Here is the paper: http://docs.lib.noaa.gov/rescue/mwr/044/mwr-044-08-0450.pdf

        Next you say “how do we know by looking at the equations that Emden didn’t use Milne’s approximation (time is not a factor)”

        *) I thought you just said you had looked at the equations yourself???
        *) “time is not a factor” — what, you mean because of time machines???

      2. Try not to twist things, Alex.

        I said “No and if Bateman’s paraphrase is true to the source neither does Emden.”

        I made no claim about whether Bateman is true to his source or adding his own opinion; it’s in the last line before the summary, in full.

        No no time machines I thought you were a little more logical than that. You are disappointing me.

        After your song and dance about Milne coming after Emden I thought you might be worrying about the possibility that I thought Milne had a time machine.

        Now what is different about the equations that make it rather obvious that Emden did not use the semi-infinite approximation and others do.

        Thanks, I don’t need the paper; I seem to have several copies.

      3. Jan,

        I realise on second reading, I misread you, that you’re saying, since Bateman hasn’t reproduced Emden’s derivations… how do we know Emden didn’t use Milne’s approximation… in other words you’re saying, how do we know they haven’t made the same mistake independently. And of course this whole discussion began when you denied having ever said that they must have made the same mistake independently. Okay, so you have contradicted yourself already.

        You can’t win this, you have to choose between two possibilities: either their discovery was independent, in which case we have an absurd story of many great astrophysicists (Emden, Eddington, Milne) all independently making a very obvious, silly error, OR their discovery was dependent, in which case Emden must have travelled into the future and stolen Milne’s 1922 result.

        The other problem is that Eddington’s 1916 approximation didn’t exist for Emden either… and they were in separate countries (England vs Germany)… and they were known to be working independently.

        Can you just admit this has gone way past the point of silliness?

      4. “in other words you’re saying, how do we know they haven’t made the same mistake independently.”

        You say it here, Alex: they did not make the *same* mistake independently.

        they made different ones.

      5. Okay, so now you’re saying there’s no dependency on Milne (you have agreed that’s absurd as it would involve time travel), they haven’t all made the same mistake independently, so to get out of saying they made the same mistake independently, you’re now saying they have made different mistakes independently, thus we have TWO temperature discontinuities, both of them wrong, for two different reasons. Is that it?

      6. “thus we have TWO temperature discontinuities, both of them wrong, for two different reasons. Is that it?”

        Yep and the one Emden made is of no interest to us. It’s trivially true if you only take into account radiation balance, and forget about conduction, convection and latent heat, that you will get a temperature discontinuity. So in a sense Emden’s error may have been no error at all but simply an artefact of what he was attempting to do. In any case it’s of no interest to us because apart from the odd mention he does not appear to have been a major influence in the intervening years.

        Alex, it wasn’t I who was saying every astrophysicist etc. since Schwarzschild made the same mistake; this you will have to own up to yourself, I’m afraid.

        You have wanted to bring the others prior to Milne into the narrative, not I. While it might be interesting from a historical perspective, it does not advance the debate one iota.

      7. Jan,

        So when Manabe and Möller write in 1961: ‘…In the computation by the matrix method, we allowed the possibility of a temperature discontinuity at the earth’s surface as was first obtained by Emden 1913, whereas in the present computation we did not. The magnitude of temperature discontinuity was very small, about 0.06 C. This is much smaller than the discontinuity of about 20 C which Emden obtained in his computation of radiative equilibrium based upon the assumption of gray radiation…’

        are they talking about Emden’s temperature discontinuity, or are they talking about Milne’s temperature discontinuity…?

      8. “are they talking about Emden’s temperature discontinuity, or are they talking about Milne’s temperature discontinuity…?”

        They are talking about Emden’s temperature discontinuity. This does not mean they used his method. Look at their equations: the all-important indicator of using finite tau (which Emden did in his method) is missing. Since that is missing, and they are gloating that they did better than Emden at reducing the discontinuity, you can’t really pin their error on Emden.

      9. Jan,

        We can’t look at the equations in this particular instance because it’s another paper that’s never been translated from the German.

        Okay, so they think they’re talking about Emden’s temperature discontinuity but actually they’re talking about Milne’s. This is a strange story, but possible I suppose.

        Very well, but I suppose we can agree that their decision to “stuff the theory” and arbitrarily set the temperature discontinuity to 0 can not be blamed on either Emden or Milne, but that is clearly their own decision. Right?

        So after Milne made his mistake, where did it next appear in the literature? Has it ever appeared anywhere other than in Weaver & Ramanathan / Lorenz & McKay? GCM modelling began with Moller, Manabe et al. If they didn’t actually use Milne, how else can it have ever affected GCM models?

      10. “Okay, so they think they’re talking about Emden’s temperature discontinuity but actually they’re talking about Milne’s.”

        No they are talking about Emden’s discontinuity.

        “So after Milne made his mistake,”

        I know we have been talking about “Milne’s mistake”, for my part mainly for language economy, but is it really an error? To use an approximation for a windowless atmosphere when the window has yet to be discovered?

        Alex has it ever occurred to you that Energy Balance Models and General Circulation Models are different sorts of models?

        “Has it ever appeared anywhere other than in Weaver & Ramanathan?”

        I remembered another: here, found by David, are the lecture notes of Irina Sokolic of Georgia Tech. So it’s in the textbook of Milne, written in 1930, in Goody and Yung 1989, and in current lecture notes. It is what has been taught to students since 1930 at the very least, and it is still being taught.

        Is it any surprise then that so many who are working in the field get the same results?

  223. “I can’t see that bunging in exp(-tau) has any math logic to it. What am I missing?
    Please explain. Just the math.”

    The classical radiative transport equations are semi-infinite Laplace transforms using the standard stellar atmosphere approximations.

    Evaluating them in the standard way eliminates the exp(-tau) terms; evaluating them for finite tau at the surface retains them.

    So the classical solution, e.g. equations 16 and 17 in M2007 and 1 & 2 in L&M2003, having been evaluated with the semi-infinite approximation, has no exp(-tau) term, but B11 does.
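    Jan’s point can be illustrated numerically. A sketch under the assumption that the bounded-atmosphere transfer function has the 2/(1 + tau + exp(-tau)) form quoted earlier in the thread, while the semi-infinite evaluation is the same expression with the exp(-tau) term dropped:

    ```python
    import math

    def f_bounded(tau_A):
        # Finite-atmosphere form: the exp(-tau_A) (transmitted) term is retained.
        return 2.0 / (1.0 + tau_A + math.exp(-tau_A))

    def f_semi_infinite(tau_A):
        # Semi-infinite evaluation: the exp(-tau_A) term has been eliminated.
        return 2.0 / (1.0 + tau_A)

    for tau in (0.1, 1.0, 1.87, 5.0, 20.0):
        print(f"tau={tau:5.2f}  bounded={f_bounded(tau):.4f}  "
              f"semi-inf={f_semi_infinite(tau):.4f}")
    ```

    The two forms agree for an optically thick atmosphere (exp(-tau) is negligible by tau ~ 20) but diverge badly as tau -> 0, which is exactly where the surface boundary-condition argument above is being fought.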

    1. Thanks Jan.

      I am going to re-visit this. Doesn’t this effectively also shift TOA (where by definition tau = zero) upwards relative to a semi-infinite solution TOA (although not by much it seems)?

      BTW, on a completely other matter have those here noticed that the current crop of GCMs have considerable difficulty predicting the temporal behaviour of evaporation in real time?

      http://www.knmi.nl/samenw/eldas/GLASS_GABLS_presentations/Dirmeyer.ppt

      This is not a trivial subject, as the validation of ET and EF on regional scales using remotely sensed data, which is required to validate this component of GCMs, is tricky due to its low data frequency and heavy reliance on application of Bouchet’s or Granger and Gray’s complementary relationships to Priestley-Taylor and Penman-Monteith.

      This is all about measuring the size of the (cough) steep slope between the near surface air temperature and the true surface temperature as the near surface water vapor pressure lags behind the degree of saturation of the surface. Temperatures have been used as surrogates for vapor pressures in many studies (Monteith and Unsworth 1990, Nishida et al. 2003). Although the relationship between vapor pressure and temperatures is not a linear one, it is commonly linearized for small temperature differences. The unknown surface temperature, Tw cannot be measured in the field, due to the process complexity and the intricate soil-vegetation-atmosphere feedback, but it can be estimated from the slope of the exponential surface vapor pressure curve.

      1. Steve “Doesn’t this effectively also shift TOA (where by definition tau = zero) upwards relative to a semi-infinite solution TOA (although not by much it seems)?”

        FM actually excludes tilde-tau_A = 0 in Appendix B; the relevant line:

        “From Eq. (B10), assuming tilde-tau_A > 0”

        Like Nick says, it shifts the singularity from the surface to the TOA if treated as an altitude measure (which it isn’t in this case), or to the no-absorber/no-air case if taken as an average tau for the entire atmosphere. Personally I think it’s more reasonable to have a singularity where there is no air, no emissions, no absorption and so on.

      2. “Personally I think it’s more reasonable to have a singularity where there is no air, no emissions, no absorption and so on.”

        At first thought one would be intuitively inclined to go with such a simple proposition simply because it takes the math singularity out in an almost utterly matterless milieu.

        But math is only a means of describing the fabric of reality more and more accurately purely in terms of testing it (yet again and again).

        I have to say it is my personal ‘visceral’ experience (from hang gliding, groundwater pumping and so on as explained) and also the physical fact of the non-radiative issues in, and at the surface of the ground with respect to soil water content, vegetation, wind and so on, that an actual real world singularity in favor of BOA is much more plausible physically.

        I’m not trying to pick a fight here, just simply saying it how I see it. I am not convinced that Miskolczi has stumbled on any Holy Grail with this particular aspect of his theory.

        There is correspondingly no real proof of a near constant true LW IR tau (the supposed outcome of ‘eliminating’ any mathematical singularity within the ‘LW IR column’). Indeed, one might ask why should there really be any need to do so, if, as you say, most LW IR is absorbed in the lower, denser more H2O vapor-rich part of the atmosphere?

        I’m frustrated that Miskolczi seems to have only made a skimming pass past the underlying cause of (dare I say it) a global climate homeostasis (conditional or absolute). As you can see from my crude little spreadsheet model, I think the answers (in respect of Fo_f, S_U and OLR) actually lie within the complex web of (Gyr-timescale evolved) inter-relationships between biota, ET, biogenic aerosols, CCN, clouds, LH fluxes and albedo.

        I’m sorry but, in the grand scheme of things, what is the big deal about this new tau formalism? It’s a bit like concentrating on trying to have the most perfect potato on a plate with lots of other really good food on it.

      3. Steve I am quite happy not to pick a fight either.

        Just briefly: the absence of the exp(-tau_A) term in the transfer function effectively closes the atmospheric IR window, and there is no room for S_T in it at all, whether 40 or 90 W/m^2; any value for S_T, and its effect on S_G or surface temperature, cannot be arrived at without doing some fudge or other.

        As for the other, I will, if I can find time before the ACAT comes after me, work on empirical testing of the TD. I don’t trust visceral feelings, my own or anyone else’s for that matter.
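Jan’s window-closing point can be illustrated with a toy calculation. This is a hypothetical sketch using my own numbers and the simple gray form S_T = S_U·exp(-tau_A); it is not FM’s HARTCODE, and S_U = 396 W/m^2 is just a TF&K08-style global mean:

```python
import math

# Toy gray-absorber model of the transmitted ("window") surface flux.
# Assumption (mine, for illustration): S_T = S_U * exp(-tau_A).
S_U = 396.0  # W/m^2, upward surface LW flux (TF&K08-style global mean)

def transmitted(tau_A):
    """Transmitted surface flux S_T (W/m^2) for total LW optical depth tau_A."""
    return S_U * math.exp(-tau_A)

print(round(transmitted(1.87), 1))      # ~61 W/m^2 at the Miskolczi-like tau
print(round(transmitted(0.0), 1))       # 396.0: no absorber, everything is window
print(round(math.log(S_U / 40.0), 2))   # ~2.29: tau implied by the consensus S_T ~ 40
```

In the semi-infinite limit (tau_A → ∞) the exponential factor vanishes and S_T = 0 identically, which is the sense in which the window is "closed".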

  224. “I can't see that bunging in exp(-tau) has any math logic to it. What am I missing? Please explain. Just the math.”

    The classical radiative transport equations are semi-infinite Laplace transforms using the standard stellar-atmosphere approximations. Evaluating them in the standard way eliminates the exp(-tau) terms; evaluating them for finite tau at the surface retains them. So the classical solution, e.g. equations 16 and 17 in M2007 and 1 & 2 in L&M2003, having been evaluated with the semi-infinite approximation, has no exp(-tau) term, but B11 does.
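Schematically, in flux form for a gray absorber (my notation, diffusivity factors suppressed; t is optical depth measured down from TOA and tilde-tau_A the total depth), the solution comment 224 describes is:

```latex
\mathrm{OLR}
  = \underbrace{\sigma T_s^{4}\, e^{-\tilde\tau_A}}_{\text{boundary (surface) term}}
  + \int_{0}^{\tilde\tau_A} \sigma T^{4}(t)\, e^{-t}\, \mathrm{d}t .
```

Taking the semi-infinite limit (tilde-tau_A → ∞) kills the boundary term, so no exp(-tau) survives in the classical solutions; a finite-tau evaluation such as B11 retains it.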

  225. Try not to twist things, Alex. I said “No, and if Bateman's paraphrase is true to the source, neither does Emden.” I made no claim about whether Bateman is true to his source or adding his own opinion; it's in the last line before the summary, in full.

    No, no time machines; I thought you were a little more logical than that. You are disappointing me. After your song and dance about Milne coming after Emden, I thought you might be worrying about the possibility that I thought Milne had a time machine.

    Now, what is different about the equations that makes it rather obvious that Emden did not use the semi-infinite approximation and others do? Thanks, I don't need the paper; I seem to have several copies.

  226. Jan,

    I realise on second reading I misread you: you're saying, since Bateman hasn't reproduced Emden's derivations, how do we know Emden didn't use Milne's approximation; in other words, how do we know they haven't made the same mistake independently. And of course this whole discussion began when you denied having ever said that they must have made the same mistake independently. Okay, so you have contradicted yourself already.

    You can't win this. You have to choose between two possibilities: either their discovery was independent, in which case we have an absurd story of many great astrophysicists (Emden, Eddington, Milne) all independently making a very obvious, silly error, OR their discovery was dependent, in which case Emden must have come into the future and stolen Milne's 1922 result.

    The other problem is that Eddington's 1916 approximation didn't exist for Emden either, and they were in separate countries (England vs Germany), and they were known to be working independently.

    Can you just admit this has gone way past the point of silliness?

  227. “in other words you're saying, how do we know they haven't made the same mistake independently.”

    You say it here, Alex: they did not make the *same* mistake independently; they made different ones.

  228. Okay, so now you're saying there's no dependency on Milne (you have agreed that's absurd, as it would involve time travel); they haven't all made the same mistake independently; and to get out of saying they made the same mistake independently, you're now saying they have made different mistakes independently. Thus we have TWO temperature discontinuities, both of them wrong, for two different reasons. Is that it?

  229. Thanks Jan. I am going to re-visit this. Doesn't this effectively also shift TOA (where by definition tau = zero) upwards relative to a semi-infinite solution TOA (although not by much, it seems)?

    BTW, on a completely other matter, have those here noticed that the current crop of GCMs have considerable difficulty predicting the temporal behaviour of evaporation in real time?

    http://www.knmi.nl/samenw/eldas/GLASS_GABLS_presentati...

    This is not a trivial subject, as the validation of ET and EF on regional scales using remotely sensed data, which is required to validate this component of GCMs, is tricky due to its low data frequency and heavy reliance on application of Bouchet's or Granger and Gray's complementary relationships to Priestley-Taylor and Penman-Monteith.

    This is all about measuring the size of the (cough) steep slope between the near-surface air temperature and the true surface temperature as the near-surface water vapor pressure lags behind the degree of saturation of the surface. Temperatures have been used as surrogates for vapor pressures in many studies (Monteith and Unsworth 1990, Nishida et al. 2003). Although the relationship between vapor pressure and temperature is not a linear one, it is commonly linearized for small temperature differences. The unknown surface temperature, T_w, cannot be measured in the field, due to the process complexity and the intricate soil-vegetation-atmosphere feedback, but it can be estimated from the slope of the exponential surface vapor pressure curve.
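The linearization Steve describes in 229 can be sketched in a few lines, using the standard Magnus/FAO-56 saturation vapor pressure form (my choice of formula, purely illustrative):

```python
import math

# Saturation vapor pressure e_s(T) is exponential in temperature; ET schemes
# (Priestley-Taylor, Penman-Monteith) work with its local slope, Delta.
def e_sat(T_c):
    """Saturation vapor pressure (kPa) at temperature T_c in deg C (FAO-56 Magnus form)."""
    return 0.6108 * math.exp(17.27 * T_c / (T_c + 237.3))

def slope(T_c):
    """Delta = de_s/dT (kPa/K): the linearized temperature-for-vapor-pressure surrogate."""
    return 4098.0 * e_sat(T_c) / (T_c + 237.3) ** 2

# For a small temperature difference the linearization is close to exact:
T = 20.0
exact_step = e_sat(T + 1.0) - e_sat(T)   # ~0.149 kPa over a 1 K step
linear_step = slope(T) * 1.0             # ~0.145 kPa from the local slope
print(exact_step, linear_step)
```

This is the sense in which temperatures serve as surrogates for vapor pressures over small differences, and why the surrogate degrades as the difference grows.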

  230. Nick,

    Whether constant relative humidity is assumed or output is beside the point. The modellers certainly believe that relative humidity should remain constant, and we've also seen that they regard their data as hopelessly uncertain whenever it doesn't agree with what they already believe (witness Santer+16 on the tropospheric temperature trends, or everyone on the NOAA radiosonde data that shows atmospheric humidity declining). We know for a fact that thermal convection affects the relative humidity, as do the circulation patterns of the atmosphere. We also know that the models can't actually get either of these right. Thus, the models are tuned to all this data you mention, and it seems just too hard to believe that they haven't been tuned in such a way as to give the result the modellers expected.

    The bottom line is, how can you model something when you know you don't know the underlying physical theory?

    Incidentally, have a look at this Jeffrey Kiehl (2007) paper:

    https://www.atmos.washington.edu/twiki/pub/Main…

    It's only short, and I've read it four times now. It's an incredibly eye-opening piece coming from an IPCC author. Correct me if I'm wrong, but Kiehl is as good as admitting here that the model hindcasts of the simulated 20th-century temperature records must have been fudges. He doesn't say so explicitly, but it is implied. Read his concluding paragraph. He finally just shrugs it off and says, “well, it may be fudging to say we simulated the 20th-century record, but who cares? Aerosols won't matter in the future, and this fudging of the 20th century is kinda like tuning an NWP model, which is known to improve its accuracy anyway.”

  231. Alex, I don't agree. But I'll have to leave it there, 'cos I'm away for a couple of weeks. Hope you've sorted it out when I return.

  232. “thus we have TWO temperature discontinuities, both of them wrong, for two different reasons. Is that it?”

    Yep, and the one Emden made is of no interest to us. It's trivially true that if you only take into account radiation balance, and forget about conduction, convection and latent heat, you will get a temperature discontinuity. So in a sense Emden's error may have been no error at all but simply an artefact of what he was attempting to do. In any case it's of no interest to us because, apart from the odd mention, he does not appear to have been a major influence in the intervening years.

    Alex, it wasn't I who was saying every astrophysicist etc. since Schwarzschild; this you will have to own up to yourself, I'm afraid. You have wanted to bring the others prior to Milne into the narrative, not I. While it might be interesting from a historical perspective, it does not advance the debate one iota.

  233. Steve “Doesn't this effectively also shift TOA (where by definition tau = zero) upwards relative to a semi-infinite solution TOA (although not by much it seems)?”FM actually excludes tilde-tau_A = 0 in appendix B the relevant line”From Eq. (B10), assuming tilde-tau_A > 0″ Like Nick says it shifts the singularity from the surface to the TOA if treated as and altitude measure which it isn't in this case, or to the no absorber/air case if taken as an average tau for the entire atmosphere. Personally I think it's more reasonable to have a singularity where there is no air, no emissions, no absorption and so on.

  234. Jan,

    So when Manabe and Möller write in 1961: '…In the computation by the matrix method, we allowed the possibility of a temperature discontinuity at the earth's surface as was first obtained by Emden 1913, whereas in the present computation we did not. The magnitude of temperature discontinuity was very small, about 0.06 C. This is much smaller than the discontinuity of about 20 C which Emden obtained in his computation of radiative equilibrium based upon the assumption of gray radiation…'

    are they talking about Emden's temperature discontinuity, or are they talking about Milne's temperature discontinuity…?

  236. “are they talking about Emden's temperature discontinuity, or are they talking about Milne's temperature discontinuity…?”

    They are talking about Emden's temperature discontinuity. This does not mean they used his method. Look at their equations: the all-important indicator of using finite tau (which Emden did in his method) is missing. Since that is missing, and they are gloating that they did better than Emden at reducing the discontinuity, you can't really pin their error on Emden.

  238. Fascinating conversation; if Steve and/or Jan reach some final conclusion about M let the rest of us know.

    I guess the interest in a simple self-equilibrising Tau is just that: because it is simple [sic]. If equilibrium is reached without a constant Tau, as Steve’s spreadsheet shows, then that runs counter to the attractiveness of nice simple packaging, which is what most people want; this is why AGW has such traction; it has nice simple packages to present to the public and the media.

    One of the messier aspects of AGW is its need for an ECS, an Equilibrium Climate Sensitivity; this invention provides considerable ducking room for AGW advocates when, as always, there is a dearth of evidence in the form of Transient Climate Responses; the ECS, as far as I can gather, can be some 8 years after the forcing ceases, as in Schwartz’s paper, or centuries if one accepts the ocean pipeline effect in all its glory. M doesn’t seem to look at the idea of an ECS/delay/lag factor; the response to GHG variation seems to be instantaneous, so that climate changes and adjustments are pretty much straight away. This being the case, the question is: is there any ACO2 effect at all, or are the slight perturbations we have seen over the last century entirely caused by water in the form of PDO variation and its equivalents?
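The ECS-versus-lag point can be made concrete with the simplest possible energy balance model; all numbers below are illustrative assumptions (the canonical 3.7 W/m^2 doubling forcing and an assumed feedback parameter), not anyone's published fit:

```python
import math

# One-box model: C dT/dt = F - lam*T, with response time tau_r = C/lam.
# A Schwartz-style tau_r is ~8 yr; an ocean-pipeline view makes it centuries.
F = 3.7     # W m^-2, canonical forcing for doubled CO2
lam = 1.25  # W m^-2 K^-1, assumed feedback parameter
ECS = F / lam  # equilibrium warming for this lam: ~3 K

def warming(t_years, tau_r):
    """Warming (K) t_years after a step forcing, given response time tau_r (yr)."""
    return ECS * (1.0 - math.exp(-t_years / tau_r))

print(round(ECS, 2))                  # 2.96 K at equilibrium
print(round(warming(20, 8.0), 2))     # ~2.72 K: fast system, nearly equilibrated
print(round(warming(20, 200.0), 2))   # ~0.28 K: slow system, mostly still "in the pipeline"
```

An instantaneous response, as read here into M, corresponds to tau_r → 0, in which case transient and equilibrium sensitivity coincide.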

  240. Jan,

    We can't look at the equations in this particular instance because it's another paper that's never been translated from the German.

    Okay, so they think they're talking about Emden's temperature discontinuity, but actually they're talking about Milne's. This is a strange story, but possible I suppose.

    Very well, but I suppose we can agree that their decision to “stuff the theory” and arbitrarily set the temperature discontinuity to 0 cannot be blamed on either Emden or Milne; that is clearly their own decision. Right?

    So after Milne made his mistake, where did it next appear in the literature? Has it ever appeared anywhere other than in Weaver & Ramanathan / Lorenz & McKay? GCM modelling began with Möller, Manabe et al. If they didn't actually use Milne, how else can it have ever affected GCM models?

  241. Nick Stokes,

    my first reply about GISS Model E humidity was based on Relative Humidity. You objected. I looked at Specific Humidity and found they show increasing specific humidity in the stratosphere, which is WRONG again.

    These errors are based on their assumptions of what the atmosphere SHOULD be doing and isn’t!!

    I ran across an interesting explanation (not proof) of why increasing CO2, or other gasses, would cause a decrease in Humidity:

    http://www.geocities.com/profadrian/ScienceOfGlobalWarming.html
    (scan down to the Forcing Concept paragraph to save time.)

    Sounds straightforward enough to be real.

    Cheers!!

  243. “Okay, so they think they're talking about Emden's temperature discontinuity but actually they're talking about Milne's.”

    No, they are talking about Emden's discontinuity.

    “So after Milne made his mistake,”

    I know we have been talking about “Milne's mistake”, for my part mainly for language economy, but is it really an error? To use an approximation for a windowless atmosphere when the window had yet to be discovered?

    Alex, has it ever occurred to you that Energy Balance Models and General Circulation Models are different sorts of models?

  244. Jan,

    You’ve dodged the question: where did Milne’s method, erroneous or otherwise, enter the historical literature of climate modelling and lead to false predictions (if you want to include energy balance models, that’s fine with me)? Or are you saying that it entered in the early energy balance models of Budyko & Sellers? Is ECHAM5 affected, and if so, why? Or is the answer in fact that it hasn’t actually affected the history of climate modelling at all, since none of the climate models actually employ the faulty theory?

  246. Jan,

    If that was untwisting something I’d hate to see you tie something up in a knot! :)

    So are you going to admit that you don’t know the answer to my question?

    1. “So are you going to admit that you don’t know the answer to my question?”

      I’ll admit your question makes no sense. FM’s model is an Energy Balance Model and you are asking about General Circulation Models.

      Stuffing up EBMs started with this:

      “The assumption of infinite thickness involves little or no loss of generality;”

      but you know that already.

      1. Oh c’mon Jan… the question makes no sense… who are you trying to kid? You are proclaiming to know that Milne’s ‘assumption of infinite thickness’ led to a ‘temperature discontinuity’. I am asking you, where? Where is what I suppose we should call ‘Milne’s temperature discontinuity’ to distinguish it from Emden’s? You won’t give a straight answer. Now, you say it’s in the energy balance models. Great, I understand that one of the earliest of these was Budyko 1969 ( http://www.math.umn.edu/~mcgehee/Seminars/ClimateChange/references/Budyko1969Tellus21p611-Albedo.pdf ). So how did Milne’s ‘assumption of infinite thickness’ affect Budyko’s result?

      2. “Oh c’mon Jan… the question makes no sense… who are you trying to kid?”

        I am no politician and you are not Tony Jones. How many times do I have to repeat myself before it finally sinks in that you have been answered many times? I have just explained it again to Steve, and Nick’s response was: “And again, the B eqs would be nonsense if he set tau to infinity.”

        Why on earth are you looking for the effect of infinite thickness in Budyko’s? Just looking at the heading is enough to tell you not to even bother. Budyko is looking at insolation variation, FM at the effect of tilde_tau_A, or if you like, absorber concentration. So how do you think your question makes sense here?

      1. Note:

        1. In columns O and U, OLR can be calculated in two conceptually different ways.

        2. The average partitioning of upwelling (to TOA) and downwelling (to BOA) heat in the atmosphere is clearly about 0.375 : 0.625 i.e. 1 : 1.667 (3 : 5).

        3. With respect to S_U/(Miskolczi E_U) = 2 (Virial Rule), FM was apparently more-or-less correct. Kirchhoff is a dud though.

        4. FM must have mistaken S_T for the sum of S_T + LH_U where LH_U = upwelling latent heat from tops of (icing/precipitating) clouds exiting TOA.

        5. The source of conditional global homeostasis is the near-constancy of the sum of S_T + LH_U. This derives from the way the Evaporative Fraction (EF) scales with Latent Heat flux.

        6. The Miskolczi so-called ‘tau’ (= -ln((S_T+LH_U)/S_U)) is nearly constant for all-sky conditions (1.818±0.078) but passes through a minimum near 60% cloud cover, i.e. current conditions.

        7. S_U/OLR is not close to 3/2 but seems to vary tightly about 5/3 (1.643±0.058 i.e. ~1.667). It is inferred there must be something slightly wrong with FM’s transfer function.

        Regards
        Steve

        PS: It’s not perfect but hey ya gotta have a go.
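Items 6 and 7 of Steve's note can be checked against round numbers. The S_U, OLR and S_T values below are TF&K08-style global means; the LH_U split is my own assumed figure for illustration, not Steve's spreadsheet value:

```python
import math

S_U  = 396.0   # W/m^2, upward surface LW (TF&K08-style)
OLR  = 239.0   # W/m^2, outgoing LW at TOA
S_T  = 40.0    # W/m^2, window (transmitted) flux
LH_U = 24.0    # W/m^2, assumed upwelling latent heat exiting near cloud tops

tau_prime = -math.log((S_T + LH_U) / S_U)   # Steve's 'tau' from item 6
print(round(tau_prime, 2))   # ~1.82, inside the 1.818 +/- 0.078 band
print(round(S_U / OLR, 2))   # ~1.66, near 5/3 (1.667) rather than 3/2
```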

    1. “Okay, I give up”

      Don’t do that; your pursuit of the history is an interesting one, but try to keep the apples, oranges and pears in their own baskets.

      You know the tell-tale of the non-semi-infinite approximation, exp(-tau*), in the equations: work with that.

      Emden’s discontinuity is just the consequence of the geometry and reasonable if only radiation is taken into account.

      Milne’s approximation is one that is not reasonable for a thin atmosphere and leads to a singularity at the surface (e.g. division by zero or log(0) ).
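For reference, the textbook gray two-stream radiative-equilibrium solution (Goody & Yung-style; F the net LW flux, tau* the total optical depth) exhibits exactly the Emden-type jump:

```latex
\sigma T^{4}(\tau) = \frac{F}{2}\left(1 + \tfrac{3}{2}\,\tau\right),
\qquad
\sigma T_{g}^{4} = \sigma T^{4}(\tau^{*}) + \frac{F}{2},
```

so radiation-only balance leaves a discontinuity of F/2 between the lowest air and the ground for any tau*, thin atmosphere or not; it is conduction, convection and latent heat that erase it.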

      The climate system is a complex system and there are various studies that look at different aspects of it; they don’t even all overlap.

      If I can help I will. I can’t help, though, if you want to know why an apple doesn’t taste like a grape even though they are both fruit. That’s outside my field of expertise, and so are GCMs; they’re more like Nick’s babies.

      1. Milne’s approximation is one that is not reasonable for a thin atmosphere…

        Do you mean Milne’s approximation, or do you mean Eddington’s approximation?

      2. “Do you mean Milne’s approximation, or do you mean Eddington’s approximation?”

        Before I answer can you list the approximations used?

      3. This seems to be a diversion, but, okay, I understand that the Eddington approximation is a special case of the two-stream approximation which is a special case of the plane-parallel approximation.

        Here is indeed a paper on lots and lots of methods for two-stream approximation, including the Eddington approximation, quadrature approximations, hemispheric-constant methods, and you-name-it approximations, even discussing our very interesting case of thin atmospheres.

        http://ams.allenpress.com/archive/1520-0469/37/3/pdf/i1520-0469-37-3-630.pdf

        So it remains with you still to provide even one paper showing that a surface temperature discontinuity has arisen in the literature that wasn’t discovered by Emden.

        Is there one in this paper?

      4. “So it remains with you still to provide even one paper that shows that a surface temperature discontinuity has arisen that in the literature that wasn’t discovered by Emden.”

        L&M2003, equations 1 & 2. Not only that, you’ll also find it in Goody and Yung.

        The Eddington approximation is not the semi-infinite approximation; it is the two-stream. Milne only derives a single stream, with the semi-infinite. L&M’s two equations are the two-stream, i.e. the two equations (Eddington), and semi-infinite, i.e. a lack of exponential terms (Milne); as FM puts it, Eddington-Milne. Neither Goody and Yung nor L&M2003 has anything to do with Emden’s temperature discontinuity, which might be a species of two-stream and prior to Eddington, but that we don’t know; Emden, in any case, does not use the semi-infinite approximation.

        I just wanted to be sure you were clear on what Eddington’s approximation was, and it is clear to me that you were; thus you were on a fishing expedition, which I find offensive.

      5. I was “thus”(?) on a “fishing expedition”? I can’t understand a word of this. This is clearly hopeless; beyond scant references cited in M’s paper itself, you are aware of any single reference to the problem you are allegedly solving in the entire history of meteorology, end of story.

        BPL said the problem M is solving was firstly invented by M; you are failing hopelessly to show otherwise.

      6. “you are aware of any single reference to the problem you are allegedly solving in the entire history of meteorology, end of story.”

        Your slip here says it all

        You have been shown the problem quite clearly; you have asked questions to which you knew the answer. You have been shown two references where the problem has occurred, yesterday and more prior to that, and you can’t even say properly that you still think that there aren’t any.

        BPL had it right when he left. He said he was drowning: people drown because they can’t swim and are out of their depth.

      7. Jan

        Could you please provide us with a mathematical proof of this statement of yours (i.e. post it somewhere for easy download):

        “the absence of the exp(-tau_A) in the transfer function effectively closes the atmospheric IR window and there is no room for St in it at all whether 40 or 90 watt/m^2 and any value for St and it’s effect on Sg or surface temperature that cannot be arrived at without doing some or other fudge.”

        Thanks.

        Regards
        Steve

        PS: BTW, I do note that there is no reliable evidence within the last decade of mainstream literature for a global all-sky LW IR S_T remotely near 90 W/m^2.
