Two more theories relevant to the Unified Theory of Climate by Nikolov and Zeller

Posted: January 9, 2012 by Rog Tallbloke in climate, Energy, Ocean dynamics, weather

While we await the reply to comments on the Unified Theory of Climate (not long now!), I’d like to draw attention to more complementary work done independently by William Gilbert and Dean Brooks. In no particular order these are:

Bill Gilbert’s paper (E&E 2010)

available here

and Dean Brooks

available on his website here

My reasons for taking these two together in one thread are pragmatic. Firstly, these two guys seem to get along fine. Secondly, their work is complementary, and so it is better to have the discussion in one place rather than to-ing and fro-ing between separate threads. Thirdly, I don’t want the Loschmidt thread to drop off the bottom of the blog home page yet. :)

Valuable insights were gained on that thread, and in it Dean says this of the value of taking his and Bill’s (and Miskolczi’s) work together in considering the complex atmospheric conundrum we are attempting to untangle.

I think William’s argument, and mine, and Miskolczi’s, may turn out to be like the three legs of a stool. One deals with radiation balances; one deals with the daily dynamics of lapse rate change; one deals with the long-term density state function. They form a self-consistent picture that explains the failure to observe a “hot spot,” the decade-long pause in warming, the low RH values in the upper troposphere . . .

Bill Gilbert concurs, saying:

I agree that the work of Miskolczi is very relevant. I have had discussions with him on a few occasions. We both agreed (I think) that understanding the non-radiative processes, his variable “K”, is key to making his empirical model come together.

I wonder if we can tempt Ferenc to join us in this discussion, where I hope Bill and Dean can find some time to answer questions from those interested enough to actually DOWNLOAD AND READ the papers.

  1. Brian H says:

    Great minds thinking enough alike to build a whole greater than the sum of parts? Sounds delightful. And important. Carry on ASAP!

  2. mkelly says:

Tallbloke, since you want comments about the Joel issue to have some relevance to a thread, I chose this one as you mention N&Z in the title.

    I also asked Joel for an explanation of his claim that N&Z violate the first law. He did respond (which you can find using mkelly).
I used a simple first law, Q = ΔU + W. His response, from memory, was that: 1. no work is done on or to the atmosphere by anything; 2. only U changes, with a +dU, which according to him did not exist.

I responded that I believe he was incorrect, and that in fact work was done on and by the atmosphere.

    I am not a fan of Joel’s but to be fair he did answer my post.
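The first-law bookkeeping in this exchange can be sketched in a few lines: a parcel of air warming at constant pressure expands and does work W = P·ΔV on its surroundings, so the W term is not zero (a toy calculation with illustrative numbers, not a reconstruction of Joel’s argument):

```python
# First-law sketch: Q = dU + W for a warming air parcel at constant pressure.
# Toy numbers for 1 mol of dry air treated as an ideal diatomic gas.
R = 8.314      # J/(mol K), gas constant
Cv = 2.5 * R   # molar heat capacity at constant volume (diatomic approx.)

P = 101325.0            # Pa, sea-level pressure (held constant)
T1, T2 = 288.0, 289.0   # K, warm the parcel by 1 K

V1 = R * T1 / P         # ideal gas law: V = nRT/P with n = 1
V2 = R * T2 / P
W = P * (V2 - V1)       # work done BY the parcel as it expands: nonzero
dU = Cv * (T2 - T1)     # internal energy change
Q = dU + W              # heat that had to be supplied (equals Cp*dT)

print(f"W  = {W:.2f} J")
print(f"dU = {dU:.2f} J")
print(f"Q  = {Q:.2f} J")
```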

  3. Thanks Tallbloke.
    Dean Brooks’ atmospheric adjustment model can be further quantified by applying the rigorous thermodynamic models of the lapse rate with absorptive/radiative properties of greenhouse gases. See:
    Robert H. Essenhigh, Energy & Fuels 2006, 20, 1057-1067 “Prediction of the Standard Atmosphere Profiles of Temperature, Pressure, and Density with Height for the Lower Atmosphere by Solution of the (S-S) Integral Equations of Transfer and Evaluation of the Potential for Profile Perturbation by Combustion Emissions”

    . . . 1D formulation of the (1905−1906) integral (S−S) Equations of Transfer, governing radiation through the atmosphere, is developed for future evaluation of the potential impact of combustion emissions . . .
    steady state, one dimensionality, constant flux directional parameter (μ), and a gray-body equivalent average for the effective radiation absorption coefficient, k, for the mixed thermal radiation-active gases at an effective (joint-mixture) concentration, p. . . .
The solution predicts, . . . a linear decline of the fourth power of the temperature, T4, with pressure, P, and, at a first approximation, a linear decline of T with altitude, h, up to the tropopause at about 10 km (the lower atmosphere). . . . the variations of pressure, P, and density, ρ, with altitude, h, are also then obtained, with the predictions . . . up to 30 km altitude (1% density). . . .
    the value of the group-pair (kp)o representing the ground-level value of (kp), the product of the effective absorption coefficient and concentration of the mixed gases, written as a single parameter but decomposable into constituent gases and/or gas bands. . . .
    provide a platform for future numerical determination of the influence on the T, P, and ρ profiles of perturbations in the gas concentrations of the two primary species, carbon dioxide and water

    See Sreekanth Kolan’s followup:
    Study of energy balance between lower and upper atmosphere

    Ferenc Miskolczi’s Line By Line (LBL) HARTCODE would provide the quantitative radiative model with elevation.
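For a feel of the profile shape Essenhigh derives (T^4 falling roughly linearly with pressure, giving a nearly linear T(h) in the lower atmosphere), here is a toy Python check. The boundary values are illustrative ISA numbers, not his fitted (kp)o parameter; the two profiles agree to within a few kelvin in the lower troposphere:

```python
# Toy check of the profile shape in Essenhigh's abstract: T^4 declining
# linearly with pressure P gives a nearly linear T(h) in the troposphere.
# Boundary values are illustrative ISA numbers, not his fitted (kp)o.
T_sfc, P_sfc = 288.15, 101325.0    # ISA sea level (K, Pa)
T_trop, P_trop = 216.65, 22632.0   # ISA tropopause, ~11 km

def T_radiative(P):
    """T with T^4 interpolated linearly in P between the two boundaries."""
    frac = (P - P_trop) / (P_sfc - P_trop)
    return (T_trop**4 + frac * (T_sfc**4 - T_trop**4)) ** 0.25

def T_isa(P):
    """ISA linear-lapse temperature at pressure P (L = 6.5 K/km)."""
    g, R, L = 9.80665, 287.05, 0.0065
    return T_sfc * (P / P_sfc) ** (L * R / g)

for P in (90000.0, 70000.0, 50000.0, 30000.0):
    print(f"P = {P/100:6.0f} hPa   T_rad = {T_radiative(P):6.1f} K   "
          f"T_ISA = {T_isa(P):6.1f} K")
```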

  4. tallbloke says:

    Thanks David, gathering the relevant materials for discussion is a good first step. Reading and understanding them may take a little longer. :)

  5. adolfogiurfa says:

    No need to search anywhere else but on the menu above: Aerology

    [Reply] This is one of the big pieces in the jigsaw, but not the only one.

  6. captdallas says:

Comment on Bill Gilbert’s paper. Determining how much work versus entropy is performed to maintain the lapse rate is my pet peeve, which you are addressing. Conduction and convection are joined at the hip at the surface–atmosphere boundary layer. That is a conduction-driven process, i.e. there is little convection if there is no conduction to transfer the energy. While the thermal coefficient of conductivity is small for a mixed gas at standard temperature and pressure, it is not negligible. CO2 increases thermal conductivity as it increases in concentration.

CO2 also has a nonlinear impact on conductivity, peaking at about -20°C. Changes in relative humidity have much less impact on the thermal conductivity of a mixed gas.

Increased conductivity decreases entropy or heat loss, which is a negative feedback to the radiant impact of CO2. If, just for the sake of argument, we assume that the change in conductivity is significant with respect to the change in radiant forcing in the lower troposphere, what would that imply? What would that imply in the Antarctic, where the conductive impact increases while the radiant impact decreases per the S-B fourth-power relationship of radiant flux to the temperatures of the source and sink?

While radiant forcing fluctuates wildly with various changes, the thermal sinks, the tropopause and the Antarctic, are remarkably stable in the southern half of the globe. I would recommend adding a little blurb on the conductive impact of CO2, since it is a very long-term feedback with a small but not insignificant role in climate.


  7. Michele says:

    OT weather

Stratwarming: stratosphere in disorder.
Splitting of the polar vortex.
Dual action: Atlantic and Aleutian anticyclones.

  8. Joe Born says:

    It’s hard to contrast the situations depicted in Brooks’s p. 8 and p. 15 graphs, because the x axis is altitude on p. 8 and pressure on p. 15.

The point of the p. 15 graph seems to be what the final sentence in page 16’s first full paragraph says: “The ‘after’ density curve never crosses the ‘before’ density curve, as it did in my earlier analysis.” But in the GCMs the layers expand in response to increased temperature, right? If so, the mapping between the two graphs’ x axes changes. It’s therefore hard to tell whether the p. 15 graph really is inconsistent with the p. 8 graph.

  9. DirkH says:

    O/T (maybe): Lubos Motl claims somebody fixed Milankovic’s theory by adding a derivative term. Tallbloke, I think you might be pleased by the resulting correlation with historical temperature reconstructions.

  10. captdallas says:

    Dr. Curry has a very relevant post on non-equilibrium thermodynamics that should be considered.

  11. adolfogiurfa says:

@Michele: Not surprising if you follow M. Vukcevic’s research:

  12. Dean Brooks says:

    Hello Joe Born,

    You were concerned about the vertical coordinates in my graphs not being the same (one is height, one is pressure). I agree that this could be addressed in more detail, just to dot every i and cross every t. I will add a footnote on this question and publish a new version, for the benefit of the next wave of readers. But it doesn’t provide much of an “out” for the models. They are wrong regardless of which coordinates the results are presented in.

    Keep in mind that the models remain in pressure coordinates throughout their run. They do not translate back to height while the computation is being done. So for example the layers just below the tropopause will show from start to finish (in pressure coordinates) a rise in temperature and a fall in density, where in reality as we approach the tropopause there is a fall in temperature and a rise in density.

    Your concern amounts to wondering if the new temperature and density values, when reconverted to height coordinates, might show the opposite and so better accord with observations.

    Now first of all, the models just don’t accord with observations, and everyone agrees on that point. There’s no “hot spot” despite there being one in the models. The defense offered by the modelers is that the data are wrong. But set that aside.

    The most that one could hope for is that maybe, at the end of a simulated century of climate change, we might convert back into height coordinates, and get a different temp and density trend. Correcting for height changes means either drawing our final density and temp values from a different computational layer, or blending values from two adjacent layers. But no layer on either side of our original satisfies the requirement. The trend in the density-temp ratio simply isn’t correct in any of them.

    The algebra just won’t work. I can’t take two negative numbers and interpolate them to get a positive trend. I can’t get a more physically plausible value by blending or borrowing data from adjacent layers when the trend is equally wrong in all of them.

  13. Joe Born says:

    Dean Brooks:

    Thank you for your response.

    While I take time to discern in it the meaning that no doubt leapt instantly to other readers’ minds, I’ll re-state my concern against the possibility that latent ambiguities in my entry above are impeding communication.

Pressure P1 prevails in the bottom computational layer, which initially extends in height from 0 to h1. Pressure P2 = 0.5 P1 prevails in the next computational layer, which initially extends in height from h1 to h2, and its density is half the bottom layer’s. Temperature then rises 25% so that both layers’ densities fall 20%, the top of the bottom layer rises to 1.25 h1, and the top of the next layer rises to 1.25 h2.

    If I understood your paper correctly, the GCMs do recognize that the altitude change would occur. That being the case, they implicitly find a density increase in the height range from h1 to 1.25 h1; if bottom- and next-layer densities respectively begin at r1 and 0.5 r1, the GCMs implicitly find a density increase from 0.5 r1 to 0.8 r1 at heights between h1 and 1.25 h1 (if h2 > 1.25 h1).

    So, if the GCM’s do recognize that the layer altitudes change–and I may have misunderstood you on this point–it does not necessarily follow that the GCMs permit density to fall everywhere and that they therefore fail to conserve mass.
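The arithmetic in this two-layer example can be checked directly (a toy calculation with illustrative numbers, not a GCM):

```python
# Two-layer toy from the example above: uniform 25% warming at constant
# pressure makes every layer's density fall 20% and its thickness grow 25%.
h1, h2 = 1000.0, 2500.0   # illustrative layer tops (m); any h2 > 1.25*h1 works
r1 = 1.2                  # illustrative bottom-layer density (kg/m^3)

# Before: layer 1 spans 0..h1 at density r1; layer 2 spans h1..h2 at 0.5*r1
mass_before = r1 * h1 + 0.5 * r1 * (h2 - h1)

# After 25% warming: densities fall by factor 0.8, thicknesses grow by 1.25
top1 = 1.25 * h1
top2 = 1.25 * h2          # = 1.25*h1 + 1.25*(h2 - h1)
mass_after = 0.8 * r1 * top1 + 0.8 * 0.5 * r1 * (top2 - top1)

assert abs(mass_before - mass_after) < 1e-9   # total column mass is conserved

# But in the fixed height band h1..1.25*h1, density RISES from 0.5*r1
# (old layer 2) to 0.8*r1 (expanded layer 1): the implicit overlap zone.
print(0.5 * r1, "->", 0.8 * r1)
```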

  14. Tenuc says:

    Quote from Dean Brooks – THE POT LID HYPOTHESIS – Conclusion

    “…Again, I want to stipulate that we have observed some real warming in recent decades and
    that it is cause for concern, especially in the Arctic where the negative water vapor feedback
    has a much smaller effect. If we keep adding CO2 to the atmosphere on an exponentially
    increasing scale, then at some point a disaster will happen. It is a great relief to think that it
    will not occur in the next few decades, but if the trend continues, then in another century or
    two I think it eventually must…”

    Overall I totally agree with the arguments which you put forward in your paper. However, I don’t think the above point regarding extra CO2 causing a future problem is correct. Why? you may ask…

Firstly, the bulk of the atmospheric ‘green house’ effect is caused by the bulk density of the atmosphere, with most energy being transferred to the system by direct collisions between photons and air molecules – CO2 has far fewer collisions due to it being a trace gas, and not all of these transfer enough extra energy to stimulate a photon emission. This is why there is a disconnect in the link between temperature and CO2, with observation clearly showing that temperature change dictates atmospheric CO2 levels. Graph showing recent Temp vs CO2 here…

    Secondly there are two papers which have similar themes, in that they both show how the physics behind CAGW has been misapplied. Papers here – well worth a read…

    The new climate theory of Dr. Ferenc Miskolczi…
    (Interestingly, Ferenc, also uses the Virial theorem!)

    Gerhard Gerlich & Ralf D. Tscheuschner – Falsification Of The Atmospheric CO2 Greenhouse Effects Within The Frame Of Physics…

    My final thought is that 75% of the globe is covered with water and as IR can only heat the surface film, any extra CO2 IR emissions would only increase the amount of water vapour which would rise and quickly radiate the extra energy back into space.

    Negative feed-back seems to be the predominant factor in our dynamic climate’s processes, not positive as the IPCC cabal of cargo cult climate scientists would have us believe.

  15. Dean Brooks says:

    Hello again Joe Born,

    Yes and no. I see where you’re going but it doesn’t solve the problem.

GCM’s maintain a crude kind of mass conservation by having each layer expand separately, much as you describe. So layer 1 is 25 percent thicker, and has density values 20 percent lower, and thus because 0.8 * 1.25 = 1.00, it has the same mass present before and after. If there were no long-term warming trend, then some layers would expand, and others would contract, and this would be sufficiently accurate mass conservation. The trouble arises because ALL the layers expand in the model, none contract, and the expansion yields a physically impossible outcome. Technically, though, even in a model where all layers expand, there is a nominal sort of “mass conservation”. The total quantity of mass in the model remains constant; it’s just the density values that are screwed up and physically implausible.

    You are also technically right that there is an implicit zone of overlap between the two layers in your example, where if we computed the new density for a specific height above ground, it would be higher than the old density. However, two points. First, GCM’s don’t actually do this. If they are set up in pressure coordinates, they stay in pressure coordinates throughout the run.

    Second, these implicit values would be very hard to make practical use of even if the modelers wanted to. For example, suppose we wanted to compute a new temperature for the overlap zone between layers 1 and 2. Now in effect, we’d be dividing up layer 1 into two sublayers, 1a and 1b. In sublayer 1a we’d have a temperature rise and density falling. In sublayer 1b, we’d have density rising and temperature falling. Then the same would hold for 2a versus 2b.

    So we’d have a “striped” temperature pattern: rising in the non-overlap portion of layer 1, falling in the overlap, rising in the non-overlap portion of layer 2, and so on. In your example, with temperatures rising 25 percent (!), the stripes would be fairly large. But in practical global warming scenarios, temperatures would rise 0.1 percent per decade, so the overlapping stripes would be infinitesimally small. You would observe temps rising and density falling in 99.9 percent of the air column, punctuated by little stripes of cooling that did nothing to stop the overall lapse rate from falling.

    There’s no physical process going on that resembles this. It’s just an artifact of the way they do the math. The proper way to adjust the density in each layer is to transfer mass from one layer to another, from 1 to 2 for example. It doesn’t help to create a whole new set of overlap layers.

    I should add that so far you’re the first person to actually work through the math and raise this particular argument. I hope you can see that it doesn’t help the IPCC avoid getting the wrong answer, but I do appreciate your efforts very much.

  16. Joe Born says:

    Dean Brooks:

    Thank you for your response, which unfortunately leaves me as mystified as ever.

    You may be confirming what I thought, which is that expanding all layers in height does not violate conservation of mass, although it’s hard to tell what you mean by “there is a nominal sort of ‘mass conservation’.”

If mass is conserved, then failure to conserve mass, which is what I had thought your point was in comparing the graphs of pp. 8 and 15, is not the real problem with GCMs. So what is the problem?

    “[I]t’s just the density values that are screwed up and physically implausible.” This could be (1) a mere conclusory statement you expect the reader to accept without proof, (2) a conclusion at which he is being invited to arrive on the basis of facts previously set forth but not referred to here, or (3) a conclusion that you are about to prove. For obvious reasons, the first two possibilities don’t work for me.

    I’m afraid the third doesn’t either. Your “screwed up” comment is followed by three points, namely, (1) that the GCMs “don’t work that way,” i.e., don’t convert from pressure to altitude, (2) that striping results as the pressure layers expand, and, maybe, (3) that lapse rate ends up falling inappropriately in the GCMs.

    I don’t see why performing the calculations in the pressure domain would necessarily produce wrong answers. Problems are solved all the time by first translating things into a different domain; differential equations from the time domain are solved just fine, thank you, in the Laplace complex-frequency domain; it’s just that after all the dust settles the result is transformed back into the time domain.

    Perhaps to demonstrate the problem you bring up the striping issue. I had been aware of a striping issue, although I had seen it in density change rather than in temperature. But I don’t see how that is anything more than simply one of the quantization effects that attend most numerical solutions; when you solve a problem numerically rather than analytically, you almost always have some level of quantization error. If you did the modeling by using altitude layers rather than pressure layers, for example, errors still would almost inevitably arise from assuming intra-layer uniformity of some quantities that you know in real life vary with intra-layer position.

    Finally, you may be saying that the technique used by the GCMs systematically reduces lapse rate from what it should be when temperatures rise, but I can’t see the train of logic by which you establish that.

    Please understand that I’m not trying to debate here. You’ve gone to a lot of trouble to understand the issue, and I appreciate your attempting to share your findings with others. I’m just attempting to follow you and, perhaps, help you make your findings more understandable to laymen.

  17. Dean Brooks says:

    I’m going to attempt to post a graphic here. Forgive me if this doesn’t work, it’s my first try.

    [Reply] Dean, just post the url, and Tim or I will enable it as an image by editing the post. Only admins can do this. Cheers. Rog

Hmm, doesn’t seem to work with imageshack, some sort of no-robots policy I think. I’ll upload it to wordpress and link it from there.

    [ strangeness indeed, hacked up a copy, probably a bit poor --Tim]

  18. Dean Brooks says:

    Looks good to me, many thanks Tim and Rog.

    Joe, I think the greatest thing about putting out a paper into the blogosphere is that it produces just this kind of dialog. I think of this as being the simplest part of my argument, but that doesn’t mean it actually is. So let’s look at the graphic and see what we can clarify about it.

    The “typical” pre-warming curve was computed using the International Standard Atmosphere as a guide. Interestingly, even the International Standard Atmosphere is defined in terms of pressure and temperature, not density. But you can derive a density curve from it. So while the precise values will vary from place to place, it has the correct, agreed shape. Every airline pilot and meteorologist relies on the ISA in some fashion. It is in very broad use. And we know from looking at radiosonde data that the long-term trend of density with height really is just this smooth and consistent.

    Now, if there is global CO2 warming, the one point everyone agrees on is that there will be lower density at sea level, pretty much everywhere from the equator to the poles. So at any given point, the local ISA profile will have to be modified. There are three constraints:

    1) The total area under the curve will remain the same (conservation of mass).
    2) The basic shape of the curve will remain the same (still the same ISA model).
    3) The sea level intercept will be a lower value (ideal gas law).

    Given these three constraints, there is no way to draw the curve without having the red post-warming line cross the black pre-warming line. It is just basic calculus. The missing mass near sea level gets pushed up. It has to go somewhere, after all. Everywhere above that crossover, the air is going to be denser.

    Now here’s the first puzzle, and it has nothing to do with GCM’s. If I am limited to using the ISA model, then I cannot lower the lapse rate AND lower the value of the sea level intercept without causing the area under the curve to be lower overall. Do you see what I’m getting at? The lapse rate determines the slope of the density curve. If lapse rate falls, the density curve drops more steeply. It can’t start from a lower point, drop more steeply, and still sweep out the same area.

    What has to happen is what is pictured: The red line starts from a lower point, but drops more slowly. Air is pushed up so that the density values decline more slowly with height. Temperatures then decline more quickly with height (higher lapse rate).

    Therefore, it is impossible for the “hot spot” to develop as described by the IPCC models, without contradicting the far more widely used ISA model of the air column. The “hot spot” is defined as temperatures rising at sea level, and rising faster (a falling lapse rate) in the upper troposphere. But it is impossible to draw the graph that way.

    So then the puzzle becomes, why don’t the people running GCM’s know this? It’s a huge hole in their argument, and it is really basic stuff. What has gone wrong?

    Most of my 46-page “Pot Lid” paper is devoted to figuring out how people could run GCM’s for 50 years and not notice that they lead to paradoxes. That is a vexing and complex argument, as we have seen. But there is no need to have that argument until this graph makes sense to you.

    So what are your thoughts? I can post the equation I used, put up my spreadsheet on my blog . . . just let me know which part of this isn’t making sense.
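As a stand-in for the spreadsheet, here is a minimal numerical version of the constraint argument. It uses an exponential density profile as a crude substitute for the ISA density curve, with an assumed scale height; the point is only that, with column mass fixed and the sea-level intercept lowered, the before and after curves must cross:

```python
import math

# Crossover sketch: an exponential density profile as a crude stand-in for
# the ISA density curve (assumed scale height; illustrative numbers only).
rho0, H = 1.225, 8500.0          # sea-level density (kg/m^3), scale height (m)

# Warming lowers the sea-level intercept by 1% (ideal gas law at fixed
# sea-level pressure). Conserving column mass rho0*H then forces H to grow.
rho0_new = 0.99 * rho0
H_new = (rho0 * H) / rho0_new    # so rho0_new * H_new == rho0 * H

def rho_before(h): return rho0 * math.exp(-h / H)
def rho_after(h):  return rho0_new * math.exp(-h / H_new)

# The two curves must cross: solve rho_before(h) == rho_after(h)
h_cross = math.log(rho0 / rho0_new) / (1.0 / H - 1.0 / H_new)
print(f"crossover at ~{h_cross/1000:.1f} km")

# Below the crossover the 'after' air is thinner; above it, denser.
assert rho_after(h_cross / 2) < rho_before(h_cross / 2)
assert rho_after(2 * h_cross) > rho_before(2 * h_cross)
```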


  19. Joe Born says:

To do your response justice, I’ll have to go back and re-read the first part of your paper (and maybe read forward a little from where I got stuck), and other matters will prevent me from doing that for a day or so. In the interim, I’ll make two comments.

    First, providing equations usually helps dispel latent ambiguities. Spreadsheets can, too.

    Second, my initial reaction to the crux of your last response, namely “If I am limited to using the ISA model, then I cannot lower the lapse rate AND lower the value of the sea level intercept without causing the area under the curve to be lower overall,” is that it makes sense only if by “lower the lapse rate” you mean lower it everywhere rather than only in the upper troposphere. But I haven’t thought that through completely–and I don’t know which specific constraint of the ISA model enters into your conclusion.

  20. Joe Born says:

    Dean Brooks:

    From p. 6 of your paper:

“Each computational layer in a model is bounded by constant-pressure surfaces, and so can expand individually when heated. The layers not only expand individually, but are considered to be lifted by all the layers that expand below them. The model thus provides for what seems like a convincing kind of atmospheric expansion. However, layers cannot easily give up air mass to their neighbors, or receive any, so while there is expansion, there is little or no vertical displacement of air mass.”

    To me this contains two logically inconsistent statements. If the “layers are . . . lifted,” then there must indeed be “displacement of air mass.” The layers consist of air mass. If they are lifted, then air mass is displaced.

    Of course, displacement of layers is not the same as layers’ “giv[ing] up air to their neighbors,” but why should they? It seems to me that the pressure of a “constant-pressure surface” tells you what the weight–and therefore mass–of the air above it is per unit area, independently of what the temperature or lapse rate is. So the mass of air above a given constant-pressure surface shouldn’t change. If the GCMs’ pressure layers–defined, as you say, by constant-pressure surfaces–increase and decrease with temperature, why should air mass ever have to cross those constant-layer boundaries?

  21. Dean Brooks says:

    This is the critical question: Why should air mass have to cross those boundaries?

    Because doing the computation without allowing air mass to cross boundaries limits how the variables . . . er . . . vary. As a general rule, with the ideal gas law there are three variables: temperature, pressure, and density. If we input a change in just one of them, there is no unique solution for the other two. We must specify two of the three to get a single solution.

    Sea level constitutes an exception, because the total pressure at sea level is more or less fixed. That leaves temperature and density to vary inversely in relation to one another.

    Away from sea level, many combinations are possible. It is possible for the air at 10,000 meters to be both hotter and denser after some perturbing change at sea level (and this is what we find when doing the computation using the ISA model as shown earlier).

    This kind of response is ruled out in a GCM. It just cannot ever happen that the air gets hotter and denser in the same layer at the same time. Not without transfer of mass from another layer.
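The degrees-of-freedom point is easy to make concrete with the ideal gas law, rho = P/(R·T): fix one of the three variables and the other two are underdetermined; fix two and the third is pinned down (a plain sketch with illustrative numbers):

```python
# Ideal gas law sketch: rho = P / (R*T). Specifying one variable leaves
# the other two underdetermined; specifying two pins down the third.
R = 287.05   # J/(kg K), specific gas constant for dry air

# At sea level, pressure is (nearly) fixed by the weight of the column,
# so warming forces density down -- a unique answer:
P_sl = 101325.0
rho_cool = P_sl / (R * 288.0)
rho_warm = P_sl / (R * 291.0)
assert rho_warm < rho_cool

# Aloft, P is free to change too. The SAME kind of warming is consistent
# with the air ending up hotter AND denser, if pressure there rises enough:
T1, P1 = 223.0, 26500.0          # illustrative values near 10,000 m
rho1 = P1 / (R * T1)
T2, P2 = 224.0, 27000.0          # hotter, with higher pressure aloft
rho2 = P2 / (R * T2)
assert T2 > T1 and rho2 > rho1   # hotter and denser at the same time
```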

  22. Joe Born says:

    Dean Brooks:

    I’m sorry, but I just can’t seem to get my mind around this–and either you didn’t quite answer my question or I was unable to discern the answer in what you said.

My question had to do with the apparent inconsistency between (1) pressure’s being dictated by the mass above and (2) the problem you have with not having mass cross isobars. If my understanding is correct, pressure is a way of measuring the weight of the atmosphere above the location where the pressure is measured. If the pressure at some level stays the same, so does the amount of atmosphere above that level. To me this means there’s no net motion of mass through that level. (Yes, I know there’s such a thing as convection, and packets of air are passing through pressure levels all the time. But I think those going up are balanced by those going down.)

With regard to your situation in which “the air at 10,000 meters [is] both hotter and denser after some perturbing change at sea level,” I again don’t see how that necessitates air mass’s crossing equal-pressure surfaces. To me that just sounds as though an equal-pressure surface rises through the 10,000-meter level: still no movement across isobars. What am I missing?
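The hydrostatic point at issue here, that a fixed-pressure surface always has the same mass above it, follows directly from hydrostatic balance: the pressure on an isobar is g times the mass of air per unit area above it (a sketch assuming g constant with height):

```python
# Hydrostatic bookkeeping: the pressure on an isobar equals g times the
# mass of air per unit area above it, regardless of the temperature profile.
g = 9.80665   # m/s^2, taken as constant with height (an approximation)

def mass_above(P_iso):
    """Mass per unit area above the P_iso isobar, from hydrostatic balance."""
    return P_iso / g   # integrate dP = -rho*g*dh from the isobar upward

# Warm the column however you like: the 500 hPa surface moves up or down,
# but the mass above it stays pinned at P/g.
print(f"mass above 500 hPa surface: {mass_above(50000.0):.0f} kg/m^2")
```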

  23. [...] theory are  starting to emerge in the thoughts of out of the box thinkers like Harry Dale Huffman, Bill Gilbert, Wayne job and Stephen Wilde among many others. The future for climate science is looking clearer [...]