Ned Nikolov: In Science, New Messages Mean More than the Messengers’ Names

Posted: September 25, 2016 by tallbloke in Analysis, Astrophysics, atmosphere, censorship, Dataset, Natural Variation, radiative theory, solar system dynamics, Temperature, Thermodynamics, volcanos

An Interview Given by Dr. Ned Nikolov (a.k.a. Den Volokin) to Ben Guarino,
a Staff Writer at The Washington Post
Sep. 17, 2016

Research Paper Withdrawal by the Journal Advances in Space Research  

Q1: As succinctly as possible, could you tell me why you chose to publish this work under a pseudonym?

A1: We adopted pseudonyms as a measure of last resort, as we could not get an unbiased and fair review from scientific journals under our real names. This is explained in more detail in the attached letter we sent to the chief editor of the Journal Advances in Space Research (JASR) on Sep. 17, 2015. In brief, our real names became known to the climate-science blogosphere in 2012, when a poster we had presented at an International Climate Conference in Denver in 2011 became available online and caused broad and intense discussions. When we later tried to publish elements of this poster as separate articles in scientific journals, we discovered that journal editors and reviewers would reject our manuscripts outright after Googling our names and reading the online discussion. The journals oftentimes justified the rejections with criticisms outside the scope of the manuscript at hand. On two occasions, journal editors even refused to send our manuscripts for review after reading the blogs and realizing the broader theoretical implications of our results, although the manuscript itself did not explicitly discuss any new theory. For example, our first paper was rejected four times by different journals when submitted under our real names before it was finally accepted by SpringerPlus after we resubmitted it under pseudonyms.

To summarize, publishing new research findings that go against the grain of mainstream theories and belief systems is challenging to say the least, since such findings are often met with fierce resistance from both the scientific community and socio-political institutions. Hiding our names was not an attempt to dodge responsibility, but to allow the readership (including journal editors and reviewers) to see our results for what they really are, without being influenced by prejudices related to our true identities. Anonymity in science has a long history and is recognized by scholars as a useful approach to advancing knowledge (see for example Neuroskeptic 2013; Hanel 2015). Our decision to use pseudonyms was guided by the understanding that a new message is more important than the name of the messenger.

Q2: How did you arrive at these pseudonyms? Did you think it would be likely those names would be eventually linked to your real identities?

A2: We purposefully chose pseudonyms that were not difficult to decipher, yet shielded our identities well enough to permit an unbiased reading and review of our work. We wanted pseudonyms that could relatively easily be linked to our true identities if needed in the future.

Q3: You published once previously under these names before, is this correct? Have you published other papers under different names?

A3: Yes, this is correct. The first paper in the series on our new climate concept was published in the open-access journal SpringerPlus in 2014 under these pseudonyms. We have not published any other papers under pseudonyms.

Q4: What is Tso Consulting?

A4: This is explained in the attached letter to the JASR editor (see pp. 3 – 4).

Q5: Regarding the Advances in Space Research paper, this discusses a new model for determining the average surface temperature of rocky planets, broadly speaking based on solar radiation and atmospheric pressure. For a lay audience, as much as possible, could you describe the new “macro-level thermodynamic relationship” that emerges from such an analysis?

A5: The model described in the JASR paper is empirical in nature, meaning that it was derived from observations. Specifically, in our model development we used a technique called Dimensional Analysis (DA). DA is a method for extracting physically meaningful relationships from measured data without reference to any theory. In other words, DA is a data-exploration technique aimed at inferring (discovering) new physical laws and relationships. It has been successfully used in the past to solve complex problems in physics, engineering, mathematical biology, and biophysics.

We started our quest by asking “What controls the long-term average surface temperature of a planet?”. Instead of looking at theoretical explanations, we decided to answer this question by analyzing data from a broad range of planetary environments in the Solar System. Our initial premise was that factors controlling Earth’s mean global temperature must also be responsible for determining the temperature on other planetary bodies. After an extensive survey of the peer-reviewed literature, we selected six bodies for analysis: Venus, Earth, the Moon, Mars, Titan (a moon of Saturn) and Triton (a moon of Neptune). Our selection was based on three criteria: a) presence of a solid surface; b) availability of high-quality data on near-surface temperature, atmospheric composition, and total air pressure/density, preferably from direct observations; and c) representation of a wide range of physical environments defined in terms of solar irradiance and atmospheric properties. Using vetted NASA measurements from numerous published sources, we assembled a dataset of incoming solar radiation, surface temperature, near-surface atmospheric composition, pressure, density, and a few other parameters for the selected planetary bodies. We then applied DA to group the available data into fewer non-dimensional variables (ratios), forming 12 prospective models that describe the average planetary surface temperature as a function of solar radiation reaching the orbit of a planet, atmospheric greenhouse-gas concentrations, greenhouse-gas partial pressures, total atmospheric pressure and total atmospheric density. Next, we performed a series of regression analyses to find the best mathematical model capable of describing the non-dimensional data. One non-linear model outperformed the rest by a wide margin. This model describes the atmospheric greenhouse effect solely as a function of total atmospheric pressure.
In our study, we call the Greenhouse Effect an Atmospheric Thermal Enhancement (ATE), quantified as the ratio of the planet’s actual surface temperature (Ts) to the temperature the planet would have in the absence of an atmosphere (Tna): ATE = Ts/Tna. The ‘no-atmosphere’ temperature, Tna, depends on solar irradiance and is computed from the physical model of Volokin and ReLlez (2014). The figure below illustrates the final pressure-temperature relationship emerging from the data and its success in reproducing the relative atmospheric thermal effects of the six planetary bodies.

[Figure: Atmospheric Thermal Enhancement (ATE = Ts/Tna) plotted against total surface atmospheric pressure for the six planetary bodies]
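The ATE ratio and the general shape of the winning regression can be sketched in a few lines of Python. The coefficients and the round-number Earth values below are hypothetical placeholders for illustration only, not the fitted values or data tables from the paper:

```python
import math

def thermal_enhancement(ts, tna):
    """ATE as defined above: ratio of the actual mean surface temperature Ts
    to the no-atmosphere temperature Tna of the same body."""
    return ts / tna

def ate_model(p_surface, a=0.17, b=0.15, c=1.8e-5, d=1.0):
    """General shape of the winning regression: a two-term exponential of
    total surface pressure. The coefficients here are hypothetical
    placeholders, not the fitted values from the paper."""
    return math.exp(a * p_surface**b + c * p_surface**d)

# Illustrative check with round-number inputs (not the paper's data):
# a body with Ts = 288 K and Tna = 197 K has ATE ~ 1.46.
print(round(thermal_enhancement(288.0, 197.0), 2))
```

Note that the model form returns ATE = 1 (no enhancement) at zero pressure and rises monotonically with pressure, which is the qualitative behaviour the interview describes.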

This newly discovered relationship possesses several important characteristics that make it as meaningful as a physical law. These include high accuracy, a broad scope of validity, statistical robustness, and a close qualitative similarity to other well-known pressure-temperature relationships such as the dry adiabatic temperature curve. To our knowledge, this is the first and only model in planetary science capable of accurately describing the average global surface temperature of planetary bodies across such a wide range of radiative and atmospheric environments. This relationship suggests that the long-term equilibrium temperature of Earth is part of a cosmic continuum controlled primarily by two factors – solar flux at the top of the atmosphere and total surface atmospheric pressure. These features give our model the significance of a macro-level thermodynamic relationship heretofore unknown to science. By macro level we mean applicable to a planetary-scale quantity such as the global average surface temperature across a broad range of conditions. The term thermodynamic refers to the interaction between temperature, pressure, volume and energy.

The theoretical implications of this new relationship are numerous and fundamental in nature. To name just a few, our model suggests that: a) the greenhouse effect of a planetary atmosphere is in fact a pressure phenomenon that is independent of atmospheric composition; b) solar irradiance and atmospheric pressure determine the planet’s baseline (‘backbone’) equilibrium temperature while constraining the annual and decadal temperature variations to a narrow range around that baseline value; hence, large deviations in Earth’s global temperature are not possible without a significant change in either the total atmospheric mass or the incoming solar radiation; c) the climate system is well buffered and does not have tipping points, i.e. functional states that foster rapid and irreversible changes in the global surface temperature cannot occur.

Q6: Have these variables – upper atmosphere solar irradiance and atmospheric surface pressure — been ignored by other planetary temperature models? If so, do you have a sense why?

A6: Yes and no. Solar irradiance is a main driver of climate in all current models, including the 3-D Global Circulation Models (GCMs), which are the preferred tool for studying planetary climates today. In GCMs, pressure only indirectly affects the surface temperature through the atmospheric optical depth. According to the standard Greenhouse theory, atmospheric pressure impacts the energy content of a climate system only through its effect on the infrared absorption lines of greenhouse gases. A higher atmospheric pressure broadens the absorption lines, thus increasing both the thermal infrared opacity of an atmosphere and the down-welling longwave radiation, which is believed to control the surface temperature. Our empirical model, however, suggests that pressure directly impacts the surface temperature through added force (by definition, pressure is a force applied over a unit area). The direct effect of pressure on the internal energy and temperature of a gaseous system is well understood in classical thermodynamics, as exemplified by the Ideal Gas Law. Fundamentally, there cannot be kinetic energy and temperature without a force, i.e. without some form of pressure. Even electromagnetic radiation has pressure!
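The pressure-temperature link invoked here is simply the Ideal Gas Law rearranged for temperature. A minimal sketch, using standard sea-level values for dry air (illustrative textbook numbers, not figures from the paper):

```python
# Ideal Gas Law in the form T = P / (rho * R_specific): temperature follows
# directly from pressure and density, with no reference to radiative opacity.
def ideal_gas_temperature(pressure_pa, density_kg_m3, r_specific=287.05):
    """r_specific is the specific gas constant for dry air, J/(kg*K)."""
    return pressure_pa / (density_kg_m3 * r_specific)

# Standard sea-level air: P = 101325 Pa, rho = 1.225 kg/m^3 -> T ~ 288 K,
# i.e. the familiar ~15 C mean surface value.
print(round(ideal_gas_temperature(101325.0, 1.225), 1))
```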

A change of temperature due to a change of pressure without any addition or subtraction of heat is known as an adiabatic process. Adiabatic heating, a.k.a. heating by compression, is the basic principle behind diesel engines, a technology we have successfully utilized for over 120 years. The results from our empirical data analysis suggest that the thermal effect of the atmosphere is analogous to the compression heating found in diesel engines, except that it is caused by gravity. Therefore, the direction of causality in the real system appears to be different from that assumed in GCMs. In the real system, pressure controls temperature and these two in turn control the atmospheric optical depth, while in climate models, pressure along with greenhouse-gas concentrations controls the atmospheric optical depth, which in turn controls temperature. This discrepancy has fundamental implications for projecting future climatic changes. For example, according to our model, altering the atmospheric optical depth by increasing ambient greenhouse-gas concentrations cannot in principle affect the surface temperature, because a change in a system’s temperature requires a net change in the applied force, while the optical depth, being a dimensionless quantity by definition, carries no force. This is why the effect of CO2 on climate is only visible in model outputs, but has never been observed or shown in reality.
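The diesel analogy can be made concrete with the textbook adiabatic-compression relation T2 = T1 * r**(gamma - 1). The 18:1 compression ratio and intake temperature below are typical round numbers for a diesel engine, not figures from the paper:

```python
# Temperature rise from reversible adiabatic compression of an ideal gas,
# T2 = T1 * r**(gamma - 1), where r is the volume compression ratio and
# gamma = cp/cv (~1.4 for air).
def adiabatic_compression(t1_kelvin, compression_ratio, gamma=1.4):
    return t1_kelvin * compression_ratio ** (gamma - 1.0)

# A diesel cylinder compressing 293 K intake air ~18:1 ends up above 900 K,
# well past diesel fuel's autoignition temperature -- no spark plug needed.
print(round(adiabatic_compression(293.0, 18.0)))
```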

Q7: Dr. Nikolov, you told me that no serious scientist would deny that average global temperature is increasing. You do, however, take a critical approach to the idea it has an anthropogenic cause, is that correct? Therefore, would you consider yourselves to be outside the mainstream consensus regarding climate science?

A7: The climate is always changing. The question is what’s forcing the changes? The available evidence from both direct observations and reconstructed geological records does not support the hypothesis that CO2 and other heat-absorbing trace gases control Earth’s climate. Explaining this properly, however, requires a longer discussion. If the global temperature is independent of atmospheric composition as suggested by our inter-planetary analysis, then there is no mechanism for human-induced carbon dioxide emissions to impact Earth’s climate.

The progress of science is not driven by consensus! If you study the history of scientific discoveries, you will find that theoretical breakthroughs, i.e. the introduction of fundamentally new concepts, have always been carried out by individuals or small groups of researchers outside the mainstream consensus. For example, there was once a unanimous consensus that the Earth was at the center of the Universe and all celestial bodies revolved around us. Likewise, 120 years ago, there was a consensus among physicists and engineers that heavier-than-air machines could not fly. The truth about physical phenomena can only be uncovered through careful observations, proper experimentation, and unbiased sound reasoning. A blind adherence to the consensus of the day oftentimes hinders the advancement of knowledge.

Q8: If so, does this stance make it difficult to publish your work?

A8: There is no doubt that trying to publish research results that do not conform to accepted theories or mainstream beliefs poses a challenge in today’s world of academic political correctness. This is not just our experience, and it is not just happening in climate science. In my view, it is a worldwide phenomenon. For science to be useful to society, it must be based on a free and open inquiry into physical reality, where the publishing of novel findings and the proposing of new hypotheses based on such findings are not constrained by political considerations. In other words, scientific theories and conceptual paradigms should not be institutionalized, as has been done in some areas of science.

Q9: And if the cause is not anthropogenic, do you have a hypothesis as to why not?

A9: As explained above, our finding that the atmospheric thermal effect (a.k.a. the Greenhouse Effect) is entirely due to pressure and independent of atmospheric composition implies most likely natural causes for the observed warming during the 20th Century. There are several lines of evidence discussed in numerous papers by different research teams over the past 10 years that support this hypothesis. For instance, both satellite observations and ground measurements show that the cloud cover and cloud albedo (the fraction of solar radiation reflected back to space) declined appreciably between 1980 and the early 2000s. As a result, the amount of solar radiation reaching the surface has increased. This is known as ‘global brightening’. The rate of this increase is more than enough to explain the observed surface warming over the past 30 years. The figure below shows how closely the global temperature follows changes in global cloud cover according to satellite data. Note that cloud-cover variations precede temperature changes by about 12 months, indicating that clouds drive temperature (cloud data are from Dim et al. 2011; temperature data are from the University of Alabama at Huntsville dataset).

[Figure: Global temperature vs. global cloud cover from satellite data, with cloud-cover changes leading temperature by about 12 months]
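The lead-lag reasoning (clouds preceding temperature by about 12 months) rests on lagged cross-correlation. A minimal sketch on fully synthetic monthly series with a built-in 12-month lead, purely to illustrate the technique; these are not the satellite cloud or UAH temperature data:

```python
import math
import random

# Synthetic monthly series: a noisy 5-year oscillation standing in for cloud
# cover, and a temperature series that mirrors it (inverted) 12 months later.
random.seed(0)
n = 240
cloud = [math.sin(2 * math.pi * t / 60) + random.gauss(0, 0.1) for t in range(n)]
LEAD = 12
temp = [None] * LEAD + [-cloud[t - LEAD] for t in range(LEAD, n)]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def lag_corr(k):
    """Correlation between cloud led by k months and temperature."""
    ts = range(24, n)  # start late enough that all shifted indices are valid
    return corr([cloud[t - k] for t in ts], [temp[t] for t in ts])

# The strongest (anti)correlation sits at the built-in 12-month lead.
best_lead = max(range(25), key=lambda k: abs(lag_corr(k)))
print(best_lead)  # 12
```

Applied to real series, the lead that maximizes the correlation magnitude is the evidence cited for clouds driving temperature rather than the reverse.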

The question then arises: what controls cloud cover? This is an area of intense research at the moment, but the available evidence thus far indicates that the global cloud cover is affected by the Sun’s magnetic activity – high solar activity creates conditions for fewer clouds (causing warming), while low activity promotes more clouds (causing cooling). According to the above figure, the appreciable slowdown of global warming after the year 2000 is likely the result of increased low-level clouds. The Sun’s influence on Earth’s cloud cover and albedo, although small in absolute terms, is sufficient to cause global temperature variations on the order of ±0.7 C, which is the size of the climatic change we have observed since 1850. The good news is that solar-induced changes in cloud cover are buffered by negative feedbacks within the Earth’s climate system, making it impossible for the global temperature to deviate more than ±0.7 C around a central mean.

Further indication that the 20th Century warming was most likely caused by natural forcing is provided by this rather interesting study of Viterito (2016), who found that global temperature variations since 1980 were highly correlated with global seismic activity, which is a source of geothermal heat. The author reports that seismic activity precedes temperature changes by 2 years. These results strongly suggest that the observed warming over the past 35 years could not possibly be due to anthropogenic factors.

The observed statistical link between global temperature, cloud cover, and seismic activity points towards possible electric or electro-magnetic effects of the Sun upon the entire Earth-atmosphere system, which are currently unknown to science. This is uncharted territory, where future research efforts should focus.

Q10: The paper passed peer-review and was only withdrawn once the question of pseudonyms was raised. Would you consider the withdrawn Advances in Space Research paper to be controversial in and of itself? Are there conclusions to be drawn from it that contradict existing mainstream theories of climate change?

A10: The paper is not controversial as far as the data are concerned, since we have used publicly available, vetted observations from NASA. The Dimensional Analysis and subsequent non-linear regressions employed are also standard techniques. However, the results obtained are unexpected (even to us) and have theoretical implications pointing to a new paradigm in understanding the physical nature of the Greenhouse Effect.

Q11: Do you anticipate any reaction from the U.S. Department of Agriculture, where I understand you are both employed?

A11: It is only I, Ned Nikolov, who is currently employed by USDA. Dr. Zeller is a retired meteorologist from the US Forest Service… It is difficult to predict what the reaction of my employer would be (if any), given that I have closely followed their instructions to conduct this research in my spare time and not to show my federal affiliation on any papers published on this topic.

Q12: Would you recommend that more journals accept pseudonyms or at least double-blind review? (I believe Dr. Nikolov, you said only about 1 in 10 currently allow for such peer review.)

A12: Yes! As studies have shown, anonymity in science is critical for the advancement of knowledge, especially when new theoretical paradigms are being introduced. I also think that double-blind reviews should become standard practice at all scientific journals. Another change that should be universally adopted, in my view, is a ban on the current practice of rejecting manuscripts based on reviewers’ personal opinions about the importance or implications of reported results. If the analyses are done correctly and the stated conclusions are supported by numerical results, the manuscript should be accepted for publication. It should be left to the broader readership to decide, after the paper is published, what the importance or relevance of the reported findings to the field is.

___________________________________________

Editor's Note: Some very minor grammatical and style amendments have been made to the supplied transcript of the WashPo interview for clarity and readability.

Comments
  1. Jaime Jessop says:

    As Dr Nikolov has provided a clear and logical explanation for the use of pseudonyms, and as the paper passed peer review presumably on account of its scientific merit, it should now be re-published under the authors’ real names. It’s a travesty that a scientific study is withdrawn merely because, out of desperation to overcome the hurdle of arguably unjustified public notoriety, the authors chose to publish under easily decipherable pseudonyms. Papers should only be withdrawn from publication if they are demonstrably shown to lack scientific credibility or have serious ethical issues. Neither applies here.

  2. tallbloke says:

    Indeed Jaime. In fact, the page at Springer clearly states:

    “This article has been withdrawn upon common agreement between the authors and the editors and not related to the scientific merit of the study.”

    http://www.sciencedirect.com/science/article/pii/S0273117715005712

  3. Jaime Jessop says:

    Yes, I read that Rog. In effect what they are saying is that they published a study which passed peer-review on its scientific merit alone, but then discovered that the pseudonymous authors had a bad rep so they withdrew the paper from publication in “common agreement with the authors and editors”. If not that then it is simply a matter of punishing said authors for having the audacity to publish under a pseudonym. I suspect a bit of both. Neither is a valid reason for withdrawing peer-reviewed research from the public eye.

  4. oldbrew says:

    There appears to be a certain ‘fear factor’ in climate-related publishing. Need I say more?

  5. Jaime Jessop says:

    Fear of the unknown [becoming known].

  6. erl happ says:

    “This is why the effect of CO2 on climate is only visible in model outputs, but has never been observed or shown in reality.”.

    A paper should be judged on its merit independent of any other matter, such as the colour, creed, nationality, or apparent qualifications (or lack thereof) of the authors.

    In fact, for a paper to be considered on its merit it is probably better that the authors remain anonymous. That these authors had to conceal their identity in order to have the paper accepted for publication speaks volumes about the integrity of the editors of the journals who have refused publication. The boot is on the other foot entirely.

    Now it so happens that the AGW thesis does not stand inspection anyway.

    The Southern Hemisphere has not warmed in December for seven decades. If you are practising ‘science’, one instance of failure should be sufficient to have you reject a hypothesis. If you persist with a failing hypothesis you are engaged in a religious observance, not science.
    I reference three ways to verify this assertion, two utilising raw data and the third relating to the variations from the whole of period average temperature for each month of the year: The data source is Kalnay’s reanalysis.
    1. http://www.esrl.noaa.gov/psd/tmp/climindex.101.177.92.161.264.15.27.21.png
    2. https://reality348.files.wordpress.com/2016/07/hemisphere-surface-temp.jpg?w=612
    3. https://reality348.files.wordpress.com/2016/08/sh-sst.jpg?w=612

    It is plain from the data in the third figure that temperature evolves differently according to the month of the year.

    If we are to understand climate change, that is what we need to explain, and the cloud cover thesis is the most plausible. As surface pressure rises in low and mid latitudes, so does geopotential height at 500 hPa; the associated warming of the atmosphere is responsible for a diminution of cloud cover and rising surface temperature. This relationship has long been observed. It is linked to the phenomenon called the annular modes, the chief natural modes of climate variation, which have been under observation for half a century or more.

    Climate science is a no-go area unless you subscribe to the prevailing orthodoxy. The eyes are tightly blindfolded. It’s a closed shop. This episode illustrates the point. I congratulate Nikolov and Zeller on their resourcefulness on two counts: first the science, and secondly their neat demonstration that the climate establishment is rotten to the core.

  7. “Common agreement with the authors and editors” is a whitewash. If the paper passed on “scientific merit”, then the editor should have politely explained that they needed to publish the paper with the authors’ real names, and THAT should have been done “by common agreement with the authors and editors”.

    Nikolov and Zeller still don’t get it, though:

    “It is simply the hydrostatic condition”,

    and that has been known for well over a century, in the Standard Atmosphere model, which my 2010 Venus/Earth temperature-vs-pressure comparison precisely confirmed.

    And “The results from our empirical data analysis suggest that the thermal effect of the atmosphere is analogous to a compression heating” merely confuses the transient (and local) effect of compression with the constant (and global) effect of the hydrostatic condition (most simply described as “the pressure at any level in the atmosphere is just the weight of the atmosphere above that level”). The Standard Atmosphere, as everyone should know by now (I have been pointing it out for 6 years now), is based upon the hydrostatic condition.

    And the figure really does no more than agree with what my Venus/Earth comparison showed more fully and clearly, that those two planets have essentially the same temperature-vs-pressure profile, over the full range of Earth tropospheric pressures, when only their different distances from the Sun are taken into account. Mars, Moon, and Triton are useless, as the curve is vertical–hence, the “thermal enhancement” is completely indeterminate–for very low surface pressure. I have also pointed out, many times, that the surface temperature of Titan is too low, by about 7K, when compared to Earth in the same way I compared Earth and Venus, and I have given the most likely reason for that (and observed haze in Titan’s atmosphere), while Nikolov and Zeller’s theory cannot even address it (I am surprised they even show Titan as a point off the curve, not on it, since previously they have reported that their theoretical relationship–the curve–predicts precisely the surface temperature of Titan). And Venus’s planet-wide, thick cloud cover does not affect its T-P profile, outside of the clouds themselves, so continually bringing in clouds to explain global temperature variations is also wrong. Sorry, but my Venus/Earth comparison is definitive, and everyone (consensus believer or skeptical critic) will have to admit that in the end.

    My Venus/Earth analysis is earlier, better, and more simply and clearly explained, by the hydrostatic condition alone (without any “compressional heating”, which is irrelevant, incompetent and immaterial). The Standard Atmosphere, over a century old, contains that, so their “new understanding” is not new; it has just been ignored, for 2 generations now, by incompetent scientists and unethical politicians bent on world dominion.

  8. tallbloke says:

    Harry, I hope Ned finds the time to give a lengthy reply to your comment that both acknowledges your earlier contribution, and addresses the criticisms you raise. I see value in both your work and theirs.

  9. JB says:

    With regard to the peer review process, I have long felt it is an inherently defective process for the following reasons:
    1) Whenever two or more creatures engage in a mutual association a hierarchy will develop.
    2) The most fundamental behavior to manifest in any hierarchy is PYOA (protect your own *ss).
    3) The hierarchical behavior rules outlined by Laurence J Peter then apply.

    Peer review, like differing forms of governments, simply cannot insulate against licentious behavior, and consequently both will remain forever a conundrum to solve. I offer Fred Hoyle’s observation:

    “This by the way is a grave objection to the committee system. Committee members are nearly always inhibited against offending each other. Yet it is probably correct that, if any enterprise is to go forward efficiently, decisions must be taken that offend people. Just as natural selection depends on rejection and extinction, so successful enterprise depends on lots of offense being given.” p12 Man in the Universe

    With regard to Nikolov’s findings, it is a glaring example of how scientific methodology is being ignored by a significantly large proportion of those claiming the title of “scientist.” The proper method is to analyze the data derived from observation, determine an explanation (hypothesis) for the data, and perform experiments to verify the hypothesis as an accurate explanation for the observation. This appears to be what Nikolov described attempting (minus the last stage of experimentation) in the interview.

    Nikolov suggested, as I interpret his statements, that hypotheses are being concocted that have little relevance to the data, models are assembled to reflect the hypotheses, the models are then reconstructed in endless iterations, and in extreme cases data are removed to force-fit the model’s results to the original hypothesis.

    I can think of only one place where such backward and convoluted methodology arises: from poor scholarship at universities. From my studies of electronic engineering through a long career, I have repeatedly noticed the tendency of professors to extol the virtues of software in hardware control and its cousin, software analysis in computer modeling. The result has been several generations of software “revision”, both in operating systems and in the defects of robotic controls, with a corresponding predilection for modeling rather than experimental verification. What was completely missing in the academic world, both in my several experiences on campus and in what I observed coming from academia elsewhere, was basic instruction in what constitutes valid scientific methodology. I cannot recall a single instance on campus where proper methodology and intellectual honesty were addressed in any classwork instruction or laboratory investigation. At best it was implied. Perhaps readers have had more positive experiences in this.

    Again, Hoyle’s observation:

    “There are many known cases in physics where men have had ideas which subsequently turn out to be correct and which they have killed in their own brains. The curious paradox is that it tends to be the technically most competent men that are the greatest murderers of new ideas.” p22

    The history of science is rife with instances where someone has attempted to alert others to something they “discovered” only to have it reappear in one form or another until someone in the most conducive circumstances finally gets noticed. Sour grapes of the human condition. In my estimation, we are certainly capable of doing better than this example of Nikolov’s (and countless others) experience. Allowing ourselves to become polarized is one of the fundamental impediments.

    I laud Nikolov’s efforts, whether his analysis is correct or not, in successfully “offending” a large number of people to provoke intellectual honesty.

  10. tom0mason says:

    Is it beyond the wit of academia to devise a method of peer review that keeps the authors anonymous during the review process?

  11. Ned Nikolov says:

    Harry,

    Without a doubt, you were the first to point out the fact that temperatures at equivalent pressure levels on Earth and Venus are the same when corrected for the difference in their distances to the Sun. I should also say that Karl and I were not aware of your results when we first did our analysis and came up with the pressure concept of the Greenhouse Effect in 2011. It was the online discussion prompted by our 2012 poster that brought your name to our attention. We acknowledge your pioneering work and discovery …

    As you know, our analysis focuses on surface temperatures only. Hence, our empirical model is not applicable to calculating vertical temperature changes in the free atmosphere. We explain this clearly in Section 3.1.3 of our second paper available at:

    https://tallbloke.files.wordpress.com/2016/09/planetary_temperature_model_volokin_rellez_2015.pdf

    As far as we know, ours is the first and only model that accurately describes the average surface temperatures of planetary bodies across a very broad range of environments in the Solar System using a common set of drivers.

    Moon, Triton and Mars are bodies with low atmospheric pressures, and as such are quite important in determining the shape of the P-T relationship, because (as shown by our analysis) the relative thermal enhancement (ATE = Ts/Tna) is very sensitive to changes in P at low pressures. This can be shown numerically by calculating the first partial derivative of Ts with respect to P from Eq. 10a or 10b in the above paper. The high sensitivity of T to P is also visible in the steep slope of the curve in the region of Moon, Triton and Mars. Without Moon, Mars and Triton the relationship would be incomplete!
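The sensitivity argument can be sketched numerically. The sketch below (Python) uses an N&Z-style double-exponential form for the relative thermal enhancement, but the coefficients are placeholders for illustration only, not the fitted values of Eq. 10a/10b in the paper:

```python
import math

def ate(P, a=0.174, b=0.150, c=1.83e-5, d=1.04):
    # Illustrative relative thermal enhancement ATE = Ts/Tna as a function
    # of surface pressure P (in kPa here). The double-exponential shape
    # mimics the paper's Eq. 10a; the coefficients are NOT the fitted values.
    return math.exp(a * P**b + c * P**d)

def dATE_dP(P, h=1e-6):
    # Central finite difference for the first partial derivative d(ATE)/dP
    return (ate(P + h) - ate(P - h)) / (2 * h)

# Sensitivity at a Mars-like low pressure vs an Earth-like pressure (kPa):
low, high = dATE_dP(0.7), dATE_dP(101.3)
print(low > high)  # True: the slope is steep at low P and flattens at high P
```

This is why low-pressure bodies such as Moon, Mars and Triton pin down the shape of the curve: the derivative is largest precisely in their pressure range.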

    As far as our comparison of the atmospheric effect on surface temperature to “compression heating” goes, I think this analogy is correct, because compression heating works by increasing the force acting on the gas, which is similar to adding gas pressure to an airless surface. What is important to understand here is that the direct effect of pressure on temperature is through added force. This is so because force defines internal energy, which in turn determines temperature…

  12. The reason that surface temperature is related to surface pressure is that the higher the pressure, the closer together are the individual molecules at and just above the surface, and so the more effective conduction from the irradiated surface can be.

    The so called greenhouse effect is actually a consequence of mass absorbing surface energy via conduction and then moving it up and down in the process of convective overturning.

    During uplift surface KE (heat) becomes Convective Available Potential Energy (CAPE in meteorology) which is not heat and does not register on temperature sensors hence the decrease in temperature with height.

    During descent CAPE is converted back to KE the effect of which is to reduce the net loss of energy from the surface via radiation by partially offsetting it.

    It is the fact that at any given moment KE is returning towards the surface in descending columns, comprising half the atmospheric mass, which causes the uplift of surface temperature.
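The KE-to-CAPE bookkeeping described above is what the standard dry adiabatic lapse rate expresses: per kilogram of dry air, the thermal energy given up on ascent, cp·dT, equals the potential energy gained, g·dz. A minimal sketch using textbook values (not figures from this thread):

```python
G = 9.81      # gravitational acceleration, m/s^2
CP = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)

def parcel_temperature(T_surface, z):
    # Dry adiabatic ascent: cp * dT = g * dz, so T(z) = Ts - (g/cp) * z.
    # The "missing" heat at altitude is stored as potential energy.
    return T_surface - (G / CP) * z

print(round(G / CP * 1000.0, 1))                    # 9.8  (K per km, dry adiabatic lapse rate)
print(round(parcel_temperature(288.0, 5000.0), 1))  # 239.1 (K at 5 km from a 288 K surface)
```

On descent the same conversion runs in reverse, which is the recovery of KE from potential energy that the comment describes.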

    I recently described the process at WUWT thus:

    “the adiabatic processes within the convective overturning cycle are indeed a closed loop and that is what causes the surface temperature enhancement above that which the purely radiative S-B equation would lead us to expect.

    During ascent some of the surface heat becomes convectively available potential energy (CAPE in meteorology) and that CAPE is not energy that can register on temperature sensors hence the cold at height.

    During descent CAPE converts back to heat energy.

    Thus the surface must retain an additional store of energy at the surface to maintain ongoing convective overturning and that energy never gets past the atmosphere to be radiated to space as long as convective overturning continues at the same rate.

    The enhanced surface temperature is a consequence of conduction to ALL gases at the surface and their subsequent adiabatic uplift from the base and adiabatic descent from the top due to density differentials in the horizontal plane. It is nothing to do with radiation downwards from the atmosphere (DWIR).

    For an atmosphere in hydrostatic equilibrium any radiative imbalances are neutralised by convection so that if GHGs alter the radiative characteristics of the atmosphere then convective changes will ensue to neutralise those imbalances BUT since the surface temperature enhancement is a result of conduction to the entire mass of the atmosphere the effect of radiative variations would be far too small ever to notice.”

    which is all consistent with the propositions of Nikolov and Zeller who appear to me to be bravely restating old knowledge implicit in the concept of the hydrostatic atmosphere and the Standard Atmosphere as Harry Huffman points out.

    I am pleased to see that Nikolov and Zeller and Erl Happ all agree on the importance of solar induced cloud albedo changes in relation to which my views are well known and I have put forward a suitable hypothesis related to solar effects on the balance of ozone creation and destruction above equator and poles.

  13. Ned Nikolov says:

    To JB’s comments above: My impression is that the peer review process, as currently implemented by the vast majority of science journals, only works with respect to so-called ‘normal science’ (to use the term introduced by Thomas Kuhn). Novel or ‘transformative’ science cannot be handled well at all, because most reviewers and editors interpret non-conventional findings and hypotheses as being ‘wrong’ or even a ‘threat’, since they do not conform to known theories and beliefs. We have had situations where journal editors would reject our manuscript simply because it ‘implies’ that their colleagues had been wrong for decades…

    I agree that it is the task of Universities to do a better job in teaching students the basic principles of the scientific method, so that we could alleviate the subjective biases in the peer review process later on.

    I also believe that the current system of funding of science has a lot to do with the poor ability of the peer review process to handle novel findings. When funding is restricted to studies supporting certain theoretical paradigms and there is a political pressure by the management of government institutions on scientists to stay within the limits of consensus-driven views and concepts, innovative thinking tends to be suppressed as something that ‘disturbs the order’ instead of being viewed as an engine of progress… This is why I think that a reform of the funding mechanisms in science will make a huge difference.

  14. Ned Nikolov says:

    Thank you, Stephen Wilde. I agree with most of your comments …

  15. Ned Nikolov says:

    I also would like to point out that the mainstream media is not unbiased when it comes to politically charged science areas such as climate. You can see this by comparing the answers I provided to the reporter with the content of the article that the Washington Post published at:

    https://www.washingtonpost.com/news/morning-mix/wp/2016/09/19/scientists-published-climate-research-under-fake-names-then-they-were-caught/

    A lot of information was omitted and the tone of the article failed to adequately represent the meaning and spirit of my statements …

  16. J Martin says:

    Another nail in the co2 coffin. One wonders how many nails it will take before the adherents to CAGW realise they got it wrong. I expect that many of them will know they are wrong but are making too much money from giving the politicians what they want to stop now. Not to mention the loss of reputation.

  17. Ned Nikolov says:

    I think that objective science will eventually prevail. These types of long-lasting confusions fueled by political power and money have happened numerous times throughout human history. The mainstream has always opposed new paradigms … 🙂

  18. Paul Vaughan says:

    a quote from the interview:

    “The theoretical implications of this new relationship are numerous and fundamental in nature. To name just a few, our model suggests that: […] c) the climate system is well buffered and does not have tipping points, i.e. functional states that foster rapid and irreversible changes in the global surface temperature cannot occur.”

    As I sort through the misunderstanding Harry & Ned are having I notice that (c) appears to depend on defining “the climate system” as surface-ONLY GLOBAL ANNUAL-average TEMPERATURE.

    This would imply that sharp REGIONAL variations somehow fall outside of “the climate system”.

    Semantics are clearly playing a role in miscommunications / misunderstandings / misinterpretations. Given how super-politically-charged the context has become, enforcing some kind of standardized terminology is impossible (wasting time quibbling about this won’t change the reality), so the sensible thing to do is take personal responsibility for recognizing and understanding the diversity of languages.

    Similarly the definition of “science” being used (in climate discussion generally) has become corruptly administratively dictatorial and the notion that “science” is good has consequentially become implausible. The definition of science being used in discussion is artificially narrow and restrictive. It ARTIFICIALLY CLOSES avenues to learning from raw exploration. The narrowing (a framing-control mechanism) is being used as a key component in a political containment strategy. I reject the assumption implicit in so many climate comments that “science” so-defined is a desirable priority. I would advise sensible people to just slap this down hard as this tactic is so violently underhanded. The portion of science so-defined that is not subsumed by raw exploration is the exploited abstract portion. They’re blowing a bubble to restrict discussion to the abstract. The overwhelming majority of commentators submit blindly to the abstract containment.

    Again that is something too big to correct, so the best we can do is take personal responsibility for recognizing the political containment strategy being waged at wuwt & ce. I propose that the Talkshop remain resistant to such political containment. It will try to find a way to creep in here. It has infected JoNova’s site I noticed. She’s usually pretty sharp, but this one got by her. Volunteering to be artificially confined? Why? Sensible reasons include things like food, shelter, and family needs.

    I think Ned & Harry can sort out their miscommunications with deliberate focus on observed spatial pattern (for example Ned’s response to Q9)…

  19. Paul Vaughan says:

    Erl, the annular modes alone don’t span the balanced multi-axial differential space. The space is only spanned with inclusion of the monsoons. One way to conceptualize: the notion of a “global monsoon” …but certainly there exists capacity for misunderstanding since different authors are defining that term differently. Talkshop commentator “Poly” and wuwt’s only sensible commentator, Bill Illis, are the only 2 climate discussion commentators I’ve noticed being aware of the balanced multi-axial differential (even though they did not call it that).

    Stephen, you talk about convection being a closed loop, but what about the portion of the descending branch that advects moisture poleward? I’m underscoring this because of the consequent meridional sign-change in multivariate spatiotemporal cloud-temperature pattern. I’m aware of no one informing their climate aggregation criteria mindfully of this.

    Ned, I suggest it would be wise to at some point to comment on north-south terrestrial asymmetry. For example: For which terrestrial hemisphere does your “rocky” planet model perform better? (There’s a reason why I’m asking this. Maybe we’ll eventually have time to get there….)

  20. erl happ says:

    Paul,
    You assert that the annular modes alone don’t span the balanced multi-axial differential space without inclusion of the monsoons. Indeed the monsoons have a local genesis, but see here: http://onlinelibrary.wiley.com/doi/10.1002/cjg2.1010/abstract and here: http://159.226.119.58/aosl/CN/abstract/abstract31.shtml

    Stephen: Indeed atmospheric pressure does mediate the process of energy transfer in many ways. Expansion involves cooling while compression involves warming. Convection cools via decompression, a very important mode of energy transmission immediately above the equator.

    A neat illustration of the importance of atmospheric pressure in mediating the transfer of energy is this: although there is only a tiny amount of ozone in the troposphere by comparison with the stratosphere, that ozone absorbs and transfers more infrared, and so heats the troposphere more than ozone heats the stratosphere. Think about the implications of that reality as it affects air temperature and cloud cover in descending columns of air in the low and mid latitudes. In this way ozone influences the relative humidity of the air drawn into the continents by monsoons.

    Low pressure zones are ozone rich at elevation. These pressure cells can be initiated by warming of the air by the surface (monsoonal) or by the difference in density between parcels of air that have a different tropopause height in which case the propagation is top down. In the case of monsoons bottom up and top down influences are active and that takes us back to Paul’s statement.

  21. Paul Vaughan says:

    quoting from the article headline:

    “In Science, New Messages Mean More than the Messengers’ Names”

    This sounds like a plea for unity, but nature works by diversification. The climate discussion is fragmented. Unifying it is impossible. We need to be practical. Small groups are more free and agile.

    By not being part of a broader community we make trade-offs. Some discoveries may be delayed by fragmentation but others will be hastened. Net gains include peace and exploratory efficiency.

    Information is sufficiently voluminous to drown. Diversification is more efficient than unity …and therein lies a natural advantage that is easily leveraged by luminaries. Key messages don’t have to reach a mass audience. They only have to reach competent luminaries.

    Overall I think it’s ok if some messages get ignored because of names because luminaries don’t need unity to be efficiently diverse. Luminary holistic vision will derive more efficiently from diversity than from unity and this isn’t a paradox.

    Anyone curious to see the bidecadal oscillation in El Nino Modoki Index? …and how this insight arose from recognizing the ITCZ as the central aggregation criterion of terrestrially-asymmetric sun-shaped climate pattern? Maybe we’ll eventually get there whenever the seemingly never-ending thickets of philosophical misunderstanding subside! ….Well one thing’s for sure: if we ever become a bureaucracy we’ll be even more inefficient! (Wuwt is running like a bureaucracy.)

  22. Paul Vaughan says:

    I’m going to exit this conversation, but I’ll continue monitoring.

  23. Karl Z says:

    Hi Paul V, in response to: ….. ‘Ned, I suggest it would be wise to at some point to comment on north-south terrestrial asymmetry. For example: For which terrestrial hemisphere does your “rocky” planet model perform better? (There’s a reason why I’m asking this. Maybe we’ll eventually have time to get there….)’
    Ned may be taking a rest at the moment! 🙂 Please remember that our empirical results are specifically for long-term (we say 30-year) global mean surface temperatures, and that our results provide for a stable Ts (or backbone value) around which Ts wiggles will occur (like Ned’s Q9 wiggle example). Hence all shorter-than-30-year phenomena (convection, hemispheric asymmetry and interactions, shorter solar cycles, ocean currents, ocean heating, glaciers melting, butterfly wing-flappings, all of it, etc.) happen within the stable single macro-level Ts value our equation provides. Years from now, once we get hemispherical observations from other moons and planets, perhaps your question could be addressed using our approach. With that said, I suppose we could simply take a long-term average of each hemisphere and see if one or the other has a greater or lesser error, but that would be applying the model beyond the observations used to establish it. You might try it and let us know the result and what you conclude.

  24. Ned Nikolov says:

    I agree with Karl.

    A long-term global average near-surface temperature represents the time-averaged kinetic energy of the lower troposphere. The asymmetry we observed between the NH and SH reflects the non-uniform distribution of this energy across the globe (between the Hemispheres). I think this asymmetry arises from the asymmetrical distribution of land masses between the NH and SH, and particularly around the poles. This type of climatic asymmetry exists on other planetary bodies as well, such as Mars and Titan … We should be careful not to confuse an average estimate of the available planetary kinetic energy near the surface with the spatial distribution of such energy.

    One cannot change the average global surface temperature of a planet by changing the spatial distribution of available kinetic energy across the surface. Only a change in the global average kinetic energy can achieve this!

    The term rocky planet applies to bodies that have a hard or liquid surface. Thus, the gas giants Jupiter and Saturn are not rocky planets by this definition.

  25. Paul Vaughan says:

    Karl, I figured out that Ned might be either taking — or in need of — a rest! …and yes I understood that your model explores the balance (not its asymmetric variance). I already know the answer to the question I asked to provoke next-level awareness of spatial diagnostic utility. Even though I may be quiet, I’ll continue monitoring this thread, particularly for any observation-based (not theoretical) insights arising in relation to Ned’s answer to Q9. Cheers.

  26. Paul Vaughan says:

    Ned Nikolov suggested:
    “I think this asymmetry arises from the asymmetrical distribution of land masses between NH and SH, and particularly around the poles.”

    Tip for if/when spatial-diagnostic time/interest develops:

    pattern of wind-driven coastal upwelling along west coasts of Africa & Americas in relation to vertical velocity and low cloud at these locations (I touched briefly on this in 2 or 3 or however many Suggestions-21 comments) — compare and contrast with Indian Ocean which informatively does not have similar features (but instead has strong monsoonal features…)

    I’ll have more to say about 5:1 geometry moving forward (weeks to months from now). Meanwhile I’ll continue monitoring this thread, particularly for any observation-based clarification and/or elaboration on the answer to interview Q9.

  27. Ned Nikolov says:

    Paul,

    A preliminary analysis of cloud cover changes I did some time ago revealed that the decline in low-level clouds from the 1980s to the early 2000s was somewhat greater in the NH than in the SH. So a differential impact of solar activity on cloud albedo between the two Hemispheres may explain at least part of the asymmetry …

  28. ren says:

    The average neutron count at Oulu over the past 30 days has exceeded 6500.
    Ionizing radiation in the lower stratosphere is very high.

    If ions serve as embryos for cloud droplets, an increase over the polar circle must lead to cooling.

  29. ren says:

    The pressure anomaly in the stratosphere hinders the development of the polar vortex.

  30. ren says:

    In the south, the wind is strong, despite the visible waves in the upper stratosphere.

  31. ren says:

    Ned Nikolov, congratulations. You must do everything you can to make your results available.

  32. ren says:

    “Since Moon and Earth orbit the Sun at the same distance, they receive equal amounts of solar radiation and have the same S₀. Serendipitously, the Moon effective albedo αe = 0.131 nearly equals Earth’s present surface average cloudless albedo (0.122–0.13) inferred from satellite- and ground-based observations (Stephens et al. 2012; Wild et al. 2013). This is in spite of the fact that our Planet has highly reflective regions such as deserts, glaciers, and Polar Ice Caps that are absent on the Moon. However, the high reflectivity of these Earth surfaces is counterbalanced by the low albedo of the World’s Oceans.”

  33. craigm350 says:

    Reblogged this on WeatherAction News and commented:
    “the effect of CO2 on climate is only visible in model outputs, but has never been observed or shown in reality.”

    Stating this simple truth has caused those with closed minds to reach for their blunderbusses and crayons, so they can see off the truth which threatens their cosy little empire of lies, which is why Gavin Schmidt’s response starts with an ad hominem;

    “climate contrarians”

    And why there is a bitter irony that his teams’ despicable data torture to make CO2 match temperature is by far the worst “curve fitting” exercise that some would call criminal:

    https://stevengoddard.wordpress.com/2014/08/02/proof-that-us-warming-is-mann-made/

    Projection Gavin?

    http://realclimatescience.com/fitting-an-elephant/

  34. gallopingcamel says:

    Nikolov & Zeller have done a great service to science by showing that atmospheric pressure is the prime variable when it comes to what most people call the Greenhouse Effect.

    However, I am with Harry Huffman, who wants a more general theory that is applicable at any altitude and on gassy bodies as well as rocky ones. That is why I like the model described here:
    http://faculty.washington.edu/dcatling/Robinson2014_0.1bar_Tropopause.pdf

    Another thing to like about the Robinson & Catling model is the “First Principles” application of radiative/convective energy transfer equations. The only arbitrary constant R&C use is “Alpha” which is needed on bodies that have oceans (Earth and Titan).

  35. Yes, I’m familiar with that R & C paper too.

    They acknowledge that the vertical temperature profile is related to declining infra red transparency as one moves down through the mass of an atmosphere such that pressure increases with increasing weight above the point of observation.

    The critical point that no one seems to have yet elucidated (though I have mentioned it multiple times on various blogs) is that infra red transparency falls as conduction increases, and conduction increases as pressure forces molecules closer together.

    A unit of infra red energy cannot be both radiated AND conducted at any one moment and so conduction MUST be at the EXPENSE of radiation.

    A unit of infra red energy cannot be in two places at once or perform two separate functions simultaneously.

    Therefore it is conduction from surface to atmospheric mass that prevents surface KE (heat) from escaping to space and that is why additional energy accumulates at the surface in excess of that predicted by the purely radiative S-B equation.

    The radiation fluxes within an atmosphere are therefore the consequence and not the cause of the lapse rate slope which in turn is created by atmospheric mass absorbing energy by conduction from the irradiated surface.

    Gravity acting on gas molecules suspended off the surface creates a density gradient whereby density increases with depth and it is the density gradient that determines the rate at which infra red transparency declines with increasing depth of atmosphere towards the surface.

    More atmospheric mass or a stronger gravitational field = more surface pressure = greater surface density = more surface conduction = higher surface temperature at a given level of top of atmosphere irradiation.

    Surface heat gets split between radiation and conduction so that at hydrostatic equilibrium as much energy radiates out to space as comes in from space AND the balance of (allegedly ‘surplus’) surface heat is permanently engaged in being cycled up and down in convective overturning so as to keep the mass of the atmosphere suspended off the surface for as long as insolation continues.

  36. ren says:

    Stephen Wilde
    Therefore, the average temperature in the troposphere down to about 0.1 bar is highly dependent on height above the Earth. You can see that gas density is important: below the 0.1 bar level the atmospheric gas behaves differently.

  37. ren,

    I agree that the lapse rate slope reverses in the stratosphere but then it reverses again in the mesosphere and yet again in the thermosphere.

    However, averaging out all heights would reveal the ‘ideal’ lapse rate slope set by mass and gravity. Any atmosphere not meeting that ‘ideal’ slope on average would eventually be lost.

    Radiative processes at heights below 0.1 bar do interfere with the temperature and density gradient as you say.

  38. Ned Nikolov says:

    To gallopingcamel (September 26, 2016 at 3:02 pm):

    I would like to point out that the paper by Robinson & Catling (2013) is an illustration of the standard method utilized by the Greenhouse theory to describe vertical temperature profiles in planetary atmospheres! This method is also employed by global climate models (GCMs) and is responsible for the projected increase of surface temperature with rising atmospheric CO2 levels, which we now know is nonphysical!

    The problem with this method can be summarized as follows: the effect of pressure on temperature is represented as an effect of IR optical depth on temperature. However, as I mentioned in my answer to Q6 above (second paragraph), pressure is the real causative factor for temperature change, since it’s a force and as such it directly impacts the internal energy of a system. Optical depth, on the other hand, is a dimensionless quantity, which depends on both pressure and temperature and is only correlated with temperature. Since it carries no force, the IR optical depth of an atmosphere cannot be a causative factor for temperature change! In my answer to Q6 I said:

    In the real system, pressure controls temperature and these two in turn control the atmospheric optical depth while, in climate models, pressure along with greenhouse-gas concentrations control the atmospheric optical depth, which in turn controls temperature. This discrepancy has fundamental implications for projecting future climatic changes.

    Expressing temperature changes as a function of changes in IR optical depth instead of pressure is a fundamental misrepresentation of thermodynamic principles and a core problem of the Greenhouse theory.

    Another interesting fact is that Dr. Robinson acted as an official reviewer to several versions of our manuscript. He argued that pressure had no direct effect on temperature other than through its impact on atmospheric IR optical depth. He was using ‘theoretical’ arguments to dismiss an empirical relationship, which is in violation of the standard scientific method. His negative comments and unsubstantiated claims were the reason for 2 rejections of our manuscript by different journals …

  39. ren says:

    Ned Nikolov
    The troposphere of Venus behaves identically.


    “So, in spite of the surface temperature of Venus being on the order of 864 degrees Fahrenheit, and the Venusian surface pressure being on the order of 90 earth atmospheres, there is a region in the Venusian atmosphere which approximates that of earth at sea level with respect to temperature and pressure.”
    http://www.datasync.com/~rsf1/vel/1918vpt.htm

  40. Ned,

    Pressure has an effect on optical depth for IR because higher pressure causes more conduction in place of radiation.

    Doesn’t that square the circle between you and Robinson?

    Pressure is the primary causative factor in both your narratives, isn’t it?

  41. Paul Vaughan says:

    Jumping back in to comment on Ned’s observation:

    He was using ‘theoretical’ arguments to dismiss an empirical relationship

    The mark of a devil. Pure evil.

    Conclusive verification of earlier exploratory insights:

    The analysis was restricted to 1 PC to ensure ONLY FIRST ORDER variation unaffected by rotation in multivariate space.

    Variance accounting is stable to the nearest 1% while varying the central ITCZ aggregation criterion:
    1. meridional wind
    2. meridional wind convergence
    3. precipitation

    ITCZ = InterTropical Convergence Zone
    EOF = empirical orthogonal function
    PC = principal component

    ITCZ aggregation criteria supplement — see figure 2:

    2011 – Climatology of the ITCZ derived from ERA Interim reanalyses
    http://onlinelibrary.wiley.com/doi/10.1029/2011JD015695/full

    “Our wind climatology of the ITCZ contains a single ITCZ characterized by large seasonal migration; over the global ocean the ITCZ location shifts by about 20° between July and January (Figure 2g). The corresponding shift of the precipitation maxima is about 4° (Figure 2d).”

    The rationale for purer meridional stratification is clear from the zone of mode-mixing in the following supplement — see the band of meridional wind triangles in the hovmollers:

    2002 – Wind Convergence Observed by QuikSCAT – Liu, Xie, & Tang (NASA JPL)

    A few key notes from their article:

    1. “The high accuracy and resolution made it possible to compute derivative quantities, such as atmospheric wind convergence, which reveal details of the atmospheric processes [Liu, 2002].”

    2. “Near the equator, -∂u/∂x is much smaller than -∂v/∂y; wind convergence is dominated by the meridional gradient of the meridional wind component.”

    3. “[…] wind divergence is in quadrature rather than in phase with SST.”

    4. “The vertical mixing mechanism appears to be applicable over a broad spectrum of temporal and spatial scales over different regions.”
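Point 2 from the Liu, Xie & Tang article can be checked numerically with central differences on a gridded wind field. The sketch below (Python/NumPy) uses a toy field in which the meridional wind v converges on the equator while the zonal wind u is uniform; the field and grid spacing are illustrative assumptions, not QuikSCAT data:

```python
import numpy as np

# Toy wind field on a 1-degree mesh (~111 km spacing).
dy = dx = 111e3                        # grid spacing in metres
y = np.linspace(-10, 10, 21) * 111e3   # metres from the equator
u = np.full((21, 21), 5.0)             # uniform zonal wind, so du/dx = 0
v = np.tile(-0.5e-5 * y[:, None], (1, 21))  # v converging on the equator

# Central-difference wind convergence: -(du/dx + dv/dy)
conv = -(np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=0))
print(conv.mean() > 0)  # net convergence, set entirely by -dv/dy here
```

With u uniform, the zonal term vanishes and the convergence is dominated by the meridional gradient of the meridional wind, the situation the article describes near the equator.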

    There’s scope for trivial extension of models of global annual average surface temperatures to differentiate spatial pattern according to simple systematic geometric criteria.

  42. Paul Vaughan says:

    As Ned pointed out on a parallel thread, they’re failing the tests on differentiating fringe from fact and fantasy:
    https://tallbloke.wordpress.com/2016/09/22/when-lukewarmists-attack/#comment-120009

    Here’s a fun one…

    Did anyone notice what happened to the QBO when the financial polarity of the world flipped in 2008?

    Keeping dark red spirits up on a fanatical high:

    (sarc) Capitalism caused an abrupt pole-to-pole change to Earth’s circulatory architecture in 2008! (/sarc)

    I’ve issued a fun test on Suggestions-21. I’ve posted 2 ozone illustrations, one of which is seriously wrong. My hope is that people will look at the 2 illustrations and realize whether or not they know how to check which is wrong.

    There’s nothing nefarious or underhanded going on. I’m telling everyone upfront and clearly that there’s a serious error in one of the 2 OZONE illustrations.

    Later on I’ll clarify.

    I’m also hoping Ned and/or Karl might comment (venturing a bit afield) on the role of chemical composition in spatial variations around temporal central limits.

    I’ve illustrated the El Nino Modoki BDO (bidecadal oscillation) on Suggestions-21. I’ve also found something very interesting in the East Equatorial Pacific at a node in the global circulation. It’s a linear feature (at the scale of the record) that does not match ln CO2. When time permits I’ll check the amplitude against orbital parameters. My instinct is that for an asymmetry line to be that straight (and not at all curved), it has to be shaped by something long slow and orbital. If I had to guess I would guess precession (ITCZ hydrology), but I want to be clear that it will be weeks if not many months before I have time to run the needed careful diagnostics. I’m noting it here now so that the community will be aware that there exists this linear feature awaiting careful attention and diagnostics. (sarc) Why taxpayer-funded government employees aren’t directed by their bosses to explore with integrity such features, nobody knows… (/sarc)

  43. Brett Keane says:

    All the radiation-based hypotheses advanced seem to be ignorant of cause and effect. Fairly basic physics seems to me to hold that it first requires sufficient work energy on molecules to make them vibrate faster. This may be called sensible heat, measured as temperature. But it is raised kinetic energy, polar matter moving faster through magnetic fields, which, measured as T, causes EMF, i.e. radiation, primally. Not the other way round. I see their ‘confusion’ starting there. Am I wrong?

  44. Brett,

    The confusion arises because the radiation-only scenario which underlies the S-B equation does not recognise that radiation and conduction are mutually exclusive.

    If a molecule has kinetic energy that gives it a temperature of 288 K (Earth’s surface) then it can only radiate at 288 K to space if there is no atmosphere in the way, i.e. no conduction.

    Incoming sunlight would only permit a surface temperature of 255 K if there were no atmosphere.
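
    For reference, the 255 K figure above is the conventional Stefan–Boltzmann effective temperature for an atmosphere-free Earth. A minimal sketch of that textbook calculation (the solar-constant and albedo values are standard assumptions, not numbers from this thread):

```python
# Standard effective-temperature calculation for an airless Earth.
# Assumed textbook inputs: solar constant and Bond albedo.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0              # solar constant at Earth, W m^-2 (assumed)
ALBEDO = 0.3            # Bond albedo (assumed)

# Absorbed flux spread over the full sphere (factor 4 = area ratio
# of a sphere to its cross-sectional disc).
absorbed = S * (1 - ALBEDO) / 4
T_eff = (absorbed / SIGMA) ** 0.25
print(round(T_eff, 1))  # ~254.6 K, conventionally rounded to 255 K
```

    Whether that flux-averaged number is physically meaningful for a real, unevenly heated sphere is exactly the point disputed later in this thread.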

    AGW proponents say that the ‘extra’ 33 K is due to infra-red radiation returning to the surface from the atmosphere, but that is wrong.

    That ‘extra’ 33 K is due to recovery of kinetic energy from convective available potential energy (CAPE) within descending columns of air, which arise inevitably from convective overturning.

    Due to the lapse rate slope, the only place where the atmosphere releases that ‘extra’ 33 K via radiation is in the lowest layer of molecules at or just above the surface. Above that point the air temperatures are lower, so that higher air cannot effectively either warm the surface or reduce its rate of cooling. The much lower density of the air above the surface, as compared to the density of the surface, prevents any significant thermal effect at the surface from any radiation emanating downwards from the relatively cold air off the surface.

    A hot dense solid surface will always radiate (or conduct) upwards many orders of magnitude more energy than a cold thin atmosphere could ever radiate (or conduct) downwards. We could never observe the minute difference to the rate of radiative surface cooling that such weak downward infra-red radiation would achieve.

    Only recovered KE from CAPE can do the job.

    There is a specific relevant flaw in the famous Trenberth energy budget cartoon which I have drawn attention to elsewhere but that can await another day.

    No one seems to have any problem with the idea that the temperature at the Earth’s centre is way higher than 288 K yet the surface only reaches 288 K. The amount of energy that leaks out to the air is minuscule.

    The reason is that conduction is going on between the Earth’s centre and the Earth’s surface and that conduction fuels convective overturning within the Earth’s mantle. The ‘extra’ radiative energy generated at the centre never gets out since it is being diverted into conduction and convective overturning.

    It is exactly the same for the mass of an atmosphere around a rocky planet. That mass absorbs energy from the surface and diverts it into convective overturning so that it does not escape to space. It is well established that solids under enough pressure will exhibit liquid characteristics and start to flow as a result of density differentials. So it is within the Earth.

    It is all a mass-based phenomenon, whether that mass is the solid material (with liquid characteristics) around the Earth’s core or the gaseous material within the atmosphere.

    No essential difference.

    Does anyone suggest that the Earth’s high core temperature arises because the material further out radiates back into the centre?

    No, the Earth’s core temperature arises because solid matter convecting back towards the centre converts CAPE back to KE just as within the atmosphere. The only difference is one of scale such that the extremely high density of the mass within the Earth (due to its high pressure) reconverts prodigious amounts of CAPE back to KE between the top of the mantle and the core.

    For the Earth’s atmosphere the reconversion process is ‘worth’ only 33K due to the low density of air.

    I really don’t understand why the radiative proponents appear to have a blind spot on the issue of conduction and convection and the interplay with radiation, since it is all very simple, clear and established thermodynamics.

  45. Ned Nikolov says:

    Brett,

    Yes, on a global scale, the observed down-welling thermal IR radiation is actually a product of temperature, which in turn is a function of solar heating and atmospheric pressure. So, as a global average, the atmospheric long-wave (LW) radiation is actually a result of temperature, not a cause for it! This is not currently understood in climate science.

    Locally, however, there are situations where LW back radiation can and does influence surface temperature, such as the often-cited comparison of cloudy vs. cloudless sky at night. A cloudy sky tends to produce a warmer surface temperature at night, but if the clouds persist during the day, the daytime temperature will be lower, so that, in general, the average diurnal temperature would be cooler under a cloudy sky compared to a clear sky. The local effect of water-vapor LW emissivity on surface temperatures is well documented, but these local effects are a product of redistribution of energy in the atmosphere. On a global scale, the presence or absence of water vapor does not matter with respect to the mean planetary temperature. This conclusion follows from the results of our empirical analysis, which showed that, across a broad range of environments, the average planetary temperature is independent of atmospheric composition. This is a new insight to climate science!

    The radiative Greenhouse concept confuses cause and effect with respect to pressure and atmospheric IR optical depth. This confusion began in 1824 with Joseph Fourier’s conjecture that the atmosphere functioned as a blanket that slows down the Earth’s IR cooling to space. This conceptual model was never questioned and remains with us today. It has guided the way radiative transfer has been linked to convection in GCMs… This is a fascinating story of genuine confusion arising from what appears to be the ‘obvious’ – down-welling IR radiation is real, and it was a priori assumed to be the cause of the atmospheric thermal effect. However, oftentimes in science, causality that seems ‘obvious’ at first sight turns out to be false upon closer examination. Just as it was ‘obvious’ to people in the 13th century that Earth was at the center of the Universe (since all celestial objects appeared to revolve around us), there was a hidden reality to the atmospheric thermal effect that could not be understood by Fourier, Tyndall and Arrhenius in the 19th century. The hidden reality was that the atmosphere does not act as a blanket trapping thermal radiation, but actually enhances the energy received from the Sun through the force of pressure. This is like realizing that the Earth is not at the center of the Universe anymore … 🙂

  46. Ned Nikolov says:

    Stephen Wilde,

    Please note that the thermal effect of Earth’s atmosphere (a.k.a. the natural ‘greenhouse effect’) is 90.4 K, not 33 K! The 33 K estimate is based on a mathematically incorrect calculation. It’s a purely fictitious (non-physical) number … This is explained in great detail in our first paper:

    “On the average temperature of airless spherical bodies and the magnitude of Earth’s atmospheric thermal effect”:
    https://springerplus.springeropen.com/articles/10.1186/2193-1801-3-723

    It is time to retire the myth of the 33 K ‘greenhouse effect’ once and for all!

  47. Brett Keane says:

    Stephen, well yes, though I put that under ‘equipartition’ of transfer. The lack of power from DWLR may speak more to its being an effect than a cause. I point at basics, and note that transfer equations can be misapplied, particularly when gases are treated as blackbodies. I was just seeking the source of confusion. And yes, radiation only is senseless; it takes kilometers of uplift (SH, SE) to get through.
    With fluid gases, I really was asking: when radiation is an effect only (negative 4th power), does the equipartition of energy allow it a place at the table among jostling, conducting, expanding, self-uplifting gases? Lines of least resistance, buoyancy in a gravity well, sort of thing.

  48. Brett Keane says:

    I might add that if it were not for conduction and convection happening so many times faster, radiation would have a show. Just not in gases. It still has reasonable pace…

  49. I think we are all on the same page, save for some semantic issues which I should try to address.

    i) Note that I said this from 2008:

    http://climaterealists.com/index.php?id=1562

    “The warming effect is a single persistent phenomenon linked to the density of the atmosphere and not the composition. ”

    ii) It doesn’t matter to my description whether the surface thermal enhancement is 33 K or 90.4 K. The process remains the same either way.

    iii) Conduction and convection are way slower than radiation, hence the delay in energy transmission through the system when they occur and the consequent surface warming effect. The warming effect at the surface does not occur until the end of the first overturning cycle, when air previously lifted upwards returns to the surface.

    iv) Clouds are a bit of a red herring because they are liquid droplets floating in air rather than the air itself. They will therefore naturally radiate at their ambient temperature, which is always fixed by the temperature along the lapse rate slope at the height at which they float. In general the air is too thin, too cold and mostly comprised of non-radiative gases, so that any downward IR is far too small to make any real dent in the radiative flow from a hot solid surface. One can point to the gap in emissions from the atmosphere caused by CO2, but there is ample evidence that those wavelengths escape via a different route at a different wavelength after convective adjustments have occurred.

    v) Ned says:

    “the atmosphere does not act as a blanket trapping thermal radiation, but actually enhances the energy received from the Sun through the force of pressure.”

    which is true (though it is achieved by delaying the escape of some of the sun’s energy), but one can go further by saying that any gaseous body floating in space will enhance energy received from any source, including its own internal radioactive decay and/or collisional friction and/or energy from a nearby sun or even starlight. Once convective activity begins, any energy involved in that activity ceases to be radiated out and instead accumulates as KE at the gravitational centre.
    That is how stars form from nebulous gases in the emptiness of space, and the greater the pressure of mass becomes at the centre, the more effectively conduction and convection take over from radiation.

  50. Brett asked:

    “With fluid gases, I really was asking, when radiation is an effect only, negative 4th power, does the equipartition of energy allow it a place at the table among jostling, conducting, expanding, self-uplifting gases, Lines of least resistance, buoyancy in a gravity well sort of thing.”

    I’d say that once hydrostatic equilibrium is reached then the continuing flow of radiation into an atmosphere can have no further effect because at that point radiation in equals radiation out.

    That leaves radiation buzzing about within the atmosphere as a mere side effect of the division of the atmosphere’s internal energy between KE and CAPE the balance of which is highly variable from place to place as a result of the complexity of the convective circulation.

  51. Roger Clague says:

    Ned Nikolov says:

    “Our empirical model, however, suggests that pressure directly impacts the surface temperature through added force (by definition, pressure is a force applied over a unit area).”
    and
    “The results from our empirical data analysis suggest that the thermal effect of the atmosphere is analogous to a compression heating found in diesel engines except that it is caused by gravity”

    From this I understand your theory to be:
    1. gravity(g) causes air pressure(p)
    2. air pressure causes surface temperature( T)
    g>p>T

    1. Do I understand you correctly?
    2. How does gravity cause surface air pressure?
    3. How does Earth’s gravity, which acts only in one direction, downwards, cause air pressure, which acts equally in all directions?
    4. Is not the internal energy of a gas caused by velocity of molecules, and pressure by change of velocity (Bernoulli, 1738)?

  52. Trick says:

    “It is time to retire the myth about the 33 K ‘greenhouse effect’ one and for all!”

    Ned – Not really, since there are two different baselines being measured. An airless spherical body such as the Moon has no greenhouse, so from that baseline a GHE of ~90.4 K (288 K – brightness 197.6 K) is OK relative to current Earth, as both are measured.

    The N2 Earth atmosphere has a greenhouse, so from that baseline a GHE of ~33 K (288 K – brightness 255 K) relative to current Earth is also OK, as it is measured. The writer cannot expect the reader to know and/or figure out which baseline is being written about, so the author must clearly state the basis.

    The difference in baselines is generated from the L&O Earth surface with 1 bar vs. the ~50% pulverized, no-ocean, airless 0 bar surface of the spherical-body Moon.
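
    The two ‘greenhouse effect’ figures being contrasted here differ only in which baseline is subtracted from the observed 288 K. A sketch of the arithmetic, using the temperatures quoted in this comment:

```python
# Two 'greenhouse effect' baselines, temperatures as quoted above.
T_surface = 288.0   # observed mean surface temperature, K
T_airless = 197.6   # airless-body brightness temperature (N&Z baseline), K
T_classic = 255.0   # classical effective (brightness) temperature, K

ghe_vs_airless = T_surface - T_airless   # airless-Moon baseline
ghe_vs_classic = T_surface - T_classic   # classical baseline
print(round(ghe_vs_airless, 1), ghe_vs_classic)  # -> 90.4 33.0
```

    The numbers are not in conflict; they answer different questions, which is why stating the baseline matters.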

  53. Brett Keane says:

    Thanks Stephen, also Ned and Trick. All good comment.

  54. Paul Vaughan says:

    Quoting from N&Z 2016:

    “[…] complex natural systems consisting of myriad interacting agents have been known to exhibit emergent responses at higher levels of hierarchical organization that are amenable to accurate modeling using top-down statistical approaches (e.g. Stolk et al. 2003).”

    A property is not “emergent” from lower scales if it derives from a constraint (or limit) existing at a higher scale.

    Reductionism is on a simplicity continuum — not a scale continuum — and is not in tension with holism.

    Use of the term “emergent” has become romanticized. The term is being applied in contexts where it is not technically correct.

    If that’s too philosophical, a simple example:

    The aggregate shape of water flow in a copper pipe — whether turbulent or laminar — is not most simply described by a model of copper molecules but rather by the shape of the pipe, which is an aggregate geometric constraint imposed at the scale of the pipe.

    Stubborn aversion to the philosophical attention deserved by aggregation criteria is widespread, and we remain blinkered by such ignorance. Speculation: our perceptual potential goes orders of magnitude beyond today’s culturally assumed limits.

    Can upscale geometric constraints emerge from downscale properties? Yes, but it’s also possible to have simple upscale constraint that does not emerge from downscale properties.

    When can we sensibly assume an upscale constraint actually is emergent from downscale properties? (certainly not in the copper pipe example) How can we prove emergence empirically? (I’m raising these questions, not answering them.)

    When we detect a simple aggregate constraint, we may have insufficient information and/or depth of awareness to determine whether it’s an emergent property.

  55. erl happ says:

    Ned, you wrote re “the often cited comparison of cloudy vs. cloudless sky at night. A cloudy sky tends to produce a warmer surface temperature at night, but if the clouds persist during the day, the daytime temperature will be lower”.

    Let’s remember that air must have moisture to produce cloud. In the mid-latitudes, clear sky turns cloudy when a warm moist air mass arrives from equatorial regions. The warmth is not due to the presence of cloud. It’s due to the origin of the air.

    If the air were static and never moved then night time cooling would produce cloud and daytime warming a clear sky. I wait for someone to demonstrate that, in still air, the rate of cooling overnight falls away as the cloud appears.

    Cloud is water vapour in liquid form. Water in the gaseous form also absorbs infrared. Does a sky that has the same absolute quantity of water per cubic metre in the wholly gaseous form differ in its absorptive and radiative properties from a sky that has an equal quantity of water partitioned between the gaseous and the liquid forms?

    There is no doubt about the reflective, surface temperature reducing effect of cloud during the day. That is responsible for the lower temperature of the Earth as a whole when it is closest to the sun in January.

  56. erl happ says:

    Is this graph not sufficient evidence that the supposed CO2 driven greenhouse effect is non operational.

    https://i2.wp.com/reality348.files.wordpress.com/2016/09/sst-jan-and-june.jpg?ssl=1&w=450

  57. oldmanK says:

    I have been following this, attempting to understand the issues (I not being conversant with these things), but something Erl said hit a nerve.

    “Lets remember that air must have moisture to produce cloud. In the mid latitudes,clear sky turns cloudy when a warm moist air mass arrives from equatorial regions. The warmth is not due to the presence of cloud. Its due to the origin of the air.”

    Moisture is like a thermal conveyor belt, absorbing a lot of heat to evaporate and releasing it to condense, twice over if it freezes to snow. The phase change accounts for a lot of heat energy.

    Question: how does that figure in all the above?

  58. ren says:

    Erl, it is really not the clouds themselves; the origin of the air is more important. Air from the equator carries warm water vapor to high latitudes, owing to the different densities of dry and moist air. In a word, the circulation is what matters, and it is also affected by ozone above the polar circle.

  59. Paul Vaughan says:

    oldmanK, good work. You’re cluing in to the problem with cloud narratives. For lots of places on Earth, clouds in winter mean MUCH warmer and clear skies mean BIG temperature drops. (Are people really not aware of this???) I did animations years ago and recently I brought this up again with illustrations on Suggestions-21. There are too many people who want to pretend temperature = energy. It rouses a lot of suspicion… One of the highlights of recent months of climate discussion was one of the ring leaders over at wuwt admitting ignorance of advection. (Like wtf???) It was the same ring “leader” who thought maximum equatorial insolation always occurs in June going way back on the Milankovitch timescale. (Incompetence to the power infinity…)

  60. ren says:

    The density of water vapor relative to air amounts to 0.594.
    Normal conditions here mean temp. = 273.15 K and pressure of 1013.25 hPa = 1 atm.

  61. ren says:

    Ozone density relative to air under normal conditions amounts to 1.710

    62. […] the use of a pseudonym is discussed here, after sharp criticism came from, among others, WUWT here. That the researcher publishes under a pseudonym is […]

  63. Paul Vaughan says:

    Normally I ignore all climate discussion comments when people start going on and on and on and on and on about radiation because I established years ago that they were falsely assuming uniformity.

    I’ve never looked at this before right now:
    https://tallbloke.files.wordpress.com/2016/09/ge_magnitude_volokin_rellez_2193-1801-3-723_springerplus_2014.pdf

    I got to the section on Hölder’s Inequality and realized, wow, there’s someone else who noticed the false uniformity assumption. I’ve always wondered why people were going on and on and on and on in these endless debates about radiation that illuminate nothing, because no one ever stops to correct the false assumptions upon which they’re based.

    I’ve always regarded all discussion of radiation based on false assumptions as a red herring used to divert people away from serious spatiotemporal climate exploration. Given how thick was the filter with which I blocked my attention to what looked like a massive sea of no signal, I’m finding it interesting that I actually decided to look at this paper. It reminds me of when Tomas Milanovic managed to acquire my attention when absolutely no one else was being sensible about chaos.

    The Talkshop is a place where minds open and Tallbloke deserves credit for weathering all the campaigns to beat minds strictly closed.

    The false uniformity assumption and the false assumption that temporal chaos equates to spatiotemporal chaos are being used by sites like wuwt to wage a campaign of dark red corruption.

  64. Paul Vaughan says:

    Harry, what are the ratios supposed to be according to conventional mainstream theory?


    at 1000 millibars (mb), T_earth=287.4 (K), T_venus=338.6, ratio=1.178
    at 900 mb, T_earth=281.7, T_venus=331.4, ratio=1.176
    at 800 mb, T_earth=275.5, T_venus=322.9, ratio=1.172
    at 700 mb, T_earth=268.6, T_venus=315.0, ratio=1.173
    at 600 mb, T_earth=260.8, T_venus=302.1, ratio=1.158
    at 500 mb, T_earth=251.9, T_venus=291.4, ratio=1.157
    at 400 mb, T_earth=241.4, T_venus=278.6, ratio=1.154
    at 300 mb, T_earth=228.6, T_venus=262.9, ratio=1.150
    at 200 mb, T_earth=211.6, T_venus=247.1, ratio=1.168
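
    The ratio column can be reproduced directly from the quoted temperature pairs; a quick consistency check (all values taken from the list above):

```python
# Recompute the Venus/Earth temperature ratio at each pressure level,
# using the temperature pairs quoted in the comment above.
levels = {  # pressure (mb): (T_earth, T_venus) in K
    1000: (287.4, 338.6), 900: (281.7, 331.4), 800: (275.5, 322.9),
    700: (268.6, 315.0), 600: (260.8, 302.1), 500: (251.9, 291.4),
    400: (241.4, 278.6), 300: (228.6, 262.9), 200: (211.6, 247.1),
}
for mb, (t_earth, t_venus) in levels.items():
    # Ratios cluster narrowly around 1.15-1.18, matching the posted column.
    print(mb, round(t_venus / t_earth, 3))
```

    The near-constancy of the ratio across pressure levels is the feature the comment is asking mainstream theory to account for.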

  65. I just got banned from Roy Spencer’s site for politely showing him to be wrong by referring him to a Met Office site about convection.

    Strange how someone so skilled and experienced in a highly specialised and narrow field can so thoroughly lose his grip on the basics of meteorology despite having issued publications and even taught various aspects of the subject.

    He seems wedded to the radiative exchange of energy and blind to the effects of conduction and convection.

  66. Paul Vaughan says:

    We’ve had some discussion of inequalities in the past.

    Reviewing link-trails from Hölder’s Inequality…
    https://en.wikipedia.org/wiki/H%C3%B6lder%27s_inequality

    …one might encounter Proof Without Words:

    “In mathematics, a proof without words is a proof of an identity or mathematical statement which can be demonstrated as self-evident by a diagram without any accompanying explanatory text. Such proofs can be considered more elegant than more formal and mathematically rigorous proofs due to their self-evident nature. When the diagram demonstrates a particular case of a general statement, to be a proof, it must be generalisable.”

    https://en.wikipedia.org/wiki/Proof_without_words

  67. ren says:

    Ned Nikolov, the average temperature at the equator of the Moon is very low, only 206 K. Is it really about 90 degrees less than the Earth’s?
    http://www.diviner.ucla.edu/science

  68. Blob says:

    1) The irradiation-temperature-pressure relationship amongst the various planets/moons with substantial atmospheres is very compelling.

    2) The 33 C “greenhouse effect” is an artifact of treating the earth as an evenly illuminated sphere and taking the wrong average.

    I’m not sure anyone has really figured out the first yet, but I expect it to be a very important hint. Regarding the second, this point is important to researchers outside the climate-change field, so it will eventually be widely accepted regardless of what those people do:

    “Because this emission scales as T^4, it can be demonstrated that the spatial average of the local equilibrium temperature (Eq. (1)) is necessarily smaller than the effective equilibrium temperature defined by Eq. (2). Rigorously, this stems from the fact that Teq,μ* is a concave function of the absorbed flux which, with the help of the Rogers–Hölder inequality (Rogers 1888; Hölder 1889), yields…”
    https://arxiv.org/pdf/1303.7079

    Thanks to everyone involved for pointing this stuff out.
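
    The averaging point quoted above can be checked numerically. Below is a sketch for an idealized nonrotating airless sphere with zero heat storage, in which each dayside point sits in local radiative equilibrium; the subsolar temperature is an assumed, roughly lunar value, not a figure from the quoted paper:

```python
# Idealized airless, nonrotating sphere in local radiative equilibrium:
# dayside T(mu) = T_ss * mu**0.25 (mu = cosine of the solar zenith angle),
# nightside T = 0.  On a sphere, surface area is uniform in mu, so the
# dayside mean reduces to a 1-D integral of mu**0.25 over [0, 1].
T_ss = 394.0  # assumed subsolar temperature (roughly lunar), K

n = 1_000_000
# Midpoint-rule integral of mu**0.25 on [0, 1]; the exact value is 4/5.
integral = sum(((i + 0.5) / n) ** 0.25 for i in range(n)) / n

T_mean = T_ss * integral / 2   # whole-sphere mean: 0.4 * T_ss, ~157.6 K
T_eff = T_ss / 4 ** 0.25       # flux-averaged 'effective' T, ~278.6 K

# Hölder/Jensen: the area-mean of T is strictly below the effective T.
assert T_mean < T_eff
print(round(T_mean, 1), round(T_eff, 1))
```

    Real airless bodies rotate and store heat in the regolith, which is why measured lunar baselines quoted in this thread (~197.6 K) sit above this idealized 157.6 K; but both lie far below the flux-averaged figure, and that gap is the inequality at issue.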

  69. Ned Nikolov says:

    Ren,

    The average temperature of the Moon’s equator is 213 K, not 206 K! The value shown on the Diviner webpage you quoted is a model projection, not derived from measurements. I know this because I personally interacted with the model’s author, Dr. Matt Siegler, a few years ago. He ran his model to estimate this value and placed it on the Diviner web page. However, his model was not as well tuned to the real Diviner data at that time as was the TWO model of Dr. Ashwin Vasavada at NASA. In our paper we used lunar equatorial temperature data from Vasavada et al. (2012). So, the 213 K estimate is much more accurate than the 206 K estimate …

    The average temperature at Earth’s equator is about 300 K, so the lunar equator is some 87 K cooler than Earth’s equator.

  70. ren says:

    Ned Nikolov, thank you very much. Can you explain the problem of the length of the day on the Earth vs. the Moon? How does it affect real measurements?

  71. Paul Vaughan says:

    I’m flagging up a few quotes from the articles worthy of careful consideration and exploration:

    “The non-radiative portion of Earth’s ATE is likely greater than 15.7 K in reality due to horizontal heat transports by oceanic and atmospheric currents not considered in our model.”

    “Nevertheless, important differences exist between Eq. (10a) and other simpler pressure-temperature relations. Thus, while the Poisson formula and the SB radiation law can mathematically be derived from ‘first principles’ and experimentally tested in a laboratory, Eq. (10a) could neither be analytically deduced from known physical laws nor accurately simulated in a small-scale experiment. This is because Eq. (10a) describes an emergent macro-level property of planetary atmospheres representing the net result of myriad process interactions and feedbacks operating in actual climate systems that are not readily computable using mechanistic (bottom-up) approaches found in climate models or fully reproducible in a laboratory setting.”

    “[…] the observed 0.8 K rise of Earth’s global temperature since 1880 is not captured by our model, since this warming was likely not a result of an increase in atmospheric pressure.”

    related reminder

    “The only significant forcing remaining in the present paleoclimatology toolbox to explicate the Pleistocene cycles are variations in greenhouse-gas concentrations” (assumptions)

    “According to the present understanding, the atmospheric pressure on Earth has remained nearly invariant during the Cenozoic era (i.e. past 65.5 My). However, this notion is primarily based on model analyses (e.g. Berner 2006), since there are currently no known geo-chemical proxies permitting an objective and accurate reconstruction of past atmospheric pressures in a manner similar to that provided by various temperature proxies such as isotopic oxygen 18, alkenones, TEX86, and deuterium in ice. The lack of independent pressure proxies makes the assumption of a constant atmospheric mass throughout the Cenozoic era a priori. Although this topic is beyond the scope of our present study, allowing for the theoretical possibility that atmospheric pressure on Earth might have varied significantly over the past 65.5 My could open exciting new research venues in Earth sciences in general and in paleoclimatology in particular.”

    I looked for one of the cited papers:
    “Discovery of emergent natural laws by hierarchical multi-agent systems”

    …and the search turned up literature emphasizing conflicted definitions of emergence:
    http://www.per.marine.csiro.au/staff/Fabio.Boschetti/papers/emergence_kes_final.pdf

    There’s too much quoted above for quick discussion.

    Cultural concern:

    The conventional and traditional approaches taken by physicists artificially restrict exploration to a subset of possibly existing features, whether natural or artificial.

    In an age of mass misinformation this artificial restriction (theoretical bubbles blown from strategic assumptions) is being applied as a convenient tool in political containment strategies.

    I’m impressed that Ned and Karl are raising so many philosophical issues.

    I await a response from Harry before proceeding any further on this line of consideration. If I do not hear back from him, I will probably completely drop this line of peripheral interest and go straight back to exploration of spatiotemporal pattern.

  72. oldmanK says:

    From PV’s quotes above – “The non-radiative portion of Earth’s ATE is likely greater than 15.7 K in reality due to horizontal heat transports by oceanic and atmospheric currents not considered in our model.” It is also missing the possibility of quite great ‘horizontal heat transport’ by moisture, which does not need any large pressure changes to occur – at constant temperature.

    I hark back to a paper by Raymo and Huybers, which is in a way a review of the many questions that have never been, or not yet been, resolved. It is interesting to read and a reminder of the real state of ‘climate science’.

    http://www.nature.com/nature/journal/v451/n7176/pdf/nature06589.pdf

    I point to fig. 1 in that paper, mainly the middle double curve of obliquity and insolation. And I point again: that is a false gospel. (For example: change obliquity to about 14.5 degrees during much of the High Holocene to see what effect it might have. Insolation at the tropics will be higher, with higher evaporation. In dry colder regions, like the polar regions, that moisture is dumped, giving rise to extended glaciation. Simultaneously the latent heat of the phase changes is released, resulting in warmer air: a warmer Holocene. How’s that; a new model from beyond the fringe.)

  73. Ned Nikolov says:

    oldmanK,

    The problem of explaining Pleistocene ice ages is quite interesting. The truth is that these are still a mystery to science. The current notion that changes in greenhouse gases are the primary driver of glacial-interglacial cycles is obviously incorrect. Our research has provided some fascinating new insights into this phenomenon, which we will discuss in future publications.

    One key conclusion we have reached is that orbital variations as currently understood based on the Milankovitch theory are NOT a direct cause for the ice-age cycles. These variations only have a weak correlation with glacial cycles…

  74. TLMango says:

    ” The truth is that these (ice ages) are still a mystery to science ”

    Ned,
    Your work is incredibly important for reaching an understanding of how the atmospheres of planets behave. And your approach, which compares one to another, is correct and necessary. But the behavior of an atmosphere is mostly an internal issue. A very, very complex internal issue. When outside forces come into play, all the rules go out the window.

    The most plausible explanation for recurring ice age cycles is the substantial loss of magnetic field strength. During an ice age there are 90,000 years when the earth is incapable of retaining its heat. This is something that cannot be adequately explained by the internal workings of our atmosphere.

    Ice ages are easily explained using no more than first-semester algebra. Please visit Weathercycles.wordpress, “Fibonacci and the climate”.

  75. oldmanK says:

    Ned Nikolov,

    TY for the reply. This subject, in its entirety, I find interesting and fascinating. However, my foray into ‘climate mechanisms’ came as a result of stumbling on a connecting factor from a different field. Ancient structures that functioned as calendars have recorded (as I have found from their dimensions) what the Earth’s obliquity was over their more than two thousand years of operation and design evolution, which, engineering-wise, is absolutely not by chance. Obliquity was not in the range given by the established “assumption” of ~22–24 degrees. The era is between 5k and 2k BCE, in the Holocene maximum. I have since found various tell-tales that corroborate this. So I say with conviction: do not assume Earth’s obliquity keeps to the ~22–24 deg limits.

    How that works out for the respective period, the Holocene, I gave my 2c worth as a ‘process’ guesstimate. For the Pleistocene it’s ???? – or any other period for that matter. What I can say, from observations in papers I read out of interest or curiosity, is that the long-period dynamic stability asserted by JN Stockwell (who worked out the 22–24 deg rule) does not exist.

  76. Paul Vaughan says:

    NN: “One key conclusion we have reached is that orbital variations as currently understood based on the Milankovitch theory are NOT a direct cause for the ice-age cycles. These variations only have a weak correlation with glacial cycles…”

    …and if the correlation was absolutely perfect antagonists would say correlation isn’t causation, it must be coincidence, etc.

    When there’s no correlation antagonists demand correlation …and when there’s correlation they dismiss it.

    Antagonists fool timid, gullible people by applying convenient double-standards.

    Devilishly ruthless noble-cause political activism doesn’t go hand-in-hand with integrity and sound interpretation.

    What has come to light is open, strong, fundamental disagreement on the definition of Milankovitch.

    Diagnostically this provided critical distilling insight….

    Defining “the climate system” as a global surface average over an orbital period deliberately (flash!) ignores geometry and flow shape. A sensible definition of Milankovitch includes both insolation gradients and land-ocean geometry changes (including those with land depression and rebound and sea level change).

    One cannot refute gravity as the cause of a river’s flow because of spatial MEANDERS. The flow shape is NOT static and the runway is NOT homogeneous. That means that spatially and latently aggregated and spatially and latently aliased MULTIvariate measures FLAP about any temporal attractor.

    Insisting that the river flow in a straight line is just a provocation.

    The lights are on and it’s easy to see wuwt’s up.

  77. p.g.sharrow says:

    The one variable no one seems to consider in warm age / ice age is density altitude. A rather small change in sea-level barometric pressure, and therefore surface air density, could trigger changes in snow levels and albedo. Solar cycles change both volcanic and solar-storm activity, which add to and strip the planetary atmosphere. The temperature of both land and sea is controlled by the pressure of the atmosphere on the refrigerant, water, in this global air-conditioner. ..pg

  78. oldmanK says:

    pg sharrow said: “The temperature of both land and sea is controlled by the pressure of the atmosphere on the refrigerant, water, in this global air-conditioner.”

    Exactly. The natural power cycle.

  79. Brett Keane says:

    Ned et al., recently warmists have seized on a ‘spectral emittance’ temperature as a signal of proof for DWLR GH warming. I tell them heating only comes from work done, and a weaker body cannot do effective work on a more energetic one. So the freedom of all matter to emit is irrelevant in this case. Do I have it right?

  80. Paul Vaughan says:

    Another misunderstanding?

    The key word there is “direct”.

    Coupled internal variables correlate tightly with one another. Pressure-centric “internal” reframing doesn’t change the underlying potential, which is a function of spatiotemporal insolation pattern (including gradients).

    The utility would be the constraint itself. It’s a statement about spatial/latent partitioning.

    I can see where this is going. A few luminaries will manage to stay focused on the utility of the constraint, but ignorant misinterpretations and false conclusions will flood the discussion waves, further clarifying the low quality of participants.

  81. It isn’t enough to note that a rocky planet with a tangible atmosphere is currently a blackbody and, in light of that, to apply the raw S-B equation.

    One must first realise that whilst the atmosphere was being formed the planet was a greybody, to which the S-B equation does not hold good, due to emissions being less than insolation. Thus additional surface energy accumulated during that period.

    Conduction from surface to air during the first convective cycle reduced surface temperature so that less was emitted to space than was received. On completion of that first cycle the necessary energy for convection no longer needed to be taken from incoming solar energy since it had by then accumulated at the surface during the greybody phase. At that point the planet reverts to blackbody status.

    That additional energy then remains at the surface for as long as convective overturning continues. In that situation the S-B equation underestimates surface temperature.

    The only necessary variables are insolation and atmospheric mass (leading to pressure) assuming a given strength of gravitational field.

    The S-B equation should not be applied unless the planet has always been a blackbody and that is not the case for a planet with a convecting atmosphere. The greybody period puts the surface thermal enhancement in place and convection keeps it there.

  82. Kristian says:

    Stephen Wilde says, October 2, 2016 at 11:05 am:

    One must first realise that whilst the atmosphere was being formed the planet was a greybody, to which the S-B equation does not hold good, due to emissions being less than insolation. Thus additional surface energy accumulated during that period.

    This circumstance hasn’t got anything to do with the application of the S-B equation, Stephen.

    The S-B law simply states that a blackbody surface emits a radiant flux (to surroundings at absolute zero) according to its temperature in the following way: P/A = σ T^4. This is a universal law, Stephen. It applies everywhere and at any time.

    However, once the blackbody’s surroundings are no longer at absolute zero, you need to take them into account when calculating the radiant flux leaving the blackbody surface. In other words, you can no longer look at the temperature of the blackbody alone and deduce its emission flux. You will have to determine the effective temperature DIFFERENCE between the blackbody (bb) and its surroundings (srr): P/A = σ (T_bb^4 – T_srr^4).

    The fact that Earth’s surface can never reach a purely RADIATIVE equilibrium with the solar input, because it also loses heat via conduction and evaporation, doesn’t mean that the S-B law somehow doesn’t work or apply. You simply need to understand how to apply it.
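    The two forms of the law quoted above can be checked numerically. A minimal sketch (the 288 K and 255 K values are illustrative choices, not taken from the comment):

```python
# Stefan-Boltzmann radiant flux, with and without non-zero surroundings:
# P/A = sigma * (T_bb^4 - T_srr^4)
SIGMA = 5.670374419e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def radiant_flux(t_bb, t_srr=0.0):
    """Net radiant flux (W/m^2) from a blackbody at t_bb into surroundings at t_srr."""
    return SIGMA * (t_bb ** 4 - t_srr ** 4)

# Surroundings at absolute zero: emission depends on the blackbody temperature alone.
print(round(radiant_flux(288.0)))         # 390 W/m^2
# Surroundings at 255 K: the net flux is much smaller.
print(round(radiant_flux(288.0, 255.0)))  # 150 W/m^2
```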

    * * *

    Stephen Wilde says, September 27, 2016 at 5:08 am:

    That ‘extra’ 33k is due to recovery of kinetic energy from convective available potential energy within descending columns of air which arise inevitably from convective overturning.

    And,

    Only recovered KE from CAPE can do the job.

    And,

    No, the Earth’s core temperature arises because solid matter convecting back towards the centre converts CAPE back to KE just as within the atmosphere. The only difference is one of scale such that the extremely high density of the mass within the Earth (due to its high pressure) reconverts prodigious amounts of CAPE back to KE between the top of the mantle and the core.

    What is this “reconverting CAPE back to KE” nonsense of yours!? You should realise that “CAPE” isn’t at all what you seem to think it is …

    http://www.atmo.arizona.edu/students/courselinks/fall10/atmo551a/CAPE.doc

    http://ww2010.atmos.uiuc.edu/(Gh)/guides/mtr/svr/modl/fcst/params/cape.rxml

    https://en.wikipedia.org/wiki/Convective_available_potential_energy

    CAPE isn’t heat from the surface converted into PE inside the air as it ascends, somehow making the air cool “adiabatically”, as you think. And so it cannot be reconverted from PE into KE as the air descends either.

    You have completely misunderstood this meteorological concept, Stephen.

    CAPE simply describes an air parcel’s lifting potential relative to its surroundings. You increase the air parcel’s CAPE either by i) cooling the air above the parcel, by ii) letting the surface transfer more energy as heat to the parcel, thus warming it some more, and/or by iii) letting the surface transfer more water vapour to the parcel, thus increasing its “latent heat” content.

    CAPE is a term used by meteorologists to assess the degree of stability/instability of air masses. It’s got nothing to do with the adiabatic cooling or warming of an air parcel as it rises or falls through the tropospheric column. That happens through so-called “PV work”.

  83. oldbrew says:

    Leading Climate Sensitivity Scientist “Admits Mathematical Errors in The AGW Theory”!

    Dr. Robert D. Cess admits mathematical errors in the AGW theory of the IPCC
    – by Kyoji Kimoto

    http://notrickszone.com/2016/10/02/leading-climate-sensitivity-scientist-admits-mathematical-errors-in-the-agw-theory/

  84. Kristian:

    From one of your links:

    “In meteorology, convective available potential energy (CAPE),[1] sometimes, simply, available potential energy (APE), is the amount of energy a parcel of air would have if lifted a certain distance vertically through the atmosphere.”

    Once the lifting has taken place, the amount of energy in potential form is CAPE, because it can be reused in descent. Adiabatic processes are fully reversible, and it was derived from surface KE, because ultimately surface KE causes all uplift in an atmosphere. The other factors you mention are indeed relevant in calculating in advance how far the air will rise and how much potential energy will be realised, but that does not detract from the point.

    As regards blackbodies and the S-B equation, you have ignored the point that a planet is a greybody whilst its atmosphere is forming. That renders the S-B equation an unreliable indicator of surface temperature.

  85. Kristian says:

    Stephen Wilde says, October 2, 2016 at 6:16 pm:

    From one of your links:

    “In meteorology, convective available potential energy (CAPE),[1] sometimes, simply, available potential energy (APE), is the amount of energy a parcel of air would have if lifted a certain distance vertically through the atmosphere.”

    Once the lifting has taken place the amount of energy in potential form is CAPE because it can be reused in descent.

    Stephen, give it up. CAPE isn’t what you and your deluded mind believe it to be. It isn’t PE transformed from KE and back as a parcel of air ascends and then descends adiabatically through the tropospheric column. Plain and simple. Stop with your quote mining to find snippets that might somehow appear consistent with your view. Read the links I provided in their entirety, and you will see how deep into La La Land you are.

    Did you even read your quote? “(…) the amount of energy a parcel of air WOULD HAVE IF lifted a certain distance vertically through the atmosphere.” The article goes on (next couple of sentences): “CAPE is effectively the positive buoyancy of an air parcel and is an indicator of atmospheric instability, which makes it very valuable in predicting severe weather. It is a form of fluid instability found in thermally stratified atmospheres in which a colder fluid overlies a warmer one.”

    Read this:
    http://www.theweatherprediction.com/habyhints/305/

    From the link: “1. What is CAPE?

    CAPE (Convective Available Potential Energy) is the integration of the positive area on a Skew-T sounding. The positive area is that region where the theoretical parcel temperature is warmer than the actual temperature at each pressure level in the troposphere. The theoretical parcel temperature is the lapse rate(s) a parcel would take if raised from the lower PBL.

    2. How is CAPE determined?

    The positive area on a sounding is proportional to the amount of CAPE. The higher the positive area, the higher the CAPE. The positive area is that area where the parcel sounding is to the right (warmer) than the environmental sounding. The units of CAPE are Joules per kilogram (energy per unit mass). (…)

    3. Operational significance of CAPE:

    CAPE (J/kg)
    1–1,500: Positive
    1,500–2,500: Large
    2,500+: Extreme

    High CAPE means storms will build vertically very quickly. The updraft speed depends on the CAPE environment.

    (…)

    4. Pitfalls:

    a. Storms will only form and the CAPE actualized if the low level capping inversion is broken.

    b. CAPE magnitude can rise or fall very rapidly across time and space.”
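    The “positive area” described above is, formally, the integral CAPE = ∫ g (T_parcel − T_env)/T_env dz, taken where the parcel is warmer than its environment. A minimal numerical sketch (the two temperature profiles below are invented purely for illustration):

```python
# Numerical sketch of the CAPE integral over the "positive area":
# CAPE = integral of g * (T_parcel - T_env) / T_env dz, where T_parcel > T_env.
G = 9.81  # m/s^2

def cape(heights, t_parcel, t_env):
    """Trapezoidal estimate of CAPE (J/kg) from height levels (m) and temperatures (K)."""
    total = 0.0
    for i in range(len(heights) - 1):
        b0 = G * max(t_parcel[i] - t_env[i], 0.0) / t_env[i]
        b1 = G * max(t_parcel[i + 1] - t_env[i + 1], 0.0) / t_env[i + 1]
        total += 0.5 * (b0 + b1) * (heights[i + 1] - heights[i])
    return total

z  = [0, 2000, 4000, 6000, 8000]          # height levels, m (synthetic)
tp = [300.0, 288.0, 276.0, 264.0, 252.0]  # parcel temperatures, K (synthetic)
te = [298.0, 284.0, 272.0, 262.0, 252.0]  # environment temperatures, K (synthetic)
print(round(cape(z, tp, te)))  # a few hundred J/kg: "Positive" on the table above
```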

  86. Here is a reasonable description of the interaction between diabatic and adiabatic processes

    http://www.theweatherprediction.com/habyhints/33/

    Cooling and warming are adiabatic only to the extent that they are related to the change in height and nothing else.

    They are diabatic if related to an interaction with other molecules or the result of radiative activity.

    All ascent and descent is a mixture of the two.

    In my comments I am referring only to the adiabatic portion.

    The adiabatic portion is what gives rise to the creation of potential energy from kinetic energy and the reverse in descent.

    No energy is gained or lost in adiabatic ascent and descent, it just changes form.

    Useful example of the effect of the conversion process in tropical instability here:

    http://www.asp.ucar.edu/colloquium/2001/tomas01.pdf

  87. Kristian says:

    CAPE and CIN
    http://thevane.gawker.com/nerdin-it-up-how-to-find-instability-by-hand-on-a-skew-1706378907

    “CAPE is an acronym for Convective Available Potential Energy, or the fuel that feeds thunderstorms their energy. CAPE is measured in joules per kilogram (j/kg), and higher levels of CAPE equate to stronger updrafts and more intense thunderstorms.

    CIN is short for “Convective Inhibition,” or the opposite of CAPE. CIN is an area of the atmosphere where the environment is warmer than the parcels of rising air, preventing the air from rising beyond the area of CIN. This is usually called a “cap,” because it caps the atmosphere like a ceiling, preventing air from rising beyond the warm layer.

    Surface temperatures can rise sufficiently through the day that rising air can surpass this inversion, “breaking the cap.” Thunderstorms can turn severe in a hurry if rising air can break the cap on a high CAPE day.”

  88. Kristian,

    Then just call the energy at the top of a convective column potential energy. Doesn’t matter to me.

    All molecules in an atmosphere contain the same amount of energy on average.

    At the base it is all KE, at the top it is nearly all PE. It cannot all be PE at the top because the temperature of space is above absolute zero so there is still some KE even up there.

    The purely adiabatic portion of convective overturning just involves conversion of KE to PE in ascent and PE to KE in descent.

    I prefer the term ‘convectively available potential energy’ to distinguish it from the more normal scenario where convection is not involved i.e. the fall of a solid object from height.

    Convection differs because it involves compression and decompression which does not occur with a solid object.

  89. Kristian,

    In your post at 7.13 you are just looking at one limited aspect of CAPE.

    Once the predicted uplift has occurred you then have the predicted quantity of CAPE at the top of the convective column. KE drives the ascent and CAPE is the final product.

    It is then available to be used in the subsequent descent, just as I keep telling you. During the descent that quantity of CAPE predicts the amount of KE that can be derived from it during the descent.

    Just calm down and think 🙂

  90. Kristian says:

    Stephen Wilde says, October 2, 2016 at 7:17 pm:

    All molecules in an atmosphere contain the same amount of energy on average.

    No, they absolutely do not. This is what you still do not understand. The BULK AIR does, each individual molecule doesn’t. If you do the math, you will see that this is unequivocally the case. Each individual molecule does not hold any gravitational PE. By definition. Gravitational PE is specifically tied to the air parcel as a whole, as a ‘system’. And only that. Go read a book on this subject, Stephen. Because you REALLY have no grasp of it …

    Adiabatic cooling is NOT about converting internal (molecular) KE to gravitational PE. It’s about draining the rising – and thus expanding – air parcel of internal molecular KE as the parcel does PV work on its surroundings. Anyone with just a little bit of knowledge in thermodynamics, meteorology and/or atmospheric physics knows this full well. You learn it on the 101 level. The 1st Law of Thermodynamics. Read up. For your own sake.

  91. Kristian says:

    I know you don’t care at all that you blatantly misrepresent well-established scientific concepts, Stephen. Like CAPE. Like adiabatic cooling and warming. I know you will just dig your heels in and double down on your delusions.

    I’m just trying to let other people know how utterly confused you are on these matters.

  92. Ned Nikolov says:

    Kristian,

    You are absolutely right about adiabatic cooling … It occurs when an air parcel moves up in height, where the surrounding pressure is lower. As the pressure (i.e. the force per unit area acting on the parcel) drops, so do the internal energy and temperature of the parcel. Adiabatic heating and cooling are all about changes in the internal energy and temperature of an air parcel as a result of changes in the force (pressure) applied to the parcel.
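    For a dry parcel, this pressure-temperature link is usually written as Poisson's relation, T2 = T1 (p2/p1)^(R_d/c_p). A minimal sketch (the 288 K / 1000 hPa starting point is an illustrative assumption):

```python
# Poisson's relation for dry-adiabatic ascent: T2 = T1 * (p2/p1)**(R_d/c_p)
R_D = 287.05       # J/(kg K), gas constant for dry air
C_P = 1004.0       # J/(kg K), specific heat at constant pressure
KAPPA = R_D / C_P  # ~0.286

def dry_adiabatic_temp(t1, p1, p2):
    """Parcel temperature (K) after an adiabatic pressure change p1 -> p2 (same units)."""
    return t1 * (p2 / p1) ** KAPPA

# A 288 K parcel lifted from 1000 hPa to 700 hPa cools to about 260 K,
# i.e. roughly 28 K over ~3 km, consistent with the dry adiabatic lapse rate.
print(round(dry_adiabatic_temp(288.0, 1000.0, 700.0), 1))
```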

  93. gallopingcamel says:

    @Ned Nikolov,
    “Since it carries no force, the IR optical depth of an atmosphere cannot be a causative factor for temperature change! In my answer to Q6 I said:

    “In the real system, pressure controls temperature and these two in turn control the atmospheric optical depth while, in climate models, pressure along with greenhouse-gas concentrations control the atmospheric optical depth, which in turn controls temperature. This discrepancy has fundamental implications for projecting future climatic changes.”

    When we were discussing the temperature of an airless Earth impeccable mathematics produced a wide range of answers. Without mentioning names one scientist opined that the average temperature was 255 K while another said it was 154 K. Both of these estimates were hopelessly wrong and today we all agree that the correct answer is closer to 209 K.

    My point is that elegant mathematical analysis can produce wrong answers while grubby differential equation solvers (e.g. Finite Element Analysis) sometimes comes closer to observations. I am more of an engineer than a physicist so while I like “Elegant” I make “Sanity Checks” using more laborious methods.

    When atmospheric pressure is high enough, collision broadening ensures that IR emitted from a body’s surface cannot pass directly into space. Thus it is that tropospheres are essentially closed systems so at equilibrium it does not matter whether the heat was transported by conduction, convection or radiation. Thermodynamics rules!

    While Robinson and Catling employ complex calculations to model temperature gradients in planetary atmospheres their predictions within tropospheres exactly match what my high school physics teacher taught me sixty years ago. He used thermodynamics to predict the Dry Adiabatic Lapse Rate:
    DALR = -g/Cp
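    As a quick arithmetic check of that formula (standard values g ≈ 9.81 m/s² and c_p ≈ 1004 J/(kg·K) assumed), the magnitude of the DALR comes out near 9.8 K/km:

```python
# Magnitude of the dry adiabatic lapse rate: |DALR| = g / c_p
G = 9.81      # m/s^2, gravitational acceleration
C_P = 1004.0  # J/(kg K), specific heat of dry air at constant pressure

dalr_per_km = G / C_P * 1000.0  # convert K/m to K/km
print(round(dalr_per_km, 2))    # 9.77 K/km
```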

    IMHO the Robinson & Catling model is in close agreement with you but it is more general in its application. For over a year I have been trying to improve the R&C model to include cloud layers. Finite Element Analysis loves layers! Sadly, things are not going well and the grim reaper may get me before I can produce anything useful.

  94. gallopingcamel says:

    @ Ned Nikolov,
    “One key conclusion we have reached is that orbital variations as currently understood based on the Milankovitch theory are NOT a direct cause for the ice-age cycles. These variations only have a weak correlation with glacial cycles…”

    There is a nice poster on WUWT on this but I agree with you as this comment shows:
    https://wattsupwiththat.com/2016/09/29/earths-obliquity-and-temperature-over-the-last-20000-years/#comment-2310717

  95. dai davies says:

    Each individual molecule does not hold any gravitational PE.

    Kristian, I can’t let that pass without comment. Everything with mass in a gravitational field has gravitational potential. In the article brindabella.id.au/downloads/Lapse_Rates.html (or .pdf) I show a simple derivation of the abstract adiabatic lapse rate at a molecular level. It’s molecules that gravity acts on.

    The rising and falling parcel approach that is used confuses the issue, obscuring the fundamental simplicity of the role of gravity. Also, it doesn’t apply to an atmosphere with radiative gas components because they are radiating through the parcel boundaries. It assumes a zero net radiative transfer, which is not true if the parcel pressure and temperature don’t match the surrounding air. It may be near enough for meteorological thought experiments, but it’s a simplification. At least it doesn’t assume that the lapse rate is completely due to radiative gasses (RGs, not GHGs).

    Most of the radiative transfer in the atmosphere is gas collision induced. It locally short-circuits the underlying gravitational temperature gradient. Reducing that by 30% or more requires a massive amount of energy transfer. I’ve been meaning to try a simple rough estimate of that and compare it with surface-space energy flows. Intuition tells me they are a minor component.

    In their simple 101 form, the laws of thermodynamics don’t apply to RGs unless in an IR reflective container, which in accurate experiments they may be. They also don’t apply when there is a significant gravitational potential across the system. At lab dimensions it’s negligible.

  96. dai davies says:

    That should be brindabella.id.au/downloads/Lapse_Rates.html

  97. dai davies says:

    For someone with years of web coding experience I have a lot of trouble getting simple thing right with WordPress. Maybe this time:

    brindabella.id.au/downloads/Lapse_Rates.html

  98. dai davies says:

    Lots of interesting ideas floating here. Horizontal heat transport not only takes equatorial heat poleward, it allows time for radiative cooling to space, which negates the argument that upper troposphere radiation is weak because temperatures are low. Temporal transfer in ocean currents of up to a millennium, likewise. We’re receiving heat and CO2 from the MWP.

    Worth noting that IR also transports horizontally. This can have an impact on the role of clouds. Small scale patchy or striated cloud can be bypassed – IR leaking around them – so just looking at % cloud cover is not enough. As far as I can tell, modellers ignore that.

    The IR opacity of the atmosphere is not simple extinction. The energy soon moves on. All very rapid, so no significant ‘trapping’. The radiative energy transfer is effectively instantaneous. Even allowing for many steps along its path, the mean collision times of molecules, and a small upward bias due to the increasing radiative mean free path with altitude, total transfer takes place on millisecond timescales (per my rough calculation, but even if I’m 10X out it makes little difference).

    As for names, we live at a time when the use of pseudonyms is commonplace. People use them for many different reasons. In academic publishing it shouldn’t matter at all; it is perhaps even desirable. I thought the device of reversing the letters was neat and clearly demonstrated honest intentions. The big issue with academic publishing, parasitic publishers aside, is open, transparent reviewing.

    As an aside, I think it is unhelpful politically to refer to GHGs. That terminology is emotive and concedes incorrect thinking as does ‘climate sensitivity’ which assumes that radiative process dominate the energy dynamics of our water planet – something that the post article shows to be empirically wrong. This issue is, after all, primarily political, and those who are pushing the scare choose their wording carefully.

    Off to walk the dog and think. How much IR flux is involved in decreasing the lapse rate?

  99. Kristian,

    You have accepted CAPE as a calculation of the amount of PE that will be made available if a specific amount of buoyancy is allowed to run its course.

    The buoyancy is a result of kinetic energy lower down in the first place.

    Therefore you have already conceded the principle of conversion of KE to PE.

    Whether one calls that PE ‘gravitational PE’ or ‘available PE’ or simply ‘PE’ is irrelevant.

    I have referred you to a link about eddys in tropical air that goes into some detail about the conversion of KE to PE and back again.

    What goes up must come down so KE at the base fuels conversion of KE to PE on the way up and PE at the top fuels conversion of PE to KE on the way down.

    That refers only to the adiabatic component of uplift and descent. Radiation and conduction during the process are present but are diabatic processes and can more suitably be regarded as ‘leakage’ of energy from the adiabatic convective process.

    Nonetheless all movement in the vertical plane results in a temperature change (in the amount of KE), even without any diabatic component at all, by virtue of the Gas Laws. Adiabatic changes in temperature are linked ONLY to displacement in the vertical plane and nothing else. Such changes in temperature are fully reversible as between ascent and descent.

    There is really nothing else that I can say to you on that particular issue.

  100. Ned and Kristian,

    Be careful of the use of the term ‘internal energy’ when referring to gases.

    “In thermodynamics, the internal energy of a system is the energy contained within the system, including the kinetic and potential energy as a whole. It keeps account of the gains and losses of energy of the system that are due to changes in its internal state.[1][2]”

    from here:

    https://en.wikipedia.org/wiki/Internal_energy

    It includes BOTH KE AND PE.

    That is why a molecule at the base contains the same internal energy as a molecule at the top, but at the base it is in thermal form (heat, i.e. KE), while at the top it does not register as heat; rather, it is the amount of energy in non-thermal form (not heat, i.e. PE) required to account for the presence of the molecule at that height against the force of gravity.

  101. Also, note this:

    “Then the first law of thermodynamics states that the increase in internal energy is equal to the total heat added plus the work done on the system by its surroundings”

    The total heat added by conduction from the surface is the KE content.

    The work done in moving the molecule up to a given height against the force of gravity (and nothing else) is the PE content.

    Both added together are the total internal energy.

    The term ‘the surroundings’ requires clarification.

    If it refers to other gas molecules then that is a diabatic process (radiation or conduction) which does change total internal energy.

    If it refers simply to the gravitational field then that is an adiabatic process and that does not change total internal energy, it just changes the form of internal energy between KE and PE.

    All convection involves a mixture of the two but the basic adiabatic component is fixed by the amount of work required to cause movement with or against the gravitational field. The diabatic component is highly variable and depends on a multitude of other factors.

  102. Kristian says:

    dai davies says, October 3, 2016 at 5:50 am:

    “Each individual molecule does not hold any gravitational PE.”

    Kristian, I can’t let that pass without comment. Everything with mass in a gravitational field has gravitational potential.

    Sorry, no. The gravitational PE of a parcel of air is specifically NOT tied to its individual molecules, but to the parcel as a macroscopic system. Think of it this way: If you lift, say, a book up in the air, you have increased its gravitational PE, but you have NOT reduced its internal (molecular) kinetic energy at the expense of this gravitational PE, meaning the book hasn’t cooled. Rising air cools ONLY because it expands against an external pressure AS it moves upward, NOT simply because it moves upward.

    The PE of each individual molecule inside a real gas (an ideal gas only contains internal KE) is fully related to intra- and intermolecular forces, NOT to gravity.

  103. Kristian says:

    Try math, dai davies, and you will see that I’m right. Calculate the reduction in translational KE for a nitrogen molecule as it moves from the surface air layer to the tropopause air layer (from 288K to 210K). Then calculate the increase in the hypothetical gravitational PE for that same molecule as it climbs from 0 to 12,000 metres. Do the two numbers you get match? Does the drop in molecular KE equal the rise in gravitational PE? For any individual molecule …? No. There is no direct connection. You will have to look at a full air parcel to get a match. Because then you will include the changes in both molecular and bulk energies.

    But the drop in parcel temperature is ONLY related to its internal energy, that is, to its molecular translational KE. So just simply lifting the parcel against gravity won’t do. Nothing cools just from being lifted. It will have to expand against an ever decreasing external pressure.
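    The calculation Kristian suggests can be carried out directly. A minimal sketch with standard constants (the arithmetic is an illustration of the point, not part of the original comment):

```python
# Per-molecule check suggested above: does the drop in translational KE,
# (3/2) k_B dT, match a hypothetical gravitational PE gain m*g*h? No.
K_B = 1.380649e-23   # J/K, Boltzmann constant
U   = 1.66053907e-27 # kg, atomic mass unit
G   = 9.81           # m/s^2

m_n2 = 28.0134 * U                     # mass of one N2 molecule
ke_drop = 1.5 * K_B * (288.0 - 210.0)  # translational KE lost, surface -> tropopause
pe_gain = m_n2 * G * 12000.0           # "gravitational PE" gained over 12 km

print(f"{ke_drop:.2e}")  # ~1.6e-21 J
print(f"{pe_gain:.2e}")  # ~5.5e-21 J; the two numbers do not match
```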

  104. Kristian said:

    “Nothing cools just from being lifted. It will have to expand against an ever decreasing external pressure.”

    That is correct.

    However the ever decreasing external pressure is caused by gravity setting up a density gradient with height.

    So, the adiabatic portion of the work being done is against gravity rather than against surrounding molecules. Gravity causes the pressure gradient/ density gradient so any movement in the vertical plane is effectively with or against gravity.

    Note that in the absence of any diabatic energy exchange such as radiation or conduction:

    i) Rising and expanding into falling pressure requires no diabatic work against surrounding molecules, because the space available for expansion increases at the same rate as the expansion occurs, and

    ii) Falling and contracting into rising pressure requires no diabatic work against surrounding molecules because the contraction occurs at the same rate as the space available reduces.

    Diabatic processes do of course occur at the same time as expansion or contraction but they are independent of the adiabatic component.

    Imagine an army advancing into a vacated battlefield. It advances without any necessary interaction with the retreating troops.

    Imagine an army retreating from a battlefield. It retreats without any necessary interaction with the advancing troops.

    So it is with molecules advancing into or retreating from a region of higher or lower pressure.

    Interactions can occur but they are independent of the underlying advance or retreat.

  105. Kristian said:

    “Calculate the reduction in translational KE for a nitrogen molecule as it moves from the surface air layer to the tropopause air layer (from 288K to 210K). Then calculate the increase in the hypothetical gravitational PE for that same molecule as it climbs from 0 to 12,000 metres. Do the two numbers you get match? Does the drop in molecular KE equal the rise in gravitational PE? For any individual molecule …? No. ”

    For convective overturning, what matters most for a gas around a sphere is the compression or expansion so you need multiple molecules to get the right calculation even though it is the temperature of the individual molecules that is affected.

    Most of the PE involved in atmospheric convection is that which is derived from KE as the molecules move apart when they rise and move up along the density gradient imposed by gravity.

    That is why I said above that one should distinguish that phenomenon from the more simple situation where a single piece of solid matter is lifted up against gravity.

    It is primarily the PE derived from expansion that is then available for subsequent downward convection which is why I prefer the CAPE concept as being convectively available potential energy.

  106. pochas94 says:

    Review the Carnot Cycle. Once you understand it, it is only a short step to understanding convection. For convection the source is the earth’s surface and the sink is the equivalent emissions height. The compression and expansion stages follow the Carnot Cycle.
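    pochas94's analogy can be made concrete with a small sketch; the 288 K surface and 255 K emission-height temperatures below are illustrative assumptions, not figures from the comment:

```python
# Carnot efficiency for a heat engine running between the surface
# (hot reservoir) and the effective emission height (cold reservoir).
# Both temperatures are illustrative assumptions, not measured values.
T_hot = 288.0   # K, typical global-mean surface temperature
T_cold = 255.0  # K, often-quoted effective emission temperature

eta = 1.0 - T_cold / T_hot  # Carnot efficiency: eta = 1 - Tc/Th
print(f"Carnot efficiency: {eta:.3f}")  # about 0.115, i.e. ~11.5%
```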

  107. pochas94

    The Carnot Cycle has similarities but appears to regard internal energy only as KE rather than KE+PE.

    It contains no apparent consideration of PE, presumably because it operates on such a very small scale that pressure changes in the external surroundings need not be considered.

    For convective overturning within an atmosphere the changing pressure as one moves upward is a critical consideration and gives rise to the conversion of KE to PE or vice versa.

    For adiabatic expansion in an atmosphere the emissions height doesn’t matter because radiation out to space is a diabatic process independent of the adiabatic component.

  108. pochas94 says:

    Steven,
    I guess it's my engineering training, but for me an adiabatic expansion is an adiabatic expansion, and decompressing a parcel of air without doing work is an adiabatic expansion.

  109. pochas94 says:

    I neglected to point out that the work done by a rising parcel as it expands is recovered by the cooler parcel as it descends. And of course the upward velocity of an ascending atom slows as it gets higher, your KE / PE conversion, all accounted for in the adiabatic expansion calculation. It doesn’t matter how the work is done, as long as it is recoverable as it is in convection, which makes for a reversible process. It’s the best example of a reversible process I know of, even though a small amount of work is lost in viscous friction.

  110. pochas94

    I agree with that clarification.

    The Carnot Cycle has similarities to convection in the manner that you say.

    The aspect of a small amount of work being lost in viscous friction is similar to convective overturning losing a small amount of work in conduction and radiation.

    It isn’t actually ‘my’ KE / PE conversion since the literature is full of it if one cares to look.

  111. Bruckner8 says:

    I’ve said this before, but I get very frustrated at threads like this where one of the following is happening:

    1) people talking past each other
    2) One person [Kristian] working very hard to convince another [Stephen], but no common ground is being found whatsoever, and others are working very hard to fix the semantics, and yet no one is fazed [Stephen]

  112. dai davies says:

    Kristian says, ‘Calculate the reduction in translational KE for a nitrogen molecule as it moves from the surface air layer to the tropopause air layer (from 288K to 210K).

    That’s what I’ve done in the article I made such a mess of linking to, except the other way around – I drop a ball in vacuum from 11km. Its vertical velocity increases by 464 m/s, which matches the mean velocity of air molecules at 20 °C. Between collisions, a gas molecule is dropping in a vacuum. It follows a parabolic path. The PE gained is thermalised in collisions.

    Kinetic energy (vertical component increase) = potential energy lost.
    mv^2/2 = mgh
    v^2 = 2gh, after dividing both sides by m/2. The mass cancels, ball or molecule
    v = √2gh = (2*9.78*11000)^0.5 = 463.85 m/s.

    For air at STP, v = 464 m/s from:
    http://www.pfeiffer-vacuum.com/en/know-how/introduction-to-vacuum-technology/fundamentals/thermal-velocity/
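    As a check on the two figures above (a sketch; treating air as a single gas with molar mass of roughly 0.029 kg/mol is my assumption), the fall speed from v = sqrt(2gh) and the mean thermal speed from v = sqrt(8RT/(pi*M)), the formula behind the Pfeiffer page, can be computed side by side:

```python
import math

R = 8.314        # J/(mol K), gas constant
M_air = 0.02896  # kg/mol, mean molar mass of dry air (assumed)
g = 9.78         # m/s^2, as used in the comment
h = 11000.0      # m, fall height

# Speed of a ball dropped from 11 km in vacuum: v = sqrt(2 g h)
v_fall = math.sqrt(2 * g * h)

# Mean thermal speed of air molecules at 20 C: v = sqrt(8 R T / (pi M))
T = 293.15
v_thermal = math.sqrt(8 * R * T / (math.pi * M_air))

print(f"fall speed:    {v_fall:.1f} m/s")     # ~463.9
print(f"thermal speed: {v_thermal:.1f} m/s")  # ~463
```

    The near-equality of the two numbers is the coincidence the comment builds on.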

    In a book, the molecules are bound together by forces much, much greater than gravity, but the tendency is still there. People have criticised the gravitational lapse rate by noting that in the seas the temperature is hottest at the top. Again, water molecules in a liquid are bound by forces much greater than gravity. They are not free to fall, but the tendency is still there.

    Seas are heated from the top so tend to be stable except around the poles where they absorb CO2 then go down taking the CO2 on a millennial journey, releasing it back into the atmosphere around the equator where the water is warmed. We are currently experiencing a CO2 surge passed on to us from the Medieval Warm Period. But I digress.

    Another century-old criticism is that you could use the temperature difference between the top and bottom of a column of air to make a free-energy machine with gravity magically replenishing the energy. This is wrong. Gravity just redistributes energy between KE and PE. You’d simply be draining energy from the gas, just as you drain heat from hot rock in geothermal energy systems.

  113. dai davies says:

    Bruckner8
    True, but blog discussions generally tend more towards cocktail party chatter than a formal debating society. An advantage, here at least, is that people can exchange differing views without being pushed into a consensus view. A disadvantage is that the ideas get lost in a sea of verbiage, but the thinking aloud and meandering around topics keeps people thinking and communicating, and new ideas or ways to express old ones emerge – a form of Monte Carlo exploration of idea space.

    Trying to research topics in blogs would be impossible without search functions, but it’s these forums where most of the productive debate on CO2 and climate has taken place. As far as I can see, The Talkshop is way ahead of academia in exploring the resonant nature of solar systems and how this influences our climate. It’s been a valuable source for me trying to summarise debate on other aspects of the climate issue, but it’s hard work.

    I think Stephen and Kristian are both wrong – not fundamentally, but looking at the issue from a messy perspective. As I see the issue it’s quite simple, but I think people resist the idea that an issue that has created so much conflict over so many years can have a simple solution.

  114. p.g.sharrow says:

    Bright people tend to complicate explanation while the Brilliant simplify…pg

  115. gallopingcamel says:

    @p.g.
    Nice one!

    I am staying out of the CAPE discussion as the dynamic situation is beyond complicated. I am still trying to understand the static (equilibrium) situation.

  116. Kristian says:

    dai davies says, October 4, 2016 at 1:15 am:

    Kristian says, ‘Calculate the reduction in translational KE for a nitrogen molecule as it moves from the surface air layer to the tropopause air layer (from 288K to 210K).’

    That’s what I’ve done in the article I made such a mess of linking to, except the other way around – I drop a ball in vacuum from 11km. Its vertical velocity increases by 464 m/s, which matches the mean velocity of air molecules at 20 °C. Between collisions, a gas molecule is dropping in a vacuum. It follows a parabolic path. The PE gained is thermalised in collisions.

    Kinetic energy (vertical component increase) = potential energy lost.
    mv^2/2 = mgh
    v^2 = 2gh, after dividing both sides by m/2. The mass cancels, ball or molecule
    v = √2gh = (2*9.78*11000)^0.5 = 463.85 m/s.

    But is the molecular velocity of air 0 m/s at 11 km, davies? No. And that’s exactly my point. The individual molecules in air do NOT act like balls independent from all the other balls around them. The hypothetical gain in PE for any one molecule in going from 0 to 12 km is much bigger than the loss in KE for any one molecule in going from 0 to 12 km. In your scenario, for the change in KE and PE to match from surface to tropopause, the atmospheric temperature would have to drop from ~288 to ~0 K.

  117. Kristian says:

    (…) for the change in KE and PE to match from surface to tropopause, the atmospheric temperature would have to drop from ~288 to ~0 K.

    Sorry, the temp would have to drop from ~288 to ~129 K from surface to tropopause. It doesn’t. Because the temperature drop is NOT a result of molecular KE being converted into molecular gravitational PE as the air rises. It’s a result of the air expanding against an ever-decreasing external pressure, draining the air of molecular KE.

    You say you think I’m wrong on this, davies. I’m not. I’m telling you like it is. Read a book on the subject and you’ll see.
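    Kristian's ~129 K figure can be reproduced with a quick sketch, assuming a single N2 molecule whose KE (taken as (5/2)k_B per kelvin) converts 1:1 into gravitational PE over 12 km (the hypothetical he is arguing against, not a description of the real atmosphere):

```python
k_B = 1.380649e-23        # J/K, Boltzmann constant
m_N2 = 28.0 * 1.6605e-27  # kg, mass of one N2 molecule
g = 9.81                  # m/s^2
h = 12000.0               # m, surface to tropopause

delta_PE = m_N2 * g * h           # PE gained climbing 0 -> 12 km
delta_T = delta_PE / (2.5 * k_B)  # temperature drop if KE -> PE were 1:1
print(f"required temperature drop: {delta_T:.0f} K")        # ~159 K
print(f"implied tropopause temp:   {288 - delta_T:.0f} K")  # ~129 K
```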

  118. tom0mason says:

    Forgive me if this is the wrong place to ask but…
    Water can stay in a liquid state down to -41°C or slightly lower; however, once at -48°C water will become a solid. That temperature happens to be just where the troposphere and the tropopause meet. Is this just a coincidence? Could this temperature of -48°C happen at lower altitudes outside of the polar regions, and how is the cloud energy balance affected by this change? Clouds of colloidal water?
    https://jsmyth.wordpress.com/2011/12/01/supercold-water/

    Just asking..

  119. Kristian said:

    ” draining the air of molecular KE.”
    That would be diabatic, not adiabatic. Diabatic drains the KE, adiabatic converts it to PE.

    dai,
    I have previously expressed it very simply but convoluted objections are tricky to deal with.

    Bruckner8
    From my perspective it is me trying to explain something to Kristian. He won’t accept that PE is derived from KE by molecules doing work against gravity (via the gravity-induced pressure field) and so mixes up diabatic and adiabatic processes.

  120. Kristian said:

    “Sorry, the temp would have to drop from ~288 to ~129 K from surface to tropopause”

    Actually, the 288K has to drop to about 3K at the boundary with space because that is the temperature of space.

    All sorts of convolutions occur on the way up due to differing diabatic processes at different heights.

    Arbitrarily selecting the tropopause is no use.

  121. Roger Clague says:

    dai davies says:
    October 4, 2016 at 1:15 am

    I drop a ball in vacuum from 11km. Its vertical velocity increases by 464 m/s 

    As Kristian points out, molecules are not at v = 0 at 11km

    At 11km the temperature of air molecules is 220K
    If at 298K v = 464m/s
    at 220K v = 464 x sqr(220/298)
    = 464 x 0.86
    = 399m/s
    The velocity increases by 65m/s when moving from the tropopause to the surface

    This change in velocity is caused by the change in gravity. (you say minus 0.8% at 5.5km)
    I take the top of the tropopause altitude as 20km at 220K
    reduction of g at 20km is 0.7% x 10m/s^2 = 0.007 x 10m/s^2 = 0.07m/s^2

    A molecule moved from a height of 20km to the surface

    change of v = sqr(2gh)

    = sqr(2 x 0.07 x 20 000)

    = 53m/s

    I agree with the equation you use but not the figures you put into it.
    Most of the atmospheric thermal enhancement (ATE) can be explained using only Newton's Laws.
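    For what it's worth, the arithmetic in the comment above checks out (this verifies only the numbers, not the physical interpretation):

```python
import math

# Thermal speed scaling v ~ sqrt(T): 464 m/s at 298 K scaled down to 220 K
v_220 = 464 * math.sqrt(220 / 298)
print(f"v at 220 K: {v_220:.0f} m/s")        # ~399
print(f"difference: {464 - v_220:.0f} m/s")  # ~65

# Speed change from the reduced-g figure used in the comment
dv = math.sqrt(2 * 0.07 * 20000)
print(f"delta v: {dv:.0f} m/s")  # ~53
```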

  122. Kristian says:

    The total energy of a parcel of air will be constant wherever it might be in the troposphere, as long as it holds the same amount of mass AND as long as it only warms and cools adiabatically OR through changes of phase. This, however, is NOT due to a continuous 1:1 conversion back and forth between the KE and PE of the air molecules. The pure change in the total molecular KE of the air is always much smaller than the change in its total system gravitational PE:

    ΔPE_grav >> ΔKE_mol →

    ΔE_tot ≠ ΔPE_grav + ΔKE_mol

    So it’s fairly easy to show mathematically that Stephen Wilde is simply wrong about his idea that air cools as it rises because each individual molecule somehow does work against gravity and thus has some of its KE converted into gravitational PE in a 1:1 fashion. The internal energy of the air – tied to the motion of each individual molecule making up the air, and thus directly associated with its temperature – isn’t related at all to its bulk movement up and down the atmospheric column, nor to its bulk position in Earth’s gravity field. You can get an understanding of this important distinction here:
    http://faculty.poly.edu/~rlevicky/Handout6.pdf

    Instead, the fact that the total energy of an ‘adiabatic’ air parcel is always conserved, is due to a quantity called “enthalpy”. Enthalpy is defined mathematically as H = U + pV, where U is the “internal energy” of the air parcel (for all intents and purposes, equal to its molecular KE), p is the equalised pressure at the border between the air parcel and its surroundings, and V is the volume of the air parcel.

    As the air parcel is lifted from surface to tropopause level (from 0 to 12 km of altitude), it expands. This means it takes up a larger and larger volume the higher up it gets. This requires work [W]. In order to occupy a larger volume, the air parcel needs to push aside the surrounding air somewhat. From the expression above (H = U + pV) it is clear that, if the air volume increases, then the enthalpy change of the air parcel will be larger than the change in its internal energy alone.

    So it is H that matches the change in gravitational PE, not the U (molecular KE).

    As a parcel of air with total mass = 1kg is lifted from the surface to the tropopause level (0 to 12,000 m), its gravitational PE increases from 0 to (mgh=) 117,720 J. At the same time, the parcel cools from 288 to [288K – (12km * 6.5 K/km) =] 210K, 78K all in all, which means that its internal KE decreases by ((5/2)kΔT =) 57,870 J, its total internal energy [U] by somewhat less (since the air is a real gas, not an ideal one, and so its internal PE will increase a little bit as the air expands and the average distance between its molecules grows; this change, though, is much smaller than the change in internal KE). Let’s say there are (118,000 – 57,000 =) 61,000 J missing from the total energy balance. This is where the macroscopic expansion of the air parcel as it moves from 0 to 12,000 m of altitude comes in, the pV term (or, rather, the pΔV term). The amount of work [W] required to make room for the extra volume occupied by the air parcel (still with total mass = 1kg) at 12 km compared to at 0 km will equal the force (pressure, pA) with which the air parcel pushed the surrounding air aside, times the distance (d) it pushed it: W = pA * d = pV.

    Since the air parcel, by expanding against the external pressure of the surrounding air, did work ON its surroundings when rising from 0 to 12 km in the tropospheric column, the total work above means a LOSS of energy from the parcel. It spent energy when making more room for itself, because it had to do so against an external force.

    (This simple circumstance is why, if you allow air to expand as you heat it, it will warm LESS than if you do not allow it to expand as you heat it. In the second situation, ALL the energy you transfer to it as heat will go into increasing the internal energy [U] of the air, while in the first situation, some of the energy that you transfer to it as heat will rather go directly into simultaneous expansional work, and will therefore NOT show up as an increase in U. This is why the c_p (the specific heat capacity under constant pressure) of air is about 1.4 times as large as the c_v (the specific heat capacity under constant volume) of air: γ = c_p/c_v = 1.4, which means that, if you allow the air to expand freely while heating it, you will have to transfer 40% more energy as heat to it in order for its temperature to rise as much as if you didn’t allow it to expand while heating it.)
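    The γ = c_p/c_v = 1.4 figure quoted above follows directly from the per-mole equipartition value for a diatomic gas (c_v = (5/2)R) and Mayer's relation (c_p = c_v + R):

```python
R = 8.314  # J/(mol K), gas constant

c_v = 2.5 * R   # diatomic ideal gas: 5 active degrees of freedom
c_p = c_v + R   # Mayer's relation: c_p = c_v + R
gamma = c_p / c_v
print(f"c_v   = {c_v:.2f} J/(mol K)")  # 20.79
print(f"c_p   = {c_p:.2f} J/(mol K)")  # 29.10
print(f"gamma = {gamma:.2f}")          # 1.40
```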

    And so, the reduction in internal KE (basically, ΔU) PLUS the reduction in energy as a result of expansional work (pΔV), is what will match the gain in gravitational PE as the air parcel rises from 0 to 12 km:

    ΔE_tot = ΔPE + ΔU + pΔV = ΔPE + ΔH
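    Kristian's budget can be reconstructed approximately in a few lines (a sketch; his ~57,870 J figure implies a molar mass close to pure N2's 0.028 kg/mol, which is my inference):

```python
g = 9.81     # m/s^2
h = 12000.0  # m, surface to tropopause
m = 1.0      # kg of air
dT = 78.0    # K, 12 km * 6.5 K/km
R = 8.314    # J/(mol K)
M = 0.028    # kg/mol (roughly N2, as his KE figure implies)

delta_PE = m * g * h               # 117,720 J
delta_KE = 2.5 * (R / M) * m * dT  # ~57,900 J (his ~57,870 J)
missing = delta_PE - delta_KE      # ~59,800 J; his rounded (118,000 - 57,000) gives ~61,000 J
print(f"PE gain:   {delta_PE:.0f} J")
print(f"KE loss:   {delta_KE:.0f} J")
print(f"remainder: {missing:.0f} J")
```

    The remainder is the pΔV (expansional work) term in his enthalpy accounting.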

  123. You are a trier, Kristian 🙂

    I never said it was a 1:1 ratio per molecule.

    I pointed out that additional PE is generated when multiple gas molecules move apart so clearly more PE is generated relative to KE lost when molecules start off more densely packed.

    I carefully distinguished the gas scenario of multiple molecules moving apart from the scenario of simple lifting of a single solid object (or molecule).

    Is Kristian now accepting that KE does convert to PE during uplift and PE to KE in descent?

    Kristian said:

    “The total energy of a parcel of air will be constant wherever it might be in the troposphere, as long as it holds the same amount of mass AND as long as it only warms and cools adiabatically”

    As the parcel expands it holds less mass per unit of volume does it not?

    So, although more PE is created than KE lost you have less mass in the relevant volume which keeps the total energy (KE plus PE) within that volume the same.

    You can apply that to the entire atmosphere so that if you go anywhere and measure the same volume the combined KE plus PE within that volume will be constant (subject only to local distortions from separate diabatic processes).

    The amount of mass present within a given area declines in proportion to the creation of PE so if the density gradient is steep more PE is created relative to the reduction in KE than if the density gradient is less steep.

    But then we are back to work against gravity because gravity sets the density/pressure gradient.

    Anyway, if the process is fully reversible, as it must be for the adiabatic component, then KE does come back from PE on the way back to the surface in descending columns, which has been my point from the inception of this debate.

    Nothing in the purely radiative AGW energy budget accounts for any warming effect at the surface from the adiabatic warming of descending columns of air.

    That brings us full circle to the head post and Ned’s finding that the ATE is pressure related. The denser (higher pressure) the air is at a surface beneath the descending columns of a convecting atmosphere the more adiabatic warming will occur and the greater will be the ATE.

    Hence a large ATE for Venus, a small ATE for Mars, and Earth in between.

  124. Kristian says:

    Stephen, I’m not discussing this issue with you. You’re a lost cause. You’re not just wrong. You’re obviously deluded.

    What I’m trying to do is convey the actual science of an adiabatic process in the atmosphere to people like dai davies here.

    You seem to think this is a matter of different opinions about how a particular physical phenomenon works. It’s not. All of thermodynamics, meteorology and atmospheric physics is correct on this issue. And you’re wrong. That’s a FACT. You clearly (from what you write) have absolutely NO real conception of any of the things you constantly pontificate about. People who are actually well-versed in any or all of the fields above either shake their head in disbelief or laugh out loud at your ‘opinions’ about how the adiabatic process is supposed to work in the atmosphere.

    The people I’m trying to reach are the ones who might let themselves be befuddled by your rambling nonsense …

    You need to stop, Stephen. You’re approaching DC.

  125. I’ve been a member of the Royal Meteorological Society since 1968 having studied this stuff for 60 years.
    Kindly do not project your situation on to me.

  126. One last time:

    “The correct definition of an adiabatic process is one in which the system does not exchange energy with its surroundings by virtue of the temperature difference between them. In this regard, the term surroundings refers to the immediate environment of the air parcel (which of course has no defined boundaries but rather is a concept). In contrast, a diabatic process is one in which energy is added to or removed from the system (e.g., by radiation, latent heating due to phase change, turbulent mixing). I prefer to think of adiabatic processes as non-diabatic processes. ”

    from here:

    http://kkd.ou.edu/METR%201004/Adiabatic%20Processes.htm

    So, you can’t have the KE in a parcel that is rising adiabatically drained off into surrounding particles. That would be a diabatic process.

    The whole point about adiabatic uplift is that once the parcel detaches from the surface its temperature changes at the same rate as the temperature of the surroundings (barring diabatic effects) so that the temperature DIFFERENTIAL stays the same during uplift. Thus no energy moving out of the parcel to the surroundings. Nevertheless it still cools.

  127. Wayne Job says:

    Being but an engineer and not a scientist, the adiabatic lapse rate for me has always described the temperature of Earth as my teachers told me about 60 years ago: an average of 14.7C @ 1013 millibars, and nothing seems to have changed. The brave man of this post, stating the obvious and trying for peer review in these days where the book 1984 is no longer a novel but an instruction manual, is brave indeed. Kudos to you, sir.

  128. Roger Clague says:

    Kristian says:
    October 5, 2016 at 4:16 pm

    As a parcel of air with total mass = 1kg is lifted from the surface to the tropopause level (0 to 12,000 m), its gravitational PE increases from 0 to (mgh=) 117,720 J. At the same time, the parcel cools from 288 to [288K – (12km * 6.5 K/km) =] 210K, 78K all in all, which means that its internal KE decreases by ((5/2)kΔT =) 57,870 J, 

    Your calculation shows your theory is wrong
    Your theory is to apply the gas law to a parcel of air.
    The test is to calculate surface T

    you say:

    mgh = (5/2)k x change in T (from gas law)
    change in T = mgh / ((5/2)k)
    = 120 000 x 0.4 / 300
    change in T = 160K
    surface T = 210K + 160K
    = 370K
    measured: 288K
    wrong by over 80K

    Why?

    1. The gas law does not apply to the atmosphere: T must be constant throughout the system, and we are trying to calculate a change in T.
    2. Your parcels increase in volume x5, so there is not enough space to fit them in as they expand.

    Applying Newton's Laws of Motion and K.E. Theory to molecules
    v = average velocity of a molecule

    change in v = sqr(2gh)

    dai davies 
    October 4, 2016 at 1:15 am

    uses g = 10m/s^2
    change in v = sqr(2 x 9.8 x 11 000)
    = 464m/s

    v at 11 000m (210K) = 400m/s
    v at surface = 400 + 464 = 864m/s
    T at surface = 296 x sqr(864/464)K
    = 296 x 1.365K
    = 404K
    wrong by over 100K

    My method

    change in v = sqr(2gh)
    I use h = 20km because this is the height of the TOP of the tropopause at the equator. The ‘pause’ is of T, and so the troposphere up to 20km is the whole system.
    I take tropopause T = 210K and equatorial average surface day temperature = 296K

    I use the change in g from 0 to 20 km, to find change in v, that is 0.07m/s^2.

    change in v = sqr( 2 x 0.07 x 20 000)
    = 53m/s

    v at surface = 400m/s( v at 210K) + 53 m/s
    = 453m/s

    464m/s = 296K (Kinetic theory of gas)

    453m/s = 296 K x sqr(453/464)
    = 296K x 0.99
    = 293K

    My prediction is within 2K, that is <1% of measured surface T.
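    Again checking the arithmetic only, not the physics, the key numbers in the comment reproduce as follows:

```python
import math

# Step 1 in the comment: delta T from mgh = (5/2)k dT, using the
# per-kg figures quoted from Kristian (arithmetic check only)
dT = 120000 * 0.4 / 300
print(f"delta T: {dT:.0f} K")  # 160

# Final steps: surface speed and temperature as computed in the comment
dv = math.sqrt(2 * 0.07 * 20000)
v_surface = 400 + dv
T_surface = 296 * math.sqrt(v_surface / 464)
print(f"delta v:   {dv:.0f} m/s")      # ~53
print(f"T surface: {T_surface:.0f} K") # ~292 (the comment rounds to 293)
```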

  129. Ned Nikolov says:

    Thank you for the kind words, Wayne Job.

    We are simply trying to restore integrity to climate science and make researchers remember that physical reality and observations are the source of all true insights and theoretical breakthroughs. We are in desperate need of a new Science Renaissance!

  130. erl happ says:

    Well said Ned. “Physical reality and observations are the source of all true insights and theoretical breakthroughs”

    For the physical reality: http://www.esrl.noaa.gov/psd/cgi-bin/data/timeseries/timeseries1.pl

    This graph shows a relationship between two variables that we should think about:
    https://i0.wp.com/reality348.files.wordpress.com/2016/10/sst-and-slp-s-hem.jpg?ssl=1&w=450

    For insights based on observation of the record and associated theory in what is a very complicated environment: https://reality348.wordpress.com/

    Nowhere in the discussion of lapse rates on this thread do I see any realisation that lapse rates are mainly a function of the ozone content of the air. Weather and climate are driven by what is happening in the interaction layer between the troposphere (very little but quite impactful ozone content) and the stratosphere (very impactful ozone content). Nowhere do I see an observation that lapse rates decline to the 100 hPa pressure level at the equator and the 400 hPa level at the poles in winter. It is between these pressure levels that the strongest winds are found. This includes the jet streams. Lateral movements between these pressure levels bring together air streams from tropical, mid and polar latitudes, each with a very different temperature, density and ozone profile. The nature of these air streams changes over time. All aspects of surface climate are affected.

    Radiation theory is a waste of time. Surface temperature in the southern hemisphere is the same today in the month of December as it was back in 1948 despite the fact that air contains more CO2.

  131. gallopingcamel says:

    @erl happ,
    “Radiation theory is a waste of time. Surface temperature in the southern hemisphere is the same today in the month of December as it was back in 1948 despite the fact that air contains more CO2.”

    Robinson & Catling use radiation and convection in their model yet [CO2] has no significant effect in the troposphere as their equations include the effect of collision broadening which makes the lower atmosphere increasingly opaque (as pressure increases) to upwelling thermal radiation.

    What does matter is Cp exactly as John Pallister (my high school physics teacher) told me 60 years ago. Note that nitrogen accounts for ~80% of atmospheric pressure and it has the same Cp as CO2 which accounts for only 0.04%.

    Unlike the troposphere, the stratosphere is not a closed system. Thermal IR energy can be radiated directly into space which explains why temperature rises with altitude on most planets.

    On Venus the stratospheric temperature gradient is anomalous (temperature falls with altitude) owing to the high concentration of CO2 (~96.5%).

  132. Ned Nikolov says:

    @erl happ,

    This graph is indeed quite interesting, showing sea-level pressure and sea-surface temperature rising together over the decades:

  133. erl happ says:

    This too should be of interest.

    Anomalies are with respect to the 1948-2016 average. The area is between the equator and 50° south latitude. There is a regular pulse of increasing surface pressure and sea surface temperature each winter.

    The variation in the monthly temperature experienced in the space of a few years is about double the long-term trend increase. The latter is seen to be well within the range of natural variability that is experienced from year to year.

    It can be shown that the increase in surface temperature is associated with increased geopotential height between the surface and 200 hPa and an increase in the temperature of the lower stratosphere.

    It can be shown that the increase in surface pressure north of the 50th parallel is associated with a decrease in surface pressure south of that same parallel.

  134. dai davies says:

    Kristian says:
    But is the molecular velocity of air 0 m/s at 11 km, davies? No. And that’s exactly my point. The individual molecules in air do NOT act like balls independent from all the other balls around them.

    The calculation was to demonstrate the relationship between PE at 11km and KE at ground level. The example was intended to show the relative order of magnitude of PE and KE over that height range – said to be a typical tropopause height. Although the 3-digit match is, I admit, spurious, the maths is correct. In the above linked article I say that this calculation inspired me to look deeper. It wasn’t part of the central argument. My error here was mentioning it without explanation.

    I consider the situation of a molecule in a real atmosphere in detail. In between collisions a molecule is in free fall in a gravity field regardless of its velocity components. It is always gaining KE if it is going down (-ve vertical vel.) or v.v. I deal with changes to vertical velocity that are rapidly thermalised in subsequent collisions.
    I get an expression for the lapse rate that not only agrees numerically with the one obtained by the thermodynamics of rising air packets; I go on to show that they are theoretically identical. I was taught thermodynamics with a strong emphasis on the underlying molecular mechanics and have applied that here.

    The article is around 1500 words – too large to dump here. I assumed that anyone seriously interested in a new result (I think) would read it. That two quite different approaches – molecular mechanics and thermodynamics – give the same result is strong confirmation of the role of gravity in the abstract, underlying adiabatic lapse rate. It is not strictly adiabatic in Earth’s atmosphere because even without water vapour there is radiative transfer.

    Roger Clague:
    You say to Kristian: “The test is to calculate surface T”
    We differ here. The lapse rate has nothing to do with surface temperature. It is the rate that temp drops with increasing altitude from that temp.
    In my detailed analysis I use an average value for g adjusted to 5.5km. It is a small correction.
    You have to partition the gravitational energy according to degrees of freedom of the molecules, which you don’t seem to do.
    For 20km I get 102 K from 296 K surface. The conventional thermodynamic ALR with measured cp (9.75 K/km) gives 101 K. With theoretical cp it is shown to be algebraically identical to my derivation.

    I can’t discuss it any further if you won’t read the article.
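    dai's figures agree with the standard dry-adiabatic lapse-rate formula Γ = g/c_p (a sketch using conventional values for g and c_p, a 296 K surface, his quoted 9.75 K/km rate, and a 20 km column):

```python
g = 9.78      # m/s^2, surface gravity (as used earlier in the thread)
c_p = 1004.0  # J/(kg K), specific heat of dry air at constant pressure

gamma_lapse = g / c_p * 1000  # dry adiabatic lapse rate in K/km
T_20km = 296 - 9.75 * 20      # temperature at 20 km with the quoted 9.75 K/km
print(f"lapse rate: {gamma_lapse:.2f} K/km")  # ~9.74
print(f"T at 20 km: {T_20km:.0f} K")          # 101
```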

  135. Kristian says:

    dai davies says, October 9, 2016 at 1:12 pm:

    The calculation was to demonstrate the relationship between PE at 11km and KE at ground level.

    But that’s the point: There is no direct relationship between the gravitational PE of a parcel of air and the total KE of all its individual molecules. And I showed you and explained to you exactly why. Air molecules do NOT slow down as the air parcel moves higher because gravity pulls on them, converting some of their KE into PE. You need to get this notion out of your head. And please read about the 1st Law of Thermodynamics and the adiabatic process.

    The maths is correct.

    The math is correct, but irrelevant. Because you’re treating each molecule as if it were a ball whose movements (direction and velocity) in the air column are controlled by gravity (as if it never collided with other molecules, just bounced freely and independently up and down from surface to tropopause), when evidently it’s not. The movement of the BULK AIR is controlled by gravity, but NOT its separate molecules.

    In between collisions a molecule is in free fall in a gravity field regardless of its velocity components. It is always gaining KE if it is going down (-ve vertical vel.) or v.v. I deal with changes to vertical velocity that are rapidly thermalised in subsequent collisions.
    I get an expression for the lapse rate that not only agrees with the one obtained by the thermodynamics of rising air packets numerically, but go on to show that they are theoretically identical. I was taught thermodynamics with a strong emphasis on the underlying molecular mechanics and have applied that here.

    I’m sorry, but you’re wrong. Your math doesn’t work. You cannot look at and follow an individual air molecule on its way up (or down) the air column to derive the adiabatic lapse rate. Read this comment:
    https://tallbloke.wordpress.com/2016/09/25/ned-nikolov-in-science-new-messages-mean-more-than-the-messengers-names/#comment-120235

    The vertical temperature falloff rate for rising air is due to the expansional work it does on its surroundings as it’s rising, draining the air of internal energy (KE). 1st Law of Thermodynamics.

    There is NO inherent negative temperature gradient from surface to tropopause to be found in hydrostatic equilibrium. Air molecules at the tropopause do NOT move slower than air molecules at the surface simply because they’re at a higher altitude. Claiming this shows a complete lack of understanding of how the atmosphere really works. Again, read up on the “adiabatic process”.

  136. Joe basel says:

    It seems that history is filled with scientists not getting published because of out-of-the-box ideas.
    This story should be heard and retold.

    Bose submitted his findings to a scientific journal but the paper was rejected. He knew he had discovered something significant and that motivated him to send the paper to Einstein, who read it, immediately recognised its importance and arranged for the article to be published in a prestigious German journal. That was in 1924.

    In the late 1920s, the idea that some sub-atomic particles possessed a property called spin was proposed by several physicists.

    Particle spin is a property that emerged from the mathematical work of particle physics. Paul Dirac (1902-1984) described two types of spin, half-integer and integer spin. Dirac unselfishly called particles with half-integer spin, fermions, after the great Italian physicist Enrico Fermi (1901-1954), and particles with integer spin, bosons, after Bose.

    http://www.stuff.co.nz/science/83285186/fundamental-particle-discovered-by-error

    Great conversations bring even greater insights.

  137. Roger Clague says:

    i davies says:
    October 9, 2016 at 1:12 pm
    I can’t discuss it any further if you won’t read the article.

    http://brindabella.id.au/downloads/Lapse_Rates.html

    This article says:

    a ball falling in a vacuum from a height of 11 km has a velocity at ground level of 464 m/s, which is precisely the mean velocity of air molecules at 20 °C
    dai davies says:
    October 4, 2016 at 1:15 am
    Kinetic energy (vertical component increase) = potential energy lost.
    mv^2/2 = mgh
    v^2 = 2gh, after dividing both sides by m/2. The mass cancels, ball or molecule
    v = √2gh = (2*9.78*11000)^0.5 = 463.85 m/s.

    increase from v = 390 m/s, not from v = 0
    A molecule at 11 km has a temperature of 210 K and an average velocity of 390 m/s
    So the velocity at the surface will be (390 + 464) = 854 m/s
    according to K.E. theory, T depends on v^2
    giving a surface T = 296 x (854/464)^2 = 1002 K

    I agree with v = sqr(2gh)
    but I think the resultant effective gravity experienced by a molecule to be
    g = 0.07 m/s^2, the change in g from 0 to 20 km

    h = 20 km, T = 220K
    v = sqr(2 x 0.07 x 20 000)
    = 53m/s
    This 53 m/s increase in v causes T to increase from 210 K to 210 x (443/390)^2 ≈ 271 K,
    nearer to the measured 296 K
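    The free-fall arithmetic quoted in this comment is easy to reproduce. A short Python sketch, using the same g = 9.78 m/s² as the quote, confirms the 464 m/s figure as arithmetic only; it takes no position on whether free-fall speeds say anything about molecular speeds, which is the point in dispute:

```python
import math

g = 9.78        # m/s^2, value used in the quoted comment
h = 11_000.0    # m, fall height (tropopause to surface)

# For a ball falling from rest in vacuum: KE gained = PE lost,
# so (1/2) m v^2 = m g h  ->  v = sqrt(2 g h); the mass cancels.
v = math.sqrt(2.0 * g * h)
print(f"v = {v:.2f} m/s")   # -> v = 463.85 m/s, the quoted figure
```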

  138. Roger Clague says:

    Kristian says:
    October 9, 2016 at 8:03 pm

    Because you’re treating each molecule as if it were a ball

    That is using the kinetic energy theory of gas. (K.E.T.)

    whose movements (direction and velocity) in the air column is controlled by gravity (as if it never collided with other molecules,

    Molecules are affected by gravity and collisions

    just bounced freely and independently up and down from surface to tropopause), when evidently it’s not.

    KET does not propose this.

    The movement of the BULK AIR is controlled by gravity, but NOT its separate molecules.

    That is the continuum mechanics assumption for horizontal motion of gas. For vertical motion it is necessary to recognize that the bulk effect is caused by gravity acting on each molecule independently.

    You cannot look at and follow an individual air molecule on its way up (or down) the air column to derive the adiabatic lapse rate.
    Agreed, but because of the large number of molecules you can use statistics to make statements about the average properties, such as velocity and hence temperature.

    Read this comment:
    https://tallbloke.wordpress.com/2016/09/25/ned-nikolov-in-science-new-messages-mean-more-than-the-messengers-names/#comment-120235

    you say:

    As a parcel of air with total mass = 1kg is lifted from the surface to the tropopause level (0 to 12,000 m), its gravitational PE increases from 0 to (mgh=) 117,720 J. At the same time, the parcel cools from 288 to [288K – (12km * 6.5 K/km) =] 210K, 78K all in all, which means that its internal KE decreases by (5/2 kΔT=) 57,870 J, 
    The vertical temperature falloff rate for rising air is due to the expansional work it does on its surroundings as it’s rising, draining the air of internal energy (KE). 1st Law of Thermodynamics

    using your figures and equation
    mgh = 5/2 kT
    you use k = 300 J/(K·kg)
    Change in T = mgh/2.5k
    = 120,000/750
    = 160 K
    This is more than 80 K too much

    There is NO inherent negative temperature gradient from surface to tropopause to be found in hydrostatic equilibrium.

    But there is in reality. Hydrostatic equilibrium assumes the gas laws apply to the atmosphere. They don’t, because of this negative temperature gradient from surface to tropopause

    In the atmosphere continuum mechanics can be applied horizontally. Vertically molecular theory must be used.
    Using molecular theory does not assume gas laws apply.
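    The disputed energy figures in this exchange can be recomputed directly. A minimal Python sketch, using only the values from Kristian’s quoted comment (m = 1 kg, g = 9.81 m/s², h = 12,000 m, a 6.5 K/km lapse, and the effective heat capacity of ≈ 297 J/(kg·K) implied by the quoted 57,870 J), shows the two quantities side by side:

```python
m, g, h = 1.0, 9.81, 12_000.0   # kg, m/s^2, m (surface to tropopause)
lapse = 6.5                      # K/km, observed mean lapse rate
dT = lapse * h / 1000.0          # 78 K of cooling over 12 km

PE_gain = m * g * h              # gravitational PE gained: 117,720 J
k = 297.0                        # J/(kg K), inferred from 57,870 J / (2.5 * 78 K)
KE_loss = 2.5 * k * dT           # internal KE lost: ~57,915 J

print(f"PE gain: {PE_gain:.0f} J, KE loss: {KE_loss:.0f} J")
# The PE gain is roughly double the internal-KE loss, i.e. the two
# quantities are not equal -- which is the crux of the disagreement.
```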

  139. Ned Nikolov says:

    Joe,

    You are right – pretty much every discovery in the history of science and engineering has initially been met with a strong resistance by the establishment. However, up to 1960s, this resistance was mostly due to people genuinely not understanding a new idea, simply because they have not heard it before. Nowadays, the lack of understanding is amplified by a phenomenon called ‘political correctness‘. The infusion of politics into science has raised certain physical theories to the status of unquestionable dogmas. It has become ‘unethical’ to even look at evidence that might contradict established theories. Such evidence has been actively suppressed by the establishment and researchers, who consider it and/or ask fundamental questions based on it, have been marginalized and called ‘skeptics’, ‘deniers’, ‘contrarians’ etc… The politicization of science and related preferential funding of a narrow line of thought has essentially curtailed fundamental breakthrough research with the result being an annual expenditure of billions of dollars to maintain the status quo … We need to restore the free inquiry of science by banning political influence and associated preferential funding of science research!