Ned Nikolov: Does a Surface Solar Radiation Dataset Expose a Major Manipulation of Global Temperature Records?

Posted: July 11, 2022 by tallbloke in climate, Clouds, Dataset, solar system dynamics

Once again it’s my pleasure to publish a new paper by Ned Nikolov and Karl Zeller at the Talkshop. In this study, we see the presentation of a climate conundrum, and recent surface solar radiation data which helps shed new light on the questions surrounding the ongoing adjustment of global temperature datasets. This new study applies theory developed in Ned and Karl’s previous paper to enable quantification of the global temperature drop during the “1970s ice-age scare”. This won’t be the last word on the topic, but it offers a solid grounding for further research.

A PDF version of this article can be downloaded here.

Implications of a New Gridded Dataset of Surface Solar Radiation
for the Evolution of Earth’s Global Surface Temperature Since 1960

Ned Nikolov, Ph.D. and Karl Zeller, Ph.D.
July, 2022

Abstract

A new data set of measured Surface Solar Radiation (SSR) covering six continents (Yuan et al. 2021) reveals that the Earth’s surface received 6.6 W m-2 less shortwave energy annually in 2019 than it did in the early 1960s, and that the average solar flux incident on land decreased by 8.2 W m-2 between 1962 and 1985. Since the Sun is the primary source of energy to the climate system, this pattern of SSR change over the past 60 years (often referred to as global dimming) suggests that the early 1960s were much warmer than the present. However, all modern records of global surface air temperature show a net warming of about 1.0 K between 1962 and 2019. We investigate this conundrum with the help of an independently derived model (previously verified against CERES observations) that accurately converts observed SSR anomalies into changes of global surface temperature. Results from the SSR-based temperature reconstruction are compared to observed global surface temperatures provided by the UAH 6.0 and HadCRUT5 datasets. We find that the SSR-based global temperature estimates match the UAH satellite record from 1982 to the present quite well in terms of overall trend and interannual variability, suggesting that the observed warming of the past 40 years was the result of a decreased cloud albedo and an increased SSR rather than rising atmospheric CO2 concentrations. The HadCRUT5 record also shows satisfactory agreement with the SSR-based temperatures over the same time period. However, between 1962 and 1983, the SSR-based temperature reconstruction depicts a steep global cooling, reaching a rate of -1.3 K/decade during the 1970s. This is drastically different from the mild warming claimed by HadCRUT5 over this time period. The cooling episode indicated by the SSR data is corroborated by more than 115 magazine and newspaper articles published throughout the 1970s, as well as a classified CIA Report from 1974, all quoting eminent climatologists of the day, who warned the public that the observed worldwide drop of temperatures threatened the global food supply and economic security. Based on this, we conclude that researchers in charge of the HadCRUT dataset likely removed the 1962 – 1983 cooling episode from the records before the publication of HadCRUT1 in 1994, in an effort to hide evidence contradicting UN Resolution 43/53 of 1988, which proclaimed global warming caused by greenhouse gases to be a major societal concern and urged Governments to treat it as a priority issue in climate research and environmental protection initiatives.

  1. Introduction

It is a matter of conventional wisdom now that the Earth was significantly cooler during the 1960s than in the 21st Century. Similarly, no one disputes that the planet’s surface temperature was 1.2 °C lower at the beginning of the 20th Century than at present. This paradigm of climate change is based on surface temperature records maintained by several research teams that show remarkable consistency with one another. Figure 1 portrays global temperature anomalies based on 6 datasets supposedly constructed using different approaches, as summarized by Morice et al. (2021). All global records depict a nearly continuous warming since 1920 with a pause in the temperature rise between 1940 and 1980. No record shows a drop of global temperature between 1960 and 1980, which is at odds with a well-documented, decade-long discussion in the media about an ongoing rapid cooling during the 1970s, now known as the “1970s ice-age scare”.

Figure 1. Global surface temperature anomaly from 1850 to 2021 according to 6 official data sets. Note the remarkable consistency among various time series (borrowed from Fig. 8 of Morice et al. 2021).

Modern climate science dismisses as a myth the scientific consensus of the 1970s that the Earth was cooling, along with the widespread belief at the time that a mini-ice age was approaching (Peterson et al. 2008). Based on the global temperature records depicted in Fig. 1, which show a 0.4 °C warming between 1965 and 1980, the “ice-age scare” of the 1970s is now explained away as media hype not supported by actual science. This is in spite of the fact that in 1974, the US Central Intelligence Agency (CIA) issued a classified internal Working Paper/Report, now available at the Digital Library website of the US Department of Homeland Security, addressing an observed global cooling and its impact on the World’s food supply. The Paper, entitled “A Study of Climatological Research as it Pertains to Intelligence Problems”, was prompted by national security concerns about future food shortages, after worldwide crop failures in the late 1960s and early 1970s, caused by cold and excessively wet weather or unusual droughts, had triggered social unrest in some developing countries. We discuss this CIA Paper in more detail in Section 4.1 below. During the 1970s, the press quoted numerous eminent climate scientists of the day from Stanford University, MIT, the UK’s University of East Anglia, and other accredited academic institutions, who warned Western society about a looming long-term cooling that could lead to a full-scale Ice Age. These facts are now being downplayed using the argument that a global cooling never took place in the 1970s. However, a new gridded dataset of measured Surface Solar Radiation (SSR) on land spanning the period 1961 – 2019, published in the Journal of Climate last year (Yuan et al. 2021), suggests otherwise. SSR is the shortwave flux (W m-2) incident on a horizontal plane at the Earth’s surface. The time series of globally averaged SSR anomalies estimated by Yuan et al. (2021) indicate the need to reassess the evolution of Earth’s global temperature from the early 1960s to the mid-1980s.

Yuan et al. (2021) retrieved ground-based SSR data from the Global Energy Balance Archive (GEBA) representing 1,486 monitoring stations spread across 6 continents. The authors employed a machine-learning method called “random forest” to interpolate the observed monthly SSR anomalies at individual locations to a uniform 0.5° × 0.5° grid covering all land masses except for Antarctica, where no SSR observations were available. Random forest uses a multitude of regression decision trees and has been shown to be vastly superior to conventional deterministic spatial interpolation techniques in terms of prediction accuracy and performance stability (Leirvik & Yuan 2021). The method utilized 15 predictors of SSR (9 climatic variables, 2 geographical coordinates, and 2 temporal parameters: the month and year of observation) to spatially interpolate the point observations. The authors found that the diurnal temperature range and cloud coverage provided the greatest explanatory power for the SSR interannual variability. The study produced the first and only land-based gridded global monthly SSR data set spanning a period of almost 60 years. We contacted the lead author Dr. Menghan Yuan and obtained annual time series of SSR anomalies for each continent and the Globe as a whole. Figure 2 depicts these time series, which are also shown in Fig. 5 of Yuan et al. (2021).

Figure 2. Annual anomalies of Surface Solar Radiation (SSR) over the period 1961–2019 for each continent and the World according to Yuan et al. (2021). SSR is the total shortwave flux (W m-2) reaching a horizontal plane at the land surface.

The data reveal that SSR decreased significantly between 1961 and 2019 on four out of the six continents. Europe had a moderate SSR increase over this period, while Oceania experienced a nearly zero trend. On a global scale, SSR declined steeply between 1961 and 1985 at a rate of -3.73 W m-2 per decade (Fig. 3), a change often referred to as global dimming. This was followed by a partial SSR recovery from 1982 to the present at a rate of +0.33 W m-2 per decade, which could be characterized as a modest global brightening. Measurements depicted in Fig. 3 indicate that the land masses on Earth received on average 6.6 W m-2 more solar radiation in the early 1960s than they did in the 21st Century. Since the Sun is by far the main source of energy to the climate system, a higher SSR in the early 1960s implies a warmer Earth surface compared to today. To put the observed net SSR drop of 6.6 W m-2 from the 1960s to the present into perspective, consider that climate models predict a 3.0 K warming on average (with a range between 2.5 K and 4.0 K) in response to a 3.74 W m-2 radiative forcing attributed to a doubling of atmospheric CO2 concentration (IPCC AR6: Climate Change 2021: The Physical Science Basis. Summary for Policymakers, p. 11). However, while the CO2 “radiative forcing” is a modeled quantity that has not been observed in reality, SSR is a parameter measured by physical instruments such as pyranometers. Hence, one might ask: what global cooling could be expected from a 6.6 W m-2 decrease of measured SSR? We will answer this question here with the help of an independently derived, generic mathematical model that relates changes of global surface temperature to variations in the solar radiation absorbed by a planet (Nikolov & Zeller 2022). The model was successfully verified against Earth’s reflected solar fluxes measured by the Clouds and the Earth’s Radiant Energy System (CERES) over the past 20 years (see Figures 3 and 4 in Nikolov & Zeller 2022).

First, we will evaluate the potential of the new global SSR series to explain the observed warming since 1982. This will also serve as a test of whether or not the Nikolov-Zeller (NZ) albedo-temperature model can be trusted to correctly reconstruct the global temperature response to the observed SSR drop of 8.2 W m-2 between 1962 and 1985 shown in Fig. 3.

Figure 3. Land-based global annual SSR anomalies with respect to the 1981 – 2010 reference period according to Yuan et al. (2021).


2. Method for Estimation of Global Temperature Variations from Observed Annual SSR Anomalies

Nikolov & Zeller (2022) derived the following analytical formula to compute the equilibrium sensitivity of a planet’s global surface temperature to changes in absorbed solar radiation:

$$\Delta T \;=\; T_{sb}\left[\left(1 + \frac{\Delta s_a}{\tfrac{S}{4}\,(1-\alpha_b)}\right)^{1/4} - 1\right] \qquad (1)$$

where ΔT (K) is the departure of surface temperature from a baseline value Tsb (K) in response to a change in absorbed solar radiation Δsa (W m-2); S is the top-of-the-atmosphere total solar irradiance (TSI, W m-2), and αb is the baseline planetary Bond albedo (fraction) corresponding to Tsb. Note that, if Δsa = 0, then ΔT = 0 as well.
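
To make the computation concrete, here is a minimal Python sketch of Eq. 1 (our illustration, not code from the paper; the function name is ours, and the default arguments are the baseline parameter values adopted in Section 2):

```python
import numpy as np

def delta_T(delta_sa, S=1361.35, alpha_b=0.2942, T_sb=287.2):
    """Equilibrium change of global surface temperature (K) in response to
    a change in absorbed solar radiation delta_sa (W m-2), per Eq. 1.
    Defaults are the 1981-2010 baseline values adopted in Section 2."""
    s_ab = 0.25 * S * (1.0 - alpha_b)  # baseline absorbed solar flux, ~240.2 W m-2
    return T_sb * ((1.0 + delta_sa / s_ab) ** 0.25 - 1.0)

# Example: the +9.1 to +10.5 W m-2 excess absorption of the early 1960s
# relative to 1981-2010 (Fig. 5, discussed below) translates into a
# roughly 2.7-3.1 K warmer baseline:
print(delta_T(np.array([9.1, 10.5])))  # ~ [2.68 3.09] K
```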

Equation 1 can be used to estimate changes of global surface temperature in response to observed SSR anomalies over land depicted in Fig. 3. To this end, one must know average values of the absolute global surface temperature Tsb and the Earth’s Bond albedo αb during the reference period 1981 – 2010. One must also have a time series of annual TSI data available spanning the period 1961 – 2019. Finally, the SSR anomalies depicted in Fig. 3 must be converted into anomalies of total absorbed solar radiation Δsa.

Jones & Harpham (2013) reported that the absolute average surface temperature of the World during the 1981 – 2010 period was between 13.9 and 14.2 °C. Taking the mean of this range, we assumed Tsb = 287.2 K (14.05 °C). Based on an extensive review of Earth’s albedo estimates and their history conducted by Stephens et al. (2015), we adopted αb = 0.2942 for the period 1981 – 2010. Annual TSI values shown in Fig. 4 were provided by Prof. Nicola Scafetta (personal communication) based on the AcrimSat observational record from 1980 to the present and proxy-based solar reconstructions prior to 1980.

Figure 4. AcrimSat record of total solar irradiance (S) at the top of the atmosphere (TOA) employed in this study.


The SSR anomalies ( Δsd ) can easily be converted into the anomalies of total absorbed solar radiation by the Earth-atmosphere system ( Δsa ) required in Eq. 1, if one knows the Earth’s average surface albedo ( αs ) and the atmospheric fraction of absorbed solar radiation ( fa ), using the formula:

$$\Delta s_a \;=\; \Delta s_d\,\frac{1-\alpha_s}{1-f_a} \qquad (2)$$
Wild et al. (2013) provided global estimates of the Earth’s energy budget parameters along with their uncertainty ranges from a surface perspective (see their Fig. 1). Their data suggest 0.116 ≤ αs ≤ 0.145 and 0.308 ≤ fa ≤ 0.378. Based on these ranges, the following limits are obtained for the conversion factor in Eq. 2:

$$1.236 \;\le\; \frac{1-\alpha_s}{1-f_a} \;\le\; 1.421 \qquad (3)$$
Thus, the total absorbed solar radiation by the Earth-atmosphere system is 23.6% to 42.1% greater than the corresponding shortwave flux received on a horizontal plane at the Earth’s surface. Figure 5 shows annual anomalies of the total absorbed solar radiation estimated by Eq. 2 using the SSR time series depicted in Fig. 3. On average, the Earth absorbed between 9.1 and 10.5 W m-2 more solar radiation in the early 1960s than it did during the 1981 – 2010 reference period. This was most likely the result of a reduced cloud cover/albedo during the 1960s. Even when compared to 2019, the early 1960s saw an 8.5 W m-2 higher planetary absorption of shortwave radiation. In absolute terms, this measured solar forcing is more than 2 times greater than the modeled (but never observed) radiative forcing of 3.74 W m-2 attributed to a doubling of atmospheric CO2 concentration (Gregory et al. 2004). Hence, based on this fact alone, it is reasonable to expect that the early 1960s were globally much warmer than the second decade of the 21st Century, and that a rapid and significant cooling took place in the 1970s. Figure 6 depicts the dynamics of total absorbed solar radiation by the Earth-atmosphere system, obtained by adding the anomalies depicted in Fig. 5 to the average shortwave absorption Sa = 240.2 W m-2 during the 1981 – 2010 period. This average solar flux was calculated from mean values of TSI (S = 1,361.35 W m-2) and Earth’s Bond albedo ( αb = 0.2942 ) during the reference period using the formula:

$$S_a \;=\; \frac{S}{4}\,(1-\alpha_b) \;=\; \frac{1361.35}{4}\,(1-0.2942) \;\approx\; 240.2\ \text{W m}^{-2}$$
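
The conversion-factor limits in Eq. 3 and the baseline absorption follow directly from the numbers above; a short Python sketch (ours; the variable names are illustrative) that reproduces them:

```python
# Bounds on the Eq. 2 conversion factor (1 - alpha_s)/(1 - f_a), using the
# Wild et al. (2013) parameter ranges quoted in the text.
alpha_s_lo, alpha_s_hi = 0.116, 0.145  # surface albedo range
f_a_lo, f_a_hi = 0.308, 0.378          # atmospheric absorption fraction range

k_min = (1 - alpha_s_hi) / (1 - f_a_lo)  # 0.855 / 0.692 = 1.236
k_max = (1 - alpha_s_lo) / (1 - f_a_hi)  # 0.884 / 0.622 = 1.421
print(round(k_min, 3), round(k_max, 3))  # 1.236 1.421

# Baseline absorbed solar flux for 1981-2010:
S, alpha_b = 1361.35, 0.2942
S_a = 0.25 * S * (1 - alpha_b)
print(round(S_a, 1))  # 240.2 W m-2
```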
Figure 5. Global annual anomalies of absorbed total solar radiation by the Earth-atmosphere system with respect to the 1981 – 2010 reference period estimated from observed SSR anomalies (Yuan et al. 2021) in Fig. 3 using Eq. 2.


Figure 6. Annual absorbed total solar radiation by the Earth-atmosphere system calculated from anomalies shown in Fig. 5 and an estimated average absorption of 240.2 W m-2 during the 1981 – 2010 reference period.


The annual anomalies of absorbed shortwave radiation ( Δsa ) depicted in Fig. 5 can be used to reconstruct the dynamics of Earth’s Bond albedo implied by the SSR measurements. To this end, one must first compute the albedo anomalies ( Δα ) using the relationship from Eq. 17 in Nikolov & Zeller (2022):

$$\Delta\alpha \;=\; -\,\frac{4\,\Delta s_a}{S}$$
Next, the albedo anomalies are added to the average Bond albedo during the 1981 – 2010 reference period αb = 0.2942. Figure 7 displays the resulting time series of Earth’s total albedo. From these data, one can calculate the reflected solar radiation by the Earth-atmosphere system and compare these estimates to independent measurements by CERES, which we discuss in Section 3.1.
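
A minimal sketch of this albedo reconstruction (our code; the anomaly values are made up and merely stand in for the annual series of Fig. 5):

```python
import numpy as np

S, alpha_b = 1361.35, 0.2942  # mean TSI and Bond albedo, 1981-2010

# Hypothetical absorbed-radiation anomalies (W m-2); positive values mean
# more absorption than during the 1981-2010 baseline:
delta_sa = np.array([10.0, 6.0, 0.0, -1.5])

delta_alpha = -4.0 * delta_sa / S  # albedo anomaly (Eq. 17 of NZ 2022)
alpha = alpha_b + delta_alpha      # reconstructed Bond albedo (cf. Fig. 7)
print(np.round(alpha, 4))          # [0.2648 0.2766 0.2942 0.2986]
```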

The time series depicted in Fig. 5 were used with Eq. 1 to produce upper and lower estimates of ΔT for each annual anomaly of absorbed solar radiation. The resulting temperature patterns are discussed in Section 3.

Figure 7. Reconstructed Bond albedo of Earth based on globally averaged SSR data over land reported by Yuan et al. (2021).


3. Results

Since physical and biological processes in Earth’s ecosystems are controlled by absolute temperatures rather than temperature anomalies, we converted the ΔT estimates obtained from Eq. 1 to absolute Kelvin temperatures by adding the average absolute temperature of the 1981 – 2010 reference period Tsb = 287.2 K (Jones & Harpham 2013) to the modeled series of temperature anomalies. We used a similar approach to also convert observed global temperature anomalies in the lower troposphere and at the Earth’s surface reported by UAH and HadCRUT5 to absolute surface air temperatures. Thus, all comparisons of temperature series in this Section utilize the absolute Kelvin scale. We begin with a discussion about reconstructed global surface temperatures from SSR anomalies for the period 1982 – 2019.

3.1 Reconstruction of Global Surface Temperature Dynamics During the Satellite Era (1982 – 2019)

Figure 8 portrays the reconstructed dynamics of global surface temperature based on SSR data from 1982 to the present. The upper and lower temperature estimates were obtained from the corresponding upper and lower time series of absorbed shortwave-radiation anomalies shown in Fig. 5 using Eq. 1. The difference between the two estimates is rather small over this period, which is also reflected in the 37-year warming trend ranging from 0.12 to 0.14 K/decade.
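
Decadal trends such as the 0.12 – 0.14 K/decade figures quoted here are ordinary least-squares slopes; a brief sketch (ours, using a synthetic placeholder series rather than the actual reconstruction):

```python
import numpy as np

def trend_per_decade(years, temps):
    """Ordinary least-squares linear trend in K/decade."""
    return 10.0 * np.polyfit(years, temps, 1)[0]  # slope is in K per year

# Synthetic series with a built-in 0.13 K/decade trend plus noise,
# standing in for the 1982-2019 reconstruction:
rng = np.random.default_rng(0)
years = np.arange(1982, 2020)
temps = 287.0 + 0.013 * (years - 1982) + rng.normal(0.0, 0.1, years.size)
print(round(trend_per_decade(years, temps), 2))  # ~0.13 K/decade
```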

Figure 8. Reconstructed dynamics of the average global surface air temperature during the period 1982 – 2019 based on SSR data provided by Yuan et al. (2021). The upper and lower estimates are calculated from time series of absorbed solar radiation anomalies shown in Fig. 5 using Eq. 1 and then converted to absolute surface air temperatures as described in the text.

Figure 9 compares temperature reconstructions based on SSR data to observed global surface air temperatures inferred from official institutional records provided by UAH (using a satellite-based microwave measurement platform) and HadCRUT5 (utilizing a ground-based network of thermometers). The SSR-based upper temperature estimate has a trend of 0.14 K/decade, which is identical to the UAH trend over this time period. The HadCRUT5 record shows a somewhat higher warming rate of 0.2 K/decade for the past 37 years. Note that the UAH record is inferred from satellite observations uniformly covering the entire Globe, while the HadCRUT5 series is based on non-uniformly distributed measuring stations mostly located on land, with rather sparse coverage of the ocean, especially in the Southern Hemisphere. The steeper warming trend of HadCRUT5 appears to be a result of multiple adjustments made to the temperature data after the fact. For example, the rate of global warming from 1950 to the second decade of the 21st Century increased by 28.2% between Versions 3 and 5 of the HadCRUT data set. Altering past temperature anomalies in an effort to generate more warming appears to be a routine practice of the Climate Research Unit (CRU) at the University of East Anglia. These periodic “adjustments” helped produce an “observed” global temperature record with a warming trend that matches the one simulated by CO2-driven climate models. However, a warming rate of 0.2 K/decade over the past 20 years is inconsistent with both CERES measurements of reflected solar radiation and land-based SSR data.

Figure 9. Comparison of 37-year global temperature trends between SSR-based reconstructions of this study and official institutional records based on direct observations. Upper Panel: SSR-reconstructed temperature and the UAH satellite record; Lower Panel: SSR-reconstructed temperature and the HadCRUT5 surface record.

A close examination of the data series in Fig. 9 reveals that observed global temperatures lag the SSR-based reconstructions by 1 to 4 years. This makes physical sense if SSR is the driver of climate change over the past 40 years, because the Earth’s surface has a significant thermal inertia that delays the response of global temperature to perturbations in incoming solar radiation. The larger the interannual SSR perturbation, the longer the expected lag, as is indeed observed. Figure 10 illustrates the match of interannual temperature variations between instrumental records and the SSR-based reconstruction after the reconstructed series has been shifted 1 – 3 years forward to account for a variable lag.
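
The lag adjustment itself is simple bookkeeping: the reconstructed values keep their magnitudes and are only re-dated forward. A minimal sketch (ours), assuming the 1-year and 3-year shifts described in the caption of Fig. 10:

```python
import numpy as np

def lag_shift(years, values, split_year=1993, lag_early=1, lag_late=3):
    """Shift a reconstructed series forward in time by a variable lag:
    lag_early years before split_year and lag_late years thereafter."""
    lags = np.where(years < split_year, lag_early, lag_late)
    return years + lags, values  # values unchanged; only their dates move

years = np.arange(1982, 2020)
shifted, _ = lag_shift(years, np.zeros_like(years, dtype=float))
print(shifted[:3], shifted[-3:])  # [1983 1984 1985] ... [2020 2021 2022]
# The jump from a 1-year to a 3-year lag leaves 1994-1995 unpopulated, which
# is the small discontinuity noted later in the caption of Fig. 12.
```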

Figure 10. Comparison of interannual variations between instrumental records of global surface temperature and lag-adjusted SSR-based temperature reconstructions. The SSR-derived global temperature series was shifted forward by 1 year between 1982 and 1992 and by 3 years between 1993 and 2019 to reveal its alignment with observations. Upper Panel: SSR-reconstructed temperatures and the UAH satellite record; Lower Panel: SSR-reconstructed temperatures and the HadCRUT5 surface record.

Notice how well the SSR-guided global temperature reconstruction describes El Niño and La Niña events over the past 39 years. Yet, the overlap with global temperature records is not perfect, because these records represent averages that include ocean surface air temperatures, while the SSR-based reconstruction only relies on radiation measurements over land. The evidence presented in Figures 9 and 10 collectively points toward the following conclusions:

a) The overall upward trend and interannual variability of global surface temperature during the past 40 years have been caused by changes of cloud albedo and the resulting variations of SSR. This is in agreement with results from a previous analysis by Nikolov & Zeller (2022), which compared cloud-albedo variation measured by CERES to global surface temperature changes over the past 20 years;

b) Large El Niño events appear to be induced by synchronous changes of cloud cover and SSR over several continents at once, 3 – 4 years before the event is registered by near-surface temperature measurements. Hence, the ENSO cycles are not triggered by heat fluxes periodically released from the Equatorial Pacific, as currently believed (see this NASA webpage for a conventional explanation of ENSO), but are a result of changes in the solar radiation absorbed by the Planet due to fluctuations of global cloud albedo.

The SSR-based reconstruction of Earth’s Bond albedo depicted in Fig. 7 can be used to estimate the reflected shortwave radiation by the Planet, and the results can be compared to independent measurements by CERES, providing yet another test of the hypothesis that observed changes in solar fluxes at the surface are caused by variations of cloud albedo. Reflected shortwave radiation ( Sr , W m-2) is the product of the albedo ( α ) and the planet’s average insolation ( S / 4 ), i.e.:

$$S_r \;=\; \alpha\,\frac{S}{4}$$
Figure 11 (Upper Panel) depicts the modeled evolution of reflected solar radiation between 1961 and 2019 based on the reported SSR anomalies. The red curve represents independent measurements by CERES obtained after the year 2000. The Lower Panel of Fig. 11 shows a close-up of the modern global warming period, 1985 – 2019. We chose 1985 as the start of the period because this year marks an inflection point on the SSR curve between descent (dimming) and ascent (brightening) (see Fig. 3). Note that the 1985 – 2019 trend of reflected solar radiation derived from SSR data is quite similar to the 19-year trend of reflected shortwave fluxes measured by CERES in the 21st Century. This means that the state-of-the-art CERES observations are fully compatible with, and confirm, the downward trend of Earth’s cloud albedo implied by the SSR data. In addition, the SSR-based estimates of reflected solar radiation after the year 2000 fall within 67% of the CERES measurement uncertainty (calibration error). Therefore, the comparison between SSR-inferred and CERES-observed fluxes of reflected shortwave radiation indicates that the SSR-based albedo estimates are robust, and one should trust the model projections of a low planetary albedo and a high sunlight absorption by Earth during the early 1960s depicted in Figures 6 and 7.
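
A small sketch (our code, with placeholder albedo values standing in for the post-2000 part of Fig. 7) of the conversion from reconstructed albedo to reflected flux used in this comparison:

```python
import numpy as np

S = 1361.35  # mean TSI (W m-2); the study uses the annual values of Fig. 4

# Placeholder albedo values in the vicinity of the 0.2942 baseline:
alpha = np.array([0.2950, 0.2945, 0.2938, 0.2930])

S_r = alpha * S / 4.0    # reflected shortwave flux: Sr = alpha * S / 4
print(np.round(S_r, 1))  # [100.4 100.2 100.   99.7] W m-2
```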

Figure 11. Reflected solar radiation by the Earth-atmosphere system estimated from SSR data and measured by CERES. Upper Panel: during the 1961 – 2019 period; Lower Panel: during the recent period of global warming (1985 – 2019).

3.2 Reconstruction of Global Surface Temperature Dynamics During the 1962 – 2022 Period

The ability of the NZ albedo-temperature model (Equations 1 through 3) to reproduce the overall trend and interannual variability of Earth’s global surface air temperature from measured SSR anomalies on land during the satellite era (Figs. 9 and 10) brings forth two conclusions: (a) The observed warming of the past four decades was most likely caused by a decrease of cloud albedo and a related increase of surface solar radiation, and not by rising atmospheric greenhouse-gas concentrations; and (b) The global temperature dynamics reconstructed from SSR data between 1961 and 1985 is most likely correct and should be taken seriously.

Figure 12 depicts the reconstructed dynamics of the global absolute surface air temperature from 1962 to 2022 based on SSR data. The discontinuity in the time series between 1993 and 1995 is a result of adjustments made to account for a variable time lag (see the caption of Fig. 12 for details). Note that the difference between the lower and upper estimates is quite small compared to the interannual and decadal temperature variations. SSR measurements suggest that the Earth cooled by about 3.0 K between 1963 and 1985 and warmed by approximately 0.6 K from 1985 to the present. Thus, the early 1960s were globally 2.4 K warmer than the present! This is a drastically different pattern of planetary climate change from the one portrayed in Fig. 1 and promoted by the IPCC.

Figure 12. Reconstructed dynamics of the mean global surface air temperature during the 1962 – 2022 period based on SSR data provided by Yuan et al. (2021). The upper and lower estimates are computed from the two series of absorbed solar-radiation anomalies shown in Fig. 5 using Eq. 1. This is followed by a conversion of the resulting temperature anomalies into absolute surface air temperatures as described in the text. The reconstructed temperature series were then shifted forward 1 year between 1961 and 1992, and 3 years between 1993 and 2019 to account for an observed variable lag discussed in Section 3.1. This created a small discontinuity in the data between 1993 and 1995.

Figure 13 compares the reconstructed global temperature dynamics from SSR data to the official temperature records from HadCRUT5 and UAH for the 1962 – 2022 period. The SSR-derived temperature series agrees quite well with observed global surface temperatures from 1983 to the present, i.e. over the past 39 years. However, prior to 1983, the SSR-derived estimates dramatically diverge from the HadCRUT5 record. Particularly notable is the rapid cooling evident in the SSR-based reconstruction between the late 1960s and early 1980s, which stands in stark contrast to the mild warming claimed by the HadCRUT5 record for the same period. The SSR data suggest a 1.3 K drop of global temperature in a single decade, which gives credence to the “ice-age scare” documented in numerous reports by news media and Government agencies throughout the 1970s.

Figure 13. Global temperature dynamics reconstructed from SSR data and reported by UAH and HadCRUT5 datasets during the 1962 – 2022 period. Highlighted in light blue is the period of the “ice-age scare”, when news media, academic institutions, and Government Agencies intensely discussed an ongoing rapid cooling.


4. Discussion

The analysis of global SSR data provided by Yuan et al. (2021) strongly suggests that the 1960s were significantly warmer than the second decade of the 21st Century, and that a steep worldwide cooling of -1.3 K/decade took place during the 1970s and early 1980s, which is not present in any current official global temperature dataset (see Fig. 1). This raises the following questions: Is there evidence outside of the SSR data series corroborating this cooling? If such a cooling did occur, how can we explain its total absence from modern institutional temperature records? The Sections below address these issues.

4.1 Evidence for a Rapid Global Cooling from the Late 1960s through 1982

There are two lines of evidence supporting the occurrence of a major climate cooling between the 1960s and 1982: (a) discussions in the “public square” about the impact of an ongoing cooling on agricultural production and the economy, and (b) tree-ring proxy temperature reconstructions.

During the 1970s, more than 115 reports were published in newspapers, popular science magazines, and by Government agencies discussing Earth’s rapidly cooling climate and a possible descent into a new Ice Age (see also this list of publications). These reports quoted prominent climate scientists of the day, who warned Western society about the dire consequences of a prolonged cooling for the World’s food supply. In the early 1970s, the cooling was attributed to human-induced air pollution (such as industrial emissions of particulate matter) blocking the Sun. Based on this belief, some experts called for outlawing the internal combustion engine in vehicles and for strict control over all forms of fossil fuel burning in order to prevent the Earth from plunging into an Ice Age (e.g. The Owosso, Jan. 26, 1970). Ironically, Western Governments now push for severely limiting the combustion of fossil fuels in an effort to “save the Planet” from overheating (see IPCC Special Report on 1.5 °C Warming 2018)! However, by 1975, scientists admitted that they did not know what was driving the observed cooling. Here are two prominent reports from that time that sounded the alarm about a cooling World.

In 1975, Newsweek published an article by Peter Gwynne, an Oxford graduate and award-winning science writer, entitled “The Cooling World” (see Fig. 14 for the full text of the article). In it, Gwynne quotes climate scientists from NOAA, Columbia University and the University of Wisconsin, as well as reports by the National Academy of Sciences, all confirming a significant cooling trend that had replaced 75 years of prior “extraordinary mild conditions”. He states that satellite photos showed a sudden, large increase of winter snow cover in the Northern Hemisphere, and that a NOAA study found a 1.3% reduction in the amount of sunshine reaching the surface of the Continental US between 1964 and 1972. This relative reduction of sunshine corresponds to a decrease of about 3.25 W m-2 in the mean annual SSR. For comparison, the dataset by Yuan et al. (2021) shows an SSR drop of 4.25 W m-2 over the North American Continent between 1964 and 1972. Gwynne writes that, although meteorologists may disagree about the cause and extent of the rate of cooling, they are “almost unanimous in their view that the cooling trend will reduce agricultural productivity for the rest of the century”. These are unusually strong statements, if there had been no cooling during the 1960s and 1970s as claimed by modern climate records (Fig. 1).

Figure 14. “The Cooling World”, an article by Peter Gwynne, an award-winning science writer and a former science editor of Newsweek, published in Newsweek on April 28, 1975.

By 1974, the cooling of the global climate had such a strong impact on the World’s economy that it became a national security issue and was addressed by a special classified Report of the US Central Intelligence Agency (CIA) entitled “A Study of Climatological Research as it Pertains to Intelligence Problems”. The Report is currently available at the Digital Library of the US Department of Homeland Security. According to this document, an unfolding global crisis in the food supply chain, triggered by a rapid cooling of Earth’s climate since 1965, made it urgent to develop methodologies capable of predicting future climate change. Such techniques were lacking at the time, the Report points out, because prior to 1960 the Planet was so warm and weather conditions so favorable for crop growth that forecasters viewed climate only as a minor factor in their agricultural projections. The 36-page CIA Report summarizes the state of climate science as it existed in the early 1970s by describing 3 main schools of thought (approaches) to understanding and predicting climate change. Notably, none of these schools considered atmospheric greenhouse gases as drivers of Earth’s climate! In fact, the Report does not even mention terms such as “carbon dioxide” (CO2) or “greenhouse-gas emissions”. This implies that the mainstream climate science of the 1970s was not under the influence of the 19th-Century Greenhouse Theory, which now dominates academic research in this area. The Report also points out that the rapid cooling during the late 1960s and early 1970s triggered the preparation of a National Climate Plan as a joint effort of several US agencies. The Plan called for the allocation of funding by the US Office of Management and Budget to establish a Center for Climate and Environmental Assessment at NOAA, which would be supported by the National Science Foundation and the US National Academy of Sciences. Other countries had also launched climate research programs in response to rapidly deteriorating weather conditions from one year to the next, the Report states. Thus, the modern $2.5B annual climate research budget of the US, now mostly spent on studying anthropogenic global warming, originated from an unusually severe cooling episode during the 1960s and 1970s. It is sobering to realize that, if it were not for the threat posed by a cooling climate to global agricultural production some 50 years ago, we would not now have a lavishly funded climate science!

If the Earth’s global temperature followed the trajectory claimed by the HadCRUT dataset, which shows a 0.4 K warming between 1965 and 1980, why would Western Governments and our society as a whole have engaged in extensive discussions about a global cooling for a full 10 years during the 1970s? Such discussions only make sense if the global temperature had been on the trajectory indicated by the SSR-based reconstructions shown in Fig. 13.

The second line of evidence for a steep global cooling during the 1962 – 1985 period comes from proxy temperatures inferred from tree-ring chronologies. This evidence is articulated in email exchanges between top scientists at the University of East Anglia’s Climate Research Unit (CRU) as well as US researchers, which became public through the 2009 Climategate leak. In 1999, the CRU Director Phil Jones sent an email describing a “trick” he performed on a tree-ring proxy series belonging to the CRU Deputy Director Keith Briffa that showed a sharp decrease of ambient temperature after 1961. In order to hide the unwanted decline, Jones decided to truncate the problematic proxy temperature series at 1961 and splice instrumental temperature records onto it that showed a continuous warming. This is the infamous “hide the decline” email sent by Phil Jones to several leading climate scientists on Nov. 16, 1999. In a 2005 email, Prof. Jonathan Overpeck, an interdisciplinary climate scientist at the University of Michigan, stated that there “is a real issue” in “showing some of the tree-ring data for the period after 1950”, presumably referring to the fact that tree-ring proxy data often show a prolonged cooling after 1950, which cannot be explained by the assumed continuous rise of atmospheric CO2 concentration over the past 70 years. For more details about these and other email exchanges, please review these blog articles: Hide the Decline; McIntyre (2011a); McIntyre (2011b).

4.2 Why Is the Cooling of the 1970s Absent from Modern Global Temperature Records?

The numerical analyses and facts presented above suggest that the 1960 – 1983 global cooling did occur but has been removed from the official surface temperature records. Such a removal was likely greatly facilitated by the deliberate use of temperature anomalies in place of absolute temperatures in all global datasets (Fig. 1). Figure 15 illustrates the possible reason for this act: the pronounced multidecadal cooling is incompatible with the smooth, continuous rise of atmospheric CO2 reported by Charles Keeling and supposedly based on measurements taken at Mauna Loa, HI starting in March of 1958. In order to make a case for the anthropogenic climate change endorsed by Resolution 43/53 of the UN General Assembly in 1988, atmospheric CO2 and global temperature had to follow the same trajectory and be highly correlated with each other. A 22-year worldwide steep cooling episode during the second half of the 20th Century critically undermines the “Greenhouse” theory, which is at the core of the human-caused global warming concept promoted by the United Nations; hence, such a cooling was likely seen as politically “unacceptable”.

Note how closely the HadCRUT5 surface temperature record tracks the Keeling CO2 curve in Fig. 15. This tight fit is a result of numerous after-the-fact “adjustments” made to the HadCRUT dataset over the past 25 years. It appears that the cooling of the 1970s was removed from the records prior to the release of HadCRUT1 in 1994 (Parker et al. 1994). According to Wikipedia, the initial work on assembling a gridded dataset of surface temperature anomalies began at CRU in 1978, but nothing was published until 6 years after the UN Resolution emphasizing anthropogenic global warming and 4 years after the publication of the IPCC First Assessment Report (FAR). Thus, HadCRUT1 was released long after the political winds had shifted toward blaming industrial greenhouse-gas emissions for climate change.

Figure 15. Global temperature dynamics reconstructed from SSR data and reported by HadCRUT5 compared to the Keeling CO2 curve derived from measurements made at Mauna Loa, HI.

The following scenario is hypothetical but quite plausible. Since the 1988 UN Resolution on Climate Change and particularly after the establishment of IPCC in 1990, Government funding for climate research in all Western countries became increasingly geared toward programs investigating the effect of atmospheric CO2 on global temperature. This new research trend could not have remained unnoticed by scientists at the University of East Anglia’s Climate Research Unit, who were tasked with the development of the World’s first global gridded surface temperature dataset. It is conceivable that these researchers (now well-funded under the new UN Agenda) quickly realized that the rapid cooling of 1960s and 1970s would create a major scientific and political problem if included in the global temperature record and juxtaposed with the ascending Keeling curve of atmospheric CO2 beginning in 1958. Such an inclusion would have invalidated the “Greenhouse” theory of climate change and compromised the UN Agenda from the start. Thus, the CRU scientists dealt with the issue “appropriately” by simply removing the cooling episode from the global series of temperature anomalies.

5. Conclusion

A new global dataset of Surface Solar Radiation (SSR) published by Yuan et al. (2021) shows a large decrease in the average solar flux reaching Earth’s land masses between 1960 and the present. This prompted a reassessment of the known climate-change pattern of the 20th Century. The analysis of globally averaged SSR data revealed a 22-year-long steep cooling episode between 1962 and the early 1980s that is absent from current institutional global temperature records (Fig. 1) but was a topic of intense public discussion during the 1970s. These findings have serious implications for the “Greenhouse” climate theory and the hypothesis that industrial emissions of carbon dioxide and other “heat-trapping” trace gases were responsible for the observed warming in recent decades. The results obtained in our study call for an independent investigation of the methods and procedures employed in the development of the global temperature datasets portrayed in Fig. 1. Such an investigation could also shed light on whether or not the recent warming constitutes a “climate crisis”.

Comments
  1. craigm350 says:

    Reblogged this on WeatherAction News and commented:
    Hubert Lamb wrote of the discrepancy between the documented cold period of the 60s and 70s and the continuing rise of CO2.

    As Ned highlights, it had to be removed. They made sure the awkward moment of discovery never happened.

    That’s how the party keep power – rewriting history and language.

    This GWPF report covers Lamb’s work during that period:
    https://www.thegwpf.org/publications/hubert-lamb-and-the-transformation-of-climate-science/

  2. Ned Nikolov, Ph.D. says:

    @craigm350,

    Thank you for this comment! The statement made by Hubert Lamb many decades ago is now fully supported by the latest SSR data and our global temperature reconstruction based on those data… I was not aware that even Callendar, who promoted the nonsensical “greenhouse” theory in the 1930s, actually noticed & acknowledged the discrepancy between the atmospheric CO2 trend and observed temperatures.

    I hope our analysis will make people seriously re-examine the global temperature records and how they have been developed… I think the fraud surrounding the processing of the World’s surface temperature data has been massive and likely occurred before 1994.

  3. oldbrew says:

    In a 2018 article, ‘GEO-ENGINEERING: IGNORING THE CONSEQUENCES’, Tom Harris and Tim Ball wrote:

    From 1940 to almost 1980, the average global temperature went down. Political concerns and the alleged scientific consensus focused on global cooling. Alarmists said it could be the end of agriculture and civilization. Journalist Lowell Ponte wrote in his 1976 book, ‘The Cooling’:

    “It is a cold fact: the global cooling presents humankind with the most important social, political and adaptive challenge we have had to deal with for ten thousand years. Your stake in the decisions we make concerning it is of ultimate importance; the survival of ourselves, our children, our species.”

    Change the word “cooling” to warming and it applies to the alarmist threats today.
    . . .
    During the cooling “danger,” geo-engineering proposals included:

    * building a dam across the Bering Straits to block cold Arctic water, to warm the North Pacific and the middle latitudes of the Northern Hemisphere;

    * dumping black soot on the Arctic ice cap to promote melting;

    * adding carbon dioxide (CO2) to the atmosphere to raise global temperatures.

    All these actions would impact global climate in unpredictable ways. Now we know they would have exacerbated the predominantly natural warming trend that followed.

    https://www.heartland.org/news-opinion/news/geo-engineering-ignoring-the-consequences
    – – –
    At the time, few saw this coming…

    The 76/77 climate shift

    The climate shift is part of a phenomenon called the Pacific Decadal Oscillation
    http://ocp.ldeo.columbia.edu/res/div/ocp/arch/climate_shift.shtml

  4. Ned Nikolov, Ph.D. says:

    @oldbrew:

    Thank you! The quote from the 1976 book “The Cooling” indicates that people were really scared about human survival in the 1970s, and rightfully so, because global temperature was almost in free fall between 1965 and the early 1980s. We see this clearly in the data shown in Fig. 13 above.

  5. oldbrew says:

    2018 study:

    The tropical Indian Ocean sea level displayed decadal variations in response to Pacific Decadal Oscillation (PDO). Contrasting patterns of decadal oscillation in sea level is found during the opposite phases of PDO especially in the thermocline ridge region of the Indian Ocean (TRIO; 50°E–80°E; 15°S–5°S). Epochal mean sea level rise is observed over the TRIO region during the cold phase of PDO (1958–1977), whereas epochal mean sea level fall is observed during the warm phase of PDO (1978–2002). [bold added]

    https://link.springer.com/article/10.1007/s00382-018-4431-9

  6. Ned Nikolov, Ph.D. says:

    The severe global cooling between 1962 and the early 1980s revealed by our analysis of SSR data and discussed in the above article has serious implications for the pattern of Earth’s global temperature change from 1880 to the present. Properly accounting for the cooling episode completely changes the temperature dynamics of our Planet during the 20th Century. See this graph for details:

    I will later post a separate article describing how the new (reconstructed) temperature curve on the graph was obtained.

  7. Nelson says:

    Ned, The CERES data shows a decrease in reflected SWR from the Earth’s TOA. At the same time, the CERES data shows an increase in outgoing LWR radiation. Many have used this data to claim that the temperature increase since the 1980s is solar-related. I guess you are showing the same thing. The world temperature data shown by the big 4 data sets make no sense. 1921 was an extremely hot year around the world, yet it’s shown as cooler than the 1970s. The sharp drop in temperatures after the 1940s is well documented. The fact that the world temperature data has been adjusted to get rid of the decline is just sad. I don’t know what to make of the WMO temperature data prior to 1970. Again, this data shows no worldwide 1921 hot spell that is clearly in the historic record. The more you look at the early records, the less confidence one has in the reported worldwide temperatures. The good news is that with the CRN and satellite data, it is increasingly hard to fudge the data. A cooling period in the NH won’t be missed as the sea ice data will tell the tale. Too many different groups measure the data for it to be fudged.

  8. Ned Nikolov, Ph.D. says:

    @Nelson,

    Thank you for this comment. It only makes physical sense that a decrease of reflected solar radiation as measured by CERES since year 2000 would be associated with an increased outgoing LW radiation. That’s because a higher absorption of solar radiation by Earth due to a reduced cloud albedo would warm the surface and hence boost the LW flux emitted to Space.

    One of the important findings in our study is that CERES measurements of reflected shortwave radiation since the year 2000 agree quite well with reflected solar fluxes estimated from SSR data based on a completely different monitoring method. Furthermore, global temperatures reconstructed from SSR data since 1983 match the satellite-based temperature record. These facts unequivocally point to the conclusion that the mild warming of the past 40 years was caused by a decrease of planetary cloud albedo rather than an increase of atmospheric CO2.

    The removal of the steep cooling episode between 1962 and 1983 from the surface temperature records is perhaps the mother of all data manipulations (fraud) committed in the name of “climate change”.

  9. tallbloke says:

    The Swiss alpine glaciers being “in full retreat” during the 1930s would seem to imply there was a warm-up from a cooler T prior to that. The US heatwave and ‘100 degree days’ indexes show much the same thing: unusual warmth in the 1930s. Low solar cycles in the 1970s and 1900 – 1920 are likely involved in the greater cloud cover during the cooler times, due to a less intense solar wind being unable to keep Svensmark’s cloud-seeding cosmic rays out of the inner solar system.

    https://twitter.com/MathewMoisture/status/1546769082258804736?s=20&t=i9Y9Ar2x64z_OfnQ-PFFLg

  10. stpaulchuck says:

    for years now I’ve enjoined members of the Church of the Satanic Gases to do a little simple math. Subtract the raw temperature from the final ‘adjusted’ temperature and plot the ‘adjustment’ values. Next rescale and plot the CO2 concentration on the same graph.

    It becomes clear that the adjustments have been adjusted over time to track CO2 not instrumental error or other things we calibrate for. Thus the rise in temperatures which is all fake.

    Second: compare satellite, radiosonde, and surface temperature reports. Two of them track closely. Like they say on Sesame Street, “One of these things is not the same.” I leave it to you to look.

  11. Ned Nikolov, Ph.D. says:

    Satellite data can also be manipulated to get “more warming”. The RSS data set is a good example. After their “corrections” in 2017, RSS now shows a warming trend since 1998 that is as steep as, or even steeper than, the NASA GISS surface temperature record, which used to be the most “adjusted” and the worst of all. The RSS team continues to use data feeds from satellites that are known to have warm biases due to deterioration of their orbits over time… The only global temperature record that agrees well with independent measurements by modern satellite platforms such as CERES is Roy Spencer’s UAH data set. Everybody else is off the mark!

  12. tallbloke says:

    Ned, the SSR leads the expression of its variations by 1 – 4 years (Fig. 10), due to “inertia in the climate system”. The only massive body with sufficient heat capacity and thermal inertia which could hold that energy for so long is the ocean. 3.7 years is the average period between El Niño-type spikes in surface T (this also happens to be 1/3 of the average solar cycle length). Could it be that the big El Niño events, which tend to occur soon after solar minimum (e.g. 1998, 2010), are a ‘double whammy’ of the release of heat from the sub-surface Pacific Warm Pool, plus the slower transmission of energy from surface to space caused by increased cloudiness near solar minimum?

  13. Ned Nikolov, Ph.D. says:

    @tallbloke,

    The SSR data coming from land only and the 3-4 year lag of measured temperatures with respect to SSR-based reconstructed temperatures strongly suggest that big El Nino events are not the result of heat release from the “sub-surface Pacific Warm Pool”… Think about it, SSR over the continents increases 3-4 years before a big El Nino. This cannot possibly be caused by a heat release from below the ocean surface. It can only be explained by a decrease of the cloud albedo above the continents, and possibly over the ocean as well… I don’t think clouds can delay the transmission of energy to Space for years, not physically possible. Remember, clouds are not blankets!

    These new SSR data over land provide sufficient evidence to abandon the decades-old notion that ENSO cycles are driven by heat releases from the ocean.

  14. tallbloke says:

    Ned: “Think about it, SSR over the continents increases 3-4 years before a big El Nino. This cannot possibly be caused by a heat release from below the ocean surface.”

    I didn’t say it was. I’m saying the increased SSR energy (assuming it increases over the ocean as well as the land) gets stored in the ocean and gathered in the PWP for several years before getting released in the El nino event that occurs several years after the spike in SSR. Hence the lag.

    Where else do you think such a huge amount of energy could be stored for several years?

  15. Ned Nikolov, Ph.D. says:

    OK, I see now what you mean. Yes, the lag has to be caused by the heat capacity of the surface (land & ocean). The problem is that we do not know at the moment how much SSR changes over the oceans, or what the spatial pattern of that change is. So, what you are saying could be true, but we don’t have the data to reach a definitive conclusion… The one thing we can say for sure, though, is that El Niños are initiated from above (the clouds), not from below (the ocean).

  16. Ned Nikolov, Ph.D. says:

    @tallbloke,

    Manipulating Arctic sea-ice data should not come as a surprise, because the 1988 UN Resolution to specifically fund research on “anthropogenic global warming”, without any evidence that such a thing was even real, opened a floodgate to intellectual corruption and organized institutional fraud.

  17. rowjay says:

    The CET dataset that I downloaded a couple of years ago clearly shows the depressed temps that caused the “Ice age” scare. There is a link to a plot of the dataset that shows some interesting potential relationships between the Dalton Minimum and the less extreme modern 1960-1980 version.

    https://rowjayinoz.wordpress.com/2022/07/16/the-central-england-temperature-dataset/

  18. Ned Nikolov, Ph.D. says:

    @rowjay:

    Thank you for the comment and the link. Keep in mind that the solar radiation regime over Europe is quite different from the regime over the rest of the World on average as illustrated by the SSR plots in Fig. 2 of our article. This is the reason why the CET record does not show a steep drop during the 1962 – 1983 period as does the global average temperature.

  19. […] no catastrophic consequences from warming of the kind they are warning about now.  They’re also faking their data on a regular basis, one recent example being the removal of declining temperatures from the 1962 […]

  20. tallbloke says:

    More historical evidence for the decline in temperature post 1960

  21. tallbloke says:

  22. tallbloke says:

    Mr. Spock (Leonard Nimoy) ice age documentary clip, 1979.

  23. oldbrew says:

    Tallbloke says: July 23, 2022 at 7:20 am
    – – –
    Guardian climate science correspondents have come a long way since that article 🙄

  24. Ned Nikolov, Ph.D. says:

    Thank you, Roger, for posting this 1974 article!

    There is no doubt that the Planet experienced a pronounced cooling in the 1960s and 1970s, which has been erased from the modern global temperature records. Two main findings from our analysis of the new SSR dataset deserve repeating:

    1. The mild warming from 1982 to the present was caused by a decrease of cloud albedo and a subsequent increase of surface solar radiation; hence, it had nothing to do with human CO2 emissions;

    2. The rate of global warming during recent decades (~0.14 K/decade) pales in comparison to the rate of global cooling (-1.3 K/decade) that took place between the late 1960s and the early 1980s. As a result, the Earth must have been 2.4 K warmer in the early 1960s than in 2019. This pattern of global temperature change completely refutes the AGW claim, which is why it has been redacted.

  25. Ned Nikolov, Ph.D. says:

    Based on a Wikipedia article about the history of the Climate Research Unit (CRU) at the University of East Anglia (https://en.wikipedia.org/wiki/Climatic_Research_Unit), I speculate that the deletion of the 1962 – 1983 cooling episode from temperature records probably occurred when Tom Wigley was the CRU Director (1978 – 1993). More precisely, this unprecedented data manipulation was likely carried out during the early 1980s, when the “greenhouse” concept of climate change began to be pushed internationally by leading scientists.

    The first scientific paper claiming a “rapid warming” since 1975 and record-high global temperatures in the early 1980s compared to the prior 130 years, without any mention of the 1960 – 1980 cooling episode, was authored by Phil Jones, Tom Wigley and P. Wright and published in the journal Nature in July of 1986: https://www.nature.com/articles/322430a0. The paper, entitled “Global temperature variations between 1861 and 1984”, has an Abstract that reads:

    Recent homogenized near-surface temperature data over the land and oceans of both hemispheres during the past 130 years are combined to produce the first comprehensive estimates of global mean temperature. The results show little trend in the nineteenth century, marked warming to 1940, relatively steady conditions to the mid-1970s and a subsequent rapid warming. The warmest 3 years have all occurred in the 1980s.

    The period between 1940 and 1975, characterised by the authors as having “relatively steady conditions”, is when a global cooling of ~1.5 K actually took place according to Surface Solar Radiation (SSR) records. Contrary to the authors’ claims, the SSR data also suggest a continuous climate cooling until at least 1983, without any sign of net warming between 1975 and 1984 (see the graph in my previous comment).

  26. tallbloke says:

    Ned: 2. The rate of global warming during recent decades (~0.14 K/decade) pales in comparison to the rate of global cooling (-1.3 K/decade) that took place between the late 1960s and the early 1980s. As a result, the Earth must have been 2.4 K warmer in the early 1960s than in 2019.

    Ned, I think the Earth’s near-surface air temperature is buffered by the high heat capacity of the global ocean. This would mean that energy was released from the ocean during periods when cloud amount was higher. We see some evidence for this with the big El Niño events that tend to occur either near solar minimum, when more cloud-seeding cosmic rays enter Earth’s atmosphere, or in the aftermath of large volcanic eruptions, when sulfur dioxide reflects more sunlight back to space from the stratosphere. The El Niño that followed the 1982 eruption of El Chichón is an example of this effect, along with the big El Niños in 1998 and 2010.

That release of ocean heat content would ameliorate the effect of an increase in cloud causing a drop in surface solar radiation, which means the fall in temperature might not be so big, especially in the tropics and temperate zones. It probably did cause a big swing in the higher latitudes though.

    I think your analysis is great so far as it goes, and an excellent basis for further research, but I also think that there’s more to this than a single variable.

  27. Ned Nikolov, Ph.D. says:

    @tallbloke,

Roger, I agree with you that oceans would provide some buffering of global temperature variations through their large heat capacity. However, the 1982 – 2021 period (where the SSR-based temperature reconstruction matches the UAH dataset so well) suggests that this buffering is limited to within 0.3 K, with a lag not exceeding 4 years (see Fig. 10, Upper Panel). The cooling between the early 1960s and the early 1980s indicated by the SSR data is on the order of 3.0 K (see Fig. 12) and continued for over 20 years. As a matter of fact, this cooling began in 1940. So, according to these new estimates, between 1940 and 1985, the Earth's global temperature dropped ~3.4 K. Taking a generous oceanic buffering effect of 0.4 K into account, we are still left with a 3.0 K cooling to explain.

Yes, let the research continue on this crucial topic, and let the evidence take us wherever it may…

  28. tallbloke says:

    Thanks for your reply Ned. Clouds have several climatic effects. They shade the earth and reflect incoming solar radiation, as this study shows. They can also keep night-time surface temperatures warmer than they would be under a clear sky due to radiative properties. Location, altitude and thickness all vary the balance between these effects.

    Could it be that the sensitivity of surface T to SSR might vary between periods when lower SSR causes the ocean buffering effect we discussed, and periods when higher SSR prevails? Or that regional differences in the quality of surface records might confound the issue? Just thinking out loud here about how else we might partially account for the strong disagreement between SSR and surface T (adjusted) records.

    This 1974 article from the ‘Radio Times’ (a weekly magazine carrying TV and radio programme schedules), indicates that the drop in temperature might have been more pronounced at higher latitudes.

  29. oldbrew says:

    The Wikipedia entry headed ‘Impacts on the overall greenhouse effect’ has a table ranking the most important compounds, which is preceded by a comment in brackets: ‘Failed verification’.

    Source — https://en.wikipedia.org/wiki/Greenhouse_gas#Impacts_on_the_overall_greenhouse_effect

  30. tallbloke says:

    More historical news articles to screenshot and some AMO discussion in this video from Tony Heller.

  31. Ned Nikolov, Ph.D. says:

    @tallbloke (July 24, 2022 at 7:47 am):

    Roger,

    1. As you know, the net effect of clouds on surface temperature when averaged over all nights and days of a year is cooling, not warming! Since our analysis uses global annual data, the occasional observation that cloudy nights in some locations and times of year could be warmer than clear nights is irrelevant to our discussion here.

2. I'm not aware of a mechanism that would cause variable sensitivity of the global surface temperature to absorbed solar radiation depending on the magnitude of SSR change. The specific heat capacity of water varies with temperature, but this variation is minuscule over the range of ocean temperatures encountered on Earth (see: https://www.engineeringtoolbox.com/specific-heat-capacity-water-d_660.html). The only change in the temperature sensitivity to SSR would be the logarithmic dependence of ΔT on Δsa in Eq. 1.

3. I would not be surprised if the 1960 – 1980 cooling was more pronounced at higher latitudes compared to the Tropics. Variations of cloud cover, if indeed driven by Earth/solar magnetic fluctuations, are expected to be stronger at higher latitudes, where the Earth's magnetic field is weaker.

  32. Ned Nikolov, Ph.D. says:

Wow! Tony Heller finally got suspended from Twitter… It was about time! I got suspended last year.

Twitter has become the Nazi Gestapo of social media, a platform managed by young loony tunes with a fascist ideology, who are obsessed with censorship, like Jack Dorsey:

  33. Ned Nikolov, Ph.D. says:

    This recent well-sourced article by William Walter Kay posted on the Friends-of-Science blog is historically quite informative and complements the findings discussed in our SSR paper above:

    Margaret Thatcher and the Rise of the Climate Ruse

The anthropogenic global warming (AGW) hoax was created & funded in the early 1980s by Margaret Thatcher (UK's PM from 1979 to 1990) in an effort to defeat the UK's National Union of Mineworkers (NUM) and close Government-operated mines, since they were not profitable. On the advice of Sir Crispin Tickell to explore "Climate Change" as a promising anti-coal pretext, Thatcher funded (in the early 1980s) the Climate Research Unit at the University of East Anglia, led by Dr. Tom Wigley, to produce evidence that coal burning was warming the Planet by emitting CO2. UK miners went on a national strike in 1984. In 1986, Phil Jones and Tom Wigley published a paper in the journal Nature (https://www.nature.com/articles/322430a0) that presented the first global temperature record, from 1861 to 1984, showing no cooling between 1960 and 1982 and claiming that the warmest 3 years of the 20th century had all occurred in the 1980s. This record matched quite well (in terms of trend) the Keeling CO2 curve, which was more than 22 years long at that time.

This is how the largest manipulation of global climate data came to be! If Tom Wigley and Phil Jones had had the moral integrity of objective scientists and kept the 1960 – 1982 cooling episode in the temperature record, the AGW hoax would have disintegrated in the 1980s, and the World would not be in a "climate crisis" now. 🙂

  34. oldbrew says:

    FYI – possible link between solar cycle and albedo on Venus.

    Long-term Variations of Venus’s 365 nm Albedo Observed by Venus Express, Akatsuki, MESSENGER, and the Hubble Space Telescope (2019)

    5.3. Comparison with Other Planets

    Regardless of the cause of the observed albedo changes, the range of albedo variation on Venus is surprising. On the Earth, clouds play a considerable role as a buffer of possible climate variations and are also a regulator of the solar energy distribution (Stephens et al. 2015). However, the clouds on Venus are different; rather than supporting a stable solar heating rate, drastic variations of solar heating seem to occur as inferred from the 365 nm albedo. The astounding nature of the albedo variation results we present here is further emphasized by results derived from other planetary albedo studies in the solar system, where weaker long-term albedo variations were observed.

    https://iopscience.iop.org/article/10.3847/1538-3881/ab3120

  35. Ned Nikolov, Ph.D. says:

    @oldbrew,

Interesting long paper (https://iopscience.iop.org/article/10.3847/1538-3881/ab3120)! I noticed that it only discusses the Venusian albedo at a specific wavelength in the UV spectrum (365 nm). I wonder what the variation of the total shortwave albedo is. I expect it to be quite small…

  36. David A says:

Ned says:
"I'm not aware of a mechanism that would cause variable sensitivity of the global surface temperature to absorbed solar radiation depending on the magnitude of SSR change. The specific heat capacity of water varies with temperature, but this variation is minuscule over the range of ocean temperatures encountered on Earth (see: https://www.engineeringtoolbox.com/specific-heat-capacity-water-d_660.html). The only change in the temperature sensitivity to SSR would be the logarithmic dependence of ΔT on Δsa in Eq. 1."

Would not the SSR also change with the solar cycles? And would not variations in the wavelength (W/L) composition of solar insolation affect both the depth to which it penetrates the ocean and the quantity entering it? Energy deposited at depth is lost to the atmosphere for an undetermined time, with the disparate ocean residence times of that insolation depending on the spectral W/L of the flux.

There are only two ways to change the energy content of a system in radiative balance: either a change in the input, or a change in the residence time of that input. A small change in a very-long-residence-time input can accumulate, positively or negatively, into a great deal of energy.

  37. Ned Nikolov, Ph.D. says:

    @David A

    Changes of TSI (the above-atmosphere Total Solar Irradiance) are already taken into account by our calculations. See Fig. 4 and the related discussion in the above article. Insolation is not important when the mean annual global absorption of solar radiation is considered as it is in our case. The ability of oceans to absorb solar radiation coming to the surface should not change from one year to the next. In other words, the oceanic absorption of solar energy should only depend on the incoming solar flux, since the radiative properties of water are not expected to vary over time on a global scale.

  38. David A says:

    Ned N, thank you…
    “In other words, the oceanic absorption of solar energy should only depend on the incoming solar flux, since the radiative properties of water are not expected to vary over time on a global scale.”

That makes sense to me. Say a solar-cycle variation brings more IR, causing increased evaporation at the water surface (and even less penetration of those wavelengths below the ocean surface), and less blue & UV, so even less energy is deposited as heat 20 to 200 feet down in the ocean (actual penetration can reach up to 800 ft, with that energy having a very long residence time).

It is curious how, every SH summer, the Earth experiences 90+ W/m2 more insolation, yet the atmosphere cools! As always, the "residence time" of input energy within the system (atmosphere, oceans and land) is the key. We know residence time is reduced by increased NH albedo, yet the oceans also likely hide energy away from the atmosphere, for whatever residence time the disparate solar W/L has as it enters the oceans, more than they do in the NH summer (much more ocean receives the more intense SH summer insolation). I have not heard any formal conclusion to the question: does the Earth (land, oceans and atmosphere combined) increase or decrease its total energy content in the SH summer?

  39. dennisambler says:

    “The cooling episode indicated by the SSR data is corroborated by more than 115 magazine and newspaper articles published throughout the 1970s…”

    Even James Hansen was onboard, from 1981:

    Click to access hansen81a.pdf

“The most sophisticated models suggest a mean warming of 2° to 3.5°C for doubling of the CO2 concentration from 300 to 600 ppm. The major difficulty in accepting the theory has been the absence of observed warming coincident with the historic CO2 increase. In fact, the temperature in the Northern Hemisphere decreased by about 0.5°C between 1940 and 1970, a time of rapid CO2 buildup.

In addition, recent claims that climate models overestimate the impact of radiative perturbations by an order of magnitude have raised the issue of whether the greenhouse effect is well understood.”

  40. oldbrew says:

    Interesting Hansen paper, Dennis A. Here’s another quote:

Melting of the world's ice sheets is another possible effect of CO2 warming. If they melted entirely, sea level would rise ~70 m. However, their natural response time is thousands of years, and it is not certain whether CO2 warming will cause the ice sheets to shrink or grow. For example, if the ocean warms but the air above the ice sheets remains below freezing, the effect could be increased snowfall, net ice sheet growth, and thus lowering of sea level.

  41. Ned Nikolov, Ph.D. says:

    DennisAmbler,

Thank you for this 1981 reference by Jim Hansen. I have it in my climate repository folder, but had not looked at it for several years. Yes, back in the 1980s, and even up to 1999, Hansen was acknowledging the mismatch between the CO2 and temperature trajectories evident over several decades, until they solved the problem by altering observed temperatures to make them match the CO2 record.

NASA still shows on its website this 1999 Science Brief by Jim Hansen, where he admits that the US experienced a net cooling between the 1930s and 1999, and that this pattern of temperature evolution does not agree at all with the "rapidly increasing greenhouse gases":

    https://www.giss.nasa.gov/research//briefs/1999_hansen_07/

  42. Ned Nikolov, Ph.D. says:

Regarding Hansen's description of how the "greenhouse" warming works when CO2 increases, presented in his 1981 paper (https://climate-dynamics.org/wp-content/uploads/2016/06/hansen81a.pdf), I analyze this mechanism in my video "Demystifying the Greenhouse Effect" (starting at the 19:49 min mark) and show that it's based on no known physics! In fact, this mechanism was first proposed as pure conjecture (speculation) by Nils Ekholm in 1901. It became "settled science" through mere repetition, in the complete absence of empirical evidence to support it.

  43. oldbrew says:

    ‘Greenhouse Gas Effect Does Not Exist,’ a Swiss Physicist Challenges Global Warming Climate Orthodoxy

    Allmendinger’s experimental tests found no significant differences between the IR absorption capabilities of CO2, O2, N2, or Ar when thermal absorption was measured instead of spectrographic wave absorption. “As a consequence, a ‘greenhouse effect’ does not really exist, at least not related to trace gases such as carbon dioxide.”

    The global warming orthodox scientific community has rejected Allmendinger’s work as utter nonsense, arguing that he “is currently not affiliated with any reputable research institute or university.”



    – – –
Of course they know full well it's likely to be career suicide for anyone on an academic payroll to denounce or even question IPCC climate theories, no matter what evidence is put forward.

  44. […] Ned Nikolov: Does a Surface Solar Radiation Dataset Expose a Major Manipulation of Global Temperatur… […]

  45. Stephen Richards says:

    I heard an interesting remark from one of the UKMO forecasters about 2 days ago.

    ” temperatures will rise as the pressure rises in the next few days “

  46. oldbrew says:

    Scientists: The Global Warming Since 1985 Cannot Be Attributed To CO2 Forcing
    By Kenneth Richard on 8. August 2022

    Cloud modulation of shortwave radiation and greenhouse effect forcing has largely been the determining factor in the global warming of the last 45 years. Not CO2.

    https://notrickszone.com/2022/08/08/scientists-the-global-warming-since-1985-cannot-be-attributed-to-co2-forcing/

  47. Nelson says:

    Ned, do you have any thoughts on the weakening of the earth’s magnetic field strength? It seems to me that there is an empirical test of your ideas playing out in real-time.

  48. Ned Nikolov, Ph.D. says:

    Hi Nelson,

Yes, the Earth's magnetic field has been weakening for at least 200 years now, and we don't know why. Also, satellite observations show that Earth's magnetic field is much more complex than the dipolar model often used to illustrate it in textbooks. See this article about the recently discovered South Atlantic magnetic anomaly:

    https://www.esa.int/Applications/Observing_the_Earth/FutureEO/Swarm/Swarm_probes_weakening_of_Earth_s_magnetic_field

    I think our current understanding of what controls Earth’s magnetic field is in its infancy. I also think that the interplay between Earth’s magnetic field and solar wind is crucial in determining the rate of atmospheric loss to Space, which in turn controls variations of Earth’s atmospheric mass and total surface air pressure through geologic time. IMO, fluctuations of total surface pressure are responsible for the observed changes of Earth’s global temperature on time scales of thousands to millions of years.

Earth's magnetic field probably influences the flux of galactic cosmic rays reaching the lower troposphere, which impacts cloud formation and the short-term (multidecadal) variations of global temperature.

  49. Ian F says:

    Interesting article about NASA tampering with the raw temperature data for Iceland’s capital Reykjavík at:

    https://electroverse.co/snow-hits-alaska-bering-strait-sea-ice-refuses-to-melt-nasa-noaa-erase-arctics-1940s-warming/

  50. Mark says:

Climate scientist Dr. Ian Holmes also talks about pressure's effect on global temperature, and he does not believe that CO2 or CH4 cause measurable warming.

  51. Ned Nikolov, Ph.D. says:

    Mark,

Holmes took the pressure idea from our published research. He even adopted our main new term, "Atmospheric Thermal Enhancement". He made one mistake in a paper he wrote a few years ago, where he tried to use the Gas Law and nothing else to predict the average surface temperatures of several planets and moons:

    https://www.sciencepublishinggroup.com/journal/paperinfo?journalid=161&doi=10.11648/j.earth.20180703.13

The problem (which I brought up to him in an email) is that one cannot just use the Gas Law for such a prediction, because the Gas Law is one equation with 2 unknowns: temperature and air density. Solving deterministically for 2 unknowns requires 2 equations. Holmes used air density as a predictor of temperature, which is physically incorrect, since in the chain of causality, air density is always a function of temperature and pressure. Thermodynamically, air density in a planetary atmosphere never controls the temperature!

  52. Mark says:

    Good to know Dr. Nikolov! Thanks

  53. Nelson says:

Ned, if I use the molar form of the Ideal Gas Law, I can solve for T in terms of observable variables. If I plug in data from Antarctica, I get the average temp. The same holds at the Equator. No surprise. What has always bothered me is that if I adjust for an increase in CO2 from 400 to 800 ppm, I don't get much of a change in T. To get a 3-4 degree increase, as some suggest, P has to increase significantly. Why would a 400 ppm increase in CO2 cause a big increase in measured pressure? It seems to me that the belief in a large positive ECS is inconsistent with the Ideal Gas Law.

  54. Ned Nikolov, Ph.D. says:

    Nelson,

Yes, standard atmospheric thermodynamics does not support at all a 3 K surface warming per 280 ppm CO2 increase as predicted by climate models. That's because CO2 is thermodynamically no different from any other non-condensing gas in the atmosphere, such as nitrogen or oxygen. The real effect of CO2 on surface temperature is only through its contribution to total pressure, which is minuscule in the Earth's atmosphere.

In our article on climate sensitivities from May 2022, we quantify the exact response of Earth's global temperature to a unit change of total pressure. It is 0.161 K/kPa, which implies that a CO2 doubling compared to the pre-industrial level (i.e. a 280 ppm net increase) would only produce a 0.0044 K surface warming, provided that Earth does not lose any atmosphere to Space during the time of CO2 doubling, which is unlikely:

    Ned Nikolov & Karl Zeller: Exact Calculations of Climate Sensitivities Reveal the True Cause of Recent Warming

  55. Ned Nikolov, Ph.D. says:

In regard to using the Ideal Gas Law to calculate ambient temperatures, it's important to remember that air density is NOT a predictor of temperature. So, rearranging the Gas Law into the form T = P/(R ρ), where ρ is the molar density of air (mol m-3), is not physically correct, because ρ physically depends on P and T. It's the temperature T that determines in part the air density ρ, not the other way around!
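[ed. – A numeric illustration of the T = P/(R ρ) rearrangement, with assumed sea-level values. It makes the point above explicit: ρ must already be known (i.e. measured), and ρ itself depends on T, so the rearranged Gas Law is a consistency check, not an independent prediction of temperature:]

```python
# T = P/(R*rho) with assumed sea-level values (illustrative only).
R = 8.314        # J mol^-1 K^-1, universal gas constant
P = 101325.0     # Pa, assumed mean sea-level pressure
rho = 42.3       # mol m^-3, assumed molar density of near-surface air

T = P / (R * rho)
print(f"T = {T:.1f} K")   # ~288 K, close to the accepted global mean
```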

  56. dennisambler says:

    Interesting series at Judith Curry site, this is the first of four papers.

    The Sun-Climate Effect: The Winter Gatekeeper Hypothesis (I). The search for a solar signal

  57. Nelson says:

Ned, thanks for your responses. Your caution about solving for temperature is well taken. Forget what's causing what: we can measure all of the variables. If the gas laws work, it had better be the case that the observed temperature equals what comes out of the gas law equation when we plug in observables on the RHS. That the temperatures match is no surprise; we use the term "law" for a reason. What has always bothered me is what has to happen when you plug in an additional 400 ppm of CO2 and want temperatures to rise by 4°C.

    I want to call your attention to a post at NoTrickZone. https://notrickszone.com/2022/08/18/new-studies-claim-the-more-co2-in-the-venus-atmosphere-the-colder-it-gets/

    It is not so much the review of the research that I found interesting, it’s the comments.

    Here is part of the first comment by the author, which is one of many.

LOL@Klimate Katastrophe Kooks, 18. August 2022 at 11:05 PM:
    That’d be because the so-called ‘greenhouse gases’ (polyatomic radiative molecules) are actually net radiative atmospheric coolants at prevalent Earth temperatures, and it is the monoatomics and homonuclear diatomics which are the ‘greenhouse gases’ (see below)… the climastrologists have flipped reality, flipped causality, on its head to bolster their CAGW scam… because the easiest lie to tell is an inversion of reality. One needn’t invent new physics to explain their lie, and most people can’t tell the difference between reality and flipped causality because they’re not technically inclined. They cannot ascertain that the climastrologists’ claims violate 2LoT in the Clausius Statement sense, the Entropy Maximization Principle and Stefan’s Law, amongst many other physical laws.

    They can only get away with this sophistry because they claim that radiative energy transfer is the predominant mode of energy transfer on this planet.

    That’s another of their half-truths hiding the bigger lie of CAGW.

    It is true that the only way our planet, for this instance considered to be a system (and space an infinite heat sink), can remove energy from the system is via radiative emission to space… but they neglect the fact that conduction (air atoms and molecules contacting the surface to pick up energy), evaporation and convection remove ~76.2% of all surface energy, that energy being convectively transited to the upper atmosphere, where it is then radiatively emitted, with the vast majority of it being rejected to space due to the mean free path length / air density / altitude relation.

    That is why, for example, water vapor acts as a literal refrigerant (in the strict ‘refrigeration cycle’ sense) below the tropopause…

  58. Nelson says:

    As I read through the comments on the Venus thread, several questions pop that I think are relevant to your work or you might have insight into.

1) I liked the Grand Canyon example. I've hiked to the bottom; it's hot. How do you explain the canyon floor being 10 degrees hotter when it only gets 75% of the insolation?

2) I did not understand the explanation that the wet lapse rate being lower than the dry lapse rate shows water vapor is a coolant. I understand the assumption: taking an emissive height of 5 km at 255 K and working down to the surface using the lapse rate, the surface temperature comes out hotter in the dry world. But wouldn't the emissive heights in wet and dry atmospheres be different? I am confused by this.

3) The chart linked from a Maria Hukuba presentation is interesting. It shows that in the tropics, both H2O and CO2 are coolants throughout the atmosphere. The exception is that CO2 warms around the tropopause. It is hard to claim that GHGs are warming the tropics when the data show they cool it.

    There are many more interesting nuggets. I would also be curious what you think of Willis E’s recent posts on the radiative flows at WUWT.

Finally, suppose you and others like you are correct on the physics. Can the physics world switch gears, given all they have invested in the GHE?

  59. Mark says:

    Dr. Nikolov, are you aware of this multipart series by Javier Vinós & Andy May?

    The Sun-Climate Effect: The Winter Gatekeeper Hypothesis (VI). Meridional transport as the main climate change driver

    What do you think? I know it’s long. Omg!

  60. Ned Nikolov, Ph.D. says:

    Hi Mark,

    This is indeed a long article. It’s well written, but it provides little new information. It regurgitates a number of old and incorrect ideas/claims (despite mounting evidence to the contrary) such as:

    – Climate being controlled by the outgoing longwave radiation (OLR) at the top of the atmosphere (TOA). In reality, climate is driven by changes in cloud albedo on multi-decadal time scales, and atmospheric mass & surface air pressure on millennial time scales and beyond. Variations of surface solar radiation provide diabatic forcing, while changes of total pressure force the climate system adiabatically. There is still no understanding of the fact that the atmosphere warms the surface adiabatically (through pressure), not radiatively through LW radiation, and that non-condensing trace gases have no measurable impact on the global climate!

    – Ice ages being caused by Milankovitch orbital cycles. Data clearly show no relationship between variations in eccentricity and/or obliquity and the cycles of Ice Ages over the past 800 ky.

    – El Nino events being caused by the release of heat from Equatorial Pacific. Satellite observations of cloud cover and ground measurements of surface solar radiation (SSR) indicate that ENSO cycles are caused by variations of cloud albedo and the resulting changes of SSR.

  61. Mark says:

    Thanks for responding Dr. Nikolov. Do you have any interest in responding to that article at that website? At least it may cause the authors to look at pressure effects.

  62. Ned Nikolov, Ph.D. says:

    Mark,

    Yes, this might be a good idea, IF the authors actually read and care about feedbacks…

    Since our adiabatic concept of the atmospheric thermal effect was published in 2017, I have sent the link to our paper to over 200 climate scientists and the publisher’s website indicates that the paper has been viewed over 34,000 times and had over 3,500 downloads:

    https://www.omicsonline.org/open-access/new-insights-on-the-physical-nature-of-the-atmospheric-greenhouse-effect-deduced-from-an-empirical-planetary-temperature-model.php?aid=88574

Yet, the gravy train of the "greenhouse" theory continues to roll full speed ahead, as if no new science had been published. The confusion & delusion are so profoundly deep that I think it may take another 30 years for the science establishment to start realizing the truth.

  63. Mark says:

    Dr. Nikolov, I think your research needs to get out of the academic sphere and into the public sphere. Write a popular press book, maybe with a coauthor, that will get on the New York Times best seller list. Steven Koonin’s Unsettled comes to mind.

  64. Ned Nikolov, Ph.D. says:

    Yes, I’ve been planning to write a popular book after we get a few more papers published over the next 12 months… I had over 27,000 followers on Twitter until last summer, when the Nazi CEOs of Twitter suspended my account. So, the word about our climate research and discovery has been getting out, but more needs to be done…

  65. hunterson7 says:

Why, yes. Yes, the databases have been corrupted by the fear mongers. They lie like their fortunes depend on it.

  66. https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2021GL094888

The article says the albedo changed over the last 20 years: by 0.5 W/m2 according to CERES and by 1.5 W/m2 according to Earthshine. The authors dismiss the 1.5 figure, saying that 0.5, plus 0.6 W/m2 from CO2 modelling, gives a match to the GISS standard.

    Why did they dismiss Earthshine?

    If Earthshine is correct, IPCC is toast.
Back in Galileo's day, the church authorities wouldn't look through his telescope. They said what was visible was bullshit designed to confuse Believers. Is that why 1.5 is dismissed out of hand?

It's so perplexing. Two movies on one screen. And it's taking longer than we thought to see the downturn in any temperature dataset.

    [mod] their ‘highlight’: Press Release—Earth is dimming due to climate change

  67. Ian F says:

    I made a simple calculation using N&Z earth data.

The surface atmospheric pressure of Earth is 98,550 pascals. This is numerically the same as an energy density of 98,550 joules per cubic meter. If the mass density of air at 286 K is 1,200 grams per cubic meter and the specific heat capacity is 1.007 joules per gram per kelvin, then, since:

Heat capacity (C) = Q / (mass (m) × change in temperature (ΔT))

Change in temperature (ΔT) = Q / (m × C) = 98550 / (1200 × 1.007) ≈ 81.5 K

This says that atmospheric pressure increases temperature by 81.5 K. If the no-atmosphere temperature is 197.3 K (same as the Moon), the surface air temperature should be about 197.3 + 81.5, which is 279 K. If this calculation is physically correct, it quantifies the thermal effect of the work energy imparted to Earth's atmospheric mass by gravity. Thus, there is no need for the physically impossible AGW "back radiation" hypothesis to explain why the Earth's surface temperature is much higher than the no-atmosphere temperature.
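[ed. – A reproduction of Ian F's arithmetic in Python. All inputs are his numbers as quoted above, not independently verified values:]

```python
# Ian F's calculation: pressure treated as an energy density, divided by
# the air's heat capacity per cubic metre (inputs are his stated figures).
Q = 98550.0    # J m^-3 (numerically equal to surface pressure in Pa)
m = 1200.0     # g m^-3, stated air density at 286 K
C = 1.007      # J g^-1 K^-1, stated specific heat of air

delta_T = Q / (m * C)
print(f"Delta T = {delta_T:.1f} K")                     # ~81.5 K
print(f"Implied surface T = {197.3 + delta_T:.1f} K")   # ~278.9 K
```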

  68. craigm350 says:

Ned, thank you for your reply back in July. I forgot to check back!

I thought you might find some of these old articles interesting, as they reflect Lamb's comments. I'll put them over two comments so as not to trigger WordPress's spam filter.

    1963 Lamb comments on the NH and intimates the cooling over the NH

    Thomas Karl in 1989 saying no warming from 1921-1979

  69. craigm350 says:

Slipped fingers on the last post! Pls delete. Thanks.

    Karl & Jones noting UHI bigger in US than the temp trend. Not sure of year but mid 80s possible.

    This Times article from 1981 points out recent warming but does say there had been cooling.

In a different article, from 1978, Lamb refers to a global cooling period from 1950-73 (possibly ongoing, as he states).

Lamb noted the funding that went into computer models in the late 70s, writing before his death:

    the large share of the funds has gone to support the theoretical modelling of all aspects of the carbon dioxide problem and other human disturbances of the environment

In 1997 he wrote:

    Since my retirement from the directorship of the Climatic Research Unit there have been changes there… My immediate successor, Professor Tom Wigley, was chiefly interested in the prospects of world climate being changed as a result of human activities,…After only a few years almost all the work on historical reconstruction of past climate and weather situations, which had first made the Unit well known, was abandoned

  70. I don’t understand why we have so many apparently valid arguments against specific and critical aspects of the science behind Climate Change, and yet they go nowhere.

I can understand it if they are fundamentally wrong. But if they are EQUALLY correct for only a portion of the warming, and other bits are also equally correct, the narrative falls down. CO2 has to be the only controlling variable, per the narrative. A weakening of that one variable collapses the crisis. I don't know why even the skeptics can't agree as a group on the one or two key problems.

    After 34 years we are still tilting at windmills, each to our own. This should be an alarming situation for all serious skeptics.

  71. craigm350 says:

Douglas – Lindzen mentions in this podcast with Tom Nelson how the whole thing should have been dismissed out of hand rather than countering all the individual bits of flak they threw up. Tbh it's the same with most infiltrated fields run by the Mao adherents, where you waste all your breath arguing against their nonsense, thereby giving their lunacy credence.

    As Piers Corbyn commented of it “I remember talking to #RichardLindzen then at a meeting in The Royal Society, saying “It’s crazy”. He said something like: “Don’t worry it’ll all blow over”.
    I said much later that we should have just kicked it out on basics. Arguing the details of their nonsense gave it cred.”

  72. And yet it works.

I follow American politics. The activists in power scream impossibly ridiculous things, repeat outright lies, and dismiss out of hand any position the other side takes. And they suffer no consequences. The covid19 foolishness was just an expression of this strategy to hold and grow power for, as far as I can see, the pleasure of holding power… or is it the profits of holding power? Maybe different sides of the same coin.

    And it’s successful.
The mean kids of high school became our masters, the picked-upon became the legally oppressed, and the rest of us learned to look away. Perhaps nothing has changed.

    Until the Sheep Look Up (novel?), is this our future?

  73. craigm350 says:

Indeed it works, possibly because whilst we are, in good faith, expecting the other side to be reasonable and engaging with them, busy playing whack-a-mole with their numerous claims, it's all just a distraction to drain our energy. They are busy setting and enacting the agenda – and, more importantly, policy. It's as if we're playing by the Marquess of Queensberry rules whilst they are kicking us in the nether regions. Our agenda is usually truth; theirs is, as you say, power – so rules don't apply.

It really got rolling with climate, and climate has been a great boon for many in the era of 24/7 weather porn to view at our leisure. Climate was the proving ground for the playbook in my lifetime, and well covered here at the Talkshop, I might add.

We have to understand who we are dealing with at times. Some, fully on board with the Borg/narcissistic mind virus, perceive anything they don't agree with as a threat and act accordingly. We live in a very narcissistic culture that fuels such behaviour. Like the privileged idiot who set his arm alight the other day.

However, I don't believe arguing for truth is a losing game; it's just that we have to recognise who we are dealing with. Tony Heller, Ben Pile and myself, for example, were on board (to varying degrees) until the late 00s, but pretty much self-woke from it because we valued scientific truth over the message. For Millennials and younger, the whole system has been cultivated to facilitate certain existing power structures, so it's harder, but not impossible, to have a truth-awakening moment and break the programming. However, once in a system you are rapidly siloed and institutionalised. It affects your thinking. Dissenters and questioners fall rapidly by the wayside. The climate religious system has been churning them out for years now, and up the ladder they go. I'm sure many higher up do know what they are doing, but those on the spectrum of cluster B personality disorders tend to find it beneficial to promote a lie and actually don't care. My Lew Paper and Mannian observations come to mind there. There is so much money and power in many areas of the gravy train and, for some, that has a magnetic attraction. The rest just follow.

    It is the Military-Industrial Complex at large as Eisenhower warned:

    Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.

The prospect of domination of the nation's scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded.

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite…

But I do not believe the truth is a lost cause. Like the koof shenanigans, people can wake up if we plant the seeds of doubt, but we can't expect seeds to grow wherever we throw them, as the parable tells.

    My question from all this relates to what Ned said a couple of weeks back:

Yet, the gravy train of the "greenhouse" theory continues to roll full speed ahead, as if no new science had been published. The confusion & delusion are so profoundly deep that I think it may take another 30 years for the science establishment to start realizing the truth.

Unfortunately that won't be allowed to happen with the current lunatics at the administrative wheel. I can't say I relish the looming climate lockdowns and associated digital CCP-style enslavement, nor the looming winter of discontent. How much time do we really have?

  74. I really can’t see a solution without the realism brought upon by catastrophe.

Shellenberger has a brilliant synopsis of the reasons why progressives ruin cities and refuse to fix the problems of homelessness, addiction, crime and public disorder in San Fransicko, summing up on pp. 247+.

It's (I boil it down to) a terminal bug of a philosophy that sees being driven by intent and goals as unchallengeable. Pragmatism, empiricism and results-oriented approaches are simply not acceptable. It's the communist student problem of denying that the concept of communism is flawed, blaming failure only on the implementation.
It is observable that all these progressive "fixes" affect Others, not the fixers, just as Kerry's fulminations against a carbon-heavy lifestyle don't affect his or his associates' jet-setting and enjoyment of wealth. For the top, I agree with you, it's self-serving and hypocritical, but for the rest I wonder if there is a pathological blindness rooted in cognitive dissonance. What will it take to open their eyes?

History is not supportive of eyes opening easily. Russia, China, North Korea and Cambodia should rise up, but they won't. The Pilgrims had to flee Europe. Would blackouts and freezing in the dark do it? Disasters like that didn't work in the 20th century. With modern communication, would it work now? I'm not convinced.

I hate to be a downer, but I'm not impressed with our ability to bring reason and the concerns of the little people to the forefront of political policy.

  75. Ned Nikolov, Ph.D. says:

    @craigm350 (September 25, 2022 at 8:49 pm ):

Thank you for your great comment! I think the truth about climate change will eventually come out, and the falsehood of the AGW narrative will become fully apparent when Mother Nature stops "cooperating" and a major global cooling sets in. So, the climate "establishment" is up against not just the scientific facts, but nature as well… 🙂

  76. Ned Nikolov, Ph.D. says:

    @craigm350 (September 25, 2022 at 1:14 pm):

Thank you for these old news reports. They show how, in the midst of a lack of understanding about natural climate drivers, opportunistic scientists invoked a completely unphysical & ridiculous concept from the 19th century, the "greenhouse" theory, to explain climate change… Also, the 1981 paper by Hansen et al. (https://pubs.giss.nasa.gov/docs/1981/1981_Hansen_ha04600x.pdf) provides the earliest evidence for a major temperature data manipulation to support a false theory by removing the steep 1960-1980 cooling episode from the global record. This cooling is now documented by the new Surface Solar Radiation dataset (Yuan et al. 2021) discussed in my blog post above.

  77. Ned Nikolov, Ph.D. says:

Speaking of fraudulent data manipulations, we recently uncovered robust numerical evidence that the global atmospheric CO2 record (a.k.a. the Keeling CO2 curve) is actually the result of a model simulation rather than real measurements! In other words, the bedrock of the AGW narrative, i.e. the claimed exponential increase of atmospheric CO2 concentration since 1959 (see https://gml.noaa.gov/ccgg/trends/), is a fabrication produced by a simple & unrealistic model, entirely driven by industrial CO2 emissions, that has no response to global temperature variations!

    We are now working on a paper documenting our discovery and data analysis.

That would be an extraordinary finding. It would have to include the last 30 years, to say that nobody today is simply measuring the CO2 in air samples. Actually, including right now.

  79. Ned Nikolov, Ph.D. says:

    @douglasproctor,

Yes, it is an extraordinary finding that is also readily demonstrable. Most amazingly of all, the clue leading to our discovery that the official CO2 record is a model fabrication came from a 1975 paper by Wallace Broecker, a highly respected researcher in modern climate science and an adamant proponent of AGW, who died in 2019. In his 1975 paper "Climatic Change: Are We on the Brink of a Pronounced Global Warming?", published in the journal Science, Broecker presented results from a CO2-prediction model based on very simple and highly unrealistic assumptions, whereby atmospheric CO2 is only a function of industrial emissions and accumulates indefinitely in the atmosphere, while exhibiting no response to temperature. The predictions made by his model from 1900 to 2000 match remarkably well the CO2 concentrations supposedly "measured" decades later, both in the atmosphere between 1975 and 2000 and in ice cores for the period from 1900 to 1960.

    Using Broecker’s unrealistic model assumptions and reported industrial CO2 emissions, we were able to accurately reproduce the “measured” official CO2 record from 1890 to 2021. This implies that the ice-core based CO2 data for the 1890 – 1960 period, which were finalized in 2013, have also been manipulated to fit this ridiculous model.

    Our paper will present a lot more details regarding this discovery…
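[ed. – For illustration, here is a toy sketch of a Broecker-style accumulation model of the kind described in this comment. It is not N&Z's actual code; the airborne fraction, the 2.13 GtC-per-ppm conversion, and the emission series are all illustrative assumptions:]

```python
# Toy Broecker-style model: CO2 = baseline + airborne fraction of
# cumulative emissions, with no response to temperature (the assumption
# criticized above). All parameters here are illustrative.
GTC_PER_PPM = 2.13        # GtC needed to raise atmospheric CO2 by 1 ppm
AIRBORNE_FRACTION = 0.55  # share of emissions assumed to stay airborne forever
C0 = 280.0                # ppm, assumed pre-industrial baseline

def co2_from_emissions(annual_emissions_gtc):
    """Return the modeled CO2 series (ppm), accumulating emissions forever."""
    co2, series = C0, []
    for e in annual_emissions_gtc:
        co2 += AIRBORNE_FRACTION * e / GTC_PER_PPM
        series.append(co2)
    return series

# Illustrative run: emissions growing ~2%/yr from 5 GtC over 60 years
emissions = [5.0 * 1.02**i for i in range(60)]
print(f"Final CO2: {co2_from_emissions(emissions)[-1]:.0f} ppm")  # rises steadily
```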

  80. Ned Nikolov, Ph.D. says:

    This graph compares “measured” atmospheric CO2 concentrations with model calculations based on highly unrealistic assumptions by Broecker (1975). The close agreement between modeled and measured CO2 values over a period of 140 years indicates that the official CO2 record is likely produced by a model rather than derived from actual measurements (note that, after 1959, the modeled green curve completely overlaps the measured blue curve):

    The plot below compares annual atmospheric CO2 measurements during the 1959 – 2021 period provided by NOAA to our model estimates utilizing Broecker’s unrealistic assumptions and reported global industrial CO2 emissions. The almost perfect match between the two curves suggests that the NOAA atmospheric CO2 record is simply a model simulation driven by human carbon emissions!

Back in early 2020 I graphed the covid data coming from China. Within 2 months I knew it was fake, because the data plotted to an r of 0.98. I've worked with real data for decades: poorly collected, biased, and not geographically or statistically valid, which the covid data must also have been. But it wasn't. There were also sudden overnight jumps in data ratios. It had to be created, not collected.

The CO2 data is clearly perfect. CO2 is, supposedly, perfectly mixed at a planetary scale in the middle of the Pacific Ocean. HOWEVER, it is collected on top of an old volcano (I've been there) and across the valley from another, and it is surrounded by an ocean that absorbs and degasses CO2. Maybe not as much as other oceans that have a lot of phytoplankton, but some. So I wonder how much of the CO2 is planetary vs local.
I can't recall if other stations collect and analyze CO2. Do they? That would be your evidence of fraud.

  82. Just thinking. Perhaps you have cause and effect reversed.

The NOAA belief about atmospheric CO2, I think, has two assumptions:
1) Without fossil fuels, atmospheric CO2 would be a steady 280 ppm. All "natural" CO2 production is equalled by its consumption, including wood burned and grown, ocean degassing and absorption.

2) As a consequence of 1), any increase in CO2 MUST be from fossil fuels.

The story then goes: we estimate the amount of fossil fuel usage by modeling the Keeling curve, and back-calculate coal, NG, leakage, agriculture and deforestation using best estimates of the RATIOS of those sources. We bulk them up once done to match the curve.

    Keeling started the CO2 data collection long before Climate Change corrupted the science. My bet is the science uses the curve to “correct” the fossil fuel usage rather than using the usage to create the curve.

To check, see if claims of historical usage from 1750 to 1990 have changed significantly since the 1980s.

  83. Ned Nikolov, Ph.D. says:

    @douglasproctor,

    I did not elaborate on all details in my comments above, but the evidence clearly indicates that emissions have been used to generate the official CO2 curve, and that this curve could not possibly be a result of real measurements.

As for how this fabrication has been maintained for over 6 decades, I can only speculate at this point. However, it's a fact that the official global CO2 record comes from just 4 observatories, all managed by NOAA and located in Alaska, Hawaii, American Samoa, and at the South Pole. They also have a few tall towers with incomplete records, but these are of secondary importance. So, the global CO2 dataset really comes through a relatively "narrow gate". The CO2-gas calibration tanks used to adjust the infrared spectrometers are prepared by 2-3 entities also controlled by NOAA… Our research focused on investigating the validity of the CO2 record, not the logistics of its creation.

  84. The correlation is the smoking gun, right?

The measurements seem straightforward. The estimates of CO2 from fossil fuel use are, IMHO, questionable. I can't believe the estimates were in lockstep with the readings back in 1975 to 1990 unless one or the other was fudged to match.

You'd need to see how and what historical fossil-fuel CO2 was estimated to be in the earliest days, to see whether the usage was matched to the Keeling curve or the Keeling curve to the usage. It would be unreasonable to think the usage estimated from 1750 was the same in 1975 as in 2022. The first work is always cruder than the later. Temperatures have been changed, after all.

If the estimate of historical CO2 production up to 1975 is the same in 2022 as it was in 1975, I say the usage was determined from the Keeling curve from the beginning. If the usage estimates changed and yet the curves matched from the beginning, I'd also say the curve determines the usage.

    If the curves are constructions from modeling, what else are the models based on but estimates of usage?

    There should be a mismatch between CO2 measurements and historical usage. Since you don’t find one, I’d question the usage data before claiming the curve is made to match the usage.

  85. Ned Nikolov, Ph.D. says:

    douglasproctor,

    The conclusion that the CO2 curve is calculated (modeled) from emissions is based on these 2 independent facts:

1. This curve shows no sensitivity to inter-annual global temperature variations, which is impossible if it were based on real measurements, because CO2 fluxes from any natural source/sink are strongly dependent on temperature (i.e. photosynthesis, respiration, and CO2 solubility in water);

2. The exponential shape of the CO2 curve requires that the airborne fraction of industrial carbon emissions stays and accumulates in the atmosphere forever, while isotopic carbon studies have revealed that the CO2 residence time in the atmosphere is only 4-5 years (and at most 15 years).

    Hence, the Keeling CO2 curve could not have possibly been measured!
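[ed. – A toy one-box sketch of point 2 above, under the comment's own assumptions: a 5-year atmospheric time constant, constant emissions, and 2.13 GtC per ppm. Under these assumptions the airborne excess plateaus rather than growing without bound, which is the contrast the comment draws with the indefinite-accumulation model:]

```python
# One-box model: dC/dt = emissions - (C - C0)/TAU, stepped yearly.
TAU = 5.0          # years, assumed residence time (as asserted above)
E = 10.0 / 2.13    # ppm/yr added by assumed constant 10 GtC/yr emissions
C0 = 280.0         # ppm, assumed equilibrium concentration

c = C0
for year in range(200):
    c += E - (c - C0) / TAU   # emissions in, relaxation back out
print(f"CO2 after 200 yr: {c:.0f} ppm (plateau near {C0 + E * TAU:.0f} ppm)")
```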

  86. I sure agree it’s suspicious in its perfection.
    It’s the weirdest thing to question a curve based on what we understood was simply taking a sample of air and analyzing its content.

    It’s a conspiracy of the basest sense if true.

My skepticism doesn't go that deep. I bet two things: the usage data has been massaged to better reflect the curve, AND the take-up rate has been modeled as increasing with time and concentration to fit the curve perfectly.

    If you were to argue about the oceanic pH claims, I’m all over that.

    I look forward to your paper. I want to see how you demonstrate the modeling of CO2 buildup is accurate while the Keeling curve is manufactured to match modeled buildup.

  87. Ned Nikolov, Ph.D. says:

    douglasproctor,

I see that you still do not fully grasp my argument: the CO2 buildup in the atmosphere IS the Keeling curve itself, and it is totally unrealistic, because atmospheric CO2 has a short residence time (4-5 years) and human carbon emissions represent less than 5% of the gross natural CO2 fluxes coming out of and going into the surface (land and oceans). Variations of global temperature can (and do) perturb the global balance between natural emissions and natural uptake of CO2 from year to year. These perturbations can easily offset the annual human emissions, which means that real CO2 measurements should show a fluctuating (up and down) atmospheric CO2 concentration throughout the decades, instead of the exponential (ever-increasing) CO2 levels evident in the official record for 140 years, of which the Keeling curve is an essential part!

In other words, I'm arguing that the suggested CO2 buildup in the atmosphere is NOT real at all, and that the Keeling curve is a model simulation that has little to do with actual measurements! Luckily, we do have a long record of real CO2 measurements in the atmosphere, made between 1800 and 1960 using chemical methods. These measurements were carried out by independent teams and labs, some by Nobel Prize winners in chemistry. Beck (2007) summarized 90,000 such measurements in a paper published in Energy & Environment. I have downloaded and processed his dataset. This plot shows the result and compares it to the ice-core-based CO2 record (note the decadal fluctuations of atmospheric CO2 in the Beck dataset!):

    The Beck dataset has the look & feel of real measurements characterized by noise and potential outliers, while the Keeling curve lacks such characteristics and looks totally synthetic.

    This graph compares Beck’s 5-year smoothed CO2 curve between 1870 and 1960 to the official CO2 record:

None of the chemically-based CO2 measurements conducted over a period of 180 years have been included in the official CO2 record or were considered by the IPCC. I think you can figure out why… A CO2 atmospheric concentration of 400 ppm in the 1800s and during the 1940s completely invalidates the narrative of a human-controlled CO2, which is essential to the anthropogenic global warming claim.

Now I get you. I am shocked at the data variability. I assumed winds would mix the atmosphere so quickly that local changes would immediately disappear. I guess they don't.

Sorry, I'm dense. Mauna Loa "analyzes" the air multiple times a year, I understood.

I had wondered about the outgassing of the volcanoes in Hawaii and the adjacent ocean. And adjacent oceans elsewhere.

The detailed CO2 curve does show the seasonal change between summer and winter. The curve we are shown has that removed. There must be far more powerful smoothing, or dare I say homogenization, going on.

Maybe those 5 sites each have a radically different local nature that is area-balanced in the end.

The processes for producing the Keeling curve aren't public?

  89. Ned Nikolov, Ph.D. says:

The Beck dataset shows what real CO2 measurements should look like.

    The Mauna Loa observatory is supposed to measure atmospheric CO2 hourly. They have published detailed measurement protocols such as this one: https://gml.noaa.gov/ccgg/about/co2_measurements.html

However, this means nothing, since the evidence clearly shows that the final product does not have the expected characteristics of an actually measured CO2 record, as I've explained above.

    The 4 NOAA Observatories providing data for the official global CO2 record report very similar time series that mainly differ in the seasonal amplitudes of CO2 variations as shown on this plot:

    Note that all 4 records have the same synthetic look that is indicative of model-generated data series…

  90. Ned Nikolov, Ph.D. says:

This plot illustrates the total lack of a temperature response in the official ice-core-based CO2 record compared to the Beck CO2 series. The Beck curve has been shifted 3 years backwards to account for an observed 3-year lag of air-measured CO2 with respect to global temperature:

The Beck CO2 series makes perfect sense in terms of its features: it follows variations of global temperature with a 3-year lag, as expected from an ecological & biophysical standpoint, since temperature controls the rates of all natural CO2 sources and sinks.
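[ed. – A minimal sketch of the lag test described here, using synthetic stand-in data rather than the actual Beck or temperature series: shift one series by k years and find the lag that maximizes the correlation:]

```python
# Find the lag (in years) at which a CO2-like series best tracks a
# temperature-like series. Data are synthetic, for illustration only.
import numpy as np

rng = rng = np.random.default_rng(0)
temp = rng.standard_normal(100).cumsum()                  # stand-in temperature
co2 = np.roll(temp, 3) + 0.3 * rng.standard_normal(100)   # CO2 lagging T by 3 yr

def best_lag(x, y, max_lag=10):
    """Return the lag of y behind x giving the highest correlation."""
    corrs = {k: np.corrcoef(x[:-k or None], y[k:])[0, 1]
             for k in range(max_lag + 1)}
    return max(corrs, key=corrs.get), corrs

lag, corrs = best_lag(temp, co2)
print(f"Best lag: {lag} years, r = {corrs[lag]:.2f}")   # expect lag = 3
```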

  91. craigm350 says:

@Ned – your finding that the CO2 curve may be modeled rather than observed is not surprising to me – I've seen temperature fitted to the CO2 curve in real time this past decade. As you say, it should vary based upon where and when (season, time) the sampling is done, as seen in earlier records, which I never looked into in my alarmist days, as I took it on faith. Averages and anomalies can hide so many things if your intent is nefarious, ideological or opportunistic.

I happened to listen to a podcast the other day with Tom Nelson, where Will Happer more or less stated he believes in the CO2 record at Mauna Loa, essentially because of its consistency with other global sites and what I would call faith in the process, including Keeling's work. In computing, engineering and life, you often need to go back to the basics, the foundational blocks, to find the cause. A 1974 address by Feynman captures it well:

    We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.

    Why didn’t they discover the new number was higher right away? It’s a thing that scientists are ashamed of–this history–because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong–and they would look for and find a reason why something might be wrong. When they got a number close to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that…
    […]
    When I was at Cornell, I often talked to the people in the psychology department. One of the students told me she wanted to do an experiment that went something like this–it had been found by others that under certain circumstances, X, rats did something, A. She was curious as to whether, if she changed the circumstances to Y, they would still do A. So her proposal was to do the experiment under circumstances Y and see if they still did A.

I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person–to do it under condition X to see if she could also get result A, and then change to Y and see if A changed. Then she would know that the real difference was the thing she thought she had under control.

    She was very delighted with this new idea, and went to her professor. And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time. This was in about 1947 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happened.

    Nowadays, there’s a certain danger of the same thing happening, even in the famous field of physics. I was shocked to hear of an experiment being done at the big accelerator at the National Accelerator Laboratory, where a person used deuterium. In order to compare his heavy hydrogen results to what might happen with light hydrogen, he had to use data from someone else’s experiment on light hydrogen, which was done on different apparatus. When asked why, he said it was because he couldn’t get time on the program (because there’s so little time and it’s such expensive apparatus) to do the experiment with light hydrogen on this apparatus because there wouldn’t be any new result.

    A view of science worth reflecting upon

    Believing that science had resolved this issue was naive at best, even on Feynman’s part. It’s known as hubris and a lack of humility. If multimillion- and billion-dollar companies can make fundamental and catastrophic errors, so can science – except it can’t, of course. As I have discovered over the past decade, scientists are not human, because to be human is to be flawed and prone to navel-gazing.

    Errors and assumptions easily become inviolable and, like a lie, are strengthened by repetition. It’s human nature: “we’ve always done it like this.” No one questions the fundamentals, and in certain fields questioning the fundamentals is blasphemy once politics and money become the lifeblood of a theory.

    Besides the settled-science mantra, something I hear repeatedly in climate science is “the physics is sound”, “built upon physical models” – yet those models are based upon dry-air models that exclude the spectral powerhouse of water vapour.

    For the decade-plus I’ve followed the temperature data, it has been manipulated (always in one direction) to the point where it no longer agrees with the observations reported at the time but miraculously matches the CO2 record. At every step there has been reward for this behaviour. Those like Lamb were isolated, excluded and forgotten, and had their records rewritten in the digital age, because they pointed to the discrepancy between theory and experiment:

    According to the US Standard Atmosphere 1976 carbon dioxide [CO2] had a “fractional volume” of 0.000314 [314 ppmv] in dry air at sea-level whilst the latest atmospheric CO2 graphic for the Mauna Loa Observatory [Hawaii] shows CO2 at [about] 332 “parts per million” for 1976.

    Atmospheric Science: US Standard Atmosphere 1976

    I came across Beck’s work in my awakening phase, where I spent hour upon hour looking into core assumptions based upon posts here at the Talkshop and at Tim Cullen’s Malagabay site. Seeing the sparsity and paucity of the measurement record, it is difficult to understand the certainty that followed, yet each IPCC report washed away the error bars and increased the certainty further. I think AR7 is gonna give 110% and fully evolve into a sporting meme.

    This is a linked post but the original paper which is linked may prove useful for your research Ned;

    “The third surprise was that the US Standard Atmosphere Supplements 1966 included exosphere density and temperature data from the Explorer I satellite in association with data relating to solar radiation and geomagnetic activity.”

    US Standard Atmosphere Supplements 1966

    Godspeed, sir.

  92. Ned Nikolov, Ph.D. says:

    @craigm350,

    Yes, questioning the fundamentals of “established” theories is not popular at all in today’s science, and is openly reprimanded in politically charged fields of research such as climate science. This is why no scientist dares to express doubts about the “greenhouse” theory in the peer-reviewed literature nowadays, despite a mountain of evidence pointing toward the physical insolvency of this 19th-century concept. I presented the main portion of this evidence in my video “Demystifying the Atmospheric Greenhouse Effect”.

    As for data manipulations aimed at making measured time series match the CO2 record, global temperature is not the only parameter subjected to such “adjustments”. Records of sea-level rise (SLR) and ocean heat content (OHC) have also been heavily altered to fit the Keeling CO2 curve. In fact, these manipulations have become so brazen lately that they’ve produced physically ridiculous results. For example, look at these slides showing correlations of SLR and surface OHC on one hand with atmospheric CO2 and sea-surface temperature (SST) on the other… SLR and OHC show very poor correlations with SST, while exhibiting a >0.96 correlation with atmospheric CO2, which, of course, is physically absurd!

  93. oldbrew says:

    Another slice of climate debate – Monckton vs. Spencer

    Lord Monckton Responds to Spencer’s Critique
    October 5th, 2022 by Roy W. Spencer, Ph. D.

    https://www.drroyspencer.com/2022/10/lord-monckton-responds-to-spencers-critique/

  94. Ned Nikolov, Ph.D. says:

    I just posted a comment to Spencer’s blog article about Christopher Monckton:

    https://www.drroyspencer.com/2022/10/lord-monckton-responds-to-spencers-critique/#comment-1377415

    Both Roy Spencer and Christopher Monckton are confused about climate sensitivities.

  95. oldbrew says:

    Doug Cotton has popped up again at Spencer, under the guise of ‘Retired Physicist’ this time 🙄

  96. Ned Nikolov, Ph.D. says:

    Yes, this guy is really confused in his physical thinking… 🙂

  97. oldbrew says:

    CMIP6 GCM ensemble members versus global surface temperatures
    Nicola Scafetta
    Accepted: 31 August 2022

    ‘In fact, the performance of the models seems to increase as the ECS decreases.’

    Click to access s00382-022-06493-w.pdf

  98. Ned Nikolov, Ph.D. says:

    This is a nice paper by Scafetta: https://link.springer.com/content/pdf/10.1007/s00382-022-06493-w.pdf

    However, I wonder, why is he still attributing any amount of measured warming for the past 40 years to a CO2 increase, when satellite and surface measurements of reflected and surface solar radiation clearly show that the ENTIRE warming is explainable by the observed decrease of cloud albedo and the corresponding increase of surface solar radiation?

    Ned Nikolov & Karl Zeller: Exact Calculations of Climate Sensitivities Reveal the True Cause of Recent Warming

  99. Ned Nikolov, Ph.D. says:

    Continuing to peddle the unfounded notion that the “climate sensitivity” to a CO2 doubling is anything but ZERO is doing a disservice to real science. It’s time to expose this fallacy!

  100. oldbrew says:

    This seems odd…

    OCTOBER 13, 2022
    Strengthening cold ocean current buffers Galápagos Islands from climate change
    by Kelsey Simpkins, University of Colorado at Boulder

    “There’s a tug of war going on between our greenhouse effect causing warming from above, and the cold ocean current. Right now, the ocean current is winning—it’s not just staying cool, it’s getting cooler year after year,” said Kris Karnauskas, lead author on the study, associate professor in the Department of Atmospheric and Oceanic Sciences and fellow in the Cooperative Institute for Research in Environmental Sciences (CIRES).

    https://phys.org/news/2022-10-cold-ocean-current-buffers-galpagos.html

    Really?

    “There’s clear evidence that shows all the way back to 1982 that this current has been strengthening and the cold water on the western shores of the islands has been getting colder,” said Karnauskas.

  101. Ned Nikolov, Ph.D. says:

    @oldbrew,

    I don’t know what it will take (perhaps an act of God) for mainstream climate science to realize & acknowledge that CO2 cannot and does not warm the Earth’s surface. That’s because there is no radiative “greenhouse effect” in reality: the atmospheric LW radiation is just a byproduct of atmospheric temperatures and does not constitute an extra energy flux to the surface, as has been believed for 200 years! Atmospheric temperatures, on the other hand, are determined by the absorbed solar radiation (diabatic heating) and air pressure (adiabatic thermal enhancement)… Ocean currents are getting colder most likely as a result of an increased cloud albedo over some regions of Earth.

  102. Ian F says:

    My understanding is that it is quite well known that the CO2 molecule is a net coolant in our atmosphere. Because CO2 is polyatomic, unlike the homonuclear diatomics N2 and O2, it has vibrational quantum modes. It is able to absorb translational kinetic energy, which we measure as temperature, into its vibrational bonds through collisions with other air molecules at atmospheric temperatures above 288 K. This absorbed translational kinetic energy can then be released as radiative (photon) energy to space at higher, cooler, less dense altitudes, with a consequent cooling effect. In fact, CO2 was used as a refrigerant before the introduction of CFCs, HCFCs and HFCs, which became the preferred refrigerants because of their higher number of degrees of freedom (DOF) and higher molar heat capacity compared to CO2.

  103. Ian F says:

    From the comments I have read from various sources, there seems to be some confusion in the climate debate about the distinct effects of atmospheric pressure and of the radiative flow of energy through our atmosphere.

    If our planet did not have an atmosphere, then the temperature of the space surrounding it would be zero. Adding atmospheric gases to the space close to the planet’s surface adds molecular mass to that space, and gravity applies a force to that mass, which we measure as pressure. Pressure (F/A) is the same as energy density (F/A × d/d = F·d/V = Work/V).

    Work energy is the same as kinetic energy. Dimensionally:

    Kinetic energy: E = ½mv^2 → [M] × [L T^-1]^2 = [M L^2 T^-2]

    Energy: [M L^2 T^-2]
    Force: [M L T^-2] (Force (N) = mass × acceleration; acceleration in m·s^-2)
    Length: [L] (displacement, m)
    Work: [M L^2 T^-2] (Work = force × displacement; J = N·m)

    So kinetic energy and work have the same units, and so are the same kind of quantity – though there are some who say they are not:
    Work: [M L^2 T^-2]
    Energy: [M L^2 T^-2]

    Bearing in mind that at constant pressure any change in kinetic energy (temperature) will adjust the volume, and the change in volume will adjust the mass density, a simple calculation can be done to determine the ambient temperature that results from the atmospheric mass and gravity (pressure).

    The surface atmospheric pressure (P) of Earth is 101.325 kPa. This is the same as an energy density of 101.325 kJ per cubic metre.

    Density of air @ 288 K: ρ = 1.225 kg/m^3; pressure = 101.325 kPa

    P = ρRT (from PV = mRT, with the specific gas constant R = 0.287 kJ/(kg·K)), therefore T = P / (ρR)

    T = 101.325 / (1.225 × 0.287) = 288.2 K = 15°C

    This is the average ambient temperature set by the atmospheric pressure.

    Turning to energy flow: variations in the energy flow are managed by the complicated interactions of conduction, evaporation, convection and radiation processes within the atmosphere, together with the energy distribution of the oceans. Depending upon the shortwave radiation (SWR) input, these processes manage the rate of outgoing longwave radiation (LWR) so that the incoming shortwave energy equals the outgoing longwave energy and the net energy change is zero. So we are left with the ambient temperature, plus or minus a degree or so, as the atmosphere adjusts for changes in albedo, solar input etc.

    24% of the absorbed SWR energy is emitted as LWR directly to space. 76% of the absorbed SWR energy is transported to the upper atmosphere for LWR emission. This is done by air conduction and water evaporation, which absorb surface energy before convection carries the energy to the upper atmosphere, cooling it by adiabatic expansion, where the energy is emitted to space as LWR from water vapour and, to a lesser extent, CO2 molecules. The cool air then gravitationally returns to the surface, where it is adiabatically compressed back to the surface ambient temperature, and the cycle repeats. The so-called back radiation that is seen is likely the result of rising air temperature (and thus radiation) due to the compression heating of the descending air parcels. This must be so, because energy cannot flow from a cool body to a warm body, which is what the back-radiation hypothesis implies happens.

    Looking at the effect of temperature differences at different planetary locations: at Antarctica the air temperature at the surface is about 228 K on average. The tropopause is lower there because the atmosphere is contracted, so the volume is less and the consequent mass density is higher.

    Density of air @ 228 K: ρ = 1.548 kg/m^3 (measured); R = 0.287 kJ/(kg·K)

    P = ρRT = 1.548 × 0.287 × 228 = 101.295 kPa = 101.295 kJ/m^3

    Therefore, regardless of a change in temperature, the energy density set by the force of gravity on the atmospheric mass (pressure) is not changed, because of the adjustment by volume change.
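
    [A quick numerical check of both calculations above – a minimal Python sketch; the input values are copied from this comment:]

    # Ideal-gas check of the two calculations above, P = rho*R*T,
    # with R = 0.287 kJ/(kg.K), the specific gas constant of dry air.
    R = 0.287                 # kJ/(kg.K)

    # Sea level: T = P / (rho * R)
    P_surface = 101.325       # kPa
    rho_surface = 1.225       # kg/m^3 at 288 K
    print(round(P_surface / (rho_surface * R), 1))    # ~288.2 K, i.e. ~15 C

    # Antarctica: P = rho * R * T
    T_antarctic = 228.0       # K
    rho_antarctic = 1.548     # kg/m^3 (measured)
    print(round(rho_antarctic * R * T_antarctic, 3))  # ~101.295 kPa: same pressure despite colder air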

    So to me, conduction, evaporation, convection and radiation are the FLOW of energy through the atmosphere, while pressure IS energy within our atmosphere.

    This is all pretty basic stuff, so when I read that our university students are being taught the opposite, I was glad that I am not at university now, because I would be wasting my money.

  104. Ned Nikolov, Ph.D. says:

    @Ian F

    It’s important to remember that pressure is not energy itself, but an essential & necessary component of energy. Pressure as a force has only a relative impact on the temperature and the energy content of a system, meaning that pressure adiabatically enhances the available energy in the system.

    For example, Earth’s atmosphere has an average surface pressure of 98.55 kPa, which enhances the energy received from the Sun by a factor of about 1.45 (or 45%). At Earth’s orbit, this enhancement manifests as an absolute thermal effect of ~90 K. However, if Earth were moved to the orbit of Titan (at a distance of 9.58 AU from the Sun), then the 45% enhancement would manifest as an absolute thermal effect of only ~29 K. That’s because the solar radiation reaching the orbit of Titan is about 92 times weaker than the radiation reaching Earth’s orbit. This is explained in our 2017 paper, in the section “Effect of pressure on temperature” on p. 14:

    New Insights on the Physical Nature of the Atmospheric Greenhouse Effect Deduced from an Empirical Planetary Temperature Model

    Many people do not understand this RELATIVE effect of pressure on temperature, and misrepresent our findings as a result of it.
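
    [For readers who want to check the arithmetic, a minimal Python sketch of this relative-enhancement logic. The d^-0.5 temperature scaling (flux ~ d^-2, temperature ~ flux^0.25) and the rounded factor of 1.45, applied to the airless-body temperature, are assumptions taken from the numbers quoted in the comment, not the full model of the 2017 paper:]

    T_NA_EARTH = 197.1   # K, global temperature of an airless Earth at 1 AU
    RATE = 1.45          # relative atmospheric thermal enhancement (dimensionless)

    def ate(d_au):
        """Absolute thermal effect (K) of the same RATE at d_au AU from the Sun."""
        t_na = T_NA_EARTH * d_au ** -0.5   # airless temperature scales as d^-0.5
        return (RATE - 1.0) * t_na

    print(round(ate(1.0), 1))    # ~88.7 K at Earth's orbit (~90 K with the unrounded factor)
    print(round(ate(9.58), 1))   # ~28.7 K at Titan's orbit (~29 K in the comment)
    print(round(9.58 ** 2))      # ~92, the solar-flux ratio between the two orbits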

  105. Mark says:

    Former climate activist now claims the whole thing is a scam and says what needs to be looked at and exposed is the “science” behind climate change.

  106. Ian F says:

    Thanks for your feedback Ned. I have all of your papers at hand and I noticed I had already highlighted the particular clause you refer to about RATE. This usually means that I need to do some homework to properly understand it. So I will get to it.

  107. Ned Nikolov, Ph.D. says:

    @Mark,

    I fully agree with what Tom Harris stated in this interview with Laura Ingraham. I’ve been saying for years that the “greenhouse” climate theory is physically insolvent and exposing this fact is key to resolving the “climate crisis” and getting off the path of societal self-destruction through insane energy policies.

    Simply put, what we currently have is a completely false climate concept supported by manipulated or fully fabricated data.

  108. Ned Nikolov, Ph.D. says:

    @Ian F

    Start with this slide:

    Note that pressure only explains the Tsb/Tna ratio across planetary bodies, which is the Relative Atmospheric Thermal Enhancement (RATE). So, pressure has no universal relationship to the absolute temperature of an object!

  109. oldbrew says:

    OCTOBER 17, 2022

    Small sulfate aerosol may have masked effects of climate change in 1970s
    by Hokkaido University

    https://phys.org/news/2022-10-small-sulfate-aerosol-masked-effects.html

    Is this an attempt to explain the cooling scares of the 1970s?

    Hilariously the article says ‘It is well known that carbon dioxide is the most common greenhouse gas’ — ignoring water vapour which is orders of magnitude more common.

  110. Ned Nikolov, Ph.D. says:

    @oldbrew

    Yes, junk climate science proliferates without limits…

  111. Ian F says:

    @ Ned Nikolov

    With respect, Ned, I have never said that pressure is energy, only that work energy has the same units as kinetic energy. What I have said is that pressure sets the energy density. At constant pressure the relationship is between temperature and volume: if temperature goes up, volume goes up, and vice versa, but the energy density remains the same. I contend that gravity forces molecules into a smaller space, so that the thermal energy per unit volume is increased, along with the thermal heat capacity.

  112. Ned Nikolov, Ph.D. says:

    @ Ian F

    Yes, one can view pressure as energy per unit volume (an “energy density”), although this is a non-conventional interpretation that can cause confusion. A better and more logical way of describing the thermal kinetic energy (E, Joule) of the lower atmosphere of a planet is to say that E = PV, where P is set by the atmospheric mass (M, kg) above a unit area (A, m^2) and gravity (g, m s-2), i.e. P = (M/A)·g, while V is set by the solar heating (i.e. the absorbed solar flux). The atmospheric temperature is simply an intensive property of the thermal kinetic energy E.

    You are correct that, in planetary atmospheres, where the mean surface air pressure is independent of temperature, the atmospheric volume is proportional to temperature, in accordance with Charles’s law describing isobaric systems: https://en.wikipedia.org/wiki/Charles%27s_law
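
    [As a small illustration of P = (M/A)·g: the column mass of Earth’s atmosphere over one square meter, ~10,330 kg – a standard round figure assumed here, not a number from the comment above – reproduces the mean sea-level pressure:]

    g = 9.81                # m/s^2, surface gravity
    mass_per_area = 10330   # kg/m^2, approximate mass of the air column above 1 m^2
    P = mass_per_area * g   # Pa, pressure = weight of the column per unit area
    print(round(P / 1000, 1))   # ~101.3 kPa, close to mean sea-level pressure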

  113. Ian F says:

    @ Ned Nikolov

    Perhaps it is unconventional, but I prefer to think in terms of energy: P = E/V or, as you say, E = PV. Importantly, E in this case is work energy, and while it has a thermal-energy equivalent nRT (the thermal effect), it is work energy.

    If the planet did not have an atmosphere, there would be no work energy or thermal energy, only radiation energy emitted as a result of the planet’s absorbed solar irradiation. So adding an atmosphere adds work energy as a result of gravity, and thermal energy as a result of the absorption of some of the planet’s emitted radiation energy into the gas molecules (mass).

    I make the point that if you only include radiation energy in the energy balance, as the AGW models seem to do, then you are discounting the effect of the work energy.

    From that, is it correct to say that E in your equation 10b (2017 paper), which is a function of pressure, reflects the effect of this work energy in the planet’s atmosphere?

  114. oldbrew says:

    ‘A study from the University of California, Davis, and published in the journal Nature Geoscience shows that air temperature and cloud cover are strongly influenced by the buoyancy effect of water vapor, an effect currently neglected in some leading global climate models.’
    . . .
    “It is worth spending more effort to understand how water vapor buoyancy regulates Earth’s climate.”

    https://phys.org/news/2022-10-vapor-heft-global-climate.html

    Regulates?

  115. Ned Nikolov, Ph.D. says:

    @oldbrew

    The lead author of this study is quoted saying: “The biggest challenge in accurately predicting future climate change is clouds, so we have to get vapor buoyancy right.”

    And also: “In climate models without vapor buoyancy, the low cloud cover can be off by about 50% in certain regions.”

    So, a simple question would then be: how does the effect of cloud-cover variations on the surface energy balance and temperature compare to the imaginary radiative forcing attributed to CO2? In other words, can the observed cloud-cover variations explain the warming of the past 40 years without invoking a CO2 radiative forcing?

    Yet, no one is asking such obvious, commonsense questions! Why?

  116. Mark says:

    OMG – an expert on cloud-cover-related issues implies that the science isn’t close to being settled and talks about the shortcomings of current cartoon-like models of clouds.

    An Interview with Top Climate Scientist Bjorn Stevens

  117. oldbrew says:

    Post: Vapour buoyancy flaw leads to inaccurate simulations of cloud distributions in climate models, study finds
    https://tallbloke.wordpress.com/2022/10/26/vapour-buoyancy-flaw-leads-to-inaccurate-simulations-of-cloud-distributions-in-climate-models-study-finds/
    – – –
    The UC Davis article headlines:

    The Lightness of Water Vapor Adds Heft to Global Climate Models

    Climate Models Without the Lightness of Water Vapor Risk Uncertainty in Cloud Simulations
    by Kat Kerlin October 24, 2022

    https://www.ucdavis.edu/climate/news/lightness-water-vapor-adds-heft-global-climate-models

    UCD: Quick Summary

    Study adds a missing piece to the climate science puzzle of simulating clouds.

    Lightness of water vapor influences the amount of low clouds.

    Some leading climate models don’t include this effect.

    Including vapor buoyancy into climate models helps improve climate forecasting.
    – – –
    ‘Some’ = 6 of 23 in their article.

    Paper: Substantial influence of vapour buoyancy on tropospheric air temperature and subtropical cloud [Oct. 2022]

    https://www.nature.com/articles/s41561-022-01033-x

  118. Ian F says:

    If anyone is not aware of it and is interested there is a detailed mathematical explanation of the “thermal enhancement” effect of External Forces on the Macroscopic Properties of Ideal Gases at:

    https://www.researchgate.net/publication/341297940_Effect_of_External_Forces_on_the_Macroscopic_Properties_of_Ideal_Gases

  119. Ned Nikolov, Ph.D. says:

    This interview with Dr. Bjorn Stevens (Director of the Max Planck Institute for Meteorology in Germany) is quite illuminating, for it shows the great confusion existing among top climate researchers about the role of clouds in climate:

    An interview with top climate scientist Bjorn Stevens

    Dr. Stevens is a top expert on clouds and climate sensitivity. Yet, he cannot even ask the right research question. Instead of asking “What do the clouds do when the climate warms up?”, which is backward to physical causality, he should ask: “How do observed cloud changes impact the surface energy balance and temperature of Earth?”

    I find it puzzling that no one in the climate establishment dares to evaluate the independent effect of the observed decrease of cloud albedo since 1982 on global temperature. Are they afraid of what they might find out? 🙂

  120. oldbrew says:

    “It’s the biggest question—there is no bigger question,” said Professor Bjorn Stevens, a director of the Max Planck Institute for Meteorology in Germany and Dr. Bony’s co-leader on the EUREC4A project which set out to investigate these fluffy white clouds.

    “For 50 years, people have been making climate projections, but all of them have had a false representation of clouds.”

    These projections, he says, have suffered from an inadequate understanding of the factors determining how cloudy climate will be and have not been properly represented in the models.

    https://tallbloke.wordpress.com/2020/11/10/cloud-shapes-and-formations-impact-global-warming-but-we-still-dont-understand-them/

  121. Ned Nikolov, Ph.D. says:

    @oldbrew

    Climate models are designed to produce warming in response to increasing atmospheric CO2. That’s their primary purpose! As long as this remains a top priority for these models, clouds will always be looked at as something of secondary importance, and there will always be this “talk” that cloud dynamics are too complex to be adequately represented in models… The simple truth is that cloud-cover changes will likely never be publicly acknowledged as a primary driver of Earth’s climate on multi-decadal time scales, because such a view undermines the AGW agenda set by politicians, which brings hefty grants to climate scientists…

  122. Ian F says:

    @ Ned Nikolov

    It would be appreciated if I could get your assessment of my understanding of the thermal enhancement referred to in your papers:

    The atmosphere is in constant adiabatic compression due to gravitational force. Force on a gas alters the distribution of the velocities and positions of the molecules in the system. This manifests as a proportionate increase in the kinetic energy (temperature) at the molecular level. Any increase/decrease in solar energy input is also proportionately amplified/attenuated. This makes the surface temperature more sensitive to changes in surface solar radiation. The difference between the no-atmosphere temperature (197 K) and the average surface equilibrium atmospheric temperature (286 K) is 89 K. So the proportionate temperature increase due to gravitational force (pressure) is 89 K, or 45%.

    Thanks

  123. Ned Nikolov, Ph.D. says:

    @Ian F

    You’ve got it right, Ian!

    You are also correct that the climate sensitivity (measured as a change in the global surface temperature) to solar radiation (both to TSI and the absorbed solar flux) is proportional to the global surface temperature (Tsb). We quantify this in Equations 8b and 15 of our climate-sensitivity article:

    Ned Nikolov & Karl Zeller: Exact Calculations of Climate Sensitivities Reveal the True Cause of Recent Warming

    For equal distance from the Sun, a planetary body of higher atmospheric pressure will be more sensitive to changes in incoming solar radiation than a body of lower surface atmospheric pressure. Moon and Earth are a good example in this regard (see Table 1 in our sensitivity paper).

    Without an atmosphere, the Earth’s global temperature would be about 197.1 K. Earth’s current absolute global temperature is 287.4 K, which yields an ATE = 90.3 K. If you use the Earth’s baseline temperature for the past 2,000 years (286.4 K), then the baseline ATE is 89.3 K.
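
    [A hedged sketch of why the sensitivity scales with the absolute temperature: if the surface temperature follows a quarter-power law T = c·S^0.25 – an assumption standing in here for Eqs. 8b and 15 of the sensitivity paper – then dT/dS = T/(4S), so at the same solar flux a warmer (higher-pressure) body responds more strongly to the same flux change:]

    S = 1361.0    # W/m^2, total solar irradiance (round value, assumed)
    dS = 1.0      # W/m^2, a small perturbation

    for T in (197.1, 287.4):                # airless Earth vs. Earth with atmosphere, K
        print(round(T / (4 * S) * dS, 3))   # ~0.036 K vs ~0.053 K per 1 W/m^2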

  124. Ian F says:

    Thanks again for your feedback Ned, it’s reassuring. Properly understanding the cause and effect of the ATE is important to me.

  125. Ned Nikolov, Ph.D. says:

    @Ian F

    You are welcome!

    Now that you understand the physical nature of ATE, explain it to colleagues & friends, so they can grasp the falseness of the “greenhouse” climate theory and everything built on it!

  126. Mark says:

    “Models with higher EFFECTIVE DIMENSIONS tend to produce more uncertain estimates.”
    Models that can’t be accessed for how “good” they are include pandemic models and climate models …

  127. gbaikie says:

    “This makes the surface temperature more sensitive to changes in surface solar radiation. The difference between the no atmosphere temperature (197K) and the average surface equilibrium atmospheric temperature (286K) is 89K. So the proportionate temperature increase due to gravitational force (pressure) is 89K or 45%.”
    This depends upon the surface. 197 K seems close to the lunar surface.
    The Moon has about a month-long day; if the Moon had a 24-hour day, it would have a higher average temperature.
    Also, the top 1 meter of the lunar surface doesn’t heat up much from a day’s sunlight, even when the Sun is near zenith and the top surface temperature reaches 120 C.
    The top 1 meter [and the meters below it] doesn’t heat up because the lunar surface is a very good insulator of heat. Sand is similar on Earth, but the lunar surface is a far better insulator compared to sand on Earth.
    With Earth, 70% of the surface is ocean, and most of the sunlight warms the top couple of meters of the ocean surface.
    Earth has two things going for it: a transparent ocean and a fairly short day.

    I would say about 80% of the sunlight reaching Earth’s entire surface passes through the top 1/4″ of our ocean surface.
    [Mainly because 80% of the tropics is ocean, and more than half of the sunlight reaching the entire Earth surface arrives in the tropics.]

  128. Ned Nikolov, Ph.D. says:

    @gbaikie

    Don’t forget that without an atmosphere, there will be no liquid oceans on Earth! Oceans are a consequence of Earth’s atmospheric Thermal Effect, which depends on total air pressure and solar radiation reaching the planet.

    Also, the Earth’s atmosphere directly absorbs about 33% of the solar energy entering the Earth’s climate system, which means that the surface only absorbs 67%.

  129. gbaikie says:

    “Don’t forget that without an atmosphere, there will be no liquid oceans on Earth! ”

    You don’t need an atmosphere with 10 tons of air per square meter; it could be 1 ton per square meter.

    “Oceans are a consequence of Earth’s atmospheric Thermal Effect, which depends on total air pressure and solar radiation reaching the planet.”

    I think “Earth’s atmospheric Thermal Effect” is caused by the warmer ocean surface, and with 1/10th of 1 atm at 1 AU from the Sun, more solar radiation would pass through the top of the ocean’s surface.

    Mars’ atmosphere is 1/100th of Earth’s pressure; with twice that atmosphere – 1/50th of Earth’s atmospheric pressure – Mars could have liquid oceans.

    “Also, the Earth’s atmosphere directly absorbs about 33% of the solar energy entering the Earth’s climate system, which means that the surface only absorbs 67%.”

    Earth’s atmosphere absorbs or reflects 33% – mostly reflecting – and diffuses sunlight.

    Which is another factor adding up to the 80%: the ocean absorbs both direct and indirect sunlight.

  130. Ned Nikolov, Ph.D. says:

    @gbaikie

    Well, your numbers are demonstrably wrong, because they are made up instead of being the result of calculations based on real data. Look up some actual scientific publications on this topic before voicing an opinion…

  131. gbaikie says:

    Earth’s global average land temperature is 10 C; its average ocean surface temperature is about 17 C, which gives the average global surface air temperature of about 15 C.

  132. David Petrásek says:

    Dr. Nikolov, what is your take on this study: https://scienceofclimatechange.org/wp-content/uploads/Harde-Schnell-2022-Verification-GHE-Experiment.pdf
    Authors say: “we present the first demonstration of the atmospheric greenhouse effect in a laboratory experiment”.
    They find there is a greenhouse effect, but no emergency.

  133. catweazle666 says:

    To David Petrásek :
    A very interesting and encouraging paper, especially:

    Already the presented measurements and calculations demonstrate the only small impact on global warming with increasing GH-gas concentrations due to the strong saturation. Therefore, we strongly recommend not to further provoke crying and jumping kids by fake experiments and videos only to generate panic, but to teach them in serious science with realistic demonstrations and information about the impact and also benefits of GH-gases.

    The takedown of the egregious charlatan Gore is brilliant.

    Thanks!

    To Tallbloke: May I suggest this very interesting paper is deserving of a post of its own?

  134. Ned Nikolov, Ph.D. says:

    @David Petrásek,

    Thank you for pointing to this paper (https://scienceofclimatechange.org/wp-content/uploads/Harde-Schnell-2022-Verification-GHE-Experiment.pdf). It’s quite interesting as an experimental design. I would describe it as a very sophisticated version of the CO2-in-a-bottle experiments typically used to “prove” the greenhouse effect, which I discuss in my video “Demystifying the Greenhouse Effect”.

    The main problem with these closed-box lab experiments is the artificial suppression of conduction and convection, which are the main modes of energy transfer (cooling) in the real atmosphere. The authors of this experiment explicitly state that they “have developed an advanced laboratory set-up, which allows to largely eliminate convection or heat conduction…” in order to “study the direct influence of greenhouse gases under similar conditions as in the lower troposphere“. However, by eliminating conduction and convection, one changes the system to a state that no longer bears a thermodynamic resemblance to the real troposphere. That’s because, in the real system, conduction/convection are nonlinearly coupled with radiative transfer, and convection is many orders of magnitude more effective in cooling the surface than radiation. As a result, the findings obtained in a lab experiment such as the one described in this paper do not apply to the real atmosphere!

    Furthermore, their experimental design departs from reality by using unrealistically high greenhouse-gas concentrations and by placing the warm plate (at 30 C) representing the Earth surface above the cold plate (at -11 C) representing the cold atmosphere. This inverse plate arrangement further suppresses the convective cooling of gases, since warm air expands and rises, while cold air shrinks and sinks.

    Yet, even in this unrealistic experimental set-up, where the convective cooling of gases has been suppressed, the authors report a very small radiative thermal effect of so-called “greenhouse gases” on the temperature of the warm plate despite using unrealistically high GHG concentrations. In the Conclusion section they write:

    … the greenhouse effect contributes to some warming of the Earth’s surface and by this also to some additional convection, but not to any remarkable direct warming of the air temperature… This finding is of particular importance since air warming is a necessary prerequisite for the alleged CO2-water vapor feedback, without which there would be no threatening Earth warming.

    These results indirectly point to a conclusion that, in the real atmosphere dominated by convection and advection (i.e. heat transfer via fluid motion), the radiative effect of “greenhouse gases” on Earth’s surface temperature is virtually zero. This is also our finding based on an objective analysis of NASA planetary data from across the Solar System (Nikolov & Zeller 2017).

  135. Yes Ned, few understand thermodynamics, and few take note of the lapse rate, by which everywhere in the atmosphere is cooler than the surface. Taken together with the 2nd law of thermodynamics (simplified: heat only flows from hot to cold), there are no greenhouse gases as described by the IPCC. There are clouds (of water droplets and ice particles) which can reduce the incoming radiation from the Sun. On Titan there are also no “greenhouse gases”, but methane (CH4) can form clouds of liquid and solid CH4. To my knowledge no one has explained the clouds on Venus, but there are no greenhouse gases there either.

  136. gbaikie says:

    “To my knowledge no one has explained the clouds on Venus but there are no greenhouse gases there also.”
    Most of the sunlight reaching Earth is absorbed within the ocean. Venus’ rocky surface gets no heat from sunlight; instead, the Venus clouds are the surface that gets heated, and the lapse rate makes the rocky surface hot.
    Or, another way to say it: the air molecules have the same velocity at cloud level as at the rocky surface, but the air is denser.
    A similar thing happens on Earth at lower elevations [below sea level], or as when the Mediterranean Sea dried up.
    The global wind, which effectively makes the Venus day 4 to 5 Earth days long, is also a factor in this. And you could call the fast upper winds a runaway heating effect – Venus would be much colder without them.

  137. FerdiEgb says:

    Dear all, I have just become aware of this discussion, as it was spread a few days ago within the Clintel community.

    I am only reacting to the point of the “manipulation” of the CO2 data, as that simply is impossible, and the claim does no good for the reputation of the CAGW skeptics…

    To begin with, Ned Nikolov makes a lot of errors in his assumptions. If you are accusing a lot of people of data manipulation, then at least have your basic facts right.

    – NOAA has ten “background” stations which they monitor, not four: https://gml.noaa.gov/dv/iadv/
    Worldwide there are about 70 “background” stations from several countries, maintained by different organizations with hundreds of people involved, plus several hundred local monitoring stations, including “tall towers” to measure in- and out-fluxes of CO2 over land: fields, forests,…
    In the past 60 years, has nobody ever complained (not even when retired) about the huge data manipulation?
    – NOAA prepares the calibration mixtures used to regularly calibrate all CO2 monitoring equipment all over the world. They “can” manipulate these mixtures when they are prepared, but can’t manipulate them while they are in use, thus any CO2 increases measured during the many months of use are real.
    – Scripps made the calibration mixtures until the early 2000s, when the WMO decided to make NOAA the worldwide provider of the calibration mixtures. Scripps still has its own calibration mixtures/scale and still takes its own flask samples at Mauna Loa and other places, independent of NOAA. Their results are within ±0.2 ppmv of NOAA’s.
    – The Japanese also have their own calibration mixtures and sampling, independent of NOAA.

    That for a start; it makes it extremely unlikely that the CO2 measurements are manipulated, as too many people are involved.

    So, why does the prediction of Wallace Broecker match the real increase of CO2 in the atmosphere so exactly?
    The only guess Broecker made was that fossil fuel use, and thus CO2 release, would increase linearly every year, which is exactly what happened.
    Fossil fuel use and its CO2 release are known within narrow margins, thanks to the different governments’ eagerness to collect taxes on fossil fuel sales: maybe a little underestimated due to the human tendency to avoid taxes, but certainly not overestimated!

    That shows a straightforward ratio between accumulated CO2 emissions by humans and the accumulation of CO2 in the atmosphere, here for Mauna Loa up to 2011. It needs some updating, but recent ratios are not that different; even the Covid pandemic had little influence and has already been completely compensated for by last year’s emissions:

    “Furthermore, the model assumes an infinite residence time of the airborne CO2 fraction.”

    Of course not: the fact that not 100% of human emissions (as mass, not the original fossil molecules!) remains in the atmosphere is a clear sign that the residence time is not infinite.

    The uptake of CO2 by the ocean surface and vegetation does not depend on the yearly human emissions of CO2, but is directly proportional to the extra CO2 pressure (pCO2) in the atmosphere compared to the average pCO2 of the ocean surface at the current average ocean surface temperature. The latter is about 295 µatm per Henry’s law for seawater.

    For a more or less linear response to a disturbance, there is a simple formula for the e-fold decay rate of any extra CO2 in the atmosphere:

    Tau = disturbance / effect, i.e. the time needed to decay to 1/e, or about 37%, of the initial disturbance without further extra input.

    In graphic form:

    The e-fold decay rate of any extra CO2, from any source, above the dynamic equilibrium per Henry’s law is somewhere between 47 and 57 years. That is too slow to remove all human CO2 (again, as mass!) in the same year as it is released, but much faster than the hundreds of years that the IPCC’s Bern model predicts. The Bern model includes a saturation of the different sinks, which is only true for the ocean surface waters, but is nowhere in sight for vegetation or the deep oceans. The only constraints are the relatively small exchange rate between the atmosphere and the deep oceans via the polar sinks and the limited real growth of more permanent vegetation.
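
    [In code form – a minimal sketch; tau = 50 years is picked from the quoted 47-57 year range, and the 120 ppmv pulse is purely illustrative:]

    tau = 50.0        # years, e-folding time of excess CO2 above equilibrium
    excess = 120.0    # ppmv above equilibrium, an illustrative one-time pulse

    for year in range(50):
        excess -= excess / tau    # yearly removal, proportional to the excess
    print(round(excess, 1))       # ~43.7 ppmv, close to 120/e ~ 44.1 after one tau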

    At last, I had years of discussion with the late Ernst Beck about the historical CO2 data, until his untimely death in 2010. While I respect the tremendous amount of work he did, he lumped all available data together – the good, the bad and the ugly – without much quality control.
    The possibility of a huge “peak” in the 1940s is physically impossible and violates several other proxies of that time…
    See: http://www.ferdinand-engelbeen.be/klimaat/beck_data.html

    Thus, if anyone wants to show that the CO2 data are manipulated to fit the result of some model, one needs to come up with much stronger arguments than the plot of a very good guess by a smart scientist of the past…

    Best regards,

    Ferdinand Engelbeen

  138. Ned Nikolov, Ph.D. says:

    @FerdiEgb,

    Let’s get this straight:

    1. I’m well aware of the published procedures for taking CO2 measurements and the instrument calibration protocols, since these are listed on NOAA’s website. Pointing to these, and to the fact that “too many” people are involved in conducting the CO2 measurements, is NOT an argument for the authenticity of such measurements!

    2. What’s important is the FINAL RESULT! It can be shown, using a simple numerical procedure, that the official record of mean annual CO2 concentrations from 1890 to 2021 is fully reproducible by simply accumulating a fraction of annual human carbon emissions. The emission data are taken from official records by ORNL CDIAC and the Global Carbon Project (Fig. 1 below). The summation assumes (following Broecker 1975) that the airborne fraction of industrial carbon emissions (~0.54) from each and every year remains in the atmosphere forever. This key assumption made by Broecker (1975) is completely unrealistic, because the residence time of CO2 in the atmosphere is only 4-5 years (Starr 1993; Harde 2017). Another unrealistic assumption adopted by Broecker (1975) is that the airborne fraction of emissions is not subject to temperature modulations; the effect of natural CO2 sources & sinks on the airborne fraction is completely ignored. This simple accumulation procedure is responsible for the exponential rise of atmospheric CO2 concentration evident in the official records between 1890 and the present (see Fig. 2 below). In Fig. 2, note that the modeled CO2 concentrations (green curve) almost completely overlap the annual values supposedly measured by NOAA (blue curve).

    The fact that such a simplistic and highly unrealistic model can accurately reproduce the official CO2 record over a period of 140 years (consisting of ice-core data prior to 1960 and the Keeling curve after 1959) suggests that this record could not possibly be the result of real measurements, and is most likely the product of a model simulation driven by human carbon emissions!

    FIGURE 1:

    FIGURE 2:
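
    [For reference, a hedged sketch of the accumulation procedure described above. The 1890 baseline of ~295 ppmv and the conversion of ~2.13 GtC per ppmv are standard round figures assumed here for illustration; the emission series itself would come from the CDIAC / Global Carbon Project records:]

    AIRBORNE = 0.54       # fraction of each year's emissions assumed to remain forever
    GTC_PER_PPMV = 2.13   # GtC corresponding to 1 ppmv of atmospheric CO2

    def modeled_co2(baseline_ppmv, yearly_emissions_gtc):
        """Broecker-style accumulation: baseline + airborne share of all past emissions."""
        co2, series = baseline_ppmv, []
        for e in yearly_emissions_gtc:
            co2 += AIRBORNE * e / GTC_PER_PPMV   # no removal, ever
            series.append(co2)
        return series

    # e.g. a constant 10 GtC/yr would add ~2.5 ppmv/yr under these assumptions:
    print(round(AIRBORNE * 10 / GTC_PER_PPMV, 2))   # ~2.54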

  139. FerdiEgb says:

    Ned Nikolov, the final result of a linearly increasing disturbance acting on a linear response function is ALWAYS a fixed ratio, no matter the actual data. Thus it is no wonder that you see such a nice curve as the result of the nice curve that the original emissions show.

    Broecker didn’t “assume” a fixed airborne fraction, nor a fraction that stays “forever”. He observed (!) a fixed ratio over 1900-1970 in the older measurements (probably from Callendar) and in the 1958-1970 data from Mauna Loa. All he did was extrapolate the expected emissions and the fixed ratio to 2010, with a remarkable result.

    The numerical proof of a natural fixed ratio is as follows, based on realistic sink rates of any excess CO2 in the atmosphere:

    Starting with only the emissions, without looking at the results in the above graph, the disturbance over the 60 years 1960-2020 is based on accurately known human emissions (thanks to governmental taxes):

    Ea = 1.2 ppmv + 0.06 ppmv * ys where ys = years after 1960

    Accumulated as if there was no CO2 removal of the extra CO2 at all:

    Ea(acc) = accumulated up to 1960 + 1.2 * ys + 0.5 * 0.06 * ys^2
    or
    from 35 ppmv in 1960 to 35 + 72 + 108 = 215 ppmv in 2020

    In graph form (again, only the CO2 accumulation without any natural sinks is of interest up to now):

    The NET natural sinks are proven to be linear sinks, in direct ratio to the extra CO2 pressure (pCO2) in the atmosphere compared to the average pCO2 of the ocean surface at current ocean surface temperatures (actually around 295 ppmv).
    That is a dynamic equilibrium: if the pCO2 of the ocean waters is higher than that of the atmosphere, CO2 is released, and the reverse also holds, leading to a lot of CO2 circulation between ocean waters near the equator and near the poles, where CO2 sinks with the polar waters into the deep oceans and returns some 1000 years later near the equator.
    The NET CO2 transfer is in ratio to the ΔpCO2 between the atmosphere and the (area-weighted) average of the ocean surface.
    See further:
    https://www.pmel.noaa.gov/pubs/outstand/feel2331/maps.shtml and next sections.
    Besides the huge influence of wind speed, the sink rate is in direct ratio to the pCO2 difference between atmosphere and ocean surface.

    That means that the NET sink rate is a simple ratio of the CO2 increase in the atmosphere compared to the pCO2 of the ocean surface, which has hardly changed over the past 60 years:

    sink rate = fr * (35 + 1.2 * ys + 0.5 * 0.06 * ys^2)
    where fr = the fraction that is absorbed by oceans and plants, in ratio to the total increase in the atmosphere above equilibrium,
    and thus the “airborne fraction” becomes:
    increase rate = (1 – fr) * (35 + 1.2 * ys + 0.5 * 0.06 * ys^2)

    The net result in the atmosphere thus is a fixed 1 – fr over the full 60 years of interest, whatever fr may be.
    Of course, the real fr is in ratio to the real increase in the atmosphere, not the theoretical increase if all human emissions had remained in the atmosphere. That only changes fr, but fr and 1 – fr remain constants over the past 60 years, and by extension the past 170 years.
    The real fr, as measured, is about 2%/year of the extra CO2 above equilibrium over the past 60 years, as can be seen in the second graph.

    The only conditions needed for such a fixed ratio are a linear increase of the disturbance over time and a linear response of the process to that disturbance.
    If those conditions hold, then any process in this world will give a fixed ratio between disturbance and response.

    Thus you have cause and effect completely reversed: Broecker saw a fixed response to the increasing CO2 emissions and simply extrapolated it to 2010. Nobody needed to change any measurements to show what nature already did itself, if you accept that the natural response in sink capacity to an increased CO2 level in the atmosphere is a simple ratio of that increase…
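
    [A minimal simulation of this argument; the 2%/year sink rate and the emission formula are taken from this comment, and the starting excess of 35 ppmv above the ~295 ppmv equilibrium is the 1960 value used above:]

    sink_frac = 0.02    # per year, net sink as a fraction of the excess CO2
    excess = 35.0       # ppmv above the ~295 ppmv equilibrium in 1960

    for ys in range(61):                  # 1960..2020
        emission = 1.2 + 0.06 * ys        # ppmv/yr, linearly increasing emissions
        increase = emission - sink_frac * excess
        excess += increase
        if ys in (20, 40, 60):
            print(1960 + ys, round(increase / emission, 2))
    # the printed airborne fraction stays roughly constant (~0.5) over the period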

  140. Ned Nikolov, Ph.D. says:

    @FerdiEgb,

    The real world does not work in the simplistic way you describe it. In reality, there can never be a fixed ratio between emissions and sinks on a global scale over a period of 60 years! This kind of nonsense can only come from models!

    Read Broecker (1975) carefully! On p. 462, he explicitly states that he assumed 50% of human carbon emissions would stay and accumulate in the atmosphere for the foreseeable future, which of course is incorrect, because the CO2 residence time is only 4-5 years:

    The global temperature increase due to CO2 in Fig. 1 is calculated on the basis of the following assumptions: (i) 50 percent of the CO2 generated by the burning of chemical fuels has in the past and will in the near future remain in the atmosphere;

    He further assumed that:

    (iii) for each 10 percent increase in the atmospheric CO2 content the mean global temperature increases by 0.3°C.

    The latter assumption was totally unwarranted in 1975, because the world was cooling from 1941 to about 1980. Thus, while Keeling was reporting a linear increase of atmospheric CO2 at Mauna Loa between 1959 and 1975, the global temperature was actually falling. Broecker (1975) acknowledges this global cooling in the very first paragraph of his paper:

    “The fact that the mean global temperature has been falling over the past several decades has led observers to discount the warming effect of the CO2 produced by the burning of chemical fuels.”

    It is clear, therefore, that Broecker’s belief in the warming effect of CO2 was based on an unsupported theory rather than empirical evidence. But, this is not how real science is supposed to work!

    In regard to the CO2 record, it’s possible that Charles David Keeling began cooking the books on CO2 data from the start of the so-called “measurements” at Mauna Loa in 1959. How else can we explain the fact that, in the midst of a steep multidecadal cooling, Keeling was reporting a steady increase of atmospheric CO2 for nearly 20 years?

  141. FerdiEgb says:

    Ned, every linear process in this world behaves like what we see with CO2 in the atmosphere as the result of a linearly increasing disturbance. That is not “simplistic”, but the proven physical behavior of all linear processes.

    All Broecker expected was that human emissions would keep increasing linearly for the coming decades, and thus the net result: a fixed ratio between the increase in the atmosphere and human emissions.

    You are using the residence time of 4 years as an argument, but the residence time has nothing to do with the removal speed of any extra CO2 mass in the atmosphere above equilibrium. The residence time is a matter of the EXchange speed of CO2 between different reservoirs, but that doesn’t REmove any CO2; it only moves a lot of CO2 from one reservoir to another and back. The real removal speed is about a 50-year e-fold rate, or around a 35-year half-life, for any excess CO2 above equilibrium, whatever the source.

    It is the same difference as between the throughput of goods (thus capital) through a factory and the gain (or loss) of capital of the same factory at the end of the fiscal year: completely different items…

    Not only he, but many skeptics of climate catastrophes like me are sure that CO2 is a greenhouse gas that adds to the warming of the planet, be it far less than what Broecker assumed. Even in Broecker’s time, the warming effect of CO2 had been demonstrated (already by Tyndall) and was accepted by most of the scientific world.

    “In regard to the CO2 record, it’s possible that Charles David Keeling began cooking the books on CO2 data from the start of the so-called “measurements” at Mauna Loa in 1959. How else could we explain the fact that in the midst of a steep multidecadal cooling, Keeling was reporting a steady increase of atmospheric CO2 for nearly 20 years?”

    Keeling did not cook the books; how could he, and why should he? Everybody today can measure CO2 anywhere on Earth with a simple hand-held CO2 meter. Thus, if you can show that current CO2 levels anywhere over the oceans, at the seaside with wind from the sea, or above 1000 m over land are completely different from what Mauna Loa shows today, be my guest…

    Further, the influence of temperature on CO2 levels is very modest: 5 ppmv/K for the seasonal amplitude, 3-4 ppmv/K for year-by-year variability (at most ±1.5 ppmv around the 90 ppmv trend, for the 1991 Pinatubo eruption and the 1998 El Niño), 13 ppmv/K since the LIA, and 16 ppmv/K very long term between glacial and interglacial periods. That is all.
    Even the cooling of 1958-1970, the Mauna Loa period that Broecker used for his “best guess”, was good for a drop of at most 3 ppmv or so, while over the same period CO2 emissions already totalled 15 ppmv. No wonder CO2 levels kept increasing…

    The in/decrease of the solubility of CO2 in seawater with temperature per Henry’s law is precisely known from the formula of Takahashi, based on hundreds of thousands of seawater samples:
    ∂ln(pCO2)/∂T = 0.0423/K, or about 4% per K
    https://www.sciencedirect.com/science/article/abs/pii/S0967064502000036
    The current dynamic CO2 equilibrium between the atmosphere and the ocean surface, for the current average ocean temperature, would be around 295 ppmv. Not 415 ppmv…
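
    [In code form – a minimal sketch of the quoted relation; the 295 µatm reference is the equilibrium value given above:]

    import math

    def pco2_at(delta_t, pco2_ref=295.0):
        """Equilibrium seawater pCO2 (µatm) after warming by delta_t K."""
        return pco2_ref * math.exp(0.0423 * delta_t)   # d(ln pCO2)/dT = 0.0423/K

    print(round(pco2_at(1.0), 1))   # ~307.7 µatm: ~13 µatm (ppmv) higher per kelvin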

  142. Mark says:

    Maybe everyone has already seen this article, which references a paper in Nature about us entering a global cooling phase.

    https://dailysceptic.org/2023/01/23/temperatures-in-northern-hemisphere-due-to-fall-over-next-25-years-according-to-six-top-international-scientists/

  143. Ned Nikolov, Ph.D. says:

    @FerdiEgb,

    You keep repeating the same stuff without paying attention to what I said:

    1. The modeled green curve, which matches the so-called “observations” so well for 140 years, is the result of a continuous accumulation of the airborne fraction of industrial carbon emissions FOR 140 YEARS! In other words, the airborne fraction from 1890 is assumed by the model to still be in the atmosphere in 2021. It’s this assumption that allows the model to accurately reproduce the evolution of atmospheric CO2 for the entire period since 1890. So, your 50-year removal time of CO2 from the atmosphere is simply NOT evident in the data! That’s because the CO2 curve of the past 140 years is MANUFACTURED, not measured!

    2. There is no process in the real world that would produce such a clean, nearly linear response for over 6 decades as is seen in the Keeling curve!

    3. The claim that “the influence of temperature on CO2 levels is very modest” is based on biased CO2 ice-core data, which do not represent the true dynamics of CO2 in the atmosphere, since ice cores do not preserve decadal high-frequency fluctuations of atmospheric CO2.

    4. The 19th-century radiative “greenhouse” theory is completely refuted by modern satellite observations. I discuss this evidence at length in my video:

  144. FerdiEgb says:

    Ned,

    1. and 2. Again, the fact that the increase in the atmosphere is in exact ratio to human emissions is the result of two conditions:
    A. The response of the net natural sinks is a simple linear function of the extra CO2 pressure in the atmosphere above the long-term dynamic equilibrium with the ocean surface – proven nowadays by over a million seawater samples.
    B. Human emissions show a linear increase over the past 170 years – based on the sales figures supplied by different governments that are eager to collect taxes…

    The reason that part of the human emissions (as mass, not the original molecules!) stays in the atmosphere is that the removal time is much longer than the supply time; only when the two catch up with each other will there be no further increase in the atmosphere.
    That is only possible if the increase in supply does not outpace the removal rate; the latter depends on the extra CO2 level above the dynamic equilibrium, not on the supply of any one year. The proven (!) e-fold removal rate is 45-55 years, as can be calculated from the net sink rate in ratio to the extra CO2 pressure above equilibrium.

    When both of the above conditions are met, every linear-response process on this Earth will show an exactly steady ratio between disturbance and effect.
    If you don’t believe me, simply ask any engineer with some practical experience.

    3. The claim that the influence of temperature is very modest is not only based on ice cores:
    A. Even the ice cores with the worst resolution would show the current increase of 120 ppmv over a period of 170 years: the Vostok ice core has a resolution of 600 years and would still show a “peak” of 34 ppmv if such an increase had happened 420,000 years ago.
    The high-resolution ice cores of Law Dome have a resolution of only 8 years and do show an increase of 20 ppmv CO2 in the period 1968-1988, within 2 ppmv of the direct measurements at the South Pole, each independent of the other (or are those also “faked”?).
    B. Independent of the CO2 measurements, the 13C/12C ratio and the oxygen measurements can be used to calculate the natural CO2 fluxes between the atmosphere and vegetation and, as the difference, the CO2 fluxes between the atmosphere and the ocean surface. These too show a very modest influence of temperature on CO2 levels.
    The latter are also measured, both theoretically as a gas-transfer rate and in real life as an increase of CO2 and its derivatives in the ocean surface.
    Here for the seasonal amplitude in the NH:

    Mauna Loa reflects roughly the average seasonal amplitude of the NH, and as the SH acts in the opposite direction, the average global amplitude is about ±5 ppmv for ±1 K of global temperature change, with the NH seasons dominant.
    Together with the change in O2, one can calculate the different seasonal fluxes, which are about 60 PgC/season for vegetation in both directions and 50 PgC/season for the ocean surface in the opposite direction, leading to an amplitude of ±10 PgC (±5 ppmv) over the seasons when averaged globally.
    Note that CO2 levels drop (!) with increasing temperatures in NH spring/summer and increase with dropping temperatures in fall/winter.

    For the year by year response of CO2 to temperature, here the plot:

    Again, vegetation is the dominant responder to temperature variations, as δ13C and CO2 change opposite to each other. If the main changes came from the ocean surface, the δ13C and CO2 changes would parallel each other.
    Note that the CO2 levels increase with temperature, opposite to the seasonal changes…
    Anyway, the amplitude of the change is not more than 3-4 ppmv/K, and in the trends not more than ±1.5 ppmv for the extremes (Pinatubo, El Niño) around the 90 ppmv trend of the past 60+ years. That is all.
    Over many thousands of years (glacial/interglacial), also not more than 16 ppmv/K globally…

    4. The influence of CO2 on the radiation balance was already proven by satellite measurements before 2005 and is clearly described by Jack Barrett in E&E:
    http://www.warwickhughes.com/papers/barrett_ee05.pdf Figure 9 and the following page with the explanation.
    And the influence was directly measured in the downwelling radiation from 2000 to 2010 at two ground stations:
    https://escholarship.org/content/qt3428v1r6/qt3428v1r6.pdf Figure 1.

    I have seen some discrepancy between your opinion and that of Joseph Postma, but I will not go further into that point, as I have better things to do for the moment: I am writing a comment on the posthumous publication of the late Ernst Beck about his 1942 CO2 “peak”, which is physically impossible and contradicts several proxies and measurements…
    The radiation balance discussion will be for another time…

  145. FerdiEgb says:

    The figures didn’t come through, here they are:

    Seasonal:

    Year by year:

  146. FerdiEgb says:

    Sorry, wrong picture for the year-by-year δ13C and CO2 changes…

    Here is the right one:

  147. Ned Nikolov, Ph.D. says:

    @FerdiEgb

    The 2 conditions you assumed above (A) and (B) are simply wrong, and I have shown this to you repeatedly…

  148. FerdiEgb says:

    Come on Ned, the sales figures from the different governments may only be underestimated, certainly not overestimated, and for each type of fossil fuel the CO2 emissions from burning it are known. So the CO2 emission figures from the human use of fossil fuels are known within reasonable limits, somewhere between -0% and +10%.

    And I have shown you that any linear process in this world will give a constant ratio as the result of a linear increase of a disturbance…
    Again, if you don’t believe me, simply ask around, and any engineer worth his salt will confirm it.

    And secondly, just measure CO2 levels yourself, away from local sources and sinks (thus not on land, except in an (ice) desert, high on top of a bare mountain, or at the coast with wind from the sea), and see what the levels are. If they differ by more than a few ppmv beyond the accuracy of your hand-held device, then you “may” have a case.

    Without that, you are accusing hundreds of scientists at dozens of organizations in dozens of countries of all cheating with literally millions of CO2 data points (*), without any clear purpose, as it makes no difference whether the remaining CO2 levels in the atmosphere simply follow human emissions with a similar curve, or show a linear or asymptotic increase in the atmosphere…

    (*) Mauna Loa alone takes 10-second ambient CO2 samples during 40 minutes of every hour, plus 2 x 10 minutes of calibration gases, continuously. Over a year that is already over 2 million ambient CO2 measurements…
    In the early days these were all calculated by hand, as the data came on many-meters-long paper rolls with the analog readings stamped on them (albeit at a much lower frequency, probably once a minute).
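
    The “over 2 million” figure is easy to verify:

    samples_per_hour = 40 * 60 // 10     # 10-second samples during 40 minutes of each hour
    print(samples_per_hour * 24 * 365)   # -> 2,102,400 ambient samples per year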

  149. Mark S. says:

    Hello Dr. Nikolov.
    Sabine Hossenfelder just posted a video on misunderstandings of the greenhouse theory here.

    I added a comment with a link to your Vimeo video. Perhaps this is an opportunity for you to comment on her YouTube video and get attention from a wider audience.

  150. jeremyp99 says:

    Maybe time to unpin this now, Rog? Been up for months…

  151. Mark S, the lady has little understanding of the engineering subject of thermodynamics (e.g. the 2nd law of thermodynamics) and has quite a few things wrong.
    Jeremy, agree – worrying that there are still those commenting without reading Dr Nikolov’s post.

  152. catweazle666 says:

    “and has quite a few things wrong.”

    Like just about everything, apart from using the insult “deniers”, which negates her whole rant.

  153. Ian F says:

    In her video, at 5 min 20 s, Sabine Hossenfelder says that the no-atmosphere temperature is -18°C. However, the International Standard Atmosphere table shows that the measured temperature at about 5 km altitude is also -18°C, at an atmospheric pressure of about 52 kPa. So how can it be that the earth’s no-atmosphere temperature is the same as the earth’s temperature with an atmosphere at 52 kPa? Beats me!
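
    For what it’s worth, the ISA numbers can be reproduced with the standard constant-lapse-rate formulas (a sketch using the usual ISA constants):

    T0, p0 = 288.15, 101.325   # ISA sea-level temperature (K) and pressure (kPa)
    L = 0.0065                 # tropospheric lapse rate, K/m
    g, M, R = 9.80665, 0.0289644, 8.31446
    T = 273.15 - 18.0                        # -18°C
    h = (T0 - T) / L                         # altitude where ISA reaches -18°C
    p = p0 * (T / T0) ** (g * M / (R * L))   # barometric formula for a constant lapse rate
    print(round(h), round(p, 1))             # -> 5077 m and ~53.5 kPa, near the quoted values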

  154. Jopo says:

    Hi Ned and Roger

    I always learn something new every time I watch your “Demystifying the Atmospheric Greenhouse Effect”. Last month I managed to get extra RAOB data, 56 years’ worth, and today, after watching a portion of the video, I realised I could test the claims made by Dr Pierrehumbert without even setting up the spreadsheet calculations.
    In your video Dr Pierrehumbert claims that the radiating height increases with CO2.

    That was easy to test. I set about producing two charts:
    1. The geopotential height at a temperature of 255 K.
    2. The temperature at a GPH of between 7890 and 7910 meters.
    This GPH band was taken from a visual reading of the GPH in chart 1.

    I have included the days matching each criterion in the chart.

    You may notice what appears to be a pattern similar to the solar cycle in those charts. I won’t call it, but it is similar.

    The data is sourced from NOAA. I think it was from a link provided in one of the Connolly papers.
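
    A minimal sketch of the two selections described above, assuming the RAOB soundings have been loaded into a pandas DataFrame with hypothetical columns “date”, “temp_K” and “gph_m” (one row per reported level):

    import pandas as pd

    df = pd.read_csv("raob_station.csv")   # hypothetical file holding the 56 years of soundings

    # 1. Geopotential height of the ~255 K level:
    at_255K = df[(df["temp_K"] >= 254.5) & (df["temp_K"] <= 255.5)][["date", "gph_m"]]

    # 2. Temperature in the 7890-7910 m GPH band read off chart 1:
    at_7900m = df[(df["gph_m"] >= 7890) & (df["gph_m"] <= 7910)][["date", "temp_K"]]

    print(at_255K.groupby("date").mean())
    print(at_7900m.groupby("date").mean())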

  155. Ian F says:

    Hugo Hernandez gives a detailed mathematical explanation of the “thermal enhancement” effect of external forces on a gas in a 2020 paper, “Effect of External Forces on the Macroscopic Properties of Ideal Gases”, at:
    https://www.researchgate.net/publication/341297940_Effect_of_External_Forces_on_the_Macroscopic_Properties_of_Ideal_Gases
    His results show, amongst other things, that a force on a gas alters the distribution of the velocities and positions of molecules in the system, which manifests as a proportionate increase in the kinetic energy (temperature) at the molecular level. He concludes that “when the potential energy is larger than the thermal energy, the overall system temperature becomes proportional to the potential energy provided by the external force”. This seems to imply that the ratio of force to kinetic energy determines the amount of thermal enhancement.

    The Ned Nikolov video “Demystifying the Atmospheric Greenhouse Effect: Towards a New Physical Paradigm in Climate Science” at https://vimeo.com/602819278 shows, about 18 minutes in, from the physical (satellite) measurements of the surface temperatures of the earth and the moon, that the atmospheric thermal effect (“ATE”) increases towards the poles. This would seem to corroborate the conclusion by Hugo Hernandez. The earth’s atmosphere is isobaric (constant force), while the solar energy delivered to the atmosphere decreases towards the poles, which results in an increase in the force-to-kinetic-energy ratio. This is likely the reason for the higher ATE at the poles.

  156. Steve Keppel-Jones says:

    Ferdinand, you referred to the AERI “downwelling longwave infrared power measurements”:

    “And the influence was directly measured in the downwelling radiation from 2000 to 2010 at two ground stations:
    https://escholarship.org/content/qt3428v1r6/qt3428v1r6.pdf Figure 1.”

    As I have pointed out to you before, these measurements are made at liquid nitrogen temperatures or lower. Do you think that these measurements tell you anything about “downwelling longwave infrared power” at normal surface temperatures on Earth? And therefore about the effect of atmospheric CO2 on radiative energy loss from the surface, or surface temperature itself?

    To be more specific, did you fall for the lie that downwelling longwave infrared power at the surface is a positive number? Measured in hundreds of Watts per square meter?

  157. FerdiEgb says:

    Steve, the AERI equipment works at ambient temperature (-30 to +40°C), not with liquid nitrogen. See the specs in Tables 1 and 2 of the second part:

    Click to access amtd-4-6411-2011-print.pdf

    These are full-spectrum analyzers, where every small band of incoming wavelengths is counted in mW/m2 and summed. The analyzer doesn’t use a thermal detector; it simply counts the photons that hit a chip, as a voltage, whatever its own temperature.

    And indeed, I am sure that the downwelling radiation is around 300 W/m2, as that is what is measured with the above equipment. They also measure the upwelling radiation from the ground (at 1 m height), and that is still higher than the downwelling, so we are not on Venus here, but a lot warmer than we would be without downwelling radiation…

    The fact that the equipment did measure the change in the downwelling effect of the +/- 10 ppmv seasonal CO2 variation gives a lot of confidence that it can measure the effect of a 22 ppmv CO2 increase over 10 years…

  158. catweazle666 says:

    Hi Ferdi, I would be interested in your comments on the visualisation here:
    https://earth.nullschool.net/#2022/06/23/0300Z/chem/surface/level/overlay=co2sc/orthographic=-59.22,35.49,462/loc=93.695,-87.065

    It shows very significantly different CO2 concentrations at different locations; for example, on 2023-03-25 at 01:30 UTC we see:
    21.11° N, 156.67° W 426 ppm
    54.33° N, 122.23° W 456 ppm
    5.32° S, 26.49° E 408 ppm

    These seem to me to be widely divergent for what is described as a well-mixed gas.

    However, I am not aware of how the measurements are acquired, hence of their accuracy.

  159. FerdiEgb says:

    catweazle666, three points:

    1. There is a huge exchange between the atmosphere and the other two main reservoirs (oceans and vegetation) over the seasons: about 25% of the total amount of CO2 is exchanged from the oceans to vegetation in spring/summer and in reverse in fall/winter, passing through the atmosphere. With a slightly better performance of vegetation in the NH, this gives a global amplitude of +/- 5 ppmv over a year.

    2. Besides the above, there is a lag of up to 2 years between the NH and the SH, due to the fact that 90% of human emissions are in the NH and the ITCZ allows only some 10%/year exchange of air masses between the hemispheres…

    3. The main problem is with the CO2 levels near the ground over land: one can find anything between 150 ppmv during sunshine, caused by photosynthesis, and 600 ppmv at night under inversion, when bacteria release a lot of CO2 from the soils… That gives a lot of extra CO2 in the first few hundred meters over land.
    Not so over the oceans, and not above a few hundred meters (or even lower, if there is sufficient wind speed to mix everything up).

    That means that in 95% of all air mass the levels are within +/- 10 ppmv: over all oceans, and also over land if you measure with balloons or airplanes above a few hundred meters.

    I suppose that the measurements used were from NASA’s CO2-monitoring satellites. I don’t know whether they measure the full column of air up to space or focus on near-surface CO2 levels.

    See some background at:
    http://www.ferdinand-engelbeen.be/klimaat/co2_measurements.html
    That page needs some freshening up, as it dates from 2007, but the background is still right.

  160. Steve Keppel-Jones says:

    Ferdinand, did you miss this part from the document you linked to?

    “The detectors are housed in a dewar and cooled below 70 K by a linear Stirling-cycle cryo-cooler”

    (see “Instrument design” and “Instrument hardware” on page 4)

    Sure, you can measure 300 W/m2 of downwelling IR if your detector is below 70 K… but not at 288 K. That would be a violation of the 2nd law of thermodynamics, of course.

  161. catweazle666 says:

    FerdiEgb, if you play with the visualisation setup (click on “earth” at the bottom left), you can change the date of the display and various other parameters and it makes a big difference to the CO2 values month on month in various parts of the globe.

    I suspect the data is derived from NASA’s OCO-2 satellites. When it was initially released, I believe it didn’t show what it was expected to show, and the data – although apparently still available – is no longer readily available in the graphic form it was originally; I’m informed it is available as a huge spreadsheet in an obscure format… funny that…
    https://www.nasa.gov/jpl/oco2/nasas-spaceborne-carbon-counter-maps-new-details

  162. FerdiEgb says:

    Steve, I indeed missed that, but the cooling only reduces the noise of the instrument itself on the detectors. The temperature of the detector is not important, as the detection is completely independent of it: the detector simply counts the number of photons at each wavelength that hit the chip, translated into a voltage, and that is independent of the temperature of the chip itself.

    If you look at the calibration of the apparatus: they use two blackbodies at different temperatures at the beginning and end of each outside sky run. The radiation spectrum of these blackbodies is exactly known from Planck’s law, and if that fits, all is fine; if not, an alarm is raised.

  163. FerdiEgb says:

    Steve, BTW, there is no violation of any law, because the 300 W/m2 is simply “recycled” energy: part of the outgoing radiation is sent back to the surface and adds to the total energy (SW from the sun + LW from GHGs) that warms the surface.
    The only way to get rid of that extra energy is by increasing the temperature of the surface and thus sending more LW out to get everything in equilibrium.

    With 210 W/m2 of SW energy from the sun alone, we would live on “snowball earth”…

  164. Steve Keppel-Jones says:

    Ferdinand, I had a suspicion that you were going to say “you were right, but it’s not relevant” 🙂

    Yes, you can detect individual long-wave photons by fiddling with the entropy of your detector and cooling it down to cryogenic temperatures. But that is not an “accident” or just “noise reduction”, and you cannot take the resulting power measurement and pretend that you would get the same measurement at a (much) higher instrument temperature.

    Here is an experiment you can try: take one of these AERI instruments, turn off the cryo-cooler, wait for the detectors to reach room temperature, and then see how much power you can measure from the colder atmosphere. The answer will be “a negative amount”. No amount of black-body calibration will fix that for you.

    The pyrgeometers that NOAA operates as part of its SURFRAD network, for example, do operate at ambient temperature. And they measure *negative* power (which they then have to “fiddle with”, or “adjust”, in order to report a positive fake number). This is because the detector is warmer than the atmosphere it is measuring.

    There is no such thing as “recycled” energy, you simply made that up (and got the units wrong while you were doing it – energy is not measured in Watts). Energy flows from hot to cold (low entropy to high entropy), doing thermodynamic work and developing power as it goes (and increasing total entropy in the process). It never flows the other way. No passive instrument will ever measure positive power coming from a colder source. That’s not how thermodynamics works. It is a straightforward consequence of the 2nd Law.

    Just out of curiosity, how much theoretical physics have you studied? Who was your professor? I’d like to have a chat with him, because it sounds like he did a terrible job. You haven’t grasped the fundamentals of thermodynamics at all. What do you think “power” means? Or “energy”? Or “entropy”?

  165. FerdiEgb says:

    Steve, I have no AERI instrument in my workshop, so I can’t do that experiment.
    But if the detector is capable of giving a small voltage pulse for each photon at all wavelengths of interest, whatever its own temperature, then cooling it down to very low temperatures only reduces the noise from the gases in the equipment itself (they even detected a small signal from the remaining water vapor and CO2 during calibration at this extremely low temperature).

    Such a detector always counts the exact number of photons pushed onto its surface by the line-by-line beam splitter, provided of course that the construction of the chip allows that at all temperatures from 70 K to ambient. Which seems to be the case:
    https://en.wikipedia.org/wiki/Mercury_cadmium_telluride
    Measuring at very low temperatures only broadens the detection bandwidth and reduces the noise.

    Thus its own temperature has no influence on what is measured from the atmosphere, except for the part of the spectrum that falls outside the detectable IR range at higher temperatures, or maybe some influence on the detection limit.

    I did use W/m2, not Watt, and W/m2 is in any case an energy flux. That flows from hot to cold *and* from cold to hot, as any black body emits *and* absorbs at all wavelengths, whatever the temperature of the sender or receiver. A photon doesn’t carry any information that shows the temperature of the sender.
    The only point is that a colder black body sends less radiant energy to a warmer black body, thus the NET effect is that the colder black body is warmed by the warmer one. But also the warmer black body cools less rapidly with the cold one in the neighborhood than it would if the colder body were not nearby and all outgoing radiation were lost to space…
    No physical law is broken; all energy is conserved…

    After a few years of discussions with the “Slayers”, I made an Excel sheet that shows that a heated plate (like the earth’s surface heated by the sun) does warm up further if you insert a cold plate between the hot plate and any cool surroundings. That is what I called “recycled energy”. Everything is calculated: energy fluxes, balances and temperatures.
    Everything can be initialized for different start conditions:
    http://ferdinand-engelbeen.be/klimaat/klim_xls/slayers.xlsx
    Please read the “read me” section first…
    If you can find anything wrong with it, I am all ears…

    Indeed, it is over 60 years since I had my physics classes, and I have only sporadically used anything about radiation, but the discussions with the “Slayers” did sharpen my knowledge…
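
    For readers without Excel, here is a minimal steady-state version of the heated-plate case (idealized black plates, one face radiating towards cold surroundings; illustrative numbers, not the spreadsheet itself):

    SIGMA = 5.67e-8   # W/m2K4
    P = 240.0         # W/m2, fixed heating of the hot plate (a sun-like value)
    T_env = 3.0       # K, cold surroundings

    # Alone: sigma*Th^4 = P + sigma*Tenv^4
    Th_alone = ((P + SIGMA * T_env**4) / SIGMA) ** 0.25

    # With a passive cold plate in between (radiating from both faces):
    #   plate balance:  2*sigma*Tm^4 = sigma*Th^4 + sigma*Tenv^4
    #   heater balance: sigma*Th^4   = P + sigma*Tm^4
    # which combine to  sigma*Th^4   = 2P + sigma*Tenv^4
    Th_shielded = ((2 * P + SIGMA * T_env**4) / SIGMA) ** 0.25

    print(round(Th_alone, 1), round(Th_shielded, 1))   # -> ~255 K alone, ~303 K with the cold plate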

  166. Steve Keppel-Jones says:

    Hi Ferdinand, good discussion so far. A couple of points to keep in mind:

    1) Making a device that is cold enough to detect individual photons is an accomplishment in and of itself, but you need to be careful to distinguish between “quantum” energy phenomena (individual photons) and “classical” energy phenomena (heat, power, work, and entropy). These two domains are distinct, and there is no known bridge between them.

    2) When you wrote “W/m2 is anyway an energy flux. That flows from hot to cold ánd from cold to hot”, this tells me that you have not grasped the concepts of “energy” and “power” properly. It may be time to go back to that elementary physics textbook you studied 60 years ago, and review it, starting from the basics. Remember that energy is measured in Joules, or degrees, and power is measured in Watts. Be careful not to mix these up. Energy represents the potential to do work; work is the process of entropy being rearranged/increased/decreased (in thermodynamics this means temperature being changed or material phase being changed); and power is the rate of doing work per unit time. Energy can, of course, be present without work being done (or power being developed). Energy will “flow” (“flux”) if there is an entropy gradient for it to slide down. But only then; not at any other time. (Make sure you distinguish between “energy flow” and “radiation”, too, because these are not the same concept.) In an equilibrium situation, as the simplest case, energy is not “flowing”, or moving – no work is being done, and no power is being developed. (Radiation will be present, though.) So, in particular, energy does not “flow” (produce work) from cold to hot. That cannot happen. Entropy does not allow it. A colder object cannot perform “work” upon a hotter object, and therefore cannot deliver “power” to it. Only the other way around.

    Of course the rate of energy transfer between two objects depends on the temperature of both objects, as shown by the S-B law. So your experiment with adding extra plates to a two-body system will alter the rate of energy flow, and thus equilibrium temperatures, in the system. But this does not change the fact that a colder object will not increase the temperature (energy) of a hotter object.

    You can do simpler experiments that don’t require an AERI, just by obtaining a thermocouple. These are very cheap, and you can measure the voltage on them with a voltmeter. The thermocouple has its own temperature based on its surroundings, and it will develop a voltage based on the radiant input or output power. That, in turn, depends on the temperature of whatever the thermocouple is pointed at. Then you can confirm that the thermocouple voltage will be positive if the target is warmer than the thermocouple (meaning that the incoming power is positive), and negative if the target is colder (negative incoming power, i.e. positive outgoing power). This is how IR thermometers work.

    From this experiment you can learn that, as I said, the incoming or outgoing radiant power measured by an instrument does indeed depend on the temperature of the instrument (as well as its surroundings). That is a direct consequence of the S-B law. So a room-temperature power-measuring device, at night, pointed upward at the atmosphere, will measure negative power. This is what the SURFRAD pyrgeometers do.
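
    The sign convention in that experiment is nothing more than the S-B difference, with illustrative black-body temperatures:

    SIGMA = 5.67e-8   # W/m2K4

    def net_flux(T_target, T_sensor):
        # net radiant flux into an ideal detector, W/m2 (emissivity taken as 1)
        return SIGMA * (T_target**4 - T_sensor**4)

    print(net_flux(310.0, 288.0))   # warmer target: positive reading (~ +134 W/m2)
    print(net_flux(255.0, 288.0))   # colder sky:    negative reading (~ -150 W/m2)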

  167. FerdiEgb says:

    Steve,

    I have the impression that you are taking the matter much too far… The discussion is about the measurement of energy fluxes by radiation, independent of whether that leads to temperature (changes), power, work,… The latter are the result of all energy fluxes coming in and going out, no matter whether by conduction, evaporation or radiation…

    About 1)
    Every photon is a fixed package of energy, depending on its wavelength: higher-frequency / shorter-wavelength photons contain more energy than longer-wavelength ones. The energy content in joules is exactly known; here for the red light at 700 nm that some chlorophyll uses for photosynthesis:
    3 × 10^-19 joule per photon.
    https://en.wikipedia.org/wiki/Photon_energy
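
    That figure is just E = h·c/λ:

    h, c = 6.626e-34, 2.998e8   # Planck constant (J·s) and speed of light (m/s)
    print(h * c / 700e-9)       # -> ~2.8e-19 J per 700 nm photon, the ~3 × 10^-19 J above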

    There is zero distinction between the energy flux from a 1000 W electrical plate heating a water kettle and that from a 1000 W infrared lamp focused on the same kettle through a vacuum and IR-transparent glass windows (assuming no losses)…

    Neither is there any distinction between using a direct gas flame to melt steel and a CO2 laser at 10 micrometer wavelength doing exactly the same.

    About 2)
    The difference between watts and joules is only time: 1 J = 1 W·s.
    Thus a constant beam of 100 W gives you 100 J/s and is therefore an energy flux.

    The difference between conduction and radiation is in the following:

    For conduction:
    Q/t = kA(T1 − T2)/l: everything depends on the difference between T1 and T2, and the flow is one-way.

    For radiation (in this case for a globe inside a globe):
    Qnet/t = σeA(T1^4 − T2^4): everything depends on the difference of the fourth powers of the temperatures.
    As there is no connection at all between the two objects, the real transfer is the difference between the radiation that the two individual objects send to each other:
    Qnet/t = σeA(T1^4) − σeA(T2^4)
    As already said: for radiation, the energy fluxes run from hot to cold as well as from cold to hot, but the NET energy flux is always from hot to cold…

    Further, all the above is true for objects without internal or external heating: in the case of a heated plate, the insertion of any cold plate above 0 K that radiates some energy back to the hot plate will increase the temperature of the hot plate. The hot plate has its fixed energy supply, and adding any extra energy will give an increase in temperature until a new equilibrium is reached, where the outgoing energy equals the sum of the heater energy plus the incoming radiation from the cold plate. If that were not the case, you would be destroying energy…

    The same goes for the earth’s surface: even if only 1% of the outgoing energy is “recycled” back to the surface, that adds to the total energy hitting the surface (SW + LW), and the surface must increase its temperature to reach a new equilibrium.

    I have a hand-held IR temperature device (probably an IR-only LDR – light-dependent resistor – or diode or transistor) that currently shows -1°C pointed up at the sky, but it is cloudy. With a clear sky (if I remember well), it got down to -10°C. So that works pretty well.
    Didn’t you ever wonder why that device doesn’t show -270°C? That is the temperature of space (about 3 K)…
    That is because at -10°C, the sky still sends 271 W/m2 down to the surface (if it were a black body, which it isn’t, but the principle of existing downwelling radiation is the same)…

    All the pyrgeometers do is measure the difference between their own outgoing radiation and what comes in as radiation from the sky, and then compensate for their own outgoing radiation according to the difference between their own temperature and the temperature they were calibrated at…

    The AERI devices and many other devices (from single-photon counters to CCD IR cameras at room temperature) work on a totally different principle, one that is independent of their own temperature. The principle, that is; not necessarily with no influence of temperature at all.

  168. Steve Keppel-Jones says:

    Ferdinand, it looks like you have grasped some of the principles involved here, but not all of them. You have correctly identified power as a rate of energy transfer per second. But I don’t think you have grasped what “energy” is, and so you still write the incorrect statement “the sky still sends 271 W/m2 down to the surface” (not counting emissivity etc.)

    Let’s just take a closer look at that claim. The sky is colder than the surface. If the sky is sending 271 W/m^2 to the surface, that means it is transferring 271 Joules/sec/m^2 to the surface. That means the sky is losing energy at a rate of 271 Joules/sec (getting colder), and the ground is gaining energy at the same rate (getting warmer). Does that sound like the way the sky and ground usually behave? If you put a cold glass of water next to a hot mug of tea, does the water get colder and the tea get hotter?

    Another way to look at it is the S-B equation which you posted: Qnet/t=σeA(T1^4−T2^4)
    What do you get if you plug in the ground temperature as T1 (around 288 K) and an average sky temperature as T2 (around 255 K, say?) Which direction is the power flowing? (again just set e to 1 for simplicity, it’s the direction we’re concerned with here) Is it positive (ground to sky) or negative (sky to ground)?

    Apart from other problems in your interpretation of thermodynamics, it sounds like you think it is possible for Joules to travel in two directions along the same entropy pathway at the same time. That isn’t how Joules work. Remember, energy (Joules) constitutes “the potential to do work”. Actual work only occurs when the Joules can travel DOWN the entropy slide. They never travel UP the entropy slide. It is exactly the same as water flowing downhill. Water never flows uphill. Neither do Joules. And water flowing downhill is not “net flow downhill”; that is a made-up idea. There is no “water flowing uphill” to subtract from a “larger amount of water flowing downhill” to get “net flow downhill”. So you will not see the phrase “net power” in your physics textbook. The climate scientists made that phrase up, specifically so that they could claim that there is an enormous amount of power being radiated from the air to the ground. There isn’t. You can tell, because no one can measure it.

    Since it’s been 60 years since you studied physics formally, did you learn everything else you think you know from the climate scientists? They are bad teachers. Remember, they are professional liars. Anything they tell you is more likely to be false than true.

  169. FerdiEgb says:

    Steve,

    The crux of the matter is in:

    “Which direction is the power flowing? (again just set e to 1 for simplicity, it’s the direction we’re concerned with here) Is it positive (ground to sky) or negative (sky to ground)?”

    For radiation it is both. That is the difference between conduction and radiation…
    BTW, as said before, the surface is heated by an external source, the sun, so any back radiation will cause extra warming, even from an ice cube. Or what would the earth’s temperature be with only 163 W/m2 of direct solar energy at the surface?

    There is nothing that prevents an object at 255 K from radiating energy in all directions, including towards a warmer object that surrounds it. And there is nothing that prevents a warmer object from receiving all the radiant energy from a colder object that it encloses. Or the opposite. Only the net energy flow is from the warmer to the colder object… As there is zero connection between the two objects, and zero information in the photons except their own energy, the difference between the two 100% separate energy flows gives the net energy flow…

    That is the case for the earth’s surface and the atmosphere (with GHGs): the atmosphere (mostly via water vapor and CO2) receives about 400 W/m2 from the surface and sends 340 W/m2 back to the surface (globally). The rest is sent to space. So the atmosphere doesn’t cool down, as long as there is a balance between incoming and outgoing energy. See:

    I don’t remember “net power” from any textbook, but I recently saw “net energy flux” in a lot of energy-transfer calculations, which used both the “net energy flux” calculation and the separate fluxes to come to the same result…
    I need to find that again (it was used in another discussion…).

    Anyway, I am very aware of what the IPCC says (or doesn’t say), but in this case I am in the good company of Dr. Happer and Dr. van Wijngaarden, who fully support back radiation as measured and calculated.
    Van Wijngaarden calculated the reduction in outgoing radiation to space at 3 W/m2 for a CO2 doubling, and the increase in back radiation at the surface at around 4 W/m2…

  170. Steve Keppel-Jones says:

    Ferdinand, who taught you this? “For radiation it is both. That is the difference between conduction and radiation…” Certainly no physics professor told you that, nor did a physics textbook.

    The logical fallacy you are engaging in here is known as “special pleading”. You are attempting to claim that electromagnetic radiation energy is somehow “special” and does not obey the normal laws of thermodynamics.

    Having failed to produce experimental evidence for your assertion, and being unwilling to admit that you were wrong, all that’s left is logical fallacies.

    Did you learn this from the “climate scientists”? Remember, as I mentioned they are all liars…

    (Note that Dr. van Wijngaarden did not measure any “back radiation”. He played games with MODTRAN and determined that CO2 absorbs infrared energy, which is true, but that’s all he did.)

  171. FerdiEgb says:

    Steve, you are still thinking of radiation as if it were conduction…

    Take the net energy transfer between a hot and a cold object in a vacuum:
    Qnet/t = σeA(T1^4 − T2^4)
    As there is no physical connection between the two objects, the above formula is only a calculation of the net effect and has no physical meaning.
    The real transfer is:
    Qnet/t = σeA(T1^4) − σeA(T2^4)
    Each object just sends energy out in proportion to (the fourth power of) its own temperature, no matter whether it is alone in space or surrounded by other objects, whatever their temperature.
    The net energy transfer is simply the difference between the two energy fluxes and gives exactly the same result as the first formula: a net energy transfer from the hotter to the colder object. No thermodynamic law is violated.

    An even more interesting (one-way) example is a CO2 laser:
    For a 10 kW CO2 laser, the output beam power can be up to 3 kW, or 3,000 J/s.
    The laser’s own temperature: at most 80°C (cooled), with a beam at around 10.6 micron, a wavelength near the thermal emission peak of a black body at around 0°C (see the Wien’s-law check below). Hitting a steel object, the huge energy density (concentrated in only 1 mm2) heats the steel up to melting at 1500°C…

    You can do the same with a gas flame, via conduction between the hot gases and the steel, with a flame temperature of over 3200°C (oxy-acetylene)…

    That is the difference between radiation energy transfer and conduction energy transfer…
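
    A quick Wien’s-law check on the wavelengths involved (λ_peak = b/T):

    b = 2.898e-3                       # Wien displacement constant, m·K
    print(b / 10.6e-6 - 273.15)        # black body peaking at 10.6 micron: ~0°C
    print(b / (1500 + 273.15) * 1e6)   # steel at 1500°C peaks near ~1.6 micron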

    Further: indeed, Dr. van Wijngaarden’s work was based on HITRAN, but HITRAN was and is entirely based on direct measurements of the radiation spectra of many gas mixtures in air at different temperatures and pressures, albeit under laboratory conditions.

    Full-spectrum measurements, with a measuring principle not influenced by the instrument’s own temperature…

  172. FerdiEgb says:

    catweazle666,
    I have tried different dates for the OCO data at one place, somewhere near the Hawaiian islands, and didn’t see much variability; the values were comparable to nearby Mauna Loa.
    I suppose that the main variability is over land (but I haven’t tried that yet).

    Indeed, the performance (or the results) of the OCO satellites is obviously not what NASA expected. Probably the main reason that they obscure the data…

  173. Steve Keppel-Jones says:

    Ferdinand, you wrote “Steve, you are still thinking of radiation as if it were conduction…”, but that’s not what I said. Radiation and conduction are different ways of moving energy from one place to another. But the energy being moved is the same concept in both cases. And when energy moves while increasing entropy, it develops power at a given rate. Of course energy only moves from low entropy areas to high entropy areas, i.e. hot to cold. Never the other way. It doesn’t matter whether it’s moving via conduction or radiation. Who told you that radiation does not obey the 2nd law of thermodynamics?

    It sounds like you have gotten confused between “radiation” and “power”. This is understandable, if you never bothered to learn the fundamentals of both concepts to begin with, or forgot. Radiationists (the entire field of climate science), after all, always write “radiation” followed by W/m^2. But radiation is not a power phenomenon. It is an energy phenomenon. That means the word “radiation” should be followed by the word “Joules”, not “Watts”. More usefully, it can be described in degrees, which corresponds to the temperature of the object that emitted it. Degrees and Joules are closely related; both are descriptions of energy quantities.

    You then wrote “net energy transfer”, which is a sketchy phrase, because it implies some kind of “non-net” (“gross”?) energy transfer, but that is not how physics works. Did you see either the phrase “net energy transfer” or “net power” in your physics textbook? I know it’s been 60 years, but you can always go back and check again. Let me know if you find it.

  174. FerdiEgb says:

    Steve, the term “net” energy transfer comes directly from an online textbook about radiation; see the formula in a previous answer (Qnet/t = σeA(T1^4 − T2^4)). That is the non-physical formula that can be used instead of the difference between the two physical radiation formulas, but it gives the same result…

    I have the impression that there is a lot of confusion about the definitions used.
    In my (too old) opinion, W/m2 is simply energy transfer, which may result in warming, cooling or no change, depending on the sum of all incoming and outgoing energy. Energy can be added and subtracted, and only the net result of all energy ins and outs is what influences the temperature of an object.
    Just as a 1,000 W heating plate or a 1,000 W IR lamp will heat a kettle of water to boiling, given the necessary time.

    I don’t agree with assigning a temperature to radiation. Radiation contains no information whatsoever about the temperature of the sender. It is just a small package of energy, that is all. It may come from molten steel (in the tail of its outgoing Planck spectrum) or from a CO2 molecule in the stratosphere at -40°C. In both cases it contains exactly the same amount of energy in joules for the same wavelength…

  175. Steve Keppel-Jones says:

    Ferdinand, if “Qnet” is the actual power transfer after all of the energy in the system has been taken into account, then what do you call the component terms labeled just “Q”? Fake power? You got these backward: “Qnet” is the physical one, and individual “Q” terms with reference temperature set to 0 K are non-physical. They are mathematical fictional constructs.

    You have still failed to grasp what “energy”, “power”, and “work” mean. Energy does not just wander around in random directions looking for something to do. It only flows in one direction, increasing entropy. That means work can only happen in one direction, and power can only be developed in one direction. Anything else is a fictional mathematical construct that does not exist in the real world.

    And temperature doesn’t care whether you agree with it or not. Temperature is defined as the average (usually kinetic) energy of a group of particles. That applies exactly the same way to photons as it does to atoms of iron or molecules of nitrogen. Photons have characteristic energy just as iron molecules have kinetic energy, and when you average these together, you get the temperature. So solar radiation does not have an intrinsic “power” of 1300 W/m^2 or so; that is a misleading description. It does have a characteristic energy of 6000 K, though. And in the same way, radiation emitted by the atmosphere near the surface might have a temperature of around 255 K, and the surface radiation temperature will be around 288 K (depending on conditions etc.) The result is that energy flows from the warmer surface to the colder atmosphere, and power is developed only in that direction. Not the other way.

  176. FerdiEgb says:

    Steve, I am completely surprised by your answer. What you say is the opposite of what I have learned, and of what many before and after me have learned…

    Any object above 0 K sends out radiation itself, no matter whether it is alone in space or surrounded by colder or hotter objects.

    How much is sent is exactly known from the S-B equation:
    Q1 = σeA*T1^4
    See https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law
    (sometimes Wiki has good info).
    And in particular:

    ” j* = ε σ T^4
    The radiant emittance j* has dimensions of energy flux (energy per unit time per unit area), and the SI units of measure are joules per second per square metre, or equivalently, watts per square metre.”

    Radiant emittance is thus an energy flux.

    If you have a black-body sphere within another sphere at a different temperature, with an extremely small gap between them (thus the same surface of 1 m2 and no outside loss to space):
    The inner sphere sends a flux of
    Q1 = σ*T1^4 to the outer sphere, and the outer sphere sends
    Q2 = σ*T2^4 to the inner sphere.
    If the inner sphere is the warmer of the two, then the inner sphere will lose energy:
    Q1net = σ*(T2^4 − T1^4)
    While the outer sphere will gain energy:
    Q2net = σ*(T1^4 − T2^4)
    Or an overall transfer of Qnet = σ*(T1^4 − T2^4) from the warmer to the colder object.

    No physical law is broken, and energy still flows from hot to cold. Whether that delivers enough energy to provide power or work is secondary, and mainly a matter of sufficient temperature difference.
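
    With concrete numbers (black bodies, 1 m2, and illustrative temperatures T1 = 288 K, T2 = 255 K), that bookkeeping looks like this:

    SIGMA = 5.67e-8   # W/m2K4
    T1, T2 = 288.0, 255.0

    Q1 = SIGMA * T1**4   # emitted by the inner sphere: ~390 W/m2
    Q2 = SIGMA * T2**4   # emitted by the outer sphere: ~240 W/m2
    print(Q1 - Q2)       # net: ~150 W/m2, from the warmer to the colder sphere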

    “That applies exactly the same way to photons as it does to atoms of iron or molecules of nitrogen. Photons have characteristic energy just as iron molecules have kinetic energy, and when you average these together, you get the temperature.”

    Sorry, that is simply impossible. Temperature is only applicable to the vibration of molecules and atoms, not to photons.
    One can have a bundle of photons from a CO2 laser at a wavelength of around 10.6 micron, near the emission peak of a black body at about 0°C, that simply melts steel at 1500°C; thus a “cool” beam heats and melts a much hotter object. How is that possible if photons have a “temperature”?

    What is possible is to deduce the temperature of an object by looking at the full spectrum of wavelengths and their strengths (in space), but only if it is a black body and the radiation is not disturbed by absorption, as it is in an atmosphere with GHGs…

  177. Steve Keppel-Jones says:

    Ferdinand, you should not be too surprised, because I am speaking from physics, and those who came before you and after you are speaking from “climate science”, which, as we know, consists of a set of professional lies, designed to promote an agenda, and to separate you from your hard-earned dollars. Further to this point, the one experimental support you claimed for your theory turned out to collapse upon closer examination, so that should have made you re-examine the theory. But you didn’t, for some reason.

    Next you said: “Any object above 0 K sends out radiation itself, ”

    This is true, but bear in mind that this is radiant *energy*, in Joules. Energy is the potential to do work. It is not the same as power. I don’t think you have grasped this fundamental point yet. Or you have not grasped what “radiation” is. Or more likely both. I will know when you have grasped these points, because afterward, you will always follow the word “radiation” or “radiant” with either “energy” or “power”, so that your readers know what kind of phenomenon you are talking about. If you leave those qualifiers out, then neither you nor your readers can be sure what is going on, or what kind of point is being made. Instead you will be speaking fuzzy and probably false nonsense, but not physics.

    Anyway, then you showed this equation: “Q1 = σeA*T1^4” This is of course the general S-B radiant heat transfer equation, except that one of the two temperature terms, “Tc” (or “T2”), has been set to 0. So we can clearly see that the purpose of this equation is to calculate how much power (per unit area, i.e. energy flow per unit time per unit area) is being transferred from an object at temperature T1 (and specified emissivity and area), to an environment at temperature 0 K. i.e. in conditions pretty close to outer space. For this reason it is not a terribly useful equation in this form, for most people, with the possible exception of rocket scientists and astronomers.

    You can easily tell that what this equation does NOT do is tell you that every object at room temperature produces (more accurately “develops”, because power is “developed” based on external conditions, not “produced” or “sent” intrinsically) close to 400 W/m^2 just sitting there in your room. If that were true, by way of counterexample, then all our power problems would be solved, and we wouldn’t need expensive nuclear reactors and hydroelectric turbines to run our lights and computers and refrigerators. We would have hundreds of watts per square meter all around us, just sitting there ready for the taking. But we don’t have any such thing, do we?

    You correctly used the S-B equations to calculate that energy is transferred from warmer objects to colder ones. So you are on the right track. But your un-physical use of the specialized S-B equation in particular, with Tc set to 0, tells me that you don’t really grasp what is going on, and are just writing equations without any conception of what they mean. As you said, you can get the same answer by applying the general S-B equation just once (“Qnet”) (which is what physicists would normally do), instead of applying the un-physical specialized one twice (“Q1net” and “Q2net”), and subtracting the difference. That just makes more work for yourself, and gives you some non-physical intermediate terms that are totally unnecessary (and imaginary, and mis-labeled to boot – the intermediate terms are even less “net” anything than the final one).

    So somehow, even though you can correctly calculate which direction energy flows (hot to cold), you have failed to make the logical connection that this is the only direction in which work can be done and power can be developed. From this I have to conclude that you don’t know what “work” or “power” mean.

    (Note that energy as temperature is really a separate discussion, and gets tricky when you have a batch of photons generated by a non-thermal source. But that doesn’t change the definition of temperature that I gave you, nor that it is applied equally easily to photons as to atoms. Look up “equilibrium photon gas” to see that a “gas” of photons is characterized in exactly the same way as a gas of nitrogen or water vapour, i.e. by temperature, pressure, and volume. The main thing to remember from this side-discussion is that energy is a fundamental characteristic of a photon, but power isn’t.)

    To wrap up, you need to learn at least 4 fundamental concepts here:

    1) radiation is energy
    2) energy is not the same as power
    3) the relationship between energy and power depends on entropy
    4) what entropy means

    I can help you learn these things, but first you need to stop assuming that you know what you are talking about while making false statements (i.e. repeating the lies of the climate scientists), and start asking actual fundamental physics questions. All of the answers will be backed up by experiment, in contrast to the lies the climate scientists are peddling.

    Remember, science is about knowledge. And it emphasizes repeatable, objectively verifiable knowledge. The best way to back that up is via theory and experiment. If an experiment invalidates your theory, it might be time to throw out the theory and make a new one.

  178. FerdiEgb says:

    Steve,

    Before we talk about power, we need to get the energy transfer right.

    The radiation energy sent out by any object above 0 K is exactly known from the S-B equation, and is exactly the same for the same object at the same temperature, no matter whether the object is alone in space or surrounded by other, hotter or colder objects.

    There is no physical connection between different objects in a vacuum by which to exchange any information about their own temperatures. There is only a difference in radiant energy sent and received if the objects are at different temperatures.

    Thus every object in a room at the same temperature sends out the same W/m2 (assuming all are black bodies), so that the difference between incoming and outgoing radiant energy between the different objects is zero; thus no power or work can be done, as there are no temperature/energy differences.

    I have looked up the “photon gas” definition:
    https://en.wikipedia.org/wiki/Photon_gas
    All I can say is that it concerns subatomic particle physics, which is not necessarily the same physics as for atoms or molecules.
    Several of the “definitions” for photons only apply when a wall is hit and an exchange of energy takes place…
    I don’t think that one can apply the ideal gas law pV = nRT to photons…

    It looks like you haven’t learned the cause of the radiant energy that any solid or liquid object above 0 K sends out. The definition is here:
    https://en.wikipedia.org/wiki/Thermal_radiation
    The origin of radiant energy is that dipoles form between adjacent atoms/molecules, resulting in radiant energy that increases in total, and shifts in frequency distribution, with the vibrational energy (thus temperature) of the atoms/molecules.

    That all has nothing to do with the presence or absence of other objects in the neighborhood, whatever their temperature.
    For the calculation of the net energy transfer that doesn’t matter, but the fundamental point is that each object emits radiant energy of its own, independent of the other objects in the neighborhood.

    That does not apply to gases, where dipoles between atoms/molecules form only during collisions, and most radiation capture/emission takes place internally within certain molecules, the so-called greenhouse gases. That is the reason that solids and liquids emit radiant energy in a continuous spectrum, while most gases in the atmosphere (N2 and O2) are completely transparent to IR radiation, and the absorption/emission of certain gases occurs in defined lines (or many lines, as for water vapor).

    Further, if you don’t accept that the measurements done at the two stations are real measurements of the full spectrum of downwelling IR radiation, regardless of the temperature of the instrument (because they are based on a completely different principle), then any further discussion makes no sense.

    I have been fighting the IPCC where they are wrong, mainly over the climate models, which are far from reality, and over the Bern and similar models, which assume very long residence times for CO2 in the atmosphere from human emissions.
    Even so, I have also reacted to wrong information from skeptics who insist, against all evidence, that the human contribution to the CO2 increase is minor, that GHGs have no effect, or that back radiation doesn’t exist…

  179. Ian F says:

    The downward radiation that we see in our atmosphere is not “back radiation”. It is the result of adiabatic compression heating of descending air parcels at the adiabatic lapse rate in reverse. The increasing radiation towards the surface is a consequence of the increasing temperature of the air parcels as they descend. It has nothing to do with CO2.

  180. FerdiEgb says:

    Ian, gases do not emit radiation in the IR range simply by being warm, except at extreme pressures and temperatures, as in the sun. There you have sufficient dipoles forming during collisions to give a continuous radiation spectrum, as for solids and liquids.

    The satellites that measure the temperature of the earth’s atmosphere layers measure the intensity of O2 radiation in the microwave range, far beyond the IR range.

    At earth-like temperatures the bulk gases, O2 and N2, don’t absorb or emit radiation in the IR range. Only a few gases, like CO2, CH4 and water vapor, do, and these show very distinct emission lines/regions, almost independent of their own temperature, not the continuous spectrum you get from temperature-related radiation in solids or liquids.

    The full downward radiation spectrum was measured at two stations in the US (Oklahoma and Alaska) over a period of 10 years, and they could measure both the specific back-radiation signal of the seasonal CO2 amplitude and the increase in back radiation of about 0.2 W/m2 from the 22 ppmv CO2 increase over the period 2000-2010:

    Click to access qt3428v1r6.pdf
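
    As an order-of-magnitude cross-check, the widely quoted simplified forcing expression dF = 5.35·ln(C/C0) (Myhre et al. 1998; strictly a tropopause figure, not surface back radiation) gives for that decade:

    import math
    print(5.35 * math.log(392.0 / 370.0))   # ~370 -> ~392 ppmv: ~0.3 W/m2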

  181. Ian F says:

    Ferdinand, what you say is true, but the descending parcels of air don’t contain only nitrogen and oxygen; they also contain a proportionate amount of carbon dioxide, which will emit LW frequencies.

  182. FerdiEgb says:

    Ian, even CO2 and water vapor don’t emit IR in proportion to their own temperature or that of their surroundings. They catch IR photons at specific wavelengths, which are only a part of the upgoing radiation that the earth’s surface emits towards space. When a photon is caught, that energy is either distributed to other molecules like N2 or O2 by collisions (thus heating the atmosphere) or re-emitted in all directions, including back to the surface, thus increasing the temperature of the surface…

    That is what was measured at the two stations in the US specifically for CO2…

  183. Steve Keppel-Jones says:

    Ferdinand, this statement is false and very confused: “The radiation energy sent out by any object above 0 K is exactly known from the S-B equation”

    The S-B equation, as you know, takes two temperatures as input, (along with emissivity and area) and gives you a POWER output on the left hand side. Not ENERGY. You can tell because power is measured in Watts, and this is the unit produced by the S-B equation. Energy and power are not arbitrarily interchangeable.

    Here are two correct statements:

    1) The ENERGY output by any object above absolute 0 is defined by its temperature and nothing else (temperature is energy, and energy is only the potential to do work – not actual work)
    2) The POWER transferred between TWO objects is given by their temperatures (and emissivity and area), and the S-B equation (power is rate of energy transfer, or rate of work being done)

    You could convert your statement into a correct one like this: “The radiation power transferred from an object above 0 K to an environment at 0 K is given by the S-B equation with Tc set to 0 K”.

  184. FerdiEgb says:

    Steve, indeed I made an error: the radiation power sent out by any object above 0 K is exactly known from the S-B law (not equation). See e.g.:
    https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law
    I searched for “S-B equation” and almost nothing popped up, or just a few results pointing to the definition of the S-B law…

    The outgoing radiation power of any object depends only on its own temperature (and surface area and emissivity), completely independent of the presence or absence of any other, warmer or cooler object in the neighborhood.

    Two objects in each other’s neighborhood exchange radiation power in proportion to their own temperatures, each independent of the temperature of the other. The net power transfer is the difference in radiation power between the two objects. That can be computed with an equation which shortens the calculation but has no physical meaning in itself.

    Better that way?

  185. Steve Keppel-Jones says:

    Ferdinand, given how many fundamental errors you have made in this discussion so far, and the fact that you have no experimental evidence for your theory, don’t you think it would be wiser to stop trying to lecture me on how you think physics works, and instead start asking some fundamental questions?

    The S-B law and S-B equation are the same thing. It is a law which is an equation. It is an equation which requires two input temperatures (i.e. energies) (and a few other variables plus a constant), and produces an output power (the rate of energy transfer between the objects at those two temperatures). No more and no less.

    This statement is false: “The outgoing radiation power of any object depends only on its own temperature” (But at least you did remember to qualify “radiation” with “power”, so that I know what you are talking about. That’s an improvement.)

    The S-B law tells you that the statement is false. The law/equation takes TWO temperatures as input. (Sometimes called Thot/Tcold, or T1/T2) If you set one of those temperatures to 0 K, then you have an equation which applies (approximately) to an object in outer space. This is the version of the equation that Wikipedia misleadingly calls “the S-B law”, without telling you where it came from, and why it is a specialized version of the general radiant heat transfer equation, or that you can only use this equation in outer space.

    You can observe that your statement is false in another obvious way, without having to invoke any equations. If every object emits “power” based only on its own temperature, then we are surrounded every day by objects emitting power. For example, the pen on your desk (if you have a desk and a pen). As well as the desk itself. If your pen sitting on your desk is emitting power just sitting there, why do you need to pay the power company to sell you power? You are surrounded by free power, after all, according to your claim. And lots of it, too – hundreds of Watts per square meter. What are you doing with all those free Watts you are surrounded by? Are you just wasting them? You should make a Stirling engine to turn them into rotational energy, and then run an electric generator with that. Easy peasy! Try it!

    The third way you can tell that your statement is false is that power is not something which is “emitted”. Energy can be emitted, as a field of radiation potential, but power is something which can only be “developed” between two objects, given an entropy gradient between them (and only down the entropy gradient). That phenomenon critically depends on work being done, and an object cannot do work by itself in isolation – only on another object. Do you think it is time to ask fundamental questions about what entropy is, what work is, and why you need an entropy differential to cause work to happen – and therefore power to be developed?

    Note that just because you see an equation on the Internet somewhere (even on Wikipedia) does not automatically mean you have grasped everything you need to know about its meaning and applicability. That will require some more fundamental study first. I know it’s been 60 years since you did any of that, but it’s never too late to start again! I have every confidence that you can figure this out. I am here to help.

  186. Ian F says:

    Ferdinand, CO2-emitted radiation does not add energy to the atmosphere, which is what it would need to do to increase temperature. The radiation does not warm anything because it does not have a temperature. Radiation simply transfers energy from a warmer body to a cooler body. It does not change the overall temperature of the earth’s surface unless it comes from an external source, in the earth’s case the sun.
    In a gas, radiation is absorbed into a molecule by resonating with the molecule’s natural frequencies and then sharing amplitudes (energy amounts). It has been observed that energy will only flow from a higher amplitude to a lower amplitude, so that the higher energy amount is shared with the lower amount. In the case of collisions, the molecule with the higher translational velocity will share its energy with the molecule of lower translational velocity. So in both cases energy is transferred from the higher energy to the lower energy. You need to bear in mind that molecules can’t emit more radiation than they absorb (Kirchhoff’s law).
    As you say, correctly, CO2 emits photons in all directions, which means about two thirds of emitted photons will go upwards or sideways, leaving about one third on a downwards trajectory. Since air in the lower troposphere is mostly warmer than at higher altitudes, the downward radiation cannot warm the lower troposphere, because the radiation is unlikely to find a molecule with a lower amount of energy than itself that it can resonate with.
    CO2 is a 400 ppm (0.04%) trace gas in our atmosphere, necessary for plant life. Absorption tests have shown that CO2 is only able to absorb about 16% of the earth’s emitted infrared frequencies. These lie in the mid-infrared range, with an average photon energy of about 0.08 electron volts. To put that into perspective, the average energy of visible light frequencies is about 2.4 electron volts, or about 30 times the energy absorbed by CO2. So CO2 has a very small effect on atmospheric temperature, even if you include its partial pressure.
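    [Ed.: a quick check of that photon-energy arithmetic in Python, assuming CO2’s main absorption band at ~15 µm and mid-visible (green) light at ~0.52 µm; the exact wavelengths are illustrative standard values, not taken from the comment.]

    ```python
    # Photon energy E = h*c / wavelength, expressed in electron volts.
    H_C_EV_UM = 1.23984  # h*c in eV * micrometers

    def photon_energy_ev(wavelength_um):
        """Energy (eV) of a photon of the given wavelength in micrometers."""
        return H_C_EV_UM / wavelength_um

    e_co2 = photon_energy_ev(15.0)   # CO2's main mid-IR bending band, ~15 um
    e_vis = photon_energy_ev(0.52)   # mid-visible (green) light, ~0.52 um

    print(f"15 um IR photon: {e_co2:.3f} eV")  # ~0.083 eV
    print(f"0.52 um photon:  {e_vis:.2f} eV")  # ~2.4 eV
    print(f"ratio: ~{e_vis / e_co2:.0f}x")     # ~29x
    ```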
    Outgoing longwave radiation (OLR) is a byproduct of the earth’s thermodynamic system. Earth’s atmosphere is not static; it is a dynamic system in fluid motion as a result of conduction and convection. Over 75% of the surface heat is lifted to the upper troposphere by these processes. Most of the OLR is from water vapor, which absorbs a far greater range of infrared frequencies than CO2 and, because it is a condensable gas, also carries latent heat to the upper troposphere.
    You probably know most of these things but it seems to me that a paradigm shift is needed away from just focusing on the radiation of atmospheric trace gases towards the physics of all of the processes occurring in earth’s atmosphere.

  187. FerdiEgb says:

    Steve, all definitions of the S-B law I found on the Internet define the law for one object, not two. The only formula is for the loss of radiant energy/power of any object above 0 K, regardless of whether there is any other object in the neighborhood.
    From a university:
    https://www.pas.rochester.edu/~blackman/ast104/radiation.html

    If you can find a different formula for the S-B law (not equation), then you may be right… Heat/power transfer is a matter of two objects each sending their own energy/power to the other per the S-B law; the difference is the net power transfer.

    If I don’t pay the energy company, every object in my house will lose energy per the S-B law and ultimately cool down toward 0 K, except that the sun heats the earth to an average 288 K (with the help of GHGs), so my house receives relatively much energy/power, radiant and by conduction, from its surroundings. Without that, I would freeze toward 0 K. The rest of my comfort comes from the bill I need to pay to my energy company (natural gas in this house).

    Thus indeed the pen on my desk is sending its own energy out to me and the reverse. Only if there is a sufficient difference can that do practical work, but unfortunately that is not the case…

    For the rest, there still is a lot of confusion about the definitions: every object above 0 K emits radiant energy/power in W/m2 in proportion to the fourth power of its own temperature (again, “power” in a different sense…).
    According to the following definition, the watt is a unit of “power”:
    https://www.kqed.org/quest/72724/what-is-the-difference-between-power-and-energy

    “A watt equals a joule per second. If a smart phone uses five joules of energy every second, then the power of the phone is five joules per second, or five watts.”
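    [Ed.: a one-line restatement of that quoted example in code; the five-watt figure is from the quote itself, and the one-minute duration is an added illustration.]

    ```python
    power_w = 5.0    # phone drawing five watts, i.e. five joules per second
    seconds = 60.0   # run it for one minute (illustrative duration)

    energy_j = power_w * seconds  # energy is power integrated over time
    print(energy_j)  # 300.0 joules consumed: power is a rate, energy a quantity
    ```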

    As a non-native English speaker, I find the different definitions very confusing…

  188. FerdiEgb says:

    Ian,

    I mostly agree with what you say…

    Just a few remarks:

    – When a CO2 (or H2O) molecule absorbs a photon, the energy goes into internal vibration.
    Some fundamentals are here:

    (Linked PDF: barrett_ee05.pdf)

    If that excited CO2 molecule collides with an N2 or O2 molecule, the extra energy goes into the vibration of the O2/N2 molecule, practically independent of the vibration level (“temperature”) of the latter (although the opposite may happen too, with a high-energy O2/N2 molecule). This heats the air column where the CO2 molecule resides, independent of the local temperature (vibration energy) of all the molecules around it. Whether one may call the excited CO2 molecule “warmer” than the other molecules, I don’t know, but the energy level of an excited CO2 molecule is probably higher than that of most other molecules, even near the surface.
    That is quite different from solid or liquid objects, where the type of vibration energy is similar for all molecules present and therefore on average flows from warmer to cooler.

    Without collisions, the IR radiation from CO2 is not absorbed by O2 or N2, nor by water vapor at the frequencies where water vapor is not active; thus only another CO2 molecule can absorb the downgoing frequencies from CO2, or they hit the surface and are absorbed there (the earth is a near black-body for IR frequencies, both outgoing and incoming).
    That adds to the total energy hitting the earth: about 235 W/m2 of SW sunlight (in the above reference; 210 W/m2 in other references), and the rest is from back radiation.
    If the earth only had incoming SW from the sun, we would be freezing at an average 254 K (“snowball earth”), and only around the equator would some life be possible.
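    [Ed.: a minimal sketch of where that 254 K figure comes from, using the standard effective-temperature formula and assuming a solar constant of ~1361 W/m2 and a Bond albedo of ~0.3 — both conventional values, not taken from the comment.]

    ```python
    # Effective (no-greenhouse) temperature of Earth from radiative balance:
    # T_eff = [S * (1 - albedo) / (4 * sigma)]^(1/4)
    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
    S = 1361.0              # solar constant at Earth's orbit, W m^-2
    ALBEDO = 0.3            # approximate Bond albedo

    absorbed = S * (1.0 - ALBEDO) / 4.0   # mean absorbed flux, ~238 W m^-2
    t_eff = (absorbed / SIGMA) ** 0.25    # equilibrium temperature, K

    print(f"absorbed flux: {absorbed:.0f} W/m2, T_eff: {t_eff:.1f} K")  # ~254.6 K
    ```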

    Indeed most of the energy exchanges between surface and space are thanks to water vapor (in both directions), but even so CO2 plays an additional role, as it is active where water vapor is not/less active…

  189. FerdiEgb says:

    Steve,

    In addition, can you explain how a hand-held thermometer can measure the temperature of any object in the surroundings, if there were no one-way radiation from all these objects in all directions?
    Or does the thermometer “know” the net energy/power difference between itself and these objects?

  190. FerdiEgb says:

    Ian,

    A good illustration of the opposite excitation, from a high-energy N2 molecule to a CO2 molecule, is what happens in a CO2 laser: electrical energy excites an N2 molecule, but that molecule cannot easily lose its energy by radiation, only by collision with a CO2 molecule; the latter gets excited and then loses its energy by radiation, with the help of other “modulating” gases. See:
    https://physicswave.com/carbon-dioxide-laser-construction-and-working/

    At the end, that page contains an error: the outgoing beam is 9.6 and 10.6 micrometers, not 0.6 and 10.6…

    To what extent that happens in the atmosphere near the surface, I don’t know, but theoretically it can happen when high-energy (‘hot’) N2 or O2 molecules collide with CO2 molecules…

  191. Steve Keppel-Jones says:

    Ferdinand, for the general S-B radiant heat transfer equation, try here:

    http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/stefan.html

    You will see both the general radiant heat transfer equation containing two temperatures, and a specialized one where the “cold” temperature has been set to 0 K (that is the one you see most often when you do a Google search for “S-B Law”).

    However, most of the examples where they show you the equation with only one temperature in it are written by extremely confused people, including the one you linked to. That one says: “Stefan-Boltzmann Law gives the total energy being emitted at all wavelengths”, and you can tell that they have no idea what the difference between energy and power is either, since they write “energy”, but the unit is Watts (per square meter). But energy is not measured in Watts! Never has been, never will be. The unit for energy is Joules, and the unit for power is Watts.

    Since it looks like you understand how to calculate the power transfer via radiation between two objects using T1 and T2 (i.e. via the general heat transfer equation in the above link, which you have also written yourself in an earlier comment), what do you think happens when you set T2 to 0 K? What is the resulting equation, and what is the physical meaning of setting T2 to 0 K?

    Then you wrote: “Thus indeed the pen on my desk is sending its own energy out to myself and reverse. Only if there is a sufficient difference, that can do practical work, but unfortunately, that is not the case…”

    This is indeed what I have been trying to tell you. However, previously you wrote “The outgoing radiation power for any object only depends of its own temperature”. That is contradicted by your sentence above: “Only if there is a sufficient difference, that can do practical work”. Remember that power is defined as the rate of work being done. If there is no temperature difference, no work can be done, and no power will be developed. This is the result you will get if you set T1 and T2 to the same value in the S-B radiant heat transfer equation.
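    [Ed.: a minimal numeric sketch of the two limiting cases under discussion, using the two-temperature form from the hyperphysics page linked above; the emissivity and area values below are illustrative assumptions.]

    ```python
    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def net_radiant_power(t_hot, t_cold, emissivity=1.0, area=1.0):
        """Net radiant power (W) exchanged between a body at t_hot (K) and
        surroundings at t_cold (K): P = e * sigma * A * (Th^4 - Tc^4)."""
        return emissivity * SIGMA * area * (t_hot**4 - t_cold**4)

    # T_cold = 0 K: the equation reduces to the familiar sigma*T^4 form,
    # i.e. an object radiating into empty space.
    print(net_radiant_power(288.0, 0.0))    # ~390 W for 1 m^2 at 288 K

    # T_hot = T_cold: no temperature difference, zero net power developed.
    print(net_radiant_power(288.0, 288.0))  # 0.0
    ```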

    You still seem to think that energy and power are interchangeable, which is probably your main confusion: you wrote “the loss of radiant energy/power”, but that is not how physics works. You can’t write “energy/power”. You have to pick which one you are referring to. Energy is a quantity, power is a rate. They are not the same. This is an extremely important concept to grasp.

    As far as your follow-up question, which was “can you explain why a hand held thermometer can measure temperature of any object in the surroundings, if there was no one-way radiation of all these objects in all directions?
    Or does the thermometer “knows” the net energy/power difference between itself and these objects?”:

    First, I should point out that when you said “one-way radiation” here, you forgot again to specify whether you are referring to “radiant energy” or “radiant power”, which I told you to beware of earlier. Remember that objects above absolute 0 emit radiant *energy*, but developing radiant *power* requires a second object at a different temperature (i.e. an entropy gradient).

    So, what an IR thermometer does is it measures the power (not “net power”, just normal power) experienced by its internal sensor (a bolometer or a thermocouple). Those devices convert an energy flow (heat) into a variable resistance, or a voltage. This does not violate any laws of physics, because as long as energy is flowing from one object to another, there is nothing to prevent you from measuring that, or extracting some of the energy. However, the resistance change (or voltage) will be either positive or negative depending on whether the sensor is gaining energy or losing it (via radiation specifically). (It could also be 0 if you point the sensor at an object which is at the same temperature as the sensor, of course.) So the IR thermometer compares this measurement with an internal temperature measurement, and subtracts the two, and tells you the result. (It would not be able to give you an accurate measurement if it didn’t know its own temperature.)

    The difference between your theory and actual physics is that in your theory, a passive energy flow measurement device such as a bolometer or thermocouple would always show a positive voltage (or positive change in resistance), regardless of its own temperature or what it is pointed at. So you can verify very easily that your theory is wrong, by measuring the voltage on a thermocouple when you point it at various hotter or colder objects. You will get positive and negative voltages, indicating positive and negative energy flow (positive or negative power).
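    [Ed.: a toy sketch of that sign behavior, assuming an idealized radiometric sensor whose output is proportional to sigma*(T_target^4 − T_sensor^4); the gain and temperatures are illustrative assumptions, not real device specifications.]

    ```python
    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def sensor_signal(t_target, t_sensor, gain=1.0):
        """Idealized sensor output: positive when the sensor gains energy
        from the target, negative when it loses energy to the target."""
        return gain * SIGMA * (t_target**4 - t_sensor**4)

    t_sensor = 293.0  # sensor body at room temperature, K
    for t_target in (273.0, 293.0, 313.0):  # colder, equal, hotter targets
        print(f"target {t_target:.0f} K -> {sensor_signal(t_target, t_sensor):+.1f}")
    # colder target -> negative signal, equal -> zero, hotter -> positive
    ```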

  192. Ian F says:

    Ferdinand, about molecular collisions in atmospheres:

    In an atmosphere dense enough that collisional energy transfer can significantly occur, all radiative molecules play the part of atmospheric coolants at and above the temperature at which the combined translational-mode energy of two colliding molecules exceeds the lowest excited vibrational-mode quantum state energy of the radiative molecule. Below this temperature, they act to warm the atmosphere. If that warming mechanism occurs below the tropopause, the net result is an increase in Convective Available Potential Energy (CAPE), which increases convection, which is a net cooling process.

    Note here that the term ‘transition temperature’ is not used in relation to phase change, but to a change in the role of the given molecular species from net cooling to net warming or vice versa.

    CO2 is a dual-role molecule, just as all molecules capable of emitting radiation are. The ‘transition temperature’ of any given molecular species is dependent upon the differential between:

    1) the combined translational mode energy of two colliding molecules, and
    2) the lowest excited vibrational mode quantum state energy of the radiative molecule.

    When 2) > 1), energy flows from vibrational mode to translational mode, which is a warming process.

    When 1) > 2), energy flows from translational mode to vibrational mode, which is a cooling process. Below ~288 K, the energy of CO2’s lowest excited vibrational mode quantum state, CO2{v21(1)}, is higher than the average combined translational-mode energy of two colliding atmospheric molecules; therefore the 2nd Law of Thermodynamics and the Equipartition Theorem dictate that energy will flow from vibrational mode to translational mode.

    The increase in kinetic energy of atmospheric molecules represents an increase in temperature.

    Above ~288 K, the Maxwell-Boltzmann speed distribution shows that enough atmospheric molecules carry sufficient combined translational-mode energy upon collision to begin significantly exciting CO2’s lowest vibrational mode quantum state.

    [Graphic: the percentage of molecules carrying sufficient kinetic energy at 288 K to excite CO2{v21(1)}]
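    [Ed.: since the graphic itself is missing, here is a rough order-of-magnitude sketch, assuming the CO2 ν2 bending-mode quantum of ~0.083 eV (667 cm⁻¹) and using the simple Boltzmann factor exp(−E/kT) as a stand-in for the fraction of sufficiently energetic collisions; this is an illustration, not the full Maxwell-Boltzmann integral the graphic presumably showed.]

    ```python
    import math

    K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K
    E_V2 = 0.0828         # CO2 nu2 bending-mode quantum, eV (~667 cm^-1)

    for temp_k in (250.0, 288.0, 320.0):
        fraction = math.exp(-E_V2 / (K_B_EV * temp_k))  # Boltzmann factor
        print(f"{temp_k:.0f} K: ~{100 * fraction:.1f}% of collisions exceed E_v2")
    # roughly 2% at 250 K, ~3.6% at 288 K, ~5% at 320 K: the fraction
    # grows quickly with temperature, as the comment describes
    ```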

    The conversion of translational mode to vibrational mode energy is, by definition, a cooling process.

    This increases the time duration during which CO2 is vibrationally excited and therefore the probability that it will radiatively emit. The resultant radiation which is emitted to space is, by definition, a cooling process.

    This ‘transition temperature’ at which CO2 changes from being a net-warming to a net-cooling molecule is ~288 K, with CO2 acting more and more in its net-cooling mode as temperature increases (because there are three CO2{v2} vibrational mode quantum state transitions which are very nearly degenerate ({v20(0)} ↔ {v21(1)}; {v21(1)} ↔ {v22(2)}; {v22(2)} ↔ {v23(3)}), and because there is also the CO2{v3(1)} vibrational mode quantum state, which can emit radiation falling within the Infrared Atmospheric Window, allowing that radiation a nearly unfettered transit to space).

    Thus CO2 is physically incapable of causing catastrophic warming, and indeed is a net atmospheric coolant above its transition temperature, in accord with 2LoT and the Equipartition Theorem.

    The same concept applies for all molecules capable of emitting radiation. The only thing that changes is the transition temperature at which any given molecular species changes roles from net-cooling to net-warming or vice versa, because each molecular species has different excited vibrational mode quantum state energies.

    Thus, in the troposphere, CO2 is a moderator, raising temperature via v-t (vibrational -> translational) collisional processes below its transition temperature and lowering temperature via t-v (translational -> vibrational) collisional processes above its transition temperature. In the tropopause, it acts as a net atmospheric radiative coolant because collisional processes exponentially decrease as altitude increases, thus radiative processes dominate. This increases buoyancy of convecting parcels of air, increasing the convection rate, which is a cooling process. A higher CO2 atmospheric concentration would allow a given mole parcel of air to have more effective radiative emittance, with the majority of that radiation being upwelling due to the mean free path length / air density / altitude relation.

    As I said in my earlier comments to you, our atmosphere is not static; over 75% of the heat transferred from the surface to the upper troposphere goes by convective turbulence. That said, ask yourself: which is better at increasing the convective efficiency of our atmosphere? More homo-nuclear diatomics like nitrogen and oxygen, which don’t absorb and emit IR radiation to space, or more polyatomics like CO2, which do absorb and emit IR radiation to space? I know what my answer is!

  193. De Mol says:

    Steve, I’ve read the conversation and you are 100% correct. In the end the two temperatures will reach a balance and equalize, and then there will be no heat exchange any longer. And yes, I am always amazed at how they really try to beat the second law of thermodynamics.

    What Ned is saying at January 28, 2023 at 10:46 pm about the relationship not being linear is also true. That’s very simple to see: when the temperature of the ocean changes, as is claimed, the ocean will release gas. Of course this might be counterbalanced by humans adding CO2 to the atmosphere, but these two processes counter each other and thus cannot be linear. The same goes for vegetation: since plants grow faster when there is more CO2, adding CO2 to the atmosphere means plants will increase their uptake of CO2 from that atmosphere, which is also not linear.

  194. Steve Keppel-Jones says:

    Yes, De Mol, I am also amazed at how people who have basically never studied theoretical physics are quite sure that radiation is somehow a phenomenon of inherent power rather than energy, because that’s the story the climate “scientists” keep trying to sell us. It doesn’t work that way, though.

    The interplay between ocean temperature and atmospheric CO2 is surely quite complex, as you said, involving not just the obvious temperature-dependent equilibrium (Henry’s Law) and biological absorption (plant food), but also the various other CO2 sources and sinks (volcanoes, fossil fuel combustion, marine organism absorption and decomposition, etc). I’m not too worried about that aspect; it’s tricky to model and argue about. So instead I’m focusing on trying to keep people from making false statements about the relationship between energy and power. If we can get rid of all the fake Watts the climate “scientists” are throwing around like some sort of physics confetti, the remaining (smaller) energy flows and balances should look a lot less scary to the untrained eye. That’s my theory, anyway.

    (Sometimes people try to work through the implications of defending their false theories, either about radiant power or atmospheric thermal gradients, and invariably find themselves in a preposterous mathematical or physical contradiction of some sort. Eventually they stop arguing, but I’m never sure whether they’ve actually learned anything or not. I have high hopes for Ferdinand, he seems smart, so we’ll have to see whether he has accepted that he was trying to promote a bogus theory backed up by erroneous interpretations of experimental evidence.)

  195. Mark S. says:

    Hi Steve Keppel-Jones. First, what is your background with regard to this type of physics? You sound like a physicist, but I would like to confirm. Second, is there a single resource/webpage for the educated non-physicist layperson that exposes these errors of reasoning among so-called climate scientists? If there isn’t, then someone should make one.

  196. Steve Keppel-Jones says:

    Hi Mark S.,

    My background is that I am not a professional physicist myself, but my father is a retired university professor of theoretical physics (and mathematics), so I have been studying these topics since I was a toddler, whether I wanted to or not. To be completely honest, most of the time I didn’t really want to – there was always something more fun that needed doing – but I had to study anyway 🙂 I don’t want to emphasize anyone’s qualifications too much, least of all my own, because physics doesn’t care about qualifications. So I prefer to focus on the actual physics, whether true or false, and let the resulting contradictions speak for themselves. Contradictions speak pretty loudly, and they usually say “Hey! You’re doing it wrong!” 🙂

    Whether there is a one-stop-shop for non-physicists to go to learn about the falsehoods propagated by climate scientists is a good question. There are a few blogs where these topics are discussed regularly, like the Talkshop here, or Watts Up With That, by folks with varying levels of physics backgrounds. To get a really good grasp of the issue, though, there is not much substitute for studying the theoretical physics from the ground up. It is difficult to get an intuitive sense of what is going on without doing all the exercises in the textbooks. You kind of have to be able to see and feel the entropy doing its thing in your head, in order to intuit that the claims of large amounts of radiant power all over the place are absolute nonsense. Otherwise you can see one equation or another and they all kind of look plausible – there are symbols, and constants, and units, and equals signs, and why wouldn’t they be correct? The Internet is full of false interpretations of the S-B law, for instance, which doesn’t help in the slightest. I can’t fix all of them, unfortunately, and even Wikipedia is basically taken over by left-wing global-warming zealots, so you can’t really trust that.

    I will take under advisement your suggestion to write a clarifying piece on this, and try to figure out where to put it. I believe Nikolov and Zeller have already pointed out this particular problem in various publications of theirs, so I wouldn’t be writing anything new. Of course physics textbooks do a fine job too, of explaining things like energy, work, power, entropy, radiation, temperature, heat, etc. But maybe a brief summary would have a place somewhere.

  197. Mark S. says:

    Steve, that would be great. I think the exposé should tie back to the claims of climate scientists. If I were a politician or layperson trying to decide whether I was being hoodwinked by phony physics, would I be able to at least suspect it after reading your exposé? Also, you should try getting it published in a popular skeptic magazine like Skeptic or The Skeptical Inquirer. They may still be ideologically blinded to criticism of man-made climate catastrophe by CO2, but it’s worth a try. Thanks.

  198. tallbloke says: