## Nikolov & Zeller: Reply to Eschenbach

Posted: February 9, 2012 by tallbloke in Astrophysics, atmosphere, climate, Energy, flames, Incompetence, solar system dynamics

Anthony Watts has kindly offered the Talkshop the exclusive on Ned Nikolov and Karl Zeller’s reply to the article by Willis Eschenbach published at WUWT, which we accept, gladly.

## Reply to: ‘The Mystery of Equation 8’ by Willis Eschenbach

Ned Nikolov, Ph.D. and Karl Zeller, Ph.D.

February 7, 2012

In a recent article entitled ‘The Mystery of Equation 8’, published at WUWT on January 23, 2012, Mr. Willis Eschenbach claims to have uncovered serious mathematical and conceptual flaws in two principal equations of our paper ‘Unified Theory of Climate‘. In his ‘analysis’, Mr. Eschenbach makes several fundamental errors, which were so elementary that our initial reaction was not to respond. However, after 10 days of observing the online discussion, it became clear that a number of bloggers have fallen victim to the same confusion as Mr. Eschenbach. Hence, we decided to prepare this official reply in an effort to set the record straight. This will be the only time we respond to such confused criticism, since we believe the climate science community has far more serious issues to discuss.

### Demystifying the Mysteries of Equations 7 and 8

We begin with the most amusing claim by Mr. Eschenbach, which he calls ‘the sting in the tale’. First, some background: in our original paper, we use 3 principal equations that form the backbone of our new ‘Greenhouse’ concept. For consistency, we use here the same formula numbering as adopted in the original paper. Equation (2) calculates the mean surface temperature (Tgb) of a standard Planetary Gray Body (PGB) with no atmosphere, i.e.

Tgb = (2/5) × [(So + cs)(1 − αgb)/(ϵσ)]^0.25        (2)

where So is the solar irradiance (W m-2), αgb = 0.12 is the PGB shortwave albedo, ϵ = 0.955 is the PGB’s thermal emissivity, σ = 5.6704×10-8 W m-2 K-4 is the SB constant, and cs = 0.0001325 W m-2 is a small constant, the purpose of which is to ensure that Tgb = 2.725K when So = 0.0. The derivation and validation of this formula are discussed in more detail elsewhere. We redefine the ‘Greenhouse Effect’ as a near-surface Atmospheric Thermal Enhancement (ATE) measured by the non-dimensional ratio (NTE) of a planet’s actual mean near-surface temperature (Ts) to the temperature of an equivalent PGB at the same distance from the Sun, i.e. NTE = Ts / Tgb (where Tgb is computed by Eq. 2). We then use observed data on surface temperature and atmospheric pressure (Ps) for 8 celestial bodies to derive an empirical function relating NTE to Ps, employing non-linear regression analysis. The result is our Eq. (7), which describes all planetary data points with a high degree of accuracy:

NTE(Ps) = Ts / Tgb = EXP(t1 Ps^t2 + t3 Ps^t4)        (7)

where t1 through t4 are the four regression coefficients.
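For readers who want to check the arithmetic, here is a short numerical sketch of Eq. (2) built from the constants listed above. This is our reading of the text, not the authors’ own code; in particular, the placement of cs inside the bracket is inferred from the stated requirement that Tgb = 2.725K when So = 0.

```python
# Numerical sketch of N&Z's gray-body temperature, Eq. (2), using the
# constants quoted in the text. The exact placement of cs is inferred
# from the requirement that Tgb = 2.725 K when So = 0.
SIGMA = 5.6704e-8   # Stefan-Boltzmann constant, W m-2 K-4
ALBEDO = 0.12       # PGB shortwave albedo
EMISS = 0.955       # PGB thermal emissivity
CS = 0.0001325      # small constant, W m-2

def t_gray_body(s0):
    """Mean surface temperature (K) of an airless Planetary Gray Body."""
    return 0.4 * ((s0 + CS) * (1.0 - ALBEDO) / (EMISS * SIGMA)) ** 0.25

print(round(t_gray_body(0.0), 3))   # 2.725 K: the deep-space floor set by cs
print(round(t_gray_body(1362.0)))   # ~154 K for an airless Earth (So = 1362)
```

Running the sketch with So = 0 reproduces the 2.725K deep-space floor, which is how the bracket placement of cs was confirmed.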

The key conceptual implication of Eq. (7) is that, across a broad range of atmospheric planetary conditions, the ATE factor is completely explained by variations in mean surface pressure. In Section 3.3 of our original paper, we specifically point out that NTE has no meaningful relationship with other variables such as total absorbed solar radiation by planets or the amount of greenhouse gases in their atmospheres. In other words, pressure is the only accurate predictor of NTE (i.e. ATE) we found. This fact appears to have completely escaped Mr. Eschenbach’s attention.

From Eq. (7) we derive our Equation (8) (the subject of Eschenbach’s analysis) in the following manner. First, we solve Eq. (7) for Ts, i.e.

Ts = Tgb × EXP(t1 Ps^t2 + t3 Ps^t4)        (7a)

Second, we substitute for Tgb its actual expression from Eq. (2) to obtain:

Ts = (2/5) × [(So + cs)(1 − αgb)/(ϵσ)]^0.25 × EXP(t1 Ps^t2 + t3 Ps^t4)        (7b)

Third, we combine the fixed parameters 2/5, αgb, ϵ and σ in Eq. (7b) into a single constant, i.e.

(2/5) × [(1 − αgb)/(ϵσ)]^0.25 = 25.3966

Fourth, we use the newly computed constant along with the symbol NTE(Ps), representing the EXP term of Eq. (7b), to write our final Eq. (8):

Ts = 25.3966 × (So + cs)^0.25 × NTE(Ps)        (8)

Basically, Eq. (8) is Eq. (7b) expressed in a simplified and succinct form, where NTE(Ps) literally means the ATE factor as a function of pressure!
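As a quick sanity check (our own arithmetic, not taken from the paper), folding the fixed parameters 2/5, αgb = 0.12, ϵ = 0.955 and σ into one number does reproduce the quoted constant:

```python
# Check that combining the fixed parameters 2/5, albedo (0.12), emissivity
# (0.955) and the S-B constant reproduces the 25.3966 constant of Eq. (8).
SIGMA = 5.6704e-8
const = 0.4 * ((1.0 - 0.12) / (0.955 * SIGMA)) ** 0.25
print(const)  # close to 25.3966 -- a derived number, not a fifth free parameter
```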

Let’s now look at how Mr. Eschenbach interprets Eq. (7) and its relationship to Eq. (8). He correctly identifies that Eq. (7) has 4 ‘tunable parameters’ (the correct term is regression coefficients, but never mind this minor terminological inaccuracy for now). He then asserts:

Amusingly, the result of equation (7) is then used in another fitted (tuned) equation, number (8).

This is the first demonstration of misunderstanding in his analysis (with far-reaching consequences, as discussed below), where he fails to grasp that Eq. (8) follows simply and directly from Eq. (7) after a few straightforward algebraic rearrangements, and that it contains no additional tunable parameters! Instead, Mr. Eschenbach smugly informs our fellow bloggers that the constant 25.3966 is yet another tunable parameter, which he labels t5 (his Eq. 8sym)?! We point out that the fixed parameters used to produce this constant were defined and set prior to carrying out the regression analysis that yielded Eq. (7). Indeed, it could not have been any other way, because these parameters are required to estimate the PGB temperatures (Tgb) used in the calculation of NTE values, which are subsequently regressed against observed pressure data. Thus, Eschenbach leads readers astray by telling them that we use 5 tunable parameters instead of 4. Fascinating! Next, in a state of total confusion, he makes the following stunning proposition:

We can also substitute equation (7) into equation (8) in a slightly different way, using the middle term in equation 7. This yields:

Ts = t5 * Solar^0.25 * Ts / Tgb       (eqn 10)

What middle term? This twisted line of reasoning is astounding, because it reveals an utter misunderstanding of basic algebra compounded with an inability to follow the content, leaving the reader literally speechless! This error leads Eschenbach to his central false claim that our Eq. (8) simply means Ts = Tgb * Ts / Tgb, and therefore reduces to Ts = Ts!? One can only stand in disbelief before such nonsense! This is what Mr. Eschenbach jubilantly calls ‘the sting in the tale‘. It is a big sting, alright, but in his tail, not ours! He proudly reiterates this ‘finding’ once again in the Conclusion section of his article, leaving no doubt in the reader’s mind about his analytical ‘skills’.

Blinded by a profound misunderstanding, Mr. Eschenbach pompously concludes in regard to the constant 25.3966 that what we have done is “estimate the Stefan-Boltzmann constant by a bizarre curve fitting method”. He further states: “And they did a decent job of that. Actually, pretty impressive considering the number of steps and parameters involved”. Wow! Hands down, such a conclusion could easily qualify for the Guinness Book of Records on Miscomprehension!

The rest of Eschenbach’s ‘revelations’ in regard to our Equations (7) and (8) are less flamboyant but equally amusing. He argues that the small constant cs in Eq. (2) is pointless while failing to understand the physical realism it brings to the new model (Eq. 8). Since the goal of our research was not just to derive a regression equation, but to develop a new physically viable model of the ‘Greenhouse Effect’, this constant is important in two ways: (a) it does not allow the PGB temperature to fall below 2.725K, the irreducible temperature of Deep Space, when So approaches zero; and (b) it enables Eq. (8) to predict increasing temperatures with rising pressure even in the absence of solar radiation. Indeed, if we set cs = 0.0, then Eq. (8) would always predict Ts = 0.0 when So = 0.0 regardless of pressure, which is physically unrealistic due to the presence of cosmic background radiation.

A major portion of Eschenbach’s criticism focuses on the ‘accusation’ that all we had done is just ‘curve fitting’ devoid of any physical meaning. In an Update to his article, Eschenbach attempts to prove that he can do a better job in fitting a curve through our planetary NTE values using an equation with fewer free parameters. His simplified version of our Eq. (8) has 3 regression parameters (instead of 4) and reads:

Figure 1. Absolute errors of predicted planetary mean surface temperatures by Eschenbach’s simplified equation and by N&Z’s Equation (8). Errors are assessed against the observed mean surface temperatures listed in Table 1 of Nikolov & Zeller’s original paper.

Note that his expression is in a sense more empirical than our Eq. (8), because the coefficient in front of So has been erroneously treated as a tunable (regression) parameter, thereby distorting our PGB Eq. (2). Figure 1 compares the absolute deviations of predicted planetary surface temperatures from their true values (listed in Table 1 of our original paper) using Eschenbach’s regression equation and our Eq. (8). It is obvious to the naked eye that Eschenbach’s formula produces far less accurate results than our Eq. (8). This was also recently quantified statistically by Dan Hunt in an article published at Tallbloke’s Talkshop. For example, Eschenbach’s equation predicts Earth’s mean temperature to be 295.2K, which is 7.9K higher than observed. This is not a small error: the last time our planet was 7.9K warmer than present, some 40 million years ago, the Earth’s surface was ice-free and Antarctica was covered by subtropical vegetation! Of course, being a construction manager, Mr. Eschenbach likely has a limited understanding of Earth’s climate history and of what a 7.9K warmer surface actually means. However, the fact that he loudly claims a superior accuracy of his simplified equation over ours is puzzling, to say the least. His exact words were:

Curiously, my simplified version actually has a slightly lower RMS error than the N&Z version, so I did indeed beat them at their own game. My equation is not only simpler, it is more accurate

This statement blatantly contradicts the evidence. Mr. Eschenbach does not know that we had extensively experimented with exponential functions containing various numbers of free parameters many months before he became aware of our theory, and had found that it takes a minimum of 4 parameters to accurately describe the highly non-linear relationship between NTE and surface pressure (Eq. 7). The basic implication of Eschenbach’s analysis is that one could indeed use a 3-parameter exponential function to predict planetary temperatures from solar irradiance and surface pressure, but with far less accuracy. Truly enlightening!

By the way, curve fitting is an integral part of the classic scientific method. When dealing with an unknown process or phenomenon, taking measurements and using the data to fit curves is often the only feasible approach to understanding the phenomenon and developing a theory about it. This method was used extensively throughout the 18th and 19th centuries and a good part of the 20th to extract the so-called first principles of physics we currently employ to describe the world. However, arguing about curve fitting really misses the main point of our study.
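For illustration only, this is the kind of non-linear regression being described, sketched with made-up numbers. The model form, coefficients and data points below are hypothetical; they are not the planetary values from N&Z’s Table 1.

```python
# Hypothetical example of non-linear curve fitting: recover the parameters
# of y = exp(a * x**b) from noisy synthetic data. The data are invented for
# illustration; they are not the planetary values from N&Z's paper.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return np.exp(a * x**b)

rng = np.random.default_rng(0)
x = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])
y = model(x, 0.2, 0.3) * (1.0 + 0.01 * rng.normal(size=x.size))  # 1% noise

(a_fit, b_fit), _ = curve_fit(model, x, y, p0=[0.1, 0.5])
print(a_fit, b_fit)  # recovers roughly a = 0.2, b = 0.3
```

The point of such an exercise is that the fitted coefficients summarize the data; whether they carry physical meaning is a separate question, which is exactly what the two sides here are arguing about.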

### Focusing on the Big Picture

What Mr. Eschenbach and a number of others have totally failed to grasp is the highly significant fact that the enhancement factor NTE (i.e. the Ts / Tgb ratio) is indeed closely related to pressure, and that no other variable can explain the interplanetary variation of NTE so completely. As Dr. Zeller pointed out in a recent blog post, given the simplicity of Eq. (8), it is a ‘miracle’ how accurately it predicts surface temperatures of planets spanning a vast range of environmental and atmospheric conditions throughout the solar system! This cannot be a coincidence! Rather it suggests the presence of a real physical mechanism behind the regression Equation (7) related to the thermal enhancement effect of pressure. This effect is physically similar (although different in magnitude) to the relative adiabatic heating observed in the atmosphere and described by the well-known Poisson formula derived from the Gas Law (see discussion in Section 3.3. and Fig. 6 in our original paper).

Even the mistaken analysis of Mr. Eschenbach could not manage to negate the above truth. He vigorously criticized our Eq. (8) using all sorts of faulty technical arguments, only to arrive himself at a similar (albeit less accurate) equation that predicts planetary temperatures as a function of the same two variables – solar insolation and pressure! His argument that one could arbitrarily use air density instead of pressure is groundless, because pressure, as a force, is the primary independent variable in the isobaric thermodynamic process of planetary atmospheres. Ground pressure depends solely on the mass of the air column above a unit surface area and on gravity, while air density is a function of temperature and pressure. In other words, density cannot exist without pressure. For a given pressure, the near-surface air density varies on a planetary scale in a fixed proportion with temperature, so that the product Density × Temperature = const. on average, i.e. higher temperature causes lower density while lower temperature brings about higher density, according to the Charles/Gay-Lussac Law for an isobaric process.
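The isobaric relation invoked here is just the ideal gas law; a minimal check, using the standard specific gas constant for dry air (our own illustration, not a figure from the paper):

```python
# Ideal gas law at fixed pressure: P = rho * R * T implies rho * T = P / R,
# a constant, so density and temperature trade off against each other.
R_DRY_AIR = 287.05    # specific gas constant of dry air, J kg-1 K-1
P_SURFACE = 101325.0  # fixed surface pressure, Pa

def air_density(temp_k):
    """Dry-air density (kg m-3) at the fixed pressure P_SURFACE."""
    return P_SURFACE / (R_DRY_AIR * temp_k)

for t in (250.0, 288.0, 310.0):
    print(round(air_density(t) * t, 1))  # same product every time: P / R
```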

We now draw attention to a key logical contradiction in Mr. Eschenbach’s approach. In the main text of his article, he makes the central claim that our Eq. (8) represents mathematical nonsense, since according to his logic it reduces to Ts = Ts (the TA-DA! moment). Yet, in the Update section, he uses data from Table 1 in our original paper to derive a very similar equation, which he calls a ‘simplified version’ of Eq. (8). So, according to Mr. Eschenbach, our Eq. (8) is numerically meaningless, while his equation based on the same data is mathematically sound. This raises the question: how poor do one’s reasoning skills have to be for one to contradict himself in such a ridiculous manner? We will let you be the judge …

### Conclusion

We have shown in this reply that all of Mr. Eschenbach’s criticism of our Equations (7) and (8) is without merit. We emphasized the need to better understand, and focus on, the big picture that our theory conveys. We propose to shift the discussion from meaningless arguments about the number of regression coefficients or the number of significant digits in constants, to how pressure as a force controls temperature and climate. In this regard, we would like to issue an appeal to all of you who are capable of carrying out an intelligent discussion at a decent academic level to stop engaging in pseudoscientific, beside-the-point, fruitless debates. We are here to discuss and offer a resolution to the current climate science debacle, and we welcome everyone who shares that goal. We are not here to promote or engage in endless circular talks, or to teach layman ‘skeptics’ basic math and high-school physics. Hence, we will no longer participate in dialogs of the kind that prompted this reply. We urge all sound-thinking readers to do the same.
Thank you!

1. Stephen Wilde says:

“The distribution of solar energy as it moves dynamically through the system is such that temperature is enhanced in the near surface atmosphere relative to higher altitudes. This makes the conduction of heat from ocean to air slower than it would otherwise be. So heat accumulates in the ocean until it is in thermal equilibrium with the air above it.”

That is similar to the AGW contention that downwelling IR makes the ocean skin warmer and so heats up the oceans by making the flow of energy from ocean to air slower than it otherwise would be. We need to distinguish between those two propositions.

I can accept that pressure could achieve that outcome, but not so-called downward IR, because downward IR would just cause more evaporation, whereas the energy flow through the system, and hence ocean temperature, would already have come into balance with the surface pressure; so without a change in pressure there would be no change in ocean temperature.

I don’t believe there is any downward IR anyway. All that is recorded by the sensors is the warmer air near the surface and directly in front of the sensor. Any IR comes from that warm air near the surface and not from up in the sky.

Furthermore the evidence from Earth and Venus is that anything other than pressure is irrelevant.

2. tallbloke says:

Phil: You raise two issues.

On the first, Ned has been working with a NASA scientist on the Diviner data and the N&Z grey body calc is within 6K of the integrated empirical data.

On the second, the grey body calc for Earth gives nearly the same result (to within a couple of Kelvin) because an airless Earth wouldn’t have an ocean either.

[co-mod: Phil, try these backlinks Ned 6th Jan and same date a search will find more.
Try google (or bing or whatever) like this

site:tallbloke.wordpress.com search_text
--Tim]

3. BenAW says:

tallbloke says:
February 13, 2012 at 10:37 am

Ben AW: Thanks, we can start disagreeing agreeably again now.

“Earth is a “BB” that is already at ~275K without any radiation falling on it (oceans bulk temp).”

How does it get from 2.75K to 275K?

If you mean that’s the temperature the bulk of the ocean stays at overnight after 3.5 billion years of being warmed by the Sun every day, we can move forward

I assume the bulk of the oceans does NOT interact with either the atmosphere or the hot core.
They just sit there being oceans. (forget about upwelling etc etc for the moment, these are disruptions of the big picture)

The oceans’ temperature came either from their creation billions of years ago (remember the hot core?) or from a later warming event, like the breaking up of Pangea with some of Earth’s internal heat escaping, or a major meteorite impact, or whatever.
Point is, only the surface layer (few hundred meters) is DIRECTLY influenced by solar and cools during the night, having enough heat capacity to have a relatively steady temp over a 24 hr period.
I assume the depth of the thermocline will vary slightly over a day, and will certainly vary over the seasons.

4. tallbloke says:

Ben AW: I think the oceans have a large degree of thermal inertia, which may enable an oscillation at the length of Milankovitch cycles. I don’t accept your “they just got created that way” argument though.

What about all the paleo evidence for thermohaline overturning, internal tides, meridional circulation, coriolis forces, and all the rest of their motions? If there was no overturning, the whole of the deep ocean would be anoxic, and no fish could live in the deep.

No. The thermal oceans have the temperature/depth profile they do, and are teeming with life because they are dynamic, not because they are static and stagnant. Sorry, I don’t buy it.

5. Phil. says:

Tallbloke, the point I raised regarding heat capacity applies for a rocky planet; the existence of an ocean is not relevant. Ned’s calculation does not agree with Diviner: the temperature on the dark side of the moon does not get close to 3K, Ned’s model assumption. Ned’s basic assumption is unrealistic, far more so than the conventional one of uniformity. Why doesn’t Ned do the calculation using a realistic value for surface heat capacity? His enhancement parameter will be smaller, and his compensation for the error by excessive pressure terms will be less.

6. davidmhoffer says: February 13, 2012 at 11:40 am
“spitter circuit”. I assume you mean a voltage multiplier, as this takes AC in and gives you higher-voltage DC out.

Remember that the R and D in this sort of circuit need to be used sensibly, since there is no loss: what goes in is still in till it comes out!

Free simulator:
http://www.linear.com/designtools/software/

7. tallbloke says:

Phil: The point is that they are not striving to make a perfect model of surface temperature, but to do a calculation using a relatively simple equation which gets the right answer for the average temperature of the grey body. Which it does, to within 6K. This is substantially better than the classic misapplication of S-B, which gets the wrong answer, ~100K out.

On your second point, it might make a very small difference, but I doubt it would make Robert Brown happy. That issue will be dealt with in N&Z’s ‘Reply to comments on the UTC part 2′ which will be published here at the Talkshop when they are good and ready.

8. Phil. says:

[snip]

Quote what Tim said and give the timestamp link to when and where he said it, and then try again.

Thanks – TB.

9. Phil. says:

Tallbloke, I’m not going to be able to do that from my phone so you’ll have to wait. Regarding your comparison of the conventional and Ned’s models, it’s inappropriate because they are doing different things. The conventional model addresses the influence of radiatively active gases in the atmosphere (the filtering effect of the atmosphere), whereas Ned is trying to model the effect of having an atmosphere at all. By neglecting surface heat capacity in his model he makes a significant error. By Hölder’s inequality, it’s not possible to get the right average temperature by using the correct integrated flux and the wrong temperature distribution.

1) I stepped through Tim’s comments to this thread and he has said nothing about Hölder’s Inequality, so whatever your reply to him is, put it on the thread where he said whatever it is you are replying to, not here.
2) You were discussing Diviner data, which measured the Moon’s surface temperatures, not Earth’s, so stop wriggling.
3) By neglecting heat capacity Ned’s calc comes out 6K low, whereas the classic misapplication of the S-B equation comes out 100K high. You should be able to work out which is better.
4) Whatever the ‘conventional model’ is trying to do, it is wrong because its application of the S-B equation is demonstrably incorrect. As you said earlier, if a model fails at the first step, everything afterwards will be wrong too. If the ‘conventional modelers’ want to provide some other rationale for a ghg-free atmosphere leading to a 255K Earth surface, tell them to bring it on.

10. davidmhoffer says:

thefordprefect;
“spitter circuit”. I assume you mean voltage multiplier as this takes AC and gives you higher voltage DC out>>>

Yes, same thing.

thefordprefect;
Remember that the R and D in this sort of circuit need to be used sensibly, since there is no loss: what goes in is still in till it comes out!>>>

See next comment. The spitter gave me the idea, but a much simpler circuit is a better description of the actual use case.

11. BenAW says:

tallbloke says:
February 13, 2012 at 1:11 pm

Ben AW: I think the oceans have a large degree of thermal inertia, which may enable an oscillation at the length of Milankovitch cycles. I don’t accept your “they just got created that way” argument though.

What about all the paleo evidence for thermohaline overturning, internal tides, meridional circulation, coriolis forces, and all the rest of their motions? If there was no overturning, the whole of the deep ocean would be anoxic, and no fish could live in the deep.

No. The thermal oceans have the temperature/depth profile they do, and are teeming with life because they are dynamic, not because they are static and stagnant. Sorry, I don’t buy it.

Of course these things are real. Let’s stick to the basics first, and worry about secondary effects later.
If it makes you happy, fine: the oceans have accumulated their present profile over billions of years.

See: http://er.jsc.nasa.gov/seh/Ocean_Planet/activities/ts2ssac4.pdf
second page
Also: http://earthguide.ucsd.edu/earthguide/diagrams/woce/
Pacific ocean
Only the top layer of the oceans is DIRECTLY heated by solar, with a nice fall of the temp. towards the poles. Around the polar circles the cold deep ocean “surfaces”.
The “band” of warm water extending from equator to both poles buffers the incoming solar over the day.
This is the basic picture. Continents, ocean currents, upwelling etc. etc just disrupt this basic picture.

12. davidmhoffer says:

OK, here’s the brief explanation before I run off to engage in income generation:

Consider a capacitor in parallel with an AC voltage.

Assume an AC voltage with a peak of 200 volts and an “effective” voltage of 120 volts. (Engineering class was too long ago and there isn’t enough time to figure out the right numbers at the moment, so all you EEs out there, just live with it; I’m illustrating a concept, not a perfectly valid physical model.)

“Average” voltage across the capacitor is zero.

Slap a diode in series with the capacitor.

The voltage across the capacitor will build to peak voltage, which is 200 volts.

Now make it real world and put a resistor in parallel with the diode.

If the value of the resistor is infinity, the voltage reaches a limit of 200 volts.

If we adjust the resistor value downward, the voltage that the capacitor reaches goes downward as well.

The warmists would have us believe that the maximum voltage the capacitor can reach is 120 volts. In their model, there is another resistor in the circuit, in series with the diode. This “charge” resistance exactly equals the resistance of the “discharge” resistor. If that were the case, then the voltage across the capacitor would in fact reach exactly 120 volts as a maximum.

But as soon as those two resistances change such that the “charge” resistance is lower than the “discharge” resistance, the voltage across the capacitor will increase above 120. The higher the resistance of the discharge resistor, the closer the capacitor will get to the peak voltage of 200 volts.

SW goes into the system with nearly no resistance at all.
LW comes out of the system fighting high resistance every step of the way.
Heat capacity is equivalent to capacitance.
Insolation is just like an AC voltage except that it is a half wave with a flat line between half waves.

This is what BenAW was alluding to.
This is why surface temperature can get above 255K with nary a GHG in sight.
Observational evidence to support this is in Figure 5 of Doug Proctor’s article, and in his detailed explanations.
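The analogy above can be put in numbers with a toy simulation. The component values (60 Hz, 1000 µF, 200 V peak) are invented for illustration, the discharge resistor is drawn across the capacitor (the exact topology is debated later in the thread), and this says nothing about climate by itself; it only shows the rectifier behaviour being described.

```python
# Toy half-wave rectifier: an ideal diode charges a capacitor from a sine
# source, and a parallel resistor discharges it. With a large discharge
# resistance the capacitor sits near the 200 V peak; with a small one it
# sags far below the peak. Component values are illustrative only.
import math

def cap_voltage(r_discharge, c=1e-3, v_peak=200.0, f=60.0,
                cycles=50, steps_per_cycle=1000):
    dt = 1.0 / (f * steps_per_cycle)
    v = 0.0
    for i in range(cycles * steps_per_cycle):
        v_src = v_peak * math.sin(2.0 * math.pi * f * i * dt)
        if v_src > v:
            v = v_src                          # ideal diode conducts
        else:
            v -= v / (r_discharge * c) * dt    # diode blocks; RC discharge
    return v

print(round(cap_voltage(1e6)))   # huge discharge R: near the 200 V peak
print(round(cap_voltage(10.0)))  # small discharge R: far below the peak
```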

13. davidmhoffer says:

I’m an idiot.
The resistor in my comment above goes in parallel with the DIODE, not the cap.

[Fixed... Maybe ]

14. tallbloke says:

Ben AW: I think the reason I’m fighting you is because although your model might be sufficient for your understanding of the overall ‘big picture’ energy balance, it conflicts with my understanding of the solar cycle in relation to ENSO, and other oceanic oscillations. So maybe we can compromise. If you can agree with me that the dynamic aspects of the oceans are vital to our understanding of shorter term climatic variation and not mere ‘secondary effects’, I’ll agree we can put them aside for the sake of the elegant simplicity of your particular Gedanken experiment.

Agreed?

15. Phil. says:

Close, David, but the diode should be on the input side so that charging happens only half the time; the average voltage will not be zero. There will be a charging half-cycle and a discharging half-cycle. N&Z’s model has zero capacity.

16. Robert Brown says:

I was looking forward to having breakfast with Robert Brown yesterday. He did not show up and he has ignored my recent emails. Now that I have had time to read the exchange between him and N&K, my guess is that he is ticked off at me but does not want to hurt my feelings. Don’t worry Robert, I still hold you in the highest regard; thank you again for all the help you gave me during my 12 years in the Duke university physics department.

Dearest Camel,

I didn’t reply to your emails because I could not go on Saturday, and you phrased your invitation in such a way that I didn’t feel a need to decline, only accept. I am not in any way ticked off at you, only insanely busy. I’m teaching a double load of recitation sections this spring — six of them — and have a lot of family stuff on my plate as well. Perhaps another time I will be less busy, although (did I mention that I’m CTO of a small startup as well, and it may be getting ready to take off and consume the little sleep that I get now) I can’t see any time soon — maybe lunch on a Monday or Wednesday.

I would have enjoyed meeting Dr. Scafetta although it is certainly true that I have somewhat similar reservations about his work as I have about Nikolov and Zeller’s. However, I can think of a number of ways for various coincidences between planetary periods and local climate fluctuations to occur, both as “coincidences” — similar periods but no causal relationship — and as highly indirect causal influences. One of the first things I read about in my initial foray into the climate was the apparent coincidence of solar variation and climate variation, which led through a discussion of Maunder minima, Gleissberg cycles, the Sun’s erratic orbit around the center of mass of the solar system, and much more. Even as I’m still critical of its “numerological” character — Scafetta has IIRC compared his work to early heuristics concerning the tides, but the comparison is not apropos in an era when we know physics well enough to do far, far better — at least in the case of his work I can observe the coincidence and imagine at least one or two plausible explanatory causal chains, chains that I think it was his responsibility to investigate and quantify before publishing.

I guess it is not a stretch of the imagination that perhaps Willis and Anthony Watts may also imbibe. (and Robert Brown provably so, although surprisingly, Tim Folkerts and some “other suspects” seem to be absent)

I’m curious, just what is it that I “imbibe” other than the beer I spent all night last night making (it’s the only time I can run my personal brewing operation, see “overcommitted and too damn busy” above:-)? If this is yet another form of dim ad hominem to avoid having to make a substantive comment on Nikolov and Zeller’s absurd equation 7, why not have a beer instead and save the blogosphere from yet another zero information content remark.

I cannot help but see an interesting connection / analogy between Scafetta’s research and ours (N&Z). Both studies focus on new and unknown mechanisms, and use correlations and empirical relationships to quantify them. Scafetta’s work is already published in the peer-reviewed literature, and no one has objected that he did not explain the highly significant correlations he found via ‘first principles‘. The lack of physical ‘first principles’ is very typical when studying new phenomena, and is part of the standard scientific inquiry. I’m mentioning this in regard to Dr. Brown’s criticism of our work, where he argues against the physical significance of our Eq. 7 simply because he could not explain values of the regression coefficients through known laboratory-derived physical constants. Such a critique is unwarranted for the above reasons …

Actually, I’m certain that a lot of people have objected — myself among them — but as noted just above, because the Sun is itself a complex system with chaotic internal dynamics — instantly visible in a plot of the Solar cycle over the Holocene, for example, as captured in radiometric proxies — and because Jupiter and Saturn are both powerful drivers of the Sun’s erratic internal orbit around the center of mass of the solar system, an orbit which doubtless drives internal resonances that can be lagged as much as 100,000 years, one can at least imagine a causal process with an effect on the Earth’s climate, and with resonance phenomena the physical forces involved need not be very large if they have a very long time to operate. With that said, I think that pointing out the numerical coincidence without analyzing the causality associated with it does little to advance our knowledge, especially when there are decadal oscillations already known to have an effect on our climate that have similar periods. That is, there are confounding explanations that cannot readily be separated from the data.

I’d be happy to work through the usual “correlation is not causality” argument with you, and why this really does matter in general epistemology lest we use statistics to prove that smoking causes pregnancy (example from one of my favorite statistics books) or other nonsense. This kind of “numerology” is rife in the medical profession, where it is used to “prove” that high voltage power lines, or cell phones, or failing to eat your oatmeal, all cause cancer. It isn’t terribly good science there (as it is an open invitation to cherrypick and engage in confirmation bias and other forms of Feynman’s “Cargo Cult Science”) and it only ends up being decent science if it is rapidly followed by a quantitative and consistent causal analysis that includes a full disclosure discussion of the possibly confounding causes and where the observation fails. I’m awaiting this in the case of Scafetta’s paper, but I’m not holding my breath.

However, the difference between Scafetta’s reported coincidence and your Equation 7 is that there is no possible way that your Equation 7 can have the slightest bit of physical meaning. You have obscured this — quite possibly from yourselves — by writing it in a way that hides its internal dimensional scaling and the scale pressures involved — but I’ve helped you (or rather forced you) to confront it.

[MEGA SNIP]

1) Hi Robert. I won’t tolerate accusations of dishonesty here, since Ned Nikolov stated earlier that he and Karl Zeller will address your points in their upcoming ‘Reply to comments on the UTC part 2′. The policy here is that everything from the offending remark onwards goes in the bit bucket. Think of it as aversion therapy. As a special favour, since it was such a humongous snip, I saved it in a text file. Let me know if you want it emailing. I might put our new menu system to the test with a new sub-page for ‘Rants’.
2) N&Z know what your objections are, so you don’t need to club them over the head with them in 14 foot long comments. I won’t allow you to dominate with such lengthy bombastic diatribe either, so cool it if you want to get your (hopefully shorter) comments posted here in future. Thanks – TB.

PS. Why not simply wait until they publish ‘Reply to comments on the UTC part 2′ and then see what pertains? Just a thought.

17. tallbloke says:

Phil: I thought capacitors did the charging and diodes blocked two way traffic?

Anyway, you still don’t understand N&Z’s ‘model’. They calculate the grey body temperature, and the actual temperature as derived from the surface pressure and the TOA insolation.

The ATE factor is then the ratio of these two numbers. Theirs is the more realistic model, because without an atmosphere and its consequent pressure, there is no ocean to spread heat around.

Now, the conventional modelers say that Earth with no ghg’s would be 255K at the surface. This is arguable, though from N&Z’s point of view, it’s a fruitless argument anyway. This is because according to their theory, the surface temperature is a result of atmospheric mass and TOA insolation, and albedo is a result of temperature and pressure.

18. davidmhoffer says:

Phil,

Yeah, yeah, I’m trying to get to work and the last time I drew a circuit was in a different millennium.

I sent rog a drawing, such as it is.

There are two resistors: one in series with the diode that limits the charge rate, and one in parallel with the whole thing that limits the discharge rate.

If the charge and discharge rates are equal, the maximum voltage across the cap is the effective voltage which is 120. That’s the model that the radiative xfer models use, and that is why they get 255K as a max.

If the charge resistance is zero, and the discharge resistance is infinity, the cap will charge to 200V. That’s unrealistic of course. But given a “low” charge resistance and a “high” discharge resistance, the voltage that the capacitor reaches is somewhere above 120 but below 200.

Or, in climate terms where resistance to incoming SW is very very low, but resistance to out going LW is very high:

The temperature that results is above the effective BB of 255K, but below the peak of whatever 1000 W/m2 comes out to. If someone wants to suggest 288K as a good approximation, I’d be willing to accept that.
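The charge/discharge picture in this comment can be simulated in a few lines. The sketch below is only illustrative — the component values, the 200 V peak sine source, and the ideal-diode assumption are mine, not from the comment — but it shows the claimed behaviour: with a low charge resistance and a high discharge resistance, the capacitor settles well above the effective (RMS) value and below the peak.

```python
import math

# Illustrative values (not from the thread): 200 V peak sine source,
# ideal diode, small charge resistance, large bleed resistance.
V_PEAK = 200.0
R_CHARGE = 10.0       # ohms, in series with the diode
R_DISCHARGE = 1000.0  # ohms, in parallel with the capacitor
C = 0.01              # farads
DT = 1e-4             # time step, s
F = 50.0              # source frequency, Hz

v = 0.0
for step in range(int(2.0 / DT)):       # simulate 2 s (100 cycles)
    t = step * DT
    vs = V_PEAK * math.sin(2 * math.pi * F * t)
    i_in = max(vs - v, 0.0) / R_CHARGE  # diode blocks when vs < v
    i_out = v / R_DISCHARGE
    v += (i_in - i_out) * DT / C        # Euler step for the capacitor

v_rms = V_PEAK / math.sqrt(2)           # ~141 V for a sine
print(round(v, 1), round(v_rms, 1))
assert v_rms < v < V_PEAK
```

The exact settled voltage depends on the ratio of the two resistances; the assertion only checks that it lands between the RMS value and the peak, which is the qualitative point being made.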

19. Phil. says:

David, the voltage is the analog of the energy flux, I guess the level of charge in the capacitor is the analog of T? To be accurate the discharge rate would need to be proportional to charge^4. I don’t know if this simple a circuit can be a true analog, maybe with op-amps?

20. davidmhoffer says:

Phil. says:
February 13, 2012 at 3:51 pm
David, the voltage is the analog of the energy flux, I guess the level of charge in the capacitor is the analog of T? To be accurate the discharge rate would need to be proportional to charge^4. I don’t know if this simple a circuit can be a true analog, maybe with op-amps>>>

If Rog or Tim can post the drawing I sent them, it will be a lot more clear. Actually, the drawing I sent them is missing a diode on the discharge side, I’ll fix it when I have a moment. But essentially:

SW = charge voltage. Resistance to SW is LOW.
LW = discharge voltage. Resistance to LW is HIGH
Capacitance = heat capacity
Voltage across capacitor = Temperature.

No need for op amps.

In an AC circuit, V(effective) = RMS = Root Mean Square
In an Insolation circuit, P(effective) = 4th-root Mean 4th-power

The two are 100% analogous, just one uses square root and the other 4th root. Other than that the equations for both are identical and so are the concepts. Take a look at the top graphic in Figure 5 of Doug Proctor’s paper. What that is showing you is that R to SW is LOW and R to LW is high. If I configured this circuit with resistances, voltages and capacitances of the right order of magnitude, I’d get a voltage curve exactly like that one.
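The parallel drawn here can be checked directly with a power-mean inequality: for any varying signal the root-mean-square exceeds the plain average, and likewise a “4th-root mean 4th-power” temperature exceeds the arithmetic mean temperature. A minimal sketch (the day/night temperature numbers are made up for illustration):

```python
import math

# AC case: a rectified sine, sampled over one full cycle.
n = 10000
v = [abs(math.sin(2 * math.pi * k / n)) for k in range(n)]
v_mean = sum(v) / n                           # ~0.637 (= 2/pi)
v_rms = math.sqrt(sum(x * x for x in v) / n)  # ~0.707 (= 1/sqrt(2))
assert v_rms > v_mean

# Radiative case: an illustrative day/night surface swing, in kelvin.
T = [150.0] * 12 + [350.0] * 12               # 12 h night, 12 h day
T_mean = sum(T) / len(T)                      # arithmetic mean: 250 K
T_eff = (sum(t**4 for t in T) / len(T)) ** 0.25  # 4th-root mean 4th-power
assert T_eff > T_mean
print(round(v_mean, 3), round(v_rms, 3), T_mean, round(T_eff, 1))
```

The second pair of numbers is the crux of the averaging arguments elsewhere in this thread: a surface whose temperature swings radiates like a body noticeably hotter than its arithmetic-mean temperature.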

21. BenAW says:

tallbloke says:
February 13, 2012 at 3:03 pm

Ben AW: I think the reason I’m fighting you is because although your model might be sufficient for your understanding of the overall ‘big picture’ energy balance, it conflicts with my understanding of the solar cycle in relation to ENSO, and other oceanic oscillations. So maybe we can compromise. If you can agree with me that the dynamic aspects of the oceans are vital to our understanding of shorter term climatic variation and not mere ‘secondary effects’, I’ll agree we can put them aside for the sake of the elegant simplicity of your particular Gedanken experiment.

Agreed?

Of course, I assumed it would be blazingly obvious that this is not the complete picture.
Do you accept that the oceans have a temp of ~275K and that this is above any BB or GB approach, making these numbers meaningless for waterplanet earth?
And that this kills the whole GHE, because their model assumes a deficit of 33K AFTER the sun has heated the earth?
So let’s get this in a post and move on from there. Imo this finding should make even a politician or journalist understand the enormous mistake in the GHE theory.

22. tallbloke says:

Ben AW: I suspect their counterargument is that the back radiation warms the near surface air, and this slows down oceanic cooling, and that is why the ocean is at 275K rather than frozen at 255K.

How do you reply to them?

23. BenAW says:

tallbloke says:
February 13, 2012 at 4:32 pm

Ben AW: I suspect their counterargument is that the back radiation warms the near surface air, and this slows down oceanic cooling, and that is why the ocean is at 275K rather than frozen at 255K.

How do you reply to them?

Backradiation in the GHE is supposed to increase the surface temps from 255K to 288K (33K difference)
Adding 33K to 275K gives 308K as average earth temp. Not even close.

24. rgbatduke says:

1) Hi Robert. I won’t tolerate accusations of dishonesty here, since Ned Nikolov stated earlier that he and Karl Zeller will address your points in their upcoming ‘Reply to comments on the UTC part 2′.

It’s your blog, censor as you like. I was utterly respectful in those accusations — if there was any irritation that showed through it was strictly due to the fact that it is dead on topic for this thread and yet N&Z refuse to address it on thread. Or am I mistaken, and is this not all about “Reply to: ‘The Mystery of Equation 8’ by Willis Eschenbach” which — note well — contains Equation 7 as its sole “interesting” input. Equation 8 is after all just a rewriting of Equation 7 with the substitution of an uninteresting if oversimplified description of $T_{gb}$?

Why, exactly, do we need yet another thread for them to reply in?

2) N&Z know what your objections are, so you don’t need to club them over the head with them in 14 foot long comments. I won’t allow you to dominate with such lengthy

Excellent! Then perhaps they can reply to them instead of starting a thread in which they wish to claim that Willis was completely wrong in his criticism of Equation 7/8 and then ignoring the only post in it that shows that Willis was completely correct in his criticism of Equation 7/8, only his criticism wasn’t strong enough. Or starting yet another thread just to reply to them. Or stating that they don’t need to reply to them, because this thread and their paper are about something else entirely, something that presumably survives the death of their “miracle equation” (their words, not mine).

bombastic diatribe either, so cool it if you want to get your (hopefully shorter) comments posted here in future. Thanks – TB.

Two remarks — one is the post was long because I was replying to three different posts above and avoiding the posting of a fourth. It’s more efficient for me that way — I’m in a hurry and am really too busy to be participating at all, but it strikes me that preventing questionable science — for surely Equation 7 is questionable, given my very specific and thoroughly supported questions — from being taken too seriously is a worthwhile cause. Including when I do it. Let he who has never made an error in a printed paper cast the first stone — I certainly have, and as a consequence I rather welcome it when people point out my errors. Even students catch me in errors.

Second, I am busy and I am a physicist. I’m not a climate scientist, and if anything I am a skeptic (probably technically a “non-catastrophic lukewarmist” if skepticism has to come in flavors these days). All I care about in my replies and the issues I’m pursuing is whether the science and the math/statistics are done well, or at least plausibly. If you want to boot all of the physicists who are actually critical of mistakes like Equation 7, it’s your call, but think about the probable consequences.

Finally, I will apologize on general principles since I was not trying to offend anyone with my post. I do admit that I have gotten frustrated by the manifest fact that the authors of the paper refuse to actually reply to and address the very, very simple points that I raise, supported by both computations and figures, and instead say that I’m missing some sort of “big picture”.

Perhaps I am — I’m completely uninterested in a “big picture” supported by hand waving, heuristics, the production of arbitrary curves with impossible dimensioned numbers and exponents that don’t quite fit even the highly idealized data. But this thread isn’t about the big picture, it is about equation 7/8! If they want to address the “big picture” (again), perhaps that might be a good new toplevel post, as long as that picture doesn’t rely on a still-undefended Equation 7/8.

It doesn’t have to be Nikolov or Zeller. I’d be thrilled to hear you personally or anyone explain how 54,000 bar can reasonably appear in and dominate the fit of five out of the eight planets with atmospheric surface pressures ranging from “zero” to a tiny fraction of a bar or atmosphere. It does seem as though defending equation 8 requires an actual explanation of this here and now not in the future and yet another thread on equation 8. We’re up to four or five threads that I know of so far — top posts on the original paper here and on WUWT, criticism on WUWT, this rebuttal here — do we really need to shoot for number 5 or number 6 to hear an actual explanation?

PS. Why not simply wait until they publish ‘Reply to comments on the UTC part 2′ and then see what pertains? Just a thought.

If I notice, I will. That doesn’t stop me from being frustrated here and now. If I had just said (as Willis did) “Look, your fit isn’t physically motivated and statistically it isn’t that impressive to fit 8 points with four parameters, look, I can do it too” that would be one thing. I did not just do this, I went far beyond this, both here and on the WUWT thread. Just for my second post on this thread I spent several hours doing the arithmetic, writing code, building the plots, inserting their own data from their table of planetary temperatures and pressures to be sure that I was using the same numbers they used. Their sole reply is that I’m missing the big picture. What big picture? I’m addressing Equation 8, the topic of this thread. Do I need to make that a top post on the blog myself to get their focused attention? Nothing else that they can say in defense of Equation 8 matters in the least until they address my quantitative and specific objections, and no big picture arguments can be taken seriously with the guts (Equation 8) kicked out.

rgb

[Reply] Well, according to you anyway. I’ll happily wait to see what Ned and Karl have to say in ‘Reply to comments on the UTC part 2′. They’ve blown Willis’ math away here on this thread, and they intend to deal with your objections in their next paper. I think it’s quite legitimate, given the length, detail and repetition of your points by you, that they choose to keep their powder dry until they set everything down that they want to in one place on a new headline post.

Thank you again for your learned input, and spare us any further “utterly respectful accusations” if you don’t mind. Thanks – TB.

PS. Friendly advice: When in a hurry, type less, because less is more when people read to the end.

25. BenAW says:

tallbloke says:
February 13, 2012 at 4:32 pm

Backradiation is invented to explain the missing 33K in the GHE.
Imo it’s not a physical reality.

If my theory holds, I doubt an AGW’er will have much to ask, after seeing that they missed a 275K base in their assumptions.
If the sun is capable of keeping a BB earth 255K above its base temp of 0K, why wouldn’t it be capable of keeping our waterplanet 15K above its base temp of 275K?

26. Phil. says:

tallbloke says:
February 13, 2012 at 3:23 pm
Anyway, you still don’t understand N&Z’s ‘model’. They calculate the grey body temperature, and the actual temperature as derived from the surface pressure and the TOA insolation.

I think I understand it fairly well actually.
They calculate the grey body temperature for a rocky, atmosphere-less, planet with zero surface heat capacity (this gives the minimum average temperature for a given input flux).
They then calculate the ‘actual average temperature’ for those planets with an atmosphere from the surface pressure and near surface gas density using the ideal gas equation of state. (This assumes that the gas density is better known than the surface temperature which seems a questionable assumption to me, only applicable to 4 of the planets anyway, the application of the ideal gas law to the super-critical atmosphere of Venus is questionable too).
They then fit the ratio of those two temperatures to an arbitrary curve with P as the variable (see Brown’s critique of that process with which I agree).

[Reply] Read their papers Phil. I challenge you to find any reliance on density in their equations.

The ATE factor is then the ratio of these two numbers. Theirs is the more realistic model, because without an atmosphere and its consequent pressure, there is no ocean to spread heat around.

The assumption of zero heat capacity is unrealistic as indicated by the temperature distribution on the moon. That the Earth has an ocean makes it a poorer model since the heat capacity of the ocean is a major factor.

[Reply] Which part of ‘there would be no ocean without the atmospheric mass and pressure’ do you not understand? Are you aware of how small the energy distribution difference is between their lunar GB calc assuming no heat capacity and the diviner measurements? They get the actual average surface temperature correct to within 6K. The method used by ‘conventional models’ was 100K+ too hot. I don’t think they need to take any lessons in heat distribution from you or them.

Now, the conventional modelers say that Earth with no ghg’s would be 255K at the surface. This is arguable, though from N&Z’s point of view, it’s a fruitless argument anyway.

It is a fruitless argument because it’s an apples and oranges comparison. The Conventional view models an earth with an atmosphere and ocean and surface heat capacity and tries to remove the effect of GHGs. The N&Z approach attempts to model a rocky planet without an atmosphere or ocean and no surface heat capacity; in the case of Earth this leaves the effect of atmosphere, GHE, ocean and surface heat capacity out, so the rather large deficit due to these missing terms is assumed to be a fitted function of pressure. For the other planets with an atmosphere the term due to the ocean is absent. If you want to follow this approach then you’d have to account for the surface heat capacity, as there’s no reason to assume that it’s a function of pressure. Even then you have two effects of the pressure of the atmosphere, due to its heat capacity and GHE (filtering effect), both of which are functions of pressure, so how do you disentangle them?

[Reply] What ‘term due to the oceans’? I challenge you to identify it in their equations. No more posts from you until you find it or admit you are wrong about that and density.

This is because according to their theory, the surface temperature is a result of atmospheric mass and TOA insolation, and albedo is a result of temperature and pressure.

Their theory assumes constant albedo, if they want it to be a function of P then their constant 25.3966 should in fact be a function of P!

[Reply] Their theory does nothing of the sort. It uses the empirically measured lunar grey body albedo to obtain the grey body temperature for all rocky planets, from which they calculate ATE as a ratio between that and the actual surface temperature derived from atmospheric mass and distance from the Sun. Do us a favour and read their papers before you come back to admit you were wrong.

27. Phil. says:

[Reply] Read their papers Phil. I challenge you to find any reliance on density in their equations.

Answer to Tallbloke: the Ideal Gas Law is:
P=ρRT where ρ is the gas density

They say in their paper:
“This can be written in terms of the average air density ρ (kg m-3) as
ρTs =const.=Ps M/R (6)”

The whole of section 3.1 deals with this and the Ts in Table 1 is calculated using density!

Satisfied? I take it the ban is lifted and an apology will be forthcoming?

[Reply] Read it again Phil. Eq 7 shows that Ts/Tgb is equivalent to their exponential function which involves pressure only. The discussion is about Eq 8, which is Eq 7 transposed. So their theory does not rely on density as I said, and you are the one who needs to admit you’re wrong (no apology needed). You are not banned, but you won’t be having further comments published until after you step up and do the right thing.

You said:
This is because according to their theory, the surface temperature is a result of atmospheric mass and TOA insolation, and albedo is a result of temperature and pressure.
When I pointed out that they actually use a constant albedo and if their theory in fact uses an albedo which is a function of pressure then that should be included in their equation (8), you remarked:
“[Reply] Their theory does nothing of the sort. It uses the empirically measured lunar grey body albedo to obtain the grey body temperature for all rocky planets, from which they calculate ATE as a ratio between that and the actual surface temperature derived from atmospheric mass and distance from the Sun. Do us a favour and read their papers before you come back to admit you were wrong.”
I have read the papers, and my remark is correct.

[Reply] Their theory doesn’t use an albedo which is a function of pressure. An atmospheric albedo which is a function of pressure (and temperature) is a logical outcome derived from their theory. So once again Phil, man up and admit you are incorrect.
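As an aside, the density bookkeeping in Eq (6) quoted above is easy to check with Earth’s round numbers. The figures below are standard textbook values, assumed here purely for illustration:

```python
# Ideal-gas check of Eq (6): rho * Ts = const. = Ps * M / R.
# Standard Earth figures, used for illustration only.
Ps = 101325.0   # mean surface pressure, Pa
M = 0.0289      # molar mass of dry air, kg/mol
R = 8.314      # universal gas constant, J/(mol K)
Ts = 288.0      # mean surface temperature, K

const = Ps * M / R    # the constant on the right-hand side of Eq (6)
rho = const / Ts      # implied near-surface air density, kg/m^3
print(round(const, 1), round(rho, 3))
assert 1.1 < rho < 1.3   # close to the familiar ~1.2 kg/m^3
```

That the implied density comes out near the measured one is just the ideal gas law doing its job; it says nothing by itself about whether Eq (7)/(8) relies on density, which is the point being argued above.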

28. davidmhoffer says:

rgbatduke;
Excellent! Then perhaps they can reply to them instead of starting a thread in which they wish to claim that Willis was completely wrong in his criticism of Equation 7/8 and then ignoring the only post in it that shows that Willis was completely correct in his criticism of Equation 7/8, only his criticism wasn’t strong enough.>>>

I read Willis’ criticism and if you consider substituting an equation into the equation it was derived from to arrive at a variable that equals itself as correct criticism… I don’t even have words to describe that.

Your first comments at WUWT in regard to calculating surface temperature properly were informative and valuable. Unfortunately I for one have seen nothing informative or of value from you since.

29. [...] SB Law of 255K), these figures have very little practical value. If you go to the links, I have two additional comments that illustrated this. The first one shows a very simple “step” function [...]

30. Bob_FJ says:

Robert Brown @ February 13, 3:12 pm
I guess you are fairly new to blogging, but you may have to accept the fact that many threads wander into fringe areas around the lead article, and even off-topic. The exchange I was having with David Hoffer concerned some unusual and interesting aspects of how the N&Z hypothesis was being addressed or not addressed in various areas of the blogosphere. (was on-topic)
I’m sorry if you felt an ad hominem innuendo in my use of the word imbibe. Where I come from, the word has several meanings in addition to your wrong interpretation.

31. tallbloke says:

Bob, don’t worry about it. Robert was looking for any handy peg to hang his shirtiness on.

32. Robert Brown says:

Bob, don’t worry about it. Robert was looking for any handy peg to hang his shirtiness on.

Actually, I didn’t understand what he was trying to say to, or about, me (or Anthony or Willis or whoever). I still don’t. I wasn’t really accusing you of ad hominem; rather asking what you meant. That’s why I put the smiley face at the end of the sentence.

rgb

33. tallbloke says:

Robert, I think Bob just meant the use of the word in the old fashioned sense of ‘partake of’ but ‘drinking in’ the written word rather than one of the many beverages available. By the way, I’m trying a Belgian dark beer brew kit for the first time, and considering using some molasses instead of some of the glucose. I don’t really know what quantity gives me ‘measure for measure’ though. Any experience to share?

34. gallopingcamel says:

rgb said:
“It isn’t terribly good science there (as it is an open invitation to cherrypick and engage in confirmation bias and other forms of Feynman’s “Cargo Cult Science”) and it only ends up being decent science if it is rapidly followed by a quantitative and consistent causal analysis that includes a full disclosure discussion of the possibly confounding causes and where the observation fails. I’m awaiting this in the case of Scafetta’s paper, but I’m not holding my breath.”

Nicola Scafetta is working with models but he is well aware that the correlations they show need to be backed up by plausible mechanisms if they are to be taken seriously. He sees that as “the hard part”. He already has some ideas that make sense to me. You would probably be a much tougher audience so why not ask Nicola to sample some of your home brew. That might take the sharp edge off criticism.

While it is desirable that causal analysis should “rapidly follow” it took over 30 years for Wegener’s “Continental Drift” hypothesis to be vindicated.

I think it is fair to say that the same applies to N&Z, although their analysis did start with a series of physics equations which they boiled down to the dimensionless one you don’t like.

35. Nick Stokes says:

“So their theory does not rely on density as I said”

It’s here in this post:
“For a given pressure, the near-surface air density varies on a planetary scale in a fixed proportion with temperature, so that the product Density*Temperature = const. on average, i.e. higher temperature causes lower density while lower temperature brings about higher density according to the Charles/Gay-Lussac Law for an isobaric process.”

36. gallopingcamel says:

rgb,
You mentioned the possibility of getting together today or Wednesday. Unfortunately I am teaching in Tennessee through to February 20 with no days off. Thanks to a certain university cancelling my contract, I no longer have any courses scheduled in North Carolina. It may be time to retire. For real this time.

It was a thrill to find Dukies such as you and Nicola getting involved in climate science issues. It would have been a blast to be a participant in some small way. I was hoping that the Physics department was showing the spirit to resist being engulfed by the Nicholas School of the Environment.

I’m trying hard to understand Dr. Brown’s objection to equation 7. As I understand it, he’s saying essentially that the equation was empirically derived (a best-fit regression), instead of being derived by a logical progression of real-world observations. I can understand that logic and it’s what I initially wondered about when reading N&Z’s first paper. I agree it would be much more satisfying to see an argument that derived an equation from first-principles if-then sort of logic.

However, I’m lost regarding the argument that if certain parts of an equation imply non-real results, then the formula must be wrong. For example, people that look at demographics will say that the replacement rate for a given society must be 1.9 children per female or higher in order to have a stable population. Dr. Brown seems to be arguing that since there is no such thing as a real-world 9/10 child, this statistic is meaningless. I’m not claiming that is what he’s saying; it’s just all that I can gather from what I’m reading; i.e. since we don’t have any observable planetary bodies with an atmosphere at 54,000 bar, then N&Z’s result must obviously be wrong.

Can somebody help me here? I believe Dr. Brown is trying to say something important to math-challenged people like me, but I’m not getting it.

38. Bob_FJ says:

ALL,
Am I getting irrational in my septuagenarianisticalific afflictions, when I expound the following?

I can see that the mathematical derivations of N&Z will be variously critiqued by those that step forward to oppose the new theory (which is a normal part of science). This then boils down to acceptance of the arguments from whomever one might alternatively prefer as an authority (camp or consensus culture annat)….. He said, she said, we said, they said, and I like her because she has [self snip] !!!

So, whilst the N&Z mathematical derivations may be “difficult”, it does not mean that they don’t work, even if why it is so, is not fully understood. However, if their hypothesis is correct it should be possible to obtain a series of correlating data with a range of parameters in the lab. If this is successful in outcome, then it should precede the maths in any paper, which should then be offered as a possible mathematical solution.

Of course, see earlier threads where low budget Konrad is still exploring such data.

And, I wonder if deep mineshafts with geothermal energy source might also reveal another piece of supporting empirical data.

39. tallbloke says:

Nick Stokes says:
February 14, 2012 at 2:15 am

Tallbloke said:
“So their theory does not rely on density as I said”

It’s here in this post:
“For a given pressure, the near-surface air density varies on a planetary scale in a fixed proportion with temperature, so that the product Density*Temperature = const. on average, i.e. higher temperature causes lower density while lower temperature brings about higher density according to the Charles/Gay-Lussac Law for an isobaric process.”

Yes Nick, but this is additional explanation. They do not need to rely on density because Eq 7 shows that Ts/Tgb is equivalent to their exponential function which involves pressure only. This is why there are two ‘equals’ signs in Eq 7. Immediately above Eq 8 they say
“Equation (7) allows us to derive a simple yet robust formula for predicting a planet’s mean surface temperature as a function of only two variables – TOA solar irradiance and mean atmospheric surface pressure, i.e. [Eq 8]”

No doubt accurate density measurements for Earth could have assisted them in calibrating their pressure function, but density is not required for the other celestial bodies they then go on to calculate surface temperatures for. This is an important distinction.

I had to point this out to Willis Eschenbach on the demonstration of his own ignorance in the post this present discussion addresses. I’m somewhat shocked to see you making the same elementary error. I thought WUWT troll ‘Phil’ was being disingenuous with his comment here and willfully misinterpreting N&Z in an attempt to cast doubt on their work. Seeing you make the same mistake has me wondering if a lot of the argument over N&Z’s work is a result of people simply failing to read what they wrote.

Having said that, I had to correct Willis Eschenbach on his claim that they were using the atmospheric albedos inside equation 2 and thus ‘tuneable parameters’ as well. Not that he accepted that he had made a ‘mistake’ even after I pointed it out. Phil has also parroted a variation of this error. N&Z use a single empirically measured grey body albedo (our Moon’s) in their Tgb for all rocky planets in Eq 2, which seems reasonable to me. I think he Willis-fully Hash’n'baulk’ed the theory and then trash-talked it. Whatever his reasons, it ain’t science so far as I can see.

Robert Brown says Willis’ criticisms of Eq’s 7 and 8 are correct, despite N&Z’s elegant demolition of his faulty algebra in the headline post here. This and his errors on the Loschmidt effect in his criticism of Hans Jelbring’s paper lead me to doubt the value of his other criticisms too.

Then there is Ira Glickstein’s misdirection of discussion of N&Z’s theory with the spurious discussion of the heat of initial compression, which is simply irrelevant to the discussion of the dynamic throughput of solar energy in an atmosphere subject to a gravitational field.

All in all, WUWT hasn’t handled discussion of Nikolov and Zeller’s theory at all well, to put it kindly.

40. Robert Brown says:

No doubt accurate density measurements for Earth could have assisted them in calibrating their pressure function, but density is not required for the other celestial bodies they then go on to calculate surface temperatures for. This is an important distinction.

For what it is worth, density and pressure are not independent variables. The pressure at any given height is $P = g \int_{z}^{\infty} \rho(z) dz$. Atmospheric pressure has to support the weight of all of the atmosphere above any given height. Any fluid, atmosphere or not, in (even approximate) hydrostatic equilibrium must satisfy the relation:

$\frac{dP}{dz} = - \rho g$

(for $z$ positive “up”). If one knows the thermal profile of the atmosphere (and assumes that it is, e.g., an ideal gas) it is a straightforward mathematical exercise, although one that may or may not be easy to do analytically, to convert a function in one variable to a function in the other.
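Brown’s relation can be made concrete in the simplest case: an isothermal ideal-gas column, where $dP/dz = -\rho g$ and $\rho = PM/(RT)$ combine into $P(z) = P_0 e^{-z/H}$ with scale height $H = RT/(Mg)$. A minimal numerical sketch (Earth-like values, assumed purely for illustration), integrating the hydrostatic relation and comparing with the closed form:

```python
import math

# Isothermal hydrostatic column: dP/dz = -rho*g with rho = P*M/(R*T).
# Earth-like values, assumed for illustration only.
g = 9.81        # gravitational acceleration, m/s^2
M = 0.0289      # molar mass of dry air, kg/mol
R = 8.314       # universal gas constant, J/(mol K)
T = 250.0       # a stand-in isothermal temperature, K
P0 = 101325.0   # surface pressure, Pa

H = R * T / (M * g)   # scale height, ~7.3 km
dz = 10.0             # integration step, m
P = P0
for _ in range(int(10000 / dz)):   # integrate up to 10 km
    rho = P * M / (R * T)          # density from the ideal gas law
    P -= rho * g * dz              # Euler step of dP/dz = -rho*g

P_exact = P0 * math.exp(-10000 / H)
print(round(H), round(P), round(P_exact))
assert abs(P - P_exact) / P_exact < 0.01
```

The point is only what the comment says: once the thermal profile is fixed, pressure and density determine each other, so neither is an independent knob.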

[Reply] True but irrelevant to my point, which is that having calibrated their Eq 8 (which they could have done with temperature rather than density), they are able to correctly calculate the surface temperatures of the other celestial bodies using only their pressure function and the TOA insolation. Furthermore, the pressure at any given height can be calculated by using the mass and the gravitational constant rather than the density.

This is related to why Nikolov and Zeller need two distinct exponential forms to fit the planetary data. The physics that describes the surface temperature of the extreme low pressure/density planets is different from the physics that describes/predicts the surface temperature of the four planets on their list with substantial atmospheres. In particular, the last four planets have atmospheres that consist of optically thick greenhouse gases. The first four planets have atmospheres that are optically thin. These are two completely different regimes, so the functional form of the surface warming changes. If you like, the exosphere begins at the surface of the first four planets — they lack meaningful convective transport and have only a tiny split between the direct radiation temperature from the surface and the radiation temperature of the extremely diffuse atmospheric gas.

[Reply] Yes, I agree it’s quite remarkable that their equation holds good across such a diverse set of temperature and pressure regimes.

Robert Brown says Willis’ criticisms of Eqs. 7 and 8 are correct, despite N&Z’s elegant demolition of his faulty algebra in the headline post here. This, and his errors on the Loschmidt effect in his criticism of Hans Jelbring’s paper, leads me to doubt the value of his other criticisms too.

Willis is correct when he asserts that taking some data ($T_s$), normalizing it with a computed number $T_{gb}$ to form $N_{TE} = T_s/T_{gb}$, fitting the $N_{TE}$ data to a mathematical form $N_{TE,fit}$ that is neither derived nor heuristically justified, and then multiplying out the normalization to get:

$T_s = T_{gb} N_{TE,fit}$ (equation 8)

is hardly a “derivation”.

If I have data points $(x_1,y_1), (x_2,y_2), (x_3,y_3), \ldots$ and a smooth function $f(y)$, and I use this function to convert the data to $(x_1/f(y_1), y_1), (x_2/f(y_2), y_2), (x_3/f(y_3), y_3), \ldots$, empirically fit the converted data to a smooth function $g(y)$, and then assert that:

$x(y) = g(y)*f(y)$

this is an identity, not a derivation.

If I define $g = x/f$, fit $g$, and then assert $x = g f$, I haven’t “derived” anything at all. And if I fit $g$ with a functional form that is not justified by any physical argument but has enough free parameters, I can find alternative descriptions of $x(y)$ that are all equally meaningful, given that $0 = 0$ (lack of meaning is conserved).
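The circularity is easy to demonstrate numerically. A toy sketch (my own, with made-up data and two arbitrary, entirely hypothetical normalizing functions):

```python
def roundtrip_error(data, f):
    """'Fit' g = x/f pointwise, then multiply back; return the worst mismatch."""
    g = [(y, x / f(y)) for (y, x) in data]
    return max(abs(gy * f(y) - x) for ((y, x), (_, gy)) in zip(data, g))

data = [(1.0, 2.0), (2.0, 8.0), (3.0, 18.0)]   # arbitrary made-up (y, x) pairs

# Two completely different normalizing functions "work" equally well...
err_linear = roundtrip_error(data, lambda y: y)
err_weird = roundtrip_error(data, lambda y: y ** 2 + 1.0)
# ...because x = (x/f) * f is an identity for ANY f: nothing was derived.
```

Both errors are zero to machine precision no matter what $f$ is chosen, which is precisely the point: the round trip carries no information.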

[Reply] All very snarky, but not what Willis said at all.

As for my “errors” concerning the Loschmidt effect in my criticism of Jelbring’s paper, Jelbring’s paper does not refer to any such effect. It quotes a single textbook that derives the DALR for an atmosphere in which there are parcels in convective motion. It asserts without proof that this lapse rate applies to an isolated ideal gas that is not being driven and is in true static equilibrium, even though an ideal gas (being ideal) has the thermal conductivity of an ideal gas and it is the work of a few seconds to see that the DALR atmosphere is not a state of maximum entropy. It then concludes that the atmosphere in question will have a lapse rate and hence be warmer at the bottom than at the top.

I do not question that dynamical atmospheres have a lapse rate. I do not question that the lapse rate is important in determining surface temperatures. I question — indeed, I categorically reject — the assertion that a lapsed atmosphere is a state of true thermodynamic equilibrium for an isolated ideal gas. I offer considerable proof that this is so, including the straight-up observation that if it were true it would violate the second law of thermodynamics.

[Reply] What you fail to appreciate, Robert, is that there is a 120-year-old paradox here which has not been resolved. If you had taken the trouble to read the Loschmidt thread on this site you’d be better informed, and if you had any science sensibility, a good deal less categorical too.

Whether or not you ultimately agree with my proofs, with the explicit statement on the thread by the author of a textbook on physical climatology (Caballero) that I am correct, or with the statistical mechanical computation cited on the thread that concludes that I am correct; whether or not you take note that the DALR is always derived in the context of slowly moving parcels of air in a dynamical atmosphere, and that any sort of additional mixing, e.g. turbulence, destroys it and restores isothermal equilibrium (as does conduction, but much more slowly); it is difficult to assert that Jelbring dealt with any of this in his paper. He takes a well-known result of atmospheric dynamics, moves it out of context, and makes an entirely circular argument that it applies to the static case as well. It does not. The actual dynamics even of moving air parcels is “adiabatic” only to the extent that you neglect conduction, but an ideal gas has an ideal, easily computed, thermal conductivity that is not zero. Most textbooks point that out when they derive the DALR. Caballero’s certainly does.

[Reply] See the Loschmidt thread for where Caballero gets it right, where he gets it wrong, and where he gets it muddled. It would also have behooved you to take a bit more notice of WUWT commenter ‘Trick’ on your impossibly long thread. He tried to alert you to another, equally eminent author and textbook which sits on the other side of the paradox. You ignored him, of course.

To conclude: I am certain that you are aware that your “argument” (that I am not to be believed now because I once stated something that you disagree with, something supposedly disproved by an effect I have never heard of) is a textbook case of logical fallacy. This makes it all the more ironic when you conclude by stating:

All in all, WUWT hasn’t handled discussion of Nikolov and Zeller’s theory at all well, to put it kindly.

If by this you mean that Anthony hasn’t (to my knowledge) stepped into the discussion to defend a theory that he’s fond of, not by addressing specific algebraic points of concern or the actual physics of the theory, but by poisoning the well (http://en.wikipedia.org/wiki/Poisoning_the_well), I suppose you are correct.

[Reply] Anthony is not so much at fault as those who took advantage of the fact that he’s too busy to keep an eye on what they’re up to.

In the meantime, I’m perfectly happy to wait for Nikolov and Zeller’s actual derivation of equation 7 to continue the discussion, aside from answering specific questions about my specific objection to equation 7 in the comments above.

rgb

41. davidmhoffer says:

I’m trying hard to understand Dr. Brown’s objection to equation 7. As I understand it, he’s saying essentially that the equation was empirically derived >>>

His complaint relates to the number of free parameters which have been assigned as constants by N&Z. Essentially, N&Z have come up with an equation that successfully calculates the surface temperature of various planets from their insolation and their surface pressure. But, did they come up with an equation that is right simply because of the variables and constants they chose? We cannot say for certain because we don’t have enough data points (planets) to compare to. It could be that for a broader number of use cases, their formula will break down.

N&Z believe that their formula will hold up for any planet that one can get the data for.

RB believes they have fooled themselves by developing a complicated equation that is right for the 8 planets we do have data for, but will have no predictive value for a broader set of planets.

42. Robert Brown says:

Can somebody help me here? I believe Dr. Brown is trying to say something important to math-challenged people like me, but I’m not getting it.

Sure. The point is that one can learn, or estimate, a lot about any physical quantity from a knowledge of its dimensions, its units. This is particularly true for exponential functions. Take exponential decay, for example, which describes, among many disparate things, the population of radioactive atoms. The idea there is that every decay event is independent, and occurs with the same probability per unit time.

Suppose the probability of a single atom decaying in some small time interval $\Delta t$ is $\Delta P = R \Delta t$. This is easy to understand — think of it as rolling dice once a second, and if snake eyes comes up then the decay happens, but in a way that works for smaller and smaller intervals so that the probability per unit time is constant in the limit of very small times.

Then the number of decays we expect in a population of $N$ atoms in that small time $\Delta t$ is just the probability of decay per atom times the number of atoms:

$\Delta N = - N \Delta P = - N R \Delta t$

Physicists usually start by thinking about finite times $\Delta t$, but they want to be able to use calculus to find the result, so they assume things like “$N$ is really big” and “$\Delta t$ can be made arbitrarily small” so that the discretization error associated with turning this expression into a continuous expression can be ignored. Note that these assumptions won’t work at all well for $N = 10$ atoms and very short times, because an atom can’t fractionally decay: either it does or it doesn’t. This sort of process is called “coarse graining” the derivatives: choosing intervals large compared to the scale where discrete events matter, yet small enough to use calculus.

If we coarse grain our decay problem we get:

$dN = -N R dt$

and we do basic calculus. Don’t worry about understanding it if you’ve never had calculus, but if you have had calculus you should recognize this:

$\frac{dN}{N} = - R dt$

$\int \frac{dN}{N} = - \int R dt$

$\ln N(t) = \int \frac{dN}{N} = - \int R\, dt = - R t + C$

and exponentiating both sides and defining $N_0$ as the number of particles one has at time $t = 0$, one gets:

$N(t) = N_0 e^{-Rt}$

There are some very general things about this derivation — the exponential function is the function that is directly proportional to its own derivative, and exponentials in physics therefore must describe this sort of differential relationship. However, this sort of relationship is common as dirt in science — physics and chemistry in particular — and statistics in general, making exponential functions very important. This particular example is exponential decay, but very similar reasoning applies to e.g. compound interest investments and exponential growth, trigonometry (the sine and cosine functions are parts of a complex exponential), oscillations and waves and ever so much more.

Physicists learn early on that when one introduces functions like an exponential into a theory (in particular, any nonlinear function that has a power series expansion, such as $e^x = 1 + x + x^2/2! + x^3/3! + ...$), the arguments of the exponential must be dimensionless. This is easy to understand. Suppose that $x$ in this expression were a length and had units of length. Length squared is an area. Length cubed is volume. Length to the 28th power is God knows what. Then the expansion for $e^x$ would have us adding a pure number (1) to a length to an area to a volume… which is nonsense. I don’t know what it could possibly mean to add a liter to a meter.

That means that in our example above, $Rt$ must be dimensionless! We know that the units of $t$ are units of time, say seconds, so the units of $R$ must be inverse time! There is really no choice here. It cannot be otherwise. Even if we didn’t know where $R$ came from (you can see above that it has units of “probability per second” and since “probability” has no dimensions this is inverse seconds) we would know its units because we know $Rt$ must be dimensionless.

This suggests that instead of using $R$ at all, it might be better to use the time implicit in it: $\tau = 1/R$. This time is called the “exponential decay time” and we can write:

$N(t) = N_0 e^{-t/\tau}$

as a manifestly dimensionless form to describe the number of still-undecayed radioactive atoms as a function of time. Writing it with the $R$ was OK, but it hid the true dimensions and characteristic time associated with the process from us. The second form is much more revealing, because now we can interpret the time that has appeared.

$\tau$ is the time required for the population of undecayed atoms to drop to $1/e$ of its original population. This is closely related to things like the “half life” of the decay process, the time required for half of the atoms in any initial sample to decay. It is even more than that — it states that in any time interval $\tau$, $1 - 1/e$ of the atoms will decay, no matter how many there were at the beginning!

The point of this is that $\tau$ isn’t a random number. It has a physical meaning. It has to be connected to a physical process — the one responsible for the decay occurring. This time has to appear naturally when considering the units and sizes of the actual components of the system in question.
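The derivation above is easy to check numerically. A minimal sketch (my own, with arbitrary illustrative values): coarse-grained Euler steps of $dN = -NR\,dt$ land on the closed form, and the half-life follows directly from $\tau$.

```python
import math

N0, tau = 1.0e6, 5.0   # initial population and decay time (arbitrary units)
R = 1.0 / tau          # decay rate: inverse time, so R*t is dimensionless

def N_exact(t):
    return N0 * math.exp(-t / tau)

# Coarse-grained check: Euler-step dN = -N*R*dt up to t = tau
steps = 50_000
dt = tau / steps
N = N0
for _ in range(steps):
    N -= N * R * dt

frac_left = N / N0              # ~1/e after one decay time
t_half = tau * math.log(2.0)    # half-life: same fraction lost each interval
```

After one decay time the stepped population agrees with $N_0 e^{-1}$ to a few parts in $10^6$, and $N(t_{1/2}) = N_0/2$ exactly as the interpretation of $\tau$ promises.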

To understand this, one has to learn a bit about Fermi estimation. Enrico Fermi was famous for his ability to take a physical process and, by considering the units and “reasonable” dimensions of the system in question, produce remarkably accurate estimates of the physics involved, although the idea applies to many things. I’ll include this link:

http://en.wikipedia.org/wiki/Fermi_problem

and quote from it: “Scientists often look for Fermi estimates of the answer to a problem before turning to more sophisticated methods to calculate a precise answer. This provides a useful check on the results: where the complexity of a precise calculation might obscure a large error, the simplicity of Fermi calculations makes them far less susceptible to such mistakes. (Performing the Fermi calculation first is preferable because the intermediate estimates might otherwise be biased by knowledge of the calculated answer.)” In other words, Fermi estimates are invaluable as sanity checks. They reveal results that, however much we are biased to believe in them, are in the end unbelievable. You can learn a lot more about Fermi estimation online; it is literally a part of most introductory physics courses. I’ll offer a single example of Fermi estimation and dimensional analysis here. It is directly relevant to my objection.

Students are often asked to compute the moment of inertia of things like spheres, rods, cylinders, and grandfather clock pendula in physics courses. Doing so involves formulating and evaluating an often-complicated integral and perhaps using something like the parallel axis theorem. It is easy to make purely algebraic errors. How can a student tell if their end answer makes sense, at least enough sense that it might be correct? By checking its units and making sure that the answer satisfies Fermi! The former requires that they look at the size and mass involved. The units of a moment of inertia are $ML^2$, mass times length squared. The units of the algebraic answer had better contain the mass of the object, to the first power, and its characteristic size to the second power, or it is wrong out of hand. Then one can look at everything else. Most moments of inertia about the center of mass of an object have the form $\beta M L^2$, where $\beta$ is a pure (dimensionless) number between 0 and 1. It cannot be negative, and I can’t think offhand of a case where it could be greater than 1 if $L$ is the maximum radius of the system relative to its center of mass. If a student somehow ends up with $\beta \approx 100$ in their answer, even if it otherwise has the right units, they probably divided instead of multiplied somewhere, or made some other error in their algebra or arithmetic.
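A minimal sketch of the $\beta$ sanity check described above, using the standard textbook moments of inertia about the center of mass (masses and lengths are arbitrary illustrative values):

```python
def beta(I, M, L):
    """Dimensionless beta = I / (M * L**2); must be a pure number in (0, 1]."""
    return I / (M * L ** 2)

M, R, L = 2.0, 0.5, 1.2   # arbitrary mass (kg), radius (m), rod length (m)

betas = {
    "solid sphere":     beta(2.0 / 5.0 * M * R ** 2, M, R),    # 2/5
    "thin rod, center": beta(1.0 / 12.0 * M * L ** 2, M, L),   # 1/12
    "thin hoop":        beta(M * R ** 2, M, R),                # 1 (mass at rim)
}
for name, b in betas.items():
    assert 0.0 < b <= 1.0, name   # a beta ~ 100 would flag an algebra error
```

Every standard result passes; a student's answer that fails this one-line check is wrong before any integral is re-examined.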

That’s it — in physics the arguments of exponentials must be dimensionless or they are nonsense. If a dimensioned variable appears in the exponential there must be a similarly dimensioned variable that cancels its units. Finally, in order for the expression to make sense, the actual dimensioned variable that provides the “characteristic” length, or time, or pressure, in the expression has to be physically reasonable. It is this characteristic pressure that dominates the exponential behavior. It is the signpost towards the important physics, and vice versa. Like $\beta$ in the previous example, we should be very worried if it is much more than order unity away from the range of mundane values we expect for the actual physical quantities we are trying to describe.

All I did was take Nikolov and Zeller’s empirical equation 7 and put it in manifestly dimensionless form. This is unique: there is no other way to do it, any more than there is for the radioactive decay example above. This reveals that their coefficients are actually dimensioned pressures $P_i$, the characteristic pressures $P$ where $e^{P/P_i}$ has an argument of order unity, where the “shape” of the exponential is important. I argue that it is unreasonable for a characteristic pressure of 54,000 atmospheres to describe the actual physics of a gas at a pressure of $10^{-7}$ atmospheres or even less. It can’t even reasonably describe a gas in the pressure range from 1 to 100 atmospheres. The second characteristic pressure that appears is 202 bar/atmospheres (at this level of Fermi-estimate description the difference doesn’t matter). This isn’t as bad as the 54,000, but it is still worrisome. It is still a “$\beta > 1$” answer, given that the largest pressure being fit is 92 atmospheres.
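To make the scale mismatch concrete, here is a small sanity check (my own, with rough surface pressures in bar; the two characteristic pressures are the ones quoted above):

```python
P1, P2 = 54000.0, 202.0        # characteristic pressures (bar) from the fit
surface_pressure = {           # rough surface pressures in bar, illustrative
    "airless body": 1e-7,
    "Mars": 0.006,
    "Earth": 1.0,
    "Venus": 92.0,
}
for name, P in surface_pressure.items():
    assert P / P1 < 2e-3       # even Venus sits about three decades below P1
    assert P / P2 < 0.5        # and well below P2 too
```

No pressure in the fitted data set ever comes within orders of magnitude of the 54,000 bar scale, which is exactly the sense in which that number is unphysical as a "characteristic" pressure.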

The last area of concern in their result is the set of very odd exponents that appear within the exponentials. One of them is $0.065 \approx 1/15$. Again, in physics one expects there to be very ordinary relationships between connected quantities in a physical theory, especially when one is considering an idealized theory like an ideal gas (or a normal gas far away from critical points, where its behavior is expected to be “nearly” ideal). $PV = NkT$ has fairly straightforward exponents: they are all 1!

It’s true that other exponents can appear: an ideal gas that is confined to a container and adiabatically expanding follows a different curve, one where $PV^\gamma =$ a constant. This means that the exponent 0.385 in the second term is not inconceivable, although it is difficult for me to see how it could arise from any simple dimensional analysis of the problem. It is close to but not equal to $1/\alpha$ for an ideal diatomic gas, but the atmospheric gases of the planets in question do not all have the same $\gamma$: Mars is mostly triatomic CO_2 and the Earth is mostly diatomic N_2 and O_2, for example, and Triton is a complicated brew of all sorts of non-diatomic molecules.
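As a purely illustrative aside (my own arithmetic, not N&Z's or Brown's): along an ideal-gas adiabat, temperature scales with pressure as $T \propto P^{(\gamma-1)/\gamma}$, so one can tabulate that Poisson exponent for a few plausible atmospheric $\gamma$ values and compare against the fitted 0.385.

```python
gammas = {
    "monatomic (5/3)": 5.0 / 3.0,
    "diatomic (7/5)": 7.0 / 5.0,
    "CO2 (~1.3)": 1.3,
}
# Poisson exponent (gamma - 1)/gamma for each gas
poisson_exp = {name: (g - 1.0) / g for name, g in gammas.items()}
# -> 0.400, ~0.286, ~0.231: no single gamma reproduces the fitted 0.385,
# and the planets' atmospheres do not share one gamma in any case.
```

None of the ordinary values lands on 0.385, which illustrates the difficulty of attaching a simple thermodynamic meaning to that exponent.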

However, the particular exponents of exponents and characteristic pressures in the second term of the fit depend in detail on the values of the first exponential term with its unphysical 54,000 and 0.065. If one simply fit (say) the last four planets all by themselves, one might get a functional form that wouldn’t make Fermi (or me, channelling his and Feynman’s ghosts in this discussion) run screaming from the room, or, more likely, gently say “no, that cannot be a physically meaningful description of the phenomena”.

Hopefully this explains why Nikolov and Zeller’s empirical fit almost certainly is not physically meaningful as it stands, in terms even a non-math lay person can understand. There are many ways one might fit radioactive decay data with combinations of functional forms, but only one of them is going to be rationally derivable, and it will contain a characteristic time that is directly characteristic of the physics of the process, not e.g. the age of the Universe or the period of the Earth around the Sun. At 54,000 bar and the surface temperatures in question, the atmospheres of all of the colder planets would no longer be gases. I don’t know the coexistence curves offhand for the components of Venus’ atmosphere, but I’m guessing even its atmosphere would liquefy at this pressure and its ambient temperatures.

rgb

43. Robert Brown says:

His complaint relates to the number of free parameters which have been assigned as constants by N&Z. Essentially, N&Z have come up with an equation that successfully calculates the surface temperature of various planets from their insolation and their surface pressure. But, did they come up with an equation that is right simply because of the variables and constants they chose? We cannot say for certain because we don’t have enough data points (planets) to compare to. It could be that for a broader number of use cases, their formula will break down.

N&Z believe that their formula will hold up for any planet that one can get the data for.

RB believs they have fooled themselves by developing a complicated equation that is right for the 8 planets we do have data for, but will have no predictive value for a broader set of planets.

This is also true, but does not actually answer Dan’s question. I just did that above. The problem with fitting eight points of data with four free parameters was actually originally raised by Willis. All I did here was refine it and plot the actual fit against the actual data, so one can see that it is not, in fact, a good fit of all eight points, but rather fits four points well and four points poorly, three at one end and one at the other. Where “poorly”, in an arbitrary nonlinear functional fit to data without error bars, with no $\chi^2$ or objective measure of quality of fit even theoretically possible, is in the eye of the beholder, to be sure.

As for predictive value: Suppose one simply connected the data points with a cubic spline, with any number of parameters you like. If one supposes that there is a monotonic increasing function that describes the data, then this spline might well have predictive value of that monotonic increasing function, as might a line you just draw with a pencil so that it passes smoothly through all of the points. However, there isn’t any physics in the interpolating spline, the line you draw with your pencil, or in Nikolov and Zeller’s equation 7. The coefficients of the spline are not simply related to the actual physical processes that govern and establish the hypothetical relationship, and one gets zero physical insight or knowledge from knowing them. The four parameters of Nikolov and Zeller’s fit are manifestly not related to the actual physical processes that govern the surface temperatures, and one gets zero physical insight or knowledge from knowing them, even though they, too, might have just as much predictive power as a spline. Would they fit surface temperatures on the gas giants? Highly doubtful. Do they fit surface temperatures on the moon as they are now? Only if you are generous about what constitutes “a fit”.
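The interpolation-versus-physics point is easy to see with a toy example (my own, using a hand-rolled Lagrange interpolating polynomial and a pretend "unknown" process): the polynomial interpolates beautifully and extrapolates disastrously, while telling us nothing about the underlying exponential.

```python
import math

def lagrange(points, x):
    """Evaluate the unique polynomial through `points` at x (Lagrange form)."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Pretend we do not know the underlying process is exponential decay
truth = lambda t: math.exp(-t)
pts = [(t, truth(t)) for t in (0.0, 1.0, 2.0, 3.0)]

inside = abs(lagrange(pts, 1.5) - truth(1.5))   # interpolation: small error
outside = abs(lagrange(pts, 8.0) - truth(8.0))  # extrapolation: wildly wrong
```

The cubic through four points reproduces the curve within the fitted interval to better than a percent, yet a short distance outside it the error is larger than the entire function. Nothing in its coefficients points at the decay time that actually governs the process.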

There is a lovely paper, written by a couple of Greek guys analyzing e.g. rainfall, that I have squirrelled away somewhere, which illustrates the problem of taking a small finite section of data and extrapolating it vs. interpolating it. Interpolation is generally “easy”: lots of functions will fit/interpolate any small data set, especially when one is willing to use any nonlinear function of any form to do so, without regard for any sort of justification or reason to think it is a correct or relevant form. They illustrate this by taking (IIRC) a cosine function plus white noise and then analyzing how fits to this function might well proceed. If you take a very small interval and fit it, your best fit will be a linear function, and you are tempted to say “Aha, I’ve discovered that $x$ is linearly dependent on $y$” and then use that to predict the end of the world, if $x(y)$ exceeding some threshold will bring it about. This argument is, in fact, familiar to us all in CAGW “science”. Of course, eventually one gets more data, and, aha! now the data turns up. In fact, it looks a lot like the true dependence of $x$ on $y$ is quadratic! Eureka! Surely now we can use it to predict our catastrophe.

Only we, gifted with God-knowledge, know that this is an illusion. They only learn of their error when they get still more data and their quadratic function fails to extrapolate, and they now have to add e.g. a cubic term, or perhaps some bright lad then tries to fit an exponential to it, fails, and tries (just saying, you know) an exponential whose argument is itself a nonlinear exponential function, all with adjustable parameters. Well, the function is still smooth: there are an infinity of smooth nonlinear functions that correspond within the fit domain, all with the same first $n$ terms in their power series expansion (for example), and all of which differ completely beyond this point. Even the cosine could be modulated with e.g. a long-time exponential decay (or other modulating function), and you couldn’t fit it or observe it until you had tracked many cycles of oscillation and identified the apparent cosine.
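A bare-bones sketch of that scenario (my own construction, with the white noise omitted for clarity): a short window of a cosine fits a straight line almost perfectly, and the same line is nonsense a few cycles out.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares line through (xs, ys); returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    slope = num / den
    return slope, my - slope * mx

truth = math.cos   # the "unknown" process generating the data

# A short window of the cosine looks convincingly linear...
xs = [1.0 + 0.01 * i for i in range(20)]
ys = [truth(x) for x in xs]
slope, icept = fit_line(xs, ys)
short_err = max(abs(y - (slope * x + icept)) for x, y in zip(xs, ys))

# ...but the fitted line, extrapolated a few cycles out, is nonsense
far_err = abs(truth(10.0) - (slope * 10.0 + icept))
```

Within the window the worst residual is a fraction of a percent; at $x = 10$ the "discovered linear law" misses by several times the full amplitude of the true function.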

This is why simply fitting arbitrary functional forms to a small data set, however successful it might be at interpolating the data, however predictive it might be of new data within the interpolated domain, is not useful or meaningful. One can always perform such fits many different ways, and I haven’t even gotten to overcomplete bases yet, where even the fit in terms of a given set of functions is not necessarily unique. All of this is well known in functional analysis. In order for the results of a fit to be anything other than heuristic and descriptive, the numbers in the fit and the functions themselves have to have some physical basis. Willis pointed out the problem of fitting from the infinitely large set of all possible nonlinear functions (hell, one might well find a one-parameter fit out of that set) without any sort of physical argument supporting the choice or criterion for judging goodness of fit, together with a claimed “derivation” of equation 8 that was really just a restatement of the definition of the function empirically fit via equation 7. I pointed out that it is worse than that: the fit they obtained does contain hidden physics (whether they like it or not) [snip].

[Reply] Now you’re getting the hang of it! Leave the insult to the end so you don’t lose so much of the post.

rgb

44. Ned Nikolov says:

Robert Brown (February 14, 2012 at 3:10 pm):

Dr. Brown,

I do not quite understand the need for all that twisted reasoning, when the reality of our derivation is simple and can be summarized in the following commonsense points:

1) We define the ‘Greenhouse Effect’ as a ratio of the actual surface temperature to the planet’s equivalent gray-body temperature, since such a ratio expresses the integrated thermal effect of the atmosphere in a single non-dimensional number (non-dimensional numbers, as you know, are widely used in fluid dynamics to describe turbulence and other phenomena). This definition also has a physical meaning one can call relative Atmospheric Thermal Enhancement (ATE).

2) Our gray-body temperature model is not arbitrary, but based on proper integration of the SB law over a sphere, and it uses values for regolith albedo and emissivity that are representative of values measured for the Moon and Mercury. Even the average albedo of the Earth’s surface (0.12) is very close to that of the Moon’s rocky surface (0.11). The data suggest that the short-wave substrate albedo and emissivity of airless planets are quite conservative quantities, i.e. A ~ 0.11 and e ~ 0.95.

3) Our analysis revealed that mean surface total pressure (Ps) is the only parameter that nearly completely explains the ATE values for all 8 planets. No other parameters such as ‘greenhouse-gas’ concentrations or their partial pressures, or the actual absorbed radiation by planets (accounting for observed albedos) came even close to describing the ATE variation. Hence, the derivation of Eq. 7. Again, NTE(Ps) was derived using non-linear regression!

4) Eq. 8 is simply a solution of Eq. 7 for the surface temperature (Ts). This is legitimate and simple (high-school level) math, and it’s really puzzling to me why it prompts any questions at all. Willis made a silly algebraic error by substituting Eq. 7 into itself, thus arriving at the nonsensical and false conclusion that Ts = Tgb * (Ts / Tgb). This is a demonstration of his ignorance of math, not a deep thought!

With respect to your comment that pressure and density are not independent variables: we never claimed that they were! However, at the surface, the mean atmospheric pressure is only a function of the average weight of the atmospheric column above a unit surface area and gravity. Surface air density, on the other hand, depends on surface pressure and temperature. Hence, the mean pressure at the surface is independent of near-surface temperature and density! That is because the average thermodynamic process at the surface is isobaric in nature.
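The column-weight point is easy to check with rough numbers (approximate figures, a back-of-envelope sketch of my own): dividing the weight of Earth's atmosphere by the Earth's surface area lands within a few percent of the observed mean sea-level pressure, with no reference to temperature or density.

```python
import math

M_atm = 5.15e18      # total mass of Earth's atmosphere (kg, approximate)
g = 9.81             # mean surface gravity (m s^-2)
R_earth = 6.371e6    # mean Earth radius (m)

A = 4.0 * math.pi * R_earth ** 2   # surface area of the sphere
P_surface = M_atm * g / A          # column weight per unit area (Pa)
# ~9.9e4 Pa, within a few percent of the observed mean of ~101325 Pa
```

The small residual difference comes from things like terrain elevation and the variation of g with height, not from any dependence on surface temperature.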

In summary, the tight exponential relationship between NTE and pressure is real, and the fact that it is described by a function whose coefficients cannot be easily interpreted in terms of known physical quantities does not invalidate that relationship! This is because it is a higher-order emergent relationship, which summarizes the net effect of countless atmospheric processes including the formation of clouds and cloud albedo. This relationship might not be precisely reproducible in a lab, simply because it may require a planetary scale to manifest. However, a lab experiment should be able to validate the overall shape of the curve defining the thermal enhancement effect of pressure over an airless surface. BTW, this shape is already supported by the response function of relative adiabatic heating defined by Poisson’s formula (Fig. 6 in our paper).

45. Ned Nikolov says:

davidmhoffer says (February 14, 2012 at 3:58 pm):

RB believs they have fooled themselves by developing a complicated equation that is right for the 8 planets we do have data for, but will have no predictive value for a broader set of planets.

I think you are right about Dr. Brown’s belief. However, since those planets span a really broad range of conditions, it is very unlikely that this relationship will break. The tightness of the relationship suggests that this is NOT an accident, but a real physical phenomenon … Read my comments in the previous post addressing Brown’s arguments…

This whole conversation about regression coefficients is really meaningless, as it reveals a lack of understanding of the fact that we are dealing with a HIGHER-ORDER EMERGENT RELATIONSHIP that is rooted in the Gas Law, but embeds complexities that are beyond the simple gas law equation and not necessarily observable in a lab, such as cloud dynamics and cloud albedo. In a sense we are dealing with an interplanetary manifestation of the Gas Law, which may be a higher-level fractal expression of the simple gas law equation … For those who are not familiar with fractal structures, please see

Fractals as an organizational principle in Nature occur not only in physical structures, but in the hierarchy of processes as well.

46. davidmhoffer says:

Ned,

I had the oddest thought. What would the result be if you were to derive all your equations and constants, but limit yourself to only three planets for doing so? Say Earth, Venus, and Mars. If you did that, and made it clear that ONLY data from those three were used, then we’d have the use case that RB demands. An equation that is built upon a very small set of data, and then it either extrapolates to other planets… or doesn’t.

RB,
Am I on the right track here? Would you choose three different planets?

dmh
PS – btw RB, I got a lot of value out of your last two comments, thanks. I’m not saying that I agree 100%, just that there’s a lot of value to be had in a constructive discussion of the issues which is what I saw in your last two comments. thanks!

47. tallbloke says:

Please could Robert explain the physical basis of the imaginary number ‘i’ (or ‘j’ in engineering), which when multiplied by itself gives minus 1, and which is used extensively in electronics design and control engineering?

Presumably any competent Duke physicist, at the time of the invention of this imaginary quantity which defies the laws of mathematics, would have rejected it out of hand as “absurd nonsense” and therefore of no possible use?

Thanks.

48. Ned Nikolov says:

davidmhoffer says (February 14, 2012 at 7:14 pm):

I had the oddest thought. What would the result be if you were to derive all your equations and constants, but limit yourself to only three planets for doing so? Say Earth, Venus, and Mars. If you did that, and made it clear that ONLY data from those three were used, then we’d have the use case that RB demands. An equation that is built upon a very small set of data, and then it either extrapolates to other planets… or doesn’t.

David,

We have already done this! In fact, the regression constants in our Eq. 7 were derived from a plot of ln(Ts/Tgb) vs. ln(Ps) that did NOT include Titan, the Moon and Mercury (we have not explicitly stated this in the paper). You can reduce the number of points (planets) and still get a very similar response function, as long as the planets included in the regression span more or less the whole environmental range. I think using Venus, Earth, Triton and Europa will produce a function that very closely predicts the mean temperatures of Mars, Titan, the Moon and Mercury. Try it …

49. Stephen Wilde says:

davidmhoffer said:

“N&Z believe that their formula will hold up for any planet that one can get the data for.

RB believes they have fooled themselves by developing a complicated equation that is right for the 8 planets we do have data for, but will have no predictive value for a broader set of planets.”

I think that helpful summary from davidmhoffer is right.

So, we can tell from the Gas Laws and observations that planets with atmospheres are very different from those without.

We can see from Venus and Earth that despite their very different atmospheres there is an observed match (approximately) between temperatures on both Earth and Venus at the same atmospheric pressure after adjusting for distance from the sun.

N & Z are doing their best to ascertain whether the same relationship applies on other planets within the solar system and so far it is looking good although the precision of the data is weak.

In the process N & Z have put forward some equations that fit the observations but to my mind it is early days because we just don’t have enough planets or enough variations between planets or accurate enough data to provide absolute proof. N & Z acknowledge those limitations but aver that what they do have is enough to demonstrate a surprising similarity from one planet to another regardless of the vast differences in their atmospheric compositions.

Then rgb comes along and from an ivory tower says that because it isn’t all perfect and that therefore it cannot (yet) be shown to be an absolute proof applicable always and everywhere then it has no significance or value at all.

Well, excuse me, but I think that given the Gas Laws and the observations we do have then it is perfectly reasonable and indeed valuable for N & Z to announce that they have created a formula that could extend the Gas Laws as observed on Earth to other planets and thereby say something useful about the climates on all planets everywhere.

My personal opinion is that in due course they will be found to be correct and that for every planet with an atmosphere it is atmospheric mass, planetary gravity and solar input that ultimately defines every aspect of the atmospheric circulation and that nothing else affects total system energy content. All other factors simply redistribute energy differently within the system.

I have no doubt that any planet which fails to configure its atmosphere according to atmospheric mass, planetary gravity and solar input will simply have no atmosphere. Either it will have boiled off into space or be frozen and congealed on the surface.

However given the wide range of ‘successful’ planetary atmospheres already found within the solar system it is clear that planets without any atmosphere at all are extraordinarily rare so that one must assume that the relationships which N & Z are endeavouring to describe are very robust.

rgb’s time would be better spent accepting that N & Z are attempting something novel here and doing the best possible with the data currently available.

50. My understanding of Equation 7 is (a) that it is empirically derived; (b) unfortunately I don’t know what “exp” means, so I cannot check it myself; (c) there are four six-figure numbers measured / calculated / tuned from this empirical fit that produces the curve with all the planets on it.

My question is, how many planets are needed to define this curve requiring those four very precise numbers extracted from the curve-fit? My first thought was, this is a logarithmic curve that is defined from just three fixed points. But here I am not so sure.

How many planets are surplus to definition requirements, and therefore constitute hard evidence?

Is it possible to rescale the graph so that a straight line appears, using logarithmic scales? Seems this would help a lot to convince, if it is possible.
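
On the “exp” question: it is the exponential function, exp(x) = e**x, the inverse of the natural logarithm. A short sketch of why a pure power law straightens on log-log axes (the constants a and b below are arbitrary examples; Eq. 7 is exp of a sum of power terms rather than a pure power law, so its log-log plot is only approximately straight):

```python
import math

# "exp" is the exponential function exp(x) = e**x; ln is its inverse.
print(math.exp(1))                  # e = 2.718281828...
print(math.log(math.exp(3.5)))      # ln undoes exp

# A pure power law y = a * x**b is a straight line on log-log axes,
# because ln(y) = ln(a) + b*ln(x).  (a, b are arbitrary examples.)
a, b = 2.0, 0.5
for x in (1.0, 10.0, 100.0):
    y = a * x ** b
    print(math.log(x), math.log(y))  # ln(y) rises linearly with ln(x)
```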

51. Ned Nikolov says:

To All,

Please, realize that this entire discussion about regression coefficients and their ‘physical meaning’ is pointless, because it does nothing to refute or negate the very EXISTENCE of the relationship.

This relationship is not a coincidence, because: (1) There are no other atmospheric parameters (besides pressure) that can explain (describe) so accurately and beautifully the variation of the empirically based NTE factor (the relative ATE) across planets; and (2) The shape of this relationship matches the response of the relative adiabatic heating to pressure changes described by the Gas-Law based Poisson formula… And that is the BIG PICTURE!

53. Ned Nikolov says:

Lucy,

It was a log/log plot we used to derive Eq. 7. A log/log plot does not make the NTE – pressure relationship linear. It makes it somewhat less ‘exponential’ and less non-linear. We will present this plot in our Reply Part 2 …

54. B_Happy says:

Dr Nikolov says above
“We have already done this! In fact, the regression constants in our Eq. 7 were derived from a plot of ln(Ts/Tgb) vs. ln(Ps) that did NOT include Titan, the Moon and Mercury (we have not explicitly stated this in the paper). You can reduce the number of points (planets) and still get a very similar response function as long as the planets included in the regression span more or less the whole environmental range. I think using Venus, Earth, Triton and Europa will produce a function that very closely predicts the mean temperatures of Mars, Titan, the Moon and Mercury. Try it …”

but this makes no sense – Mercury and the Moon have no atmosphere, and therefore do not contribute to the pressure dependence fitting anyway, so to say that excluding them makes the predictions more robust seems unlikely.

By the way, it would be helpful if the experimental sources for the temperature, pressure and density were given in detail. Half the planets being considered have never had any sensors landed, so all measurements come from remote spectroscopic techniques – it would be interesting to see which methods were used to determine the data.

55. Ned Nikolov says:

B_Happy says (February 14, 2012 at 9:43 pm):

but this makes no sense – Mercury and the Moon have no atmosphere, and therefore do not contribute to the pressure dependence fitting anyway, so to say that excluding them makes the predictions more robust seems unlikely.

Think, B_Happy! The regression curve describes a CONTINUUM from an airless surface to whatever pressure. Also, technically the Moon and Mercury have some small pressure: 1E-9 Pa …

56. B_Happy says:

Dr Nikolov

I can think, and I know that a planet with zero pressure is taken care of purely by the exponential form of your equation. No fitting is needed; therefore they do not contribute. It does not matter what parameters you have in your NTE factor; you will get the same answer for Mercury and the Moon no matter what. So can you explain again how they contribute to the fitting process, and thus how leaving them out proves anything?
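
B_Happy’s argument can be made concrete. Assuming only the general exp-of-power-terms shape described for Eq. 7 (the coefficients below are placeholders, not the fitted values), any such function returns exactly 1 at zero pressure regardless of what the regression found:

```python
import math

def nte(p, a1, b1, a2, b2):
    """Generic exp-of-power-terms form; coefficients are placeholders,
    not the paper's fitted values."""
    return math.exp(a1 * p ** b1 + a2 * p ** b2)

# At p = 0 every power term vanishes (for positive exponents), so
# NTE = exp(0) = 1 whatever the fitted coefficients are -- which is
# B_Happy's point about airless bodies not constraining the fit.
for coeffs in [(0.2, 0.07, 0.002, 0.4), (5.0, 0.3, 1.0, 0.9)]:
    print(nte(0.0, *coeffs))   # 1.0 in every case
```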

57. davidmhoffer says:

Ned,
I see your point, but B_Happy’s also. I would suggest putting the airless and near-airless bodies aside. Split the remaining ones into two groups and use the data from one to predict the data from the other, and vice versa. It isn’t that the airless bodies don’t have value in the larger analysis, it is just that taking them out removes what will otherwise be a major objection. Wars are won one battle at a time.

58. davidmhoffer February 9, 2012 at 6:39 pm refers to a past WUWT article that

found no evidence that increases in CO2 correlated to increases in temperature. What they found was that a change in CO2 caused a “ripple effect” that then settled back into the exact same equilibrium state there was before.

That study in my mind confirms N&Z. N&Z are pointing out that the change in CO2 concentration doesn’t change the equilibrium temperature, but that doesn’t mean there isn’t a “greenhouse” effect. There is. Doubling of CO2 is like throwing a rock into a lake…

Got a reference?

59. davidmhoffer says:

Lucy;

http://wattsupwiththat.com/2010/02/14/new-paper-on/

This is the paper I was referring to. “non stationary effects” are the ripples on the surface of the lake.

60. Ned Nikolov says:

Fellows,

There is NO experimental evidence from the free atmosphere that increasing CO2, water vapor or any other so-called ‘greenhouse gases’ has ever caused an increase in temperature. We have proxy records of CO2 and global temperature going back more than 65M years. These data sets show that CO2 has ALWAYS lagged temperature changes. The CO2 time lag increases exponentially with the time scale of the data set considered. Thus, we find an 800–1,000-year lag in the ice core data covering the past 1M years, and a 12.25M-year lag in the ocean sediment records covering the past 65M years …

The whole notion that CO2 changes can affect global climate comes from models and models ONLY! Such effect is predicted by the climate models due to decoupling of radiative transfer from convective heat exchange in their code. In other words, the CO2 warming effect is a result of a physical algorithmic error in climate models, it’s a model artifact with no physical equivalence!!

61. gallopingcamel says:

Ned points out that it is nonsense to say that CO2 is a major factor controlling global temperature. The physical evidence shows that temperature is a major factor determining the CO2 concentration.

So why is it that the “Scientists” who are prepared to sell their souls claiming that CO2 rules are showered with money while their opponents find it hard to get funding?

62. I’m having a very enlightening experience, re-re-reading material here. I can only take so much science and formulae at a time. But repeated study here is like slowly clearing a frost-covered windscreen.

Early on with N&Z my instincts said YES!!! I was lucky to have just read about Jericho below sea level seriously hotter than nearby Jerusalem, and to have thought about the flat snow line on the hills and the cloud underside flat lines. And I was highly upset with Willis. So I was in the mood to study, always my solution to emotional upset is to re-examine the evidence. I was thus ready to take on the huge, under-our-noses paradigm shifter that atmospheric pressure is the major determinant of temperature. So obvious in hindsight.

Taking on the full power of N&Z, and the full weight of the maths and their fivefold paradigm shift, is taking much longer. But each time I re-read, the misty surface over comments here has cleared a little, patch by patch, as it were, and every time it’s been reinforcing N&Z, and suggesting to me that most of the commenters here are having similar experiences to my own. For some, the frost over N&Z has simply never lifted at all – especially those who feel emotionally uncomfortable with a significant correlation that involves factors raised to strange fractional powers and no recognizable pattern of causation.

Heck, this is how every major scientific discovery is made. We’re at the exciting moment when it’s clear the hunt is up, so let’s go looking for the causation. And on reflection, I suspect that N&Z suspect the fractionality has to do with things like convection – and this is why they consider convective influence in their equations.

At one point I thought that Huffman was right to criticize N&Z about albedo. But now I see where N&Z are coming from on this paradigm-shift too, I can also finally see that Huffman is wrong in every point he makes here. And now that I can see it, it doesn’t even look that difficult to see!

Ah, this is the problem. Communication. Especially when there are so many commenters one has to skim. Now that I understand N&Z better and better, even their communication sounds clearer and clearer. But I have to remember what it was like when I was a dummy, when all the words here were simply covered with white frost….

************************************************
Looks like I’ve finally found my elevator speech.

63. Ned Nikolov says:

Lucy,

Great to hear that our concept is coming nicely into focus for a non-scientist and a math-shy person such as yourself. This means that hopefully other people will start getting it too … It’s really not a difficult paradigm to understand, but it does require a shift in perception. Once the shift is made, it becomes self-evident.

Now, go ahead and present your ‘elevator speech’ to Willis … you may have to do it in the elevator of the Empire State Building, though …

64. Ned Nikolov says:

To gallopingcamel (February 15, 2012 at 1:23 am):

The CO2-based ‘theory’ of climate change might enter the Guinness Book of Records one day as the one supported by the least amount of empirical evidence, while violating the most fundamental laws of physics, yet being the longest lasting and most funded misconception in modern science …

When all the ‘dust’ settles down in 10-15 years from now, a major lesson learned from this gigantic Greenhouse blunder would be that any absurdity can be sold as a solid science for decades given the right amount of money invested in it.

65. Roger Clague says:

Ned Nikolov says:
February 15, 2012 at 12:48 am

“The whole notion that CO2 changes can affect global climate comes from models and models ONLY! Such effect is predicted by the climate models due to decoupling of radiative transfer from convective heat exchange in their code. In other words, the CO2 warming effect is a result of a physical algorithmic error in climate models, it’s a model artifact with no physical equivalence!!”.

Observations show we can ignore radiative effects such as IR absorption. Mass, not composition, determines the temperature of an atmosphere. That is what you say.

Also, the thermodynamic theory of gases (the gas laws) confirms this.

Within the atmosphere, radiative transfer is decoupled from heat exchange (conduction and convection).

Radiation (light) is a property of space; heat and all other forms of energy are properties of matter.

How is this possible? Maybe the total matter (mass) of the atmosphere absorbs and emits radiation such that the heat energy and also the radiation energy each stay the same.

66. Ned Nikolov says:

TO: Roger Clague (February 15, 2012 at 9:47 am)

I would like to clarify something important. I am NOT saying that “within the atmosphere radiative transfer is decoupled from heat exchange ( conduction and convection ).”… On the contrary, in the REAL atmosphere radiative transfer is coupled to (happens simultaneously with) convection! Since convection is MUCH more efficient than radiation in transferring heat, globally, it completely offsets on average the warming effect of back radiation. So, the long-wave back radiation does NOT heat the surface in reality…

In climate models, however, radiative transfer is NOT solved simultaneously with convection. As a result, changes in atmospheric emissivity (due to an increase of CO2 concentration, for example) lead to the calculation of positive heating rates (degrees per day). These rates are produced by the radiative transfer code due to the fact that it is solved independently (outside) of convective processes. The heating rates predicted by the radiative transfer code are then passed on to the thermodynamic (convective/advective) portion of the model, and get distributed around the globe, causing the projected warming. So, it is this ARTIFICIAL decoupling between radiative transfer and convection in climate models that is responsible for the non-physical prediction of rising surface temperatures with increasing atmospheric CO2 concentration.

67. wayne says:

Ned: So, it is this ARTIFICIAL decoupling between radiative transfer and convection in climate models that is responsible for the non-physical prediction of rising surface temperatures with increasing atmospheric CO2 concentration.

That is the trouble with modelling processes, is it not? In systems such as the climate, so many physical equations and laws apply simultaneously, all inter-related, each recursively affecting the others’ governing parameters. Let’s face it, we will never match nature’s calculations; her ‘computer’ has hundreds of digits of precision and a Δt better than yoctoseconds… that is the core reason that predicting the future of such a system more than days or maybe weeks ahead is pure fantasy.

68. B_Happy says:

Wayne, I would not take Dr Nikolov’s word for it that convection and radiative transfer are decoupled. These models treat the earth’s atmosphere/ocean/ice caps as a 3D grid (i.e. a set of boxes), and each box influences its neighbours both spatially and in time: the calculations of pressure, temperature etc. in one box at one time are fed into the calculations for both that box and its neighbours at the next time step. Does that sound to you as if they are decoupled? Now you could argue that they have the magnitudes of some of the couplings (a.k.a. feedbacks) wrong, and that makes the models inaccurate (and I would probably agree with that), but that is an entirely different assertion to claiming that the couplings are missing entirely.

69. sergeimk says:

N&Z have not responded to my post so I will try it again – I think it shows up some serious errors in their application of Hölder’s inequality

Their Equation 2 seems to sum TSI and Cs and then spread the total around the globe. However, Cs is already global, so it should not be spread; it must be constant over the surface. Inconsequential, but physically wrong.

I do not see where the continuous downwelling radiation is handled in the equations. Like Cs, this is present day and night at 200+ watts, so it should not be spread equatorially, although it does taper off polewards.
The 200 W figure was measured at the SGP Central Facility, Ponca City, OK (36° 36′ 18.0″ N, 97° 29′ 6.0″ W, altitude 320 m).

Eq. 3 uses αp = Earth’s planetary albedo (≈0.3).
Eq. 2 uses αgb = Earth’s albedo without atmosphere (≈0.125).
Why the difference, when both assume an atmosphereless planet?

[co-mod: I think the answer will come with Part 2, which N&Z have not yet posted. They are trying not to get too distracted at this stage, hence some patience is needed. --Tim]

70. davidmhoffer says:

B_Happy;
each box influences its neighbours both spatially and in time ie the calculations of pressure and temperature etc in one box at one time are fed into the calculations of the pressure etc of both that box and its neighbours at the next time step. Does that sound to you as if they are decoupled?>>>

To be fair, I don’t really know how the models work, I have never dug into it in detail. That said, I have a simple question:

Given that the models get it wrong, have no hindcast capability, no predictive capability, and have repeatedly been shown to be not just wrong, but way wrong, if Dr Nikolov’s explanation of why isn’t correct, then what IS the reason?

71. tchannon says:

Keep this in mind as far as GCM are concerned. I read it as the models are less than 3D
http://declineeffect.com/?page_id=189

72. B_Happy says:

“Given that the models get it wrong, have no hindcast capability, no predictive capability, and have repeatedly been shown to be not just wrong, but way wrong, if Dr Nikolov’s explanation of why isn’t correct, then what IS the reason?”

Well, that is probably a bit of an exaggeration, but I agree that the models are not satisfactory. I am not a climate scientist, by the way – I work in a branch of physical chemistry which also uses a lot of computer time and shares a few techniques, but I deal with systems about 10^12 times smaller! I would say that the current GCMs are interesting as scientific explorations, but do not have the accuracy needed to justify the kind of political and economic decisions that they are being used to support. As for what I think is wrong, well, that was alluded to above.

They are trying to use the Navier-Stokes equations, which model flow in gases, and solve them using a multi-grid, multi-timestep approach. Multi-grid because they need different-size ‘boxes’ for atmosphere and ocean, and multi-timestep because these evolve at different rates. So they have a whole set of coupled partial differential equations that they are evolving in time, but not all of the couplings (feedbacks) are known accurately. The trouble with this is that the errors can (in fact do) build up over time, i.e. if there are inaccuracies on a particular time step, then these are fed into the next time step along with the parts that are right. Eventually you can end up with nonsense. The particular couplings that are worst described are those linking temperature, humidity and albedo, which are called clouds by most people…

However saying that the couplings are wrong is not the same as saying that there are no couplings – the latter statement is incorrect.
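
The step-to-step error build-up B_Happy describes can be seen in even the simplest time-stepping scheme. A minimal sketch, using forward Euler on dy/dt = y (this shares only the mechanism with GCM numerics, nothing else):

```python
import math

# Forward-Euler integration of dy/dt = y (exact solution: e**t).
# Each step carries a small O(dt**2) truncation error, and those errors
# compound across steps -- the mechanism B_Happy describes for coupled grids.
def euler_error(t_end, n_steps):
    dt = t_end / n_steps
    y = 1.0
    for _ in range(n_steps):
        y += dt * y                  # one imperfect step feeds the next
    return abs(y - math.exp(t_end))  # accumulated error at t_end

for n in (10, 100, 1000):
    print(n, euler_error(5.0, n))    # error shrinks with dt, never vanishes
```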

73. Ned Nikolov says:

B_Happy says:
February 15, 2012 at 10:56 pm

Wayne, I would not take Dr Nikolov’s word for it that convection and radiative transfer are decoupled.

B_Happy, are you a climate modeler, or have you worked with climate models at all? This is not my word, but a fact! Allow me to know my field, please!

Yes, climate models are 3D models, but that refers ONLY to the thermodynamic part of the models. Radiative transfer (RT) code works in 1D (along the vertical axis) only, and RT calculations are performed NOT at every time step, but at every OTHER time step of these models. Also, RT is not solved in the same iteration with convection. Rather, it is solved independently at a given time step, and its results, in terms of heating rates, are then passed to the 3D thermodynamic portion of the model …
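
Schematically, the operator-split calling pattern Dr Nikolov describes looks like the toy below. All the physics here is invented placeholder arithmetic, and nothing in this sketch settles whether real GCMs behave as he claims; only the structure (a 1-D radiation step called every other step, feeding heating rates to a separate dynamics step) mirrors his description.

```python
# Toy operator-split scheme: radiation and dynamics solved separately.
def radiation_step(column_temps, emissivity):
    """1-D 'radiative code': return a heating rate per layer (K/step).
    Placeholder physics: relax each layer toward 300 K."""
    return [0.01 * emissivity * (300.0 - t) for t in column_temps]

def dynamics_step(column_temps, heating_rates):
    """Apply the imported heating rates, then crudely mix the column."""
    temps = [t + h for t, h in zip(column_temps, heating_rates)]
    mean = sum(temps) / len(temps)
    return [t + 0.1 * (mean - t) for t in temps]  # relax toward column mean

temps = [288.0, 270.0, 250.0]        # surface, mid, upper layer (K)
for step in range(4):
    if step % 2 == 0:                # radiation called only every OTHER step
        rates = radiation_step(temps, emissivity=0.8)
    temps = dynamics_step(temps, rates)
print(temps)
```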

74. B_Happy says:

“but at every OTHER time step of these models”

In other words, they are coupled… do you not know what this means?

75. wayne says:

B_Happy, Hoffer beat me to it… no hindcast capability. I also have not taken the time to dig into climate simulations, but I have written multiple solar system simulators where you have something as simple as the 15 most massive bodies all interacting simultaneously. That’s 210 3-D ODE projections per small dt of a sixth-order integrator (position and its successive derivatives: velocity, acceleration, jerk, snap, crackle and pop :)) and the accumulating round-off error will always get you in the end.

I know the problem up close and personal. Just as a dreamed-up example to illustrate… to me a simulator is only marginally ‘correct’ if you can run it backward, let’s say 600 years, and tell within a few arc-minutes that an occultation of x-star by y-planet matches the monk’s records in English-adjusted France’s Julian 1413-Apr-7 at 3:10 am local time. And not relying on one exact confirmation, but hundreds. Then, and only then, when you reverse the integration, can you somewhat trust it to accurately predict positions and times in the near future. Climate simulations have a long, long way to go, if they are ever even possible. That is why I will not spend my time on climate simulations. Far, far too many assumptions, too much questionable data, too many inter-tangled physics limits and processes; some of the physics is questionable itself… mother nature always knows reality… we never will. It is a fantasy and I don’t have time for pure fantasies. I’d rather write a fantasy game; at least then I would understand that it is only fiction.

Once climate models can, in reverse, match the monthly records backwards for something like 20 years, tit-for-tat, I’ll have more confidence in their ability to possibly predict the near future. So far, as of two years ago, they couldn’t even match last year’s temperature records.

That’s how I see it.
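
The accumulating round-off wayne refers to is easy to demonstrate, along with one standard mitigation (Kahan compensated summation). The million-step loop stands in, very loosely, for the many small dt steps of a long integration:

```python
# 0.1 has no exact binary representation, so adding it a million times
# drifts away from the mathematically exact sum -- accumulated round-off.
exact = 100_000.0

naive = 0.0
for _ in range(1_000_000):
    naive += 0.1
drift_naive = naive - exact
print(drift_naive)            # small but nonzero

# Kahan compensated summation carries a running correction term and
# recovers most of the precision lost to round-off.
total, comp = 0.0, 0.0
for _ in range(1_000_000):
    y = 0.1 - comp
    t = total + y
    comp = (t - total) - y    # the round-off just committed
    total = t
drift_kahan = total - exact
print(drift_kahan)
```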

76. B_Happy says:

Wayne,

That is more or less what I said. I was not claiming the models were accurate, just that they were not inaccurate for the reason Dr Nikolov was stating, since he is actually wrong on that specific point.

77. Roger Clague says:

From the sun to the top of the atmosphere, radiation rules; we ignore matter. However, climate models include both radiative transfer and heat transfer, coupled or not.

Observations show that the thermodynamic gas laws alone explain the properties of atmospheres.

We assume at the TOA that radiation in is equal to radiation out. Radiation produces heat and hot things radiate, but the effects cancel each other.

Climate models should be purely thermodynamic, not a mix of radiation in space and heat transfer in matter.

78. wayne says:

B_Happy, OK. I’d still pay heed to what Ned was saying. I believe he is correct, and I myself have a major nit to pick with the radiation code within the models, and with Trenberth’s (among others’) handling of radiation: the one-dimensional aspect. But it will take a while to compose a proper explanation, so check back here later, like tomorrow evening, maybe even Friday. The handling of radiation in all of climate science is messed up and everyone can properly feel it; the numbers never jibe, and I think I have found the reason.

79. tallbloke says:

I’d be grateful if Ned would add any further clarification he feels is needed to this post – I didn’t get a reply from him in time to add it here.

Joel Shore says in an unapproved comment:

tallbloke:

Congratulations on [snip] so that Nikolov’s [snip]. This statement by Nikolov is [snip]: “The whole notion that CO2 changes can affect global climate comes from models and models ONLY! Such effect is predicted by the climate models due to decoupling of radiative transfer from convective heat exchange in their code. In other words, the CO2 warming effect is a result of a physical algorithmic error in climate models, it’s a model artifact with no physical equivalence!!”

In fact, nobody (including N&Z) has challenged my contention, which is: the reason why N&Z got rid of the radiative greenhouse effect by adding convection is that they added it in totally incorrectly. We know that because they tell us they added it in incorrectly when they say, “Equation (4) dramatically alters the solution to Eq. (3) by collapsing the difference between Ts, Ta and Te and virtually erasing the GHE (Fig. 3).” I.e., they tell us that they added in convection in a way that leads to the completely unphysical result of an atmosphere isothermal with height, i.e., with zero lapse rate. And, any elementary climate science book would tell them that this would indeed eliminate the radiative greenhouse effect.

I guess you are trying to make your site the place on the internet where [snip]

[Reply] Hi Joel, thanks for vindicating my reasons for preventing you from turning my blog into a ruckus of inflammatory comment, misdirection and unjustified accusation.

N&Z say:
“Pressure by itself is not a source of energy! Instead, it enhances (amplifies) the energy supplied by an external source such as the Sun through density-dependent rates of molecular collision.”

So temperatures nearer the surface, where the atmosphere is under greater pressure and the density of the compressible air is higher, are warmer than those at high altitude, in a proportion which approximates the observed lapse rate. The pressure-induced thermal gradient is not included in equations 3 and 4 because it is not required for the purpose of demonstrating the inadequacy of radiative activity to account for the GHE (or the lapse rate); this is why the conceptual system under consideration is isothermal.

N&Z add: “Rog, we are only showing the impact of adding convection on a ‘thin-slice’ one-dimensional model to demonstrate the effect. We go on to say: ‘These results do not change when using multi-layer models. In radiative transfer models, Ts increases with ϵ not as a result of heat trapping by greenhouse gases, but due to the lack of convective cooling [in the radiative transfer models], thus requiring a larger thermal gradient to export the necessary amount of heat.’”

Cheers

Rog

I’ll add the publishable parts of Joel’s next reply plus any further response necessary after the weekend.


81. Robert Brown says:

Dear Dr. Nikolov,

Thank you for the courtesy of a serious reply. Allow me to address your points one at a time.

1) I have no problem with your expressing the GHE or ATE as a dimensionless ratio.

2) I do not mean to suggest that $T_{gb}$ in your paper is arbitrary. However, in computing it you use a single $\alpha$ for the Earth, the Moon, Europa and Venus alike, and this number bears no resemblance to their actual bond albedos. Unless you consider the solid high-albedo “ice” coating nearly atmosphere-free Europa and extremely-diffuse-atmosphere Triton (in Triton’s case, N_2 ice), this doesn’t make the slightest bit of sense. The entire point of the insolation computation is to determine the fraction of solar energy that heats the planet and must ultimately be lost through radiation. The whole point of the albedo in this computation is that it is a direct measure of the fraction of energy reflected away without causing heating. Why bother with albedo in the first place if you’re going to do this?

I’ll tell you why. Because by doing so, it becomes an irrelevant scale factor — you’ve eliminated a source of variability for the planets. You can indeed factor $(1 - \alpha_{gb})^{1/4}$ (and all the other constants) out from under the integral in your equation (2), and write $T_{gb}(p) = C\, S_0(p)^{1/4}$, where $C$ is a constant for all the planets and $S_0(p)$ is the TOA TSI for planet $p$. When you form the dimensionless ratio $N_{TE}$, the constant simply doesn’t matter as it no longer contributes to the variability, and you’ve made $S_0$ the only variable. This is just a projection technique, in other words.

It introduces significant errors into your table 1 number for $N_{TE}$. The data in this table has other problems. When I look for the bond albedo of Venus (for example) in actual publications such as:

http://www.sciencedirect.com/science/article/pii/S0019103505005105

I get 0.9, not 0.75, and you do not provide anything but “multiple references using cross-referencing”, which makes it hard for me to assess whether your number is likely to be better. In this case $((1 - 0.9)/(1 - 0.1))^{0.25} \approx 0.57$, and the error associated with ignoring the true bond albedo in favor of an artificial one that turns $T_{gb}$ into a direct proxy for $S_0$ could be as high as 43%.

In the end, the reason it doesn’t matter much is the forgiving nature of 1/4 powers, and yes I understand that $T_{gb}$ is always an artificial measure, but it doesn’t help to have you do a better job of computing it by accounting for spherical geometry and $S_{microwave}$ and then a worse job of handling the albedo without any estimation of errors!
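
Dr Brown’s arithmetic is quick to check, using his own example values (0.9 for Venus’s bond albedo vs. the one-size-fits-all 0.1):

```python
# Ratio of gray-body temperatures computed with the true bond albedo (0.9)
# versus the uniform value (0.1): the albedo enters only as a 1/4 power.
ratio = ((1 - 0.9) / (1 - 0.1)) ** 0.25
print(ratio)   # ~0.577, i.e. a temperature error of over 40%

# The 1/4 power is "forgiving": a 9x error in the absorbed fraction
# becomes less than a 2x error in temperature.
```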

Still, I appreciate what you are trying to do, so let’s just let this go for the moment and concentrate on the rest of it. Bear in mind that I am being critical but I am not hostile to your efforts. Indeed, I agree that the “33 degree warming” number is bullshit, and think that in general your improved formula for computing $T_{gb}$ is an improvement, although it would be improved still more if you didn’t insist on making every planet into the Moon as far as albedo is concerned. I also think there is still more work to be done here, because I do not agree with some of your remarks (made in other papers of yours I’ve grabbed) on just how to compute an average surface temperature for the purposes of considering outgoing radiation. But we can discuss this (if you like) another time.

3) Our analysis revealed that mean surface total pressure (Ps) is the only parameter that nearly completely explains the ATE values for all 8 planets. No other parameters such as ‘greenhouse-gas’ concentrations or their partial pressures, or the actual absorbed radiation by planets (accounting for observed albedos) came even close to describing the ATE variation. Hence, the derivation of Eq. 7. Again, NTE(Ps) was derived using non-linear regression!

Before I start on the substance of this, let’s get one thing straight. One does not “derive” an equation using regression. One fits data to a presumed functional form using regression. One derives an equation by using the laws of physics, algebra, calculus, geometry, things like that. If you want to be picky, one derives theorems from axioms, but in physics a “derivation” invariably means proceeding from the axioms of physics — the laws of nature, or accepted idealized empirical formulae that themselves may or may not be derived — to a result.

For contrast, you arguably derived your variant (2) of the usual $T_e$ formula — you could have provided a lot more detail, but what you provide is enough for me to see what you are doing, and since I already have a good idea of where $T_e$ comes from I can at least assess whether or not I agree with your derivation, whether you did your spherical integrals correctly, and I can identify where I do not agree, e.g. using a one-size-fits-all albedo that completely defeats the purpose of this idealized measure of the integrated absorbed power. In my opinion, of course. The sunlight directly reflected from Europa’s shiny white surface does not contribute to its surface temperature.

On the other hand, you did not even justify the form of the function(s) used in your fit using physical laws, and when I point out that it contains implicit physical constants (the pressures required to make the arguments dimensionless) that cannot possibly be justified — again, in my opinion, but feel free to prove me wrong — you loudly ignore this. So please, do not assert that you have derived equation 7 or 8. It isn’t even semi-empirical, it is purely empirical and ad hoc.

Next let’s think about your $T_s$ data in table 1. You provide it, and $N_{TE}$, and just about everything in your table, to absurd precision. Do you seriously mean to assert that the Earth’s mean temperature is exactly 287.6 K? Has this temperature historically been constant? Even for the Earth, surely the best measured body in the Solar system, there is considerable argument over just what the average surface temperature is (much of it on this very blog) and it varies by at least 10K (3%) over time scales as short as a few thousand years.

[Parenthetically, if your equation 7 were truly predictive, how would it predict this? Are the ice ages caused by the Earth losing atmosphere and hence surface pressure? Do they end because the pressure goes up?]

[Reply] Don’t forget the other half of the equation – Insolation, the distribution of which changes considerably with the changes of obliquity, precessional orientation and orbital eccentricity the earth undergoes on these timescales. – TB.

It has varied by over a degree on a timescale of a mere 100-150 years. Surely “288 K” would do for the temperature, and “287 +/- 2 K” would be a better descriptor still on the timescale of centuries, given that we are probably at a local high point.

Or perhaps not. Perhaps you have a source that you rely on to give you more than an uncertain measure of the Earth’s temperature, one with error bars and without tenths of a degree. If we assume (reasonably) that the order of uncertainty in the Earth’s temperature is 1%, surely the order of uncertainty in all of the other planets with the possible exception of the Moon is an order of magnitude greater.

Maybe you disagree. Maybe you can cite references to support the temperatures you give in this table, and a claim that they are known right down to that last 0.1 degree, although I can’t imagine that our fundamental sources are different for the outer moons, just about all of which are known only from one or two spacecraft flybys. However, you have not given any references at all to support the data in table 1, so I cannot assess this. Wikipedia has better referential support for its data than this paper.

To pick just one more entry in your table 1, Europa: Wikipedia indicates that it has an equatorial average temperature around 110 K and a polar average temperature around 50 K. You indicate an average surface temperature of 73.4 K. Yet one does not have to do the integrals to see that this is inconsistent with Wikipedia’s result. A straight arithmetic average of the two is higher than this, and there is much more surface area at the equator, due to the Jacobian you so ably included in your improved integral in equation (2). Guesstimating the integral, the mean surface temperature should be closer to 90 K, although this would still carry a large error estimate, would it not, and there is no chance it could actually be accurate to 0.1 degree K.
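
For what it’s worth, the guesstimate can be made explicit. Assuming, purely for illustration, a temperature profile that falls linearly with latitude from 110 K at the equator to 50 K at the pole (the two Wikipedia endpoints), the cos(latitude) area weighting does land the mean near 90 K:

```python
import numpy as np

# Purely illustrative profile: temperature falling linearly with latitude
# from 110 K (equator) to 50 K (pole).  The spherical area element carries
# a cos(latitude) Jacobian, so the warm equatorial bands dominate the mean.
lat = np.linspace(0.0, np.pi / 2, 200001)        # latitude in radians
T = 110.0 - 60.0 * lat / (np.pi / 2)             # assumed profile, K
w = np.cos(lat)                                  # area weight
T_mean = np.sum(T * w) / np.sum(w)
print(f"area-weighted mean: {T_mean:.1f} K")     # ~88 K
```

An unweighted average of the same profile gives 80 K; the Jacobian pulls it up by roughly 8 K, and no plausible profile between those endpoints gets anywhere near 73.4 K.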

Before I or anyone can consider the goodness, or uniqueness, of your fit to your data, surely one needs to have the probable or possible sources of error accounted for and error bars included in the numbers fed to the regression program. One can get truly horrible errors fitting a set of noisy data with a single one-size-fits-all error bar (especially one that is too small, so that it places too much weight in the fit on data that is actually not known particularly accurately, and even more so when one is fitting a small set of data with a large set of parameters). In the meantime, as I said, fit the data with a cubic spline — it is just as meaningful. What you’ve done is no different from Roy Spencer’s “cubic fit” presented on his lower troposphere temperature curve — a curve that smoothly interpolates the data, sure, but that is physically unmotivated and hence meaningless except as a guide to the eye. Spencer openly acknowledges this. You’ve written a paper claiming that your arbitrary fit is “derived” by virtue of roughly interpolating the data.
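
The spline remark can be made literal. A cubic spline passes through any small monotonic data set with zero residual and zero physical content; the (pressure, ratio) pairs below are made-up stand-ins, not N&Z’s Table 1:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Made-up stand-in pairs (NOT N&Z's data): log10 surface pressure in Pa
# against a monotonic NTE-like ratio.
logP = np.array([-7.0, -2.0, 0.0, 2.0, 5.0, 7.0])
ratio = np.array([1.00, 1.01, 1.10, 1.45, 2.40, 3.20])

spline = CubicSpline(logP, ratio)
residuals = ratio - spline(logP)
print("max |residual|:", np.max(np.abs(residuals)))  # interpolates exactly
```

The spline “fits” the data perfectly, and nobody would claim its piecewise coefficients mean anything physical.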

Now let’s talk about Equation 7 itself. You yourself in figure 6 plot “potential temperature”. Potential temperature is built from a dimensionless ratio like the one you hope to understand in the form of $N_{TE}$ — I get it. Note well that in the case of potential temperature, because it is based on and indeed actually derived from some fundamental physics, the two numbers that appear, $P_0 = 1$ atm and the exponent $0.285$, are both entirely physical. The one is a reference pressure that not only is relevant but sets the scale of pressure-temperature relationships for the entire atmosphere; the other is related to $\gamma$ and the atmosphere’s actual molecular composition. This is characteristic of “good physics”, or at least of plausible physics. The quantities make physical sense even before one digs into and learns to understand where they come from.
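
For contrast with Eq. 7’s coefficients, both numbers in the potential-temperature formula can be written down from dry-air constants alone (standard textbook values, nothing taken from N&Z):

```python
# Potential temperature: theta = T * (P0 / P) ** (R/cp).  Both constants are
# physical: P0 is a reference pressure (1000 hPa here) and the exponent
# R/cp ~ 0.286 follows directly from dry air's molecular composition.
R_dry = 287.05    # J kg^-1 K^-1, specific gas constant of dry air
cp = 1004.6       # J kg^-1 K^-1, specific heat at constant pressure
kappa = R_dry / cp

def potential_temperature(T_k, p_pa, p0_pa=100000.0):
    """Temperature a parcel would have if brought adiabatically to p0."""
    return T_k * (p0_pa / p_pa) ** kappa

# A parcel at 500 hPa and 250 K, brought down to 1000 hPa:
theta = potential_temperature(250.0, 50000.0)
print(f"kappa = {kappa:.3f}, theta = {theta:.1f} K")
```

The 0.286 that pops out is, to rounding, the 0.285 exponent in figure 6: every number in the formula traces back to measurable properties of the gas.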

For some reason you presented Equation 7, the result of your nonlinear regression fit, in a form that was not as manifestly dimensionless as potential temperature in figure 6, after claiming it as inspiration. I have helped you out there by filling in the characteristic pressures that go with your choice of exponents. These pressures are clearly absurd, are they not? Unlike $P_0$ in potential temperature, 54,000 atmospheres is a pressure that appears nowhere in the physics describing ideal gases, in physical processes that might possibly be relevant on the surface of Europa or Triton or Mars or Venus.

I’ve played the “fitting nonlinear functions” game myself, for years, as part of finding critical exponents from scaling computations, and in the process I learned a thing or two. One thing I learned is that it is often possible to get more than one fit that “works”, and that the fit that works best may not be the one you are seeking, the one that makes physical sense. Often this is a matter of the error bars or lack thereof. Too-small error bars will often “constrain” the best fit away from the true trend hidden in the data. The problem is compounded when one is fitting data with multiple independent trends, such as a fast decay mixed with a slow decay (a multi-exponential).

Your data clearly has such multiple trends with completely distinct physics — you misrepresent it as a single fit, but presenting it in dimensionless form clearly shows that you are really proposing two different physical processes occurring at the same time with completely different characteristic dimensions. I think this is as clear a signal as you will ever see that you are overfitting the information content of the data, and would do far better to just fit the larger planets on your list with a single dimensionless form, preferably after putting error estimates into all of the data in your table 1, using the correct bond albedo for the planets in question, and adding references.
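
The error-bar point is easy to demonstrate. A sketch with entirely made-up data (a power-law-ish trend where two points are far better measured than the rest, vaguely like a planetary table): the same data fitted with and without per-point sigmas gives different best-fit parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative only: a power-law-ish trend sampled with very unequal accuracy,
# mimicking a data set where two bodies are far better measured than the rest.
x = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0, 1e3, 1e4])
y_true = 1.0 + 0.4 * x ** 0.25
noise = np.array([0.10, -0.20, 0.15, 0.01, -0.01, 0.25, -0.30, 0.20])
y = y_true + noise
sigma = np.array([0.30, 0.30, 0.30, 0.02, 0.02, 0.30, 0.30, 0.30])

def model(x, a, b):
    return 1.0 + a * x ** b

p_unweighted, _ = curve_fit(model, x, y, p0=[0.5, 0.3])
p_weighted, _ = curve_fit(model, x, y, p0=[0.5, 0.3],
                          sigma=sigma, absolute_sigma=True)
print("unweighted (a, b):", p_unweighted)
print("weighted   (a, b):", p_weighted)
```

Without the `sigma` argument every point is trusted equally, and the poorly measured points drag the exponent around; with it, the well-measured points anchor the fit, and only then does a chi-squared for the result mean anything.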

In summary, the tight exponential relationship between NTE and pressure is real, and the fact that it is described by a function whose coefficients cannot be easily interpreted in terms of known physical quantities does not invalidate that relationship! This is because it is a higher-order emergent relationship, which summarizes the net effect of countless atmospheric processes including the formation of clouds and cloud albedo. This relationship might not be precisely reproducible in a lab, simply because it may require a planetary scale to manifest. However, a lab experiment should be able to validate the overall shape of the curve defining the thermal enhancement effect of pressure over an airless surface. BTW, this shape is already supported by the response function of relative adiabatic heating defined by Poisson’s formula (Fig. 6 in our paper).

Actually, as I’ve pointed out very precisely above, equation 8 is just an algebraic restatement of your definition of $N_{TE}$. You’ve simply inserted an empirical heuristic fit to your data to replace the data itself. This isn’t a derivation of anything at all, it is curve fitting, which is a game with rules. Mann, Bradley and Hughes tried to play this game and broke the rules when they built the infamous Hockey Stick. McKitrick and McIntyre called them on it.

I’m trying to keep you from making the same sort of mistake. You fit the data with the product of two exponentials of ratios of the surface pressure to arbitrary powers. Why? Well, exponentials are functions that are 1 when their argument is zero, so you fit two of your data points (badly if you leave out error bars or use the actual data in your table) for free without using a fit parameter, and come damn close to a third, close enough that — lacking error bars and given a monotonic relationship — you can count it as “well fit” whatever the error really is.
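
To see the “two free data points” claim concretely, here is the fitted form as it is rendered in this thread (reference pressures 54,000 bar and 202 bar, exponents 0.065 and 0.385); treat this as my paraphrase of Eq. 7, not the paper’s own typography:

```python
import math

# Eq. 7 as rendered in this thread: a product of two exponentials of
# (P_s / P_ref) ** exponent, with the reference pressures and exponents
# quoted above.  A paraphrase for illustration, not the published form.
P1, E1 = 54000.0, 0.065   # bar
P2, E2 = 202.0, 0.385     # bar

def nte(ps_bar):
    return math.exp((ps_bar / P1) ** E1) * math.exp((ps_bar / P2) ** E2)

print(nte(0.0))     # exactly 1: airless bodies are "fit" by construction
print(nte(1.013))   # ~1.86 at roughly Earth's surface pressure
```

At zero pressure both exponentials are exp(0) = 1, so the two airless bodies in the table cost the fit nothing at all, exactly as described above.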

You are then really “fitting” five data points with four free parameters. Skeptics often quite rightly mock the warmist crowd for their global climate models with highly nonlinear behavior and enough free parameters that they can be tuned to fit past temperature data, accurate or not, as nicely as you please, and we are not surprised when those fits of past data turn out to be poor predictors of either future trends or even earlier past data (hindcast). We mock them because it is well known in the model building business that with enough free parameters and the right choice of functional shapes you can fit anything, but unless you treat error in the data with the respect it deserves and include some actual physics in the choice of functions being fit, the result is unlikely to actually capture the physics.

Listen, in fact, to your own argument. There is a dazzling amount of physics involved in the processes that establish the surface temperatures of the bodies in your list. One can split them up into completely distinct groups — two airless bodies near the sun with no surface ice; two nearly airless bodies that are completely coated in high-albedo ice, one water ice, one frozen N_2, one of which is heated by a tidal process that still isn’t well understood, the other of which is hypothesized to have a greenhouse trapping of heat by the semi-transparent N_2 ice that replenishes its atmosphere. The four planets with substantial atmospheres all have an optically thick greenhouse gas content, and all of them therefore have tropospheres, stratospheres, and lapse rates driven by vertical convection across the temperature differential between the surface and the tropopause.

Yet somehow none of this matters? Calling it a “higher-order emergent relationship” is just fancy talk for “we found a fit and have no idea what it means”, and it isn’t surprising that you can fit the data with an arbitrary form with four free parameters, especially without error bars or any criterion for judging goodness of fit.

How is your fit more informative than fitting the data with a spline, or with a polynomial, or with anything else one might imagine? I’ve already pointed out that your figure 6 is precisely why one should not believe your result. In it, $P_0$ means something, and so does the exponent. There is nothing “emergent” about it, it is really a derived result, and when it turns out to approximately describe actual atmospheres we gain understanding from it.

What does the 54,000 bar in your fit mean? What does the 202 bar in your fit mean? What does the exponent 0.065 mean? You cannot answer any of these questions because you have no idea. How could you? They are all completely irrelevant to the pressures present on the planets in question. They have precisely as much meaning as the arbitrary coefficients of a cubic spline or any other interpolating function or approximate fit function that could be used to approximate the data, quite possibly as well or better than the fit that you found if you actually add in error bars.

[Reply] Please could Robert explain the physical basis of the imaginary number ‘i’ (or ‘j’ in engineering), which when multiplied by itself gives minus 1, and which is used extensively in electronics design and control engineering? Presumably any competent Duke physicist at the time of the invention of this imaginary quantity, which defies the laws of mathematics, would have rejected it out of hand as “absurd nonsense” and therefore of no possible use? – Thanks – TB.

So far the total information content of your paper is:

* We do a better job of defining/computing a baseline greybody temperature $T_{gb}$ for the planets.

Yes and no. Yes to the integral, no to ignoring the bond albedo, especially in the case of Europa and Triton where there is no conceivable justification for doing so.

* We define a dimensionless ratio between empirical $T_s$ and $T_{gb}$. We tabulate this computed ratio for the data, forming an empirical $N_{TE}$ dataset with eight objects.

Sure.

* We heuristically fit a four parameter functional form. The fit works. It is unique. It must be meaningful.

Lacking error bars on your data, you cannot possibly assert that it is unique. There could be dozens of functional forms, some of them with fewer free parameters, that produce a comparable Pearson’s $\chi^2$ for the fit once you add in error bars. I rather expect that there will be, especially if you correctly treat the bond albedo for bodies with almost no atmosphere and no exposed regolith, which reflect away over half their incident insolation without being heated by it.

The fit you obtain is not meaningful. If you disagree, give me a physical argument for the 54000 bar, the 202 bar, and the exponent of 0.065. The only parameter of your four parameter fit that is plausible is the 0.385, although even that number would need to be connected to some actual physics in order to obtain meaning.

* The real meaning is that only surface pressure explains surface temperature, because we were able to fit a functional form to $T_s(P_s)$.

Excuse me? I can fit any set of data pairs with any sufficiently large basis. If the data is monotonic I can almost certainly fit it with fewer free parameters than there are data points, especially if I completely ignore the error estimates for the data points! Lacking the error bars, you cannot even compute $R^2$ and plausibly reject the null hypothesis of no trend at all! I’m not suggesting that this is reasonable for your particular data set, only that you are far from presenting a plausible argument for uniqueness, or for correlation that implies causality. For two of the bodies in your list, it’s rather likely that surface temperature implies surface pressure, not the other way around! The chemical equilibrium pressure of N_2 over a thick layer of N_2 ice, or O_2 over water ice, is far more likely to be the self-consistent result of surface temperature, not its cause.
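
The “sufficiently large basis” point in one line: n points are interpolated exactly by a degree n-1 polynomial, whatever the data are.

```python
import numpy as np

# Eight arbitrary (x, y) pairs are fitted *exactly* by a degree-7 polynomial:
# n points, n coefficients, zero residual, zero physics.  The y values are
# deliberately patternless.
x = np.linspace(0.0, 2.0, 8)
y = np.array([1.0, 0.7, 1.9, 0.2, 2.5, 1.1, 3.0, 0.4])

coeffs = np.polyfit(x, y, deg=len(x) - 1)
residuals = y - np.polyval(coeffs, x)
print("max |residual|:", np.max(np.abs(residuals)))  # effectively zero
```

A perfect fit to patternless data, and not one of the eight coefficients means anything, which is exactly the trap a four-parameter fit to eight unweighted points walks into.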

In the end, you are left where you started — there is a monotonic trend in the data that you cannot explain or derive, and because of flaws in your statistical analysis you cannot even resolve the differences between competing explanations, including the simplest one: that the last four planets have surface temperatures dominated by the greenhouse effect and their albedo, that the first two are greybodies to a decent approximation (which somehow turned into 1.000 to four presumed significant digits in your Table 1), and that two are special cases described by completely different physics than the others (dominated by the incorrectly used albedo), and to some extent different even from each other.

Nothing in your analysis rejects this as a null hypothesis. You cannot even assert that it does without including an error analysis in your data and fit.

To conclude, you have two choices. You can ignore my objections above and plow ahead with your paper as is. You might get it past a referee, although I somewhat doubt it. You can in the process continue to get all sorts of uncritical positive feedback on it on the pages of this blog and have it trumpeted as “proof” that there is what, no actual GHE? That gravity alone heats atmospheres? I’ve heard all sorts of absurd punchlines bandied about, and your result can be used to support any or all of them if one ignores the statistical and methodological flaws.

Or, you can fix your paper. Include references, for example. Use the correct bond albedos. Here’s a small challenge for you: apply your formula to Callisto, to Ganymede, to other planetary bodies. Callisto is an excellent case in point. It has an albedo almost twice that of the Moon. It is the warmest of Jupiter’s moons — warmer in particular than Europa, for good reason given the difference in their albedos. It has an atmosphere with a surface pressure around 0.75 microPa, so it will fit right in there on your table. It puts the immediate lie to any assertion that your fit is either predictive or universal: its surface pressure is lower than Europa’s and its surface temperature is higher than Europa’s, and if you use your “universal” $T_{gb}$ formula for it, the lower albedo will further raise $N_{TE}$ for it relative to Europa. Your nice monotonic curve won’t be monotonic any more, and you can see some of the consequences of ignoring albedo, atmospheric composition (Callisto’s is mostly CO_2, hmmm), error estimates, and using cherrypicked data to increase the “miraculous” impact of your result.

I honestly hope that you fix your paper. There may well be something worth reporting in there in the end, once you stop trying to prove a specific thing and start letting the data speak. I actually rather like what you are trying to do with $T_{gb}$, but if you want to actually improve this you can’t just leave physics out at will, especially not when looking only at the temperatures of moons tells you that your assumptions are incorrect even before you get to actual planets with actual atmospheres. Also, if you do your statistical fits correctly, you might find something useful — a less “miraculous” fit that is still good given the error bars and that has characteristic pressures and exponents with some meaning.

Best regards,

rgb

82. Robert Brown says:

Oops, Tallbloke please insert my missing \$. Sorry.

rgb

83. wayne says:

Robert Brown, you have not read N&K’s paper correctly. N&K’s Tgb has nothing to do with the actual atmospheric Bond albedo. Tgb is defined in the paper as the albedo and emissivity of that planet or body with NO atmosphere… no ice… no oceans… no clouds, possibly no rotation though in the definition that matters little. You start off incorrect in your point two from the very beginning. Now I’ll read the rest of your lengthy comment.

84. Crom says:

Tallbloke, you may want to find a better analogy. Asking about the physical basis of ‘i’ is the same as asking what the physical basis of the number 42 is. It is just a number unless it is being used in a specific context to represent something physical. In this case, as Dr. Brown has repeatedly noted, the construction of the N&Z equations puts the regression coefficients in a context that gives them a physical meaning (they have units of pressure).

Also, I would suggest to you that ‘i’ does not defy the laws of mathematics at all. It is just another abstract mathematical concept that is useful in solving some physically meaningful problems.

[Reply] Know of any other ‘numbers’ which, when multiplied by themselves, give a negative number?

85. Ned Nikolov says:

Dear Dr. Brown:

… if your equation 7 were truly predictive, how would it predict this? Are the ice ages caused by the Earth losing atmosphere and hence surface pressure? Do they end because the pressure goes up?”

The answer is partially contained in Section 5 of our first paper, and specifically in Fig. 10. Ice ages are NOT caused by pressure changes; they are caused by orbital variations (the so-called Milankovitch cycles). Earth’s atmospheric pressure has been relatively stable for the past 1.8M years. Pressure changes typically occur (and control global temperature) on a time scale of millions to tens of millions of years …

86. Ned Nikolov says:

wayne says (February 17, 2012 at 12:01 am)

Robert Brown, you have not read N&K’s paper correctly. N&K’s Tgb has nothing to do with the actual atmospheric Bond albedo. Tgb is defined in the paper as the albedo and emissivity of that planet or body with NO atmosphere… no ice… no oceans… no clouds, possibly no rotation though in the definition that matters little. You start off incorrect in your point two from the very beginning.

Thank you, Wayne! You made quite a correct observation! … As I mentioned previously, a lot of details are not being picked up (understood) by many bloggers, including physicists, on the first read. That’s because people always look through the glasses they are used to wearing, while a new paradigm requires a new pair of glasses …

[…]

87. Bob_FJ says:

N&Z propose a change of paradigm in “climate science”.

One of my favorite paradigm shifts was proposed by Alfred Wegener concerning his unproven “continental drift”, (tectonics), for which he was scorned by his contemporaries, only to be accepted relatively recently.

A controversial blogger “Myrrh” at WUWT has cited extensive links that convincingly explain WHY people living in areas of low exposure to sunlight by latitude have evolved to have pale skins, whilst being descendants of black peoples in Africa. The evidence is strong that vitamin D is multiply essential for health, and that much more D is generated by UV in pale skin.
However, this flies in the face of the medical and governmental church, which collectively insists that we should not expose our skin to sunshine.
See Myrrh’s post: http://wattsupwiththat.com/2012/02/03/monckton-responds-to-skeptical-science/#comment-895283
And my following response, but there is a lot of reading in the links which may not be time effective for N&Z to follow.

88. Robert Brown says:

Thank you, Wayne! You made quite a correct observation! … As I mentioned previously, a lot of details are not being picked up (understood) by many bloggers, including physicists, on the first read. That’s because people always look through the glasses they are used to wearing, while a new paradigm requires a new pair of glasses …

Dear Dr. Nikolov,

I assure you that I have not missed this point. However, it is completely irrelevant.

I have just completed applying your hypothesis, with your own numbers for T_gb per object, to the actual commonly accepted numbers for T_s for the planets in question. Curiously, with the exception of the last three points, not a single planet lies on your curve. I have also applied your formula, with the T_gb you supply for objects orbiting Jupiter (e.g. Europa), to Io, Ganymede, and Callisto, all of which have even more atmosphere than Europa and all of which are considerably warmer — but not in the right direction — Io has the greatest surface pressure by three orders of magnitude, but Callisto has the greatest mean temperature.

None of them — including Europa, whose mean temperature you underestimate by over 30% — lies remotely near your curve using your own T_gb. However, the warmer temperature of Callisto is instantly understandable given its low albedo.

This forces me to ask the question — exactly how did you come by the numbers in your Table 1 for T_s for the planetary bodies in question? When I look at the goodness of the fit to your model, it appears to me to be impossibly good. Literally impossibly. If one ascribes even modest error bars to the T_s and P_s in question, your curve would seem to put each and every point dead on the curve. Surely you realize that this is extremely unlikely in any fit involving real world data. You do not provide any references for the numbers in your Table 1 so I cannot check them against the references you actually used, but they are in significant disagreement with the numbers that I found in every instance but Titan, the Earth, and Venus.

One critical aspect of science is reproducibility. I am endeavoring to reproduce your results, but find myself unable to. Please help me by explaining the sources of your data and how you arrived at the numbers in your table 1.

I’d be happy to provide the table of numbers I used, a description of their provenance, and the octave/matlab code I used to perform the comparison, or if you would prefer I can just publish the graph itself on this blog, but before I do I would really like to see where your numbers come from and how it happens that they lie so perfectly on your curve. For example, in your table 1 you find that Mercury and the Moon both have exactly $N_{TE} = 1.000$ — to four digits, presumably. This is all by itself simply not the case. Your estimate of Mercury’s temperature is egregiously low, and its albedo is not (according to most published work) equal to that of the Moon. Neither of them has a significant atmosphere, so one would expect their mean temperatures to be determined by their actual albedos according to your own reasoning!

Yet somehow they end up having exactly the right surface temperatures to have the same $N_{TE}$, in spite of the fact that physically this is quite impossible by your own arguments. How did that work out, exactly?

rgb

89. B_Happy says:

I have also asked about the Galilean satellites, and have as yet received no reply to my question regarding the temperatures. Repeating Robert Brown’s point – where did the experimental data come from?
As a further complication, even calculating S0 for these satellites is problematic unless allowance is made for a) tidal heating, b) radiation from Jupiter itself, and c) the time the satellites spend in Jupiter’s shadow – were any of these allowed for?

90. Anything is possible says:

Robert Brown says:
February 19, 2012 at 10:18 pm

Yup. I have a similar problem with this aspect of the theory.

My thinking is that the formula shouldn’t work on planets with tenuous atmospheres for the very simple reason that the ideal gas law appears to break down under low pressures.

The clue is in the way that the temperature/height relationship, as defined by a stable adiabatic lapse rate, breaks down at the top of the troposphere (the tropopause) on every planet with a “mature” atmosphere. It happens on Venus, it happens on Earth, it happens on Titan, and it also appears (according to wiki) to happen on the gas giants – Jupiter, Saturn, Uranus and Neptune.

Even more significantly perhaps, it seems to happen at a similar atmospheric pressure (200–250 mb) on ALL these planets.

I’d be very interested to hear the thoughts of all you professional physicists on this…….

91. Robert Brown says:

B_Happy says:
February 20, 2012 at 12:15 am

I have also asked about the Galilean satellites, and have as yet received no reply to my question regarding the temperatures. Repeating Robert Browns point – where did the experimental data come from?
As a further complication, even calculating S0 for these satellites is problematic, unless allowance is made for a) tidal heating b) radiation from Jupiter itself and c) the time the satellites spend in Jupiter’s shadow – were any of these allowed for?

Good points, B. However, those corrections seem as though they are in the wrong direction, given that Io is very close to Jupiter, covered with a relatively dense atmosphere (that is still quite close to hard vacuum, of course), and yet cooler than Callisto, which is so distant that tidal heating is surely ignorable. Then there is the conceptual difficulty of pretending that Europa — with an atmosphere that is the most tenuous of the four — has a surface temperature (given in the table as 73.4 K but given on its Wikipedia page as 102 K) that is completely unaffected by the fact that 2/3 of its insolation is reflected back to space without heating anything at all.

Just FYI, Wikipedia has the following data:

Moon       T_s     P_s
Callisto   134 K   7.5 pbar
Io         110 K   300-3000 pbar
Ganymede   110 K   2-25 pbar
Europa     102 K   1 pbar

Given a common T_gb, this data alone completely confounds the “miracle” of equation 7. Io should be the warmest of these moons by far, and all of them should be much, much colder (commensurate with N&Z’s 73.4 K for Europa) in order to fall close to their curve.
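
The non-monotonicity is checkable directly from the Wikipedia numbers quoted above (taking mid-range pressures where a span is given):

```python
# The Wikipedia figures quoted just above; temperatures in kelvin,
# pressures in picobar, midpoints taken where a range is given.
moons = {
    "Callisto": {"T_s": 134.0, "P_s": 7.5},
    "Io":       {"T_s": 110.0, "P_s": 1650.0},  # midpoint of 300-3000 pbar
    "Ganymede": {"T_s": 110.0, "P_s": 13.5},    # midpoint of 2-25 pbar
    "Europa":   {"T_s": 102.0, "P_s": 1.0},
}

by_pressure = sorted(moons, key=lambda m: moons[m]["P_s"])
by_temperature = sorted(moons, key=lambda m: moons[m]["T_s"])
print("ranked by pressure:   ", by_pressure)
print("ranked by temperature:", by_temperature)
# With a common T_gb, equation 7 says these two rankings should agree.
# They don't: Io has ~200x Callisto's pressure yet is 24 K cooler.
```

Two different orderings from four data points, with a shared T_gb, is all it takes to break a claimed monotonic NTE(Ps).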

Wikipedia actually provides references for its numbers. Perhaps Nikolov and Zeller would be so good as to do the same? I’d like an idea of the uncertainties of those numbers as well — if the error bars are as large as it seems they really have to be (given the disparity between their numbers and the published/accepted numbers), then one can actually compute a p-value or chi-squared for their fit and see if it is reasonable.
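
Once sigmas exist, that goodness-of-fit check is a three-line computation. A sketch with entirely hypothetical observed values, model predictions and per-point errors (none of these are N&Z’s numbers):

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical numbers throughout: observed ratios, model predictions, and
# per-point sigmas.  The point is only the mechanics of the test.
obs   = np.array([1.00, 1.02, 1.20, 1.87, 2.30, 2.65])
model = np.array([1.00, 1.00, 1.15, 1.86, 2.35, 2.60])
sigma = np.array([0.02, 0.05, 0.10, 0.05, 0.10, 0.10])

chisq = np.sum(((obs - model) / sigma) ** 2)
dof = len(obs) - 4          # minus 4 fitted parameters, as in Eq. 7
p_value = chi2.sf(chisq, dof)
print(f"chi2 = {chisq:.2f}, dof = {dof}, p = {p_value:.2f}")
```

A reduced chi-squared far below 1, like the near-perfect fit in Table 1 implies, is itself suspicious: it usually means the data were selected or the errors overstated, which is exactly the question on the table.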

rgb

[Reply] N&Z give full information on how they calculate the T_gb for airless bodies in their last paper. FYI Wikipedia has removed the reference to Mercury’s average surface temperature calculated from the ‘classic’ S-B method, since we pointed out that it was physically impossible for it to be higher than the simple average of the equatorial max and min empirical data.

92. Robert Brown says:

Even more significantly perhaps, it seems to happen at a similar atmospheric pressure (200mb-250mb) on ALL these planets.

I’d be very interested to hear the thoughts of all you professional physicists on this…….

My own thoughts are that the usual greenhouse effect is determined by the height (and hence, via the DALR, the temperature) at which the atmosphere becomes optically thin to radiation from greenhouse gases. It isn’t unreasonable that many gases would become optically thin at similar pressures. This also determines the tropopause: below it there is significant vertical convection, driven by the differential heating at the surface and cooling at the top of the troposphere. So the last four planets they attempt to fit — Mars through Venus — all have a troposphere and a stratosphere, and hence have a surface temperature that is related to a DALR, although I personally don’t find them lying on a single N_TE curve (Mars is well off) with N&Z’s T_gb. I haven’t tried computing N_TE with a physically plausible T_gb that uses the actual albedo of the planets involved, but I’m a bit doubtful that it will produce a particularly compelling fit. That’s on my agenda of future work to do with my little octave program.
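
Since the DALR keeps coming up: the number itself is one line of physics, Gamma = g / c_p. A sketch with Earth’s constants (standard values, nothing from N&Z):

```python
# Dry adiabatic lapse rate: Gamma = g / cp.  Earth's values shown; the same
# formula applies on any planet using the local gravity and heat capacity.
g = 9.81        # m s^-2, surface gravity
cp = 1004.6     # J kg^-1 K^-1, specific heat of dry air at constant pressure
gamma = g / cp  # K per metre
print(f"DALR = {gamma * 1000:.2f} K/km")
```

Given a tropopause temperature and height, the surface temperature then follows as roughly T_surface ≈ T_tropopause + Gamma × h, which is the sense in which the surface temperatures of the four thick-atmosphere planets are “related to a DALR”.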

[Reply] Both N&Z and Hans Jelbring agree Mars’ atmosphere is too thin to show a pressure effect on surface temperature. Using actual albedo rather than T_gb won’t work in N&Z’s equations. Willis didn’t understand this either.

The other Moons as you note have very tenuous atmospheres indeed — atmospheres that are already far thinner than the pressure at the tropopause on most planets — and IIRC only Triton has a troposphere at all, and it lacks a stratosphere. There isn’t any reason in the world to think that the physics that dominates their mean surface temperature in any way resembles the physics that dominates the surface temperature of Venus. For one thing even those Moons with greenhouse gases are optically thin and unsaturated and have very little greenhouse effect compared to a planet with an optically thick, saturated GHE.

And as I have noted and do not wish forgotten, N&Z’s equation 7 has completely unphysical reference pressures in it. How the reference pressure of 54,000 bar or 202 bar can arise in any sane way from the consideration of physical principles very much remains to be demonstrated. Personally, I reject it out of hand as evidence that the entire result is wrong unless and until most rigorously proven otherwise.

[Reply] I notice you still haven’t answered the question I last asked five days ago. I’ll be snipping any further repetition of your argument about ‘reference pressure’ since you are unwilling to engage in a discussion of its relevance or applicability.

Perhaps it is worthwhile to go ahead and show the results of applying N&Z’s own T_gb per planet to independently obtained estimates of planetary T_s to form N_TE.

It’s a log-log plot so that one can see the data (spread out over many orders of magnitude in pressure, otherwise). The leftmost x is the Moon, then Mercury, then the four circles are the Jovian moons, then the remaining x’s are Triton, Mars, Earth, Titan and Venus. Note that I do not pretend that my numbers are certain, only that I can tell you where, and how, I got them, and that I think they are pretty good ones, unlikely to be more than maybe 10% off: less in the case of the Earth, Mars and Venus, which all have more or less permanent weather satellites and good data; more in the case of planets known only from single flybys and very long range Hubble studies. In many cases the error ranges I did find were easily 10% for the pressure alone (probably measured indirectly from pressure broadening of spectral lines, something with fairly large uncertainties given a signal from the entire atmosphere and not just the surface).

The +’s are N&Z’s own data from Table 1 in their paper/poster. The curve absolutely precisely goes through the +’s — even a deviation of a few percent in either T_s or P_s would be enough to move them well off, as indeed occurs for all of the points but three in my refit.

How can this be? The + signs are presumably experimental data with uncertainties! There is a “miracle” here indeed!

rgb

[Reply] Did you use surface temperatures calculated with the Stefan-Boltzmann method N&Z have comprehensively shown to be wrong with empirical data for our Moon?

93. B_Happy says:

“Both N&Z and Hans Jelbring agree Mars’ atmosphere is too thin to show a pressure effect on surface temperature. Using actual albedo rather than T_gb won’t work in N&Z’s equations. Willis didn’t understand this either.”

Eh?????

If Mars’s atmosphere is too thin to show a pressure effect, then why is it in their training set, and why do N&Z quote it as a success? Are you claiming that their NTE values work, but actually have nothing to do with the pressure? And if the Martian atmosphere is too thin, why are Europa and Triton in there?

[Reply] Because although the atmosphere is too thin to warm the surface, their equations still work. “Are you claiming…” No, I’m not.

“Did you use surface temperatures calculated with the Stefan-Boltzmann method N&Z have comprehensively shown to be wrong with empirical data for our Moon?”

I would say they have asserted it to be wrong, not that they have shown it. Do you really think the temperature drops to 3K when the sun is not shining?

[Reply] The N&Z method assumes a zero heat capacity for the surface, but gets the average surface temperature right to within 6K for the Moon. The ‘classic S-B method’ gets the Moon’s average surface temperature wrong (too warm) by over 100K.
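The two numbers being compared here are easy to reproduce. For an airless, non-rotating sphere with zero surface heat capacity, every point sits at its local radiative equilibrium, and the spherical average works out analytically to (2/5) of the subsolar temperature, far below the isothermal S-B effective temperature. A minimal sketch, assuming round lunar albedo and irradiance values rather than N&Z’s exact inputs:

```python
sigma = 5.6704e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
S0 = 1362.0         # W m^-2, solar irradiance at 1 AU
albedo = 0.11       # rough lunar Bond albedo (illustrative)

# Classic S-B approach: treat the body as isothermal, absorbing S0(1-a)/4.
T_eff = (S0 * (1 - albedo) / (4 * sigma)) ** 0.25

# Zero-heat-capacity limit: local equilibrium T = T_max * cos(theta)^(1/4)
# on the dayside, ~0 K on the nightside. The spherical average is (2/5)*T_max
# analytically; here we integrate numerically as a check.
T_max = (S0 * (1 - albedo) / sigma) ** 0.25
n = 100000
total = 0.0
for i in range(n):
    mu = (i + 0.5) / n            # mu = cos(theta); solid angle is uniform in mu
    total += T_max * mu ** 0.25
T_mean = 0.5 * total / n          # nightside hemisphere contributes ~0 K

print(f"T_eff (isothermal S-B) = {T_eff:.0f} K")        # ~270 K
print(f"T_mean (zero heat capacity) = {T_mean:.0f} K")  # ~153 K
```

The real Moon rotates slowly and the regolith stores some heat, so measured lunar means (commonly quoted near 200 K) fall between these two limits.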

94. Crom says:

I’ll be snipping any further repetition of your argument about ‘reference pressure’ since you are unwilling to engage in a discussion of its relevance or applicability.

Tallbloke, did you miss Dr. Brown’s rather extensive comments regarding dimensional analysis and Fermi estimation in this thread? Or did they just go over your head?

[Reply] Robert has still not responded to my simple question, which is logically prior to his more rarefied analysis. I notice you haven’t responded to the question I asked you when you interceded on his behalf either. Neither of you will be posting more of the same repetitive verbiage here until they have been properly responded to. Ignoring pertinent questions and instead repeating your own claims is a politician’s rhetorical technique which has no place in scientific discourse.

95. Nick Stokes says:

But could we have a reference for N&Z’s data? Several people have asked for it. The success of the theory rests on their ability to match temperature and pressure data, but where does it come from?

[Reply] Thanks Nick, noted. I hope N&Z will address this in their next paper. Meanwhile, Robert has offered, and I have taken him up on his offer.

96. Stephen Wilde says:

I said this elsewhere but I think it valid here:

Neither the Earth nor the Earth’s atmosphere is a black body. To give black body status to Earth you have to take a point beyond the atmosphere as the ‘surface’ and only then apply SB.

Treating Earth and its atmosphere as two black bodies separated by a vacuum is wholly inappropriate because the Earth and its atmosphere are a single unit interacting primarily via non radiative processes which is where the Gas Laws come in.

So, for planetary bodies separated by a vacuum, apply SB but only at a point outside any atmospheres where radiative processes do indeed dominate exclusively.

For bodies not separated by a vacuum, such as a planet and its atmosphere, apply the Gas Laws because non radiative processes dominate.

AGW has applied radiative physics to a non radiative scenario and the outcome is garbage.

97. Stephen Wilde says:

Applying the SB equations at the contact point between a planet and its atmosphere is no better than applying it at the junction between the Earth’s mantle and the crust.

The SB equations are only of relevance at the junction of atmosphere and space. They can never predict the temperature at the contact point between two differing materials within a planetary system.

98. Robert Brown says:

Tallbloke, I think that you are really grasping at straws if you are trying to assert that the use of the complex unit “i” in mathematics or physics — or for that matter the use of general geometric division algebras of arbitrary grade, since complex numbers are just a step on an infinite series of geometries and number systems — has anything whatsoever to do with Fermi estimation and the appearance of a truly absurd reference pressure in a fit. [snip]

[Reply] I didn’t. I asked you to explain the physical basis for it. Can you do that?

99. Robert Brown says:

Treating Earth and its atmosphere as two black bodies separated by a vacuum is wholly inappropriate because the Earth and its atmosphere are a single unit interacting primarily via non radiative processes which is where the Gas Laws come in.

I completely agree. But it is also completely off topic to the discussion at hand. Radiative balance only makes sense beyond the atmosphere, or if one wants to be very picky, one should really consider a sphere around e.g. the Earth at (say) $2R_E$ (well beyond the atmosphere) and compute the simple flux conservation equation:

$\frac{dU}{dt} + \int \vec{S} \cdot \hat{n} dA = 0$

(or in words, the integrated outgoing flux of the Poynting vector equals the rate of decrease of the total internal energy inside) averaged over a sufficient time and assuming that one can neglect work and stored energy, at least on average.

It isn’t even this simple — the notion of “average” has to be coarse-grained, we cannot really track the ultimate disposition of all of the retained energy when the one doesn’t balance the other — but in general this sort of equation describes the energy flux in and out of the Earth via radiation.
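In a time-averaged steady state the balance above reduces to absorbed solar power equals emitted thermal power, which fixes the effective radiating temperature of the whole Earth-plus-atmosphere system. A minimal sketch with standard round numbers:

```python
sigma = 5.6704e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
S0 = 1362.0         # W m^-2, total solar irradiance
albedo = 0.30       # Earth's Bond albedo (standard round figure)

# Steady state: pi R^2 * S0 * (1 - a)  =  4 pi R^2 * sigma * T_eff^4
# The radius cancels, leaving:
T_eff = (S0 * (1 - albedo) / (4 * sigma)) ** 0.25
print(f"T_eff ≈ {T_eff:.0f} K")   # ~255 K, as seen from outside the atmosphere
```

That roughly 255 K applies to the system viewed from beyond the atmosphere; by itself it says nothing about how temperature is distributed between surface and air.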

rgb

100. Robert Brown says:

[Reply] The N&Z method assumes a zero heat capacity for the surface, but gets the average surface temperature right to within 6K for the Moon. The ‘classic S-B method’ gets the Moon’s average surface temperature wrong (too warm) by over 100K.

6K is over 2% error. Yet $N_{TE} = 1.000$ to four digits in their table 1 data for both the Moon and for Mercury. Curious, don’t you think?

rgb

[Reply] I find it a good deal more curious that the climate science mainstream, and apparently, Duke physicists, still defend the classic S-B planetary equation when it has been demonstrated to be at odds with empirical data of planetary surfaces not by a couple of percent, but by over 50% (!?)

101. rgbatduke says:

Tallbloke, I think that you are really grasping at straws if you are trying to assert that the use of the complex unit “i” in mathematics or physics — or for that matter the use of general geometric division algebras of arbitrary grade, since complex numbers are just a step on an infinite series of geometries and number systems — has anything whatsoever to do with Fermi estimation and the appearance of a truly absurd reference pressure in a fit. [snip]

[Reply] I didn’t. I asked you to explain the physical basis for it. Can you do that?

OK, I’m done.

Best of luck and all that.

rgb

[Reply] Robert has been defeated by an imaginary number. I never imagined that would happen.
It’s easy to see that N&Z could recast their equations to include an algebraic symbol representing the multiplication factor in their equation 7, just as engineers and electricians use ‘j’ in equations for real world solutions that really work, even though they, and Robert, are unable to explain the physical basis for it. Robert’s complaint about an “absurd reference pressure” is therefore itself absurd. Of course, this only gets N&Z’s equation to the level of a heuristic, so this small victory doesn’t close the issue of the underlying physical basis. In the meantime, we’ll await N&Z’s log-log plot so we can compare it with Robert’s, and further ahead, more planetary data their heuristic can be tested against.

102. tallbloke says:

Ned Nikolov says:

The actual temperature profile in Earth’s atmosphere is more complex because of differential absorption of spectral solar radiation by air constituents with increasing altitude. The temperature rise in the stratosphere is due to an increased absorption of UV radiation by ozone (oxygen) molecules with height. Higher levels in the stratosphere absorb MUCH more energetic UV light than lower levels. The pressure effect in terms of relative thermal enhancement is still there and pressure falls with height, but the UV absorption by the higher levels is so much more than that at lower levels that the lapse rate actually becomes inverted. This is similar to the situation we have with Earth and Titan. The NTE factor for Titan is larger than that for Earth (due to higher pressure on Titan), but Titan’s surface is much colder than Earth’s because it receives/absorbs much less solar radiation … The temperature increase with height in the stratosphere has NOTHING to do with the proposed effect of GH gases to slow down or reduce infrared cooling to space. There is no such reduction of cooling! Instead, there is an increased absorption of UV radiation with height.

Reported temperatures of up to 2,500C in the thermosphere reflect a bit of a confusion, because these are temperatures (energy states) of INDIVIDUAL molecules, not of the gas as a whole. For comparison, the temperatures typically measured and reported in the troposphere refer to the energy state of the entire gas volume. The latter are palpable temperatures as opposed to individual-molecule temperatures. Due to an extremely low air density in the thermosphere, the palpable temperature there is quite low! In other words, if you stick a normal thermometer into the thermosphere, it will measure a temperature that is WAY lower than 2,500C … see this Wikipedia page:

Palpable temperatures are basically not comparable with temperatures reflecting the energy state of individual molecules. So, comparing thermospheric temperatures with tropospheric temperatures is a bit like comparing apples and oranges … Not many people realize that fact!

- Ned

103. Stephen Wilde says:

Ned makes a good point about the difficulty of comparing temperatures at different levels.

That is what makes it virtually impossible to demonstrate with our current knowledge and sensing systems that the energy flow from ground to space for a given planet with an atmosphere does actually match the rate of flow that results from the dry adiabatic lapse rate.

Logic, however, suggests that it must be so if the atmosphere is to be retained at all or kept in gaseous form for billions of years.

I agree completely that from tropopause upward there are different thermal responses to solar irradiation at different levels. Indeed that is what alters the vertical temperature profile from above so as to affect the air circulation patterns below the tropopause whilst, at the same time, changes in sea surface temperatures are trying to achieve the same effect from the bottom up.

Climate change is primarily a consequence of the ever changing balance between the top down solar and bottom up oceanic influences on the rate of energy flow through the system, rather than of any change in the energy content of the system.

104. Crom says:

Tallbloke, I find it curious that people who disagree with you are required to answer your seemingly arbitrary questions under penalty of snip. Curious.

To answer your question to me: 2i would be another number that, when multiplied by itself, would result in a negative number. I suspect that you won’t find that answer satisfactory. I’m not really sure what you’re getting at, though. I wasn’t even certain that your question was serious.

And to be clear, I wasn’t trying to “intercede” on Dr. Brown’s behalf. I was hoping that you would reconsider your question and perhaps find a better way to express it. As stated, your question to him does not make sense. Mathematics is abstract and none of it has any physical meaning unless it is being applied to a physical system. Since you did not provide any context (e.g. a particular physics formula), of course ‘i’ has no physical meaning. How could it?

105. tallbloke says:

Joel Shore says:

Robert Brown is someone who you ought to have been able to keep on your side if you were at all reasonable given that he seems strongly pre-disposed toward a skeptical position on AGW. Unfortunately for you, however, he also knows enough physics not to believe silly pseudoscientific nonsense like you and Nikolov and Zeller are peddling.

By the way, you really seem to have no real clue about the science that you are attacking, which means you spend most of your time attacking strawmen.

[Reply] Hi Joel, You set ‘em up, I knock ‘em down.

So is that your considered opinion on the vertical temperature profile of the troposphere? That N&Z’s idea that the main cause of it is the Sun’s energy interacting with the gradient in air pressure caused by gravity, and the consequent higher near surface density with its higher heat capacity, is “silly pseudoscientific nonsense”? It’s very noticeable that you won’t confirm or deny this basic point. I think you are being evasive and unresponsive. Neither of these traits is conducive to proper scientific discourse, so answer the question please.

For myself, I think the ocean has a lot to do with the reason surface T (and consequently marine surface air temp) is what it is. This is because shortwave solar radiation, which penetrates the surface, heats it faster than it can cool overnight by evaporation, convection, conduction and long wave radiation (which has a tough time escaping). That is, until it gets up to a temperature where the processes removing heat from it (and all of them do on average) can work at a rate which sets a rough equilibrium. That temperature seems to be around 275K judging by the bulk of the ocean. It also depends on near surface air temperature to a small extent (but only a small one, since on average the ocean is warmer than the air, and ‘back radiation’ can’t penetrate the surface anyway), but I think N&Z are right in the wider sense that if the mass of the atmosphere wasn’t exerting pressure on it, the ocean would have boiled off into space.

So I’ll continue to provide a venue where the details and premises of their theory, and its strengths and weaknesses, can be calmly discussed without being trash talked by you and Willis, and gish-galloped into the ground like you and he and Robert did at WUWT. As for the quality of Robert’s science, the news that the laws of thermodynamics have been considered and defined in terms of energy rather than heat since the 1880′s doesn’t seem to have reached Duke yet. He is knowledgeable in some specialist areas, but at the end of the day he’s just another person with a false sense of infallibility and a propensity to talk (much) more than listen. He should himself have paid more attention to the Feynman lecture he berated N&Z with, IMO.
I don’t see much in the way of Feynmannian humility in Robert, or you, or Willis for that matter. All of you need to read the Loschmidt thread.

106. wayne says:

“[Reply] I find it a good deal more curious that the climate science mainstream, and apparently, Duke physicists, still defend the classic S-B planetary equation when it has been demonstrated to be at odds with empirical data of planetary surfaces not by a couple of percent, but by over 50% (!?)”

I doubly agree, Roger. I am shocked too at the general attitude of men of science who should be inquisitive. No wonder science is in the shape it is in. I never heard Ned or Karl claim that those four best-determined parameters were set in stone. There is one point where I do agree with Dr. Brown: at some point in the future the physical process needs to be found and explained, and those parameters take on proper form in their units. Heck, the entire equation may even end up in a different form for all anyone knows at this point, but that curve stays. It is too consistent, well formed, and smooth not to drive future investigations to find out why it exists.

107. Stephen Wilde says:

“empirical data of planetary surfaces not by a couple of percent, but by over 50% (!?)”

Well, you would get that if you define the SB ‘surface’ as being within the system beneath an atmosphere and then fail to apply the Gas Laws.

Might as well apply it between the crust and the mantle !!!

108. Tenuc says:

“Robert has been defeated by an imaginary number. I never imagined that would happen.”

Nor me! Being a physicist he should be used to dealing with the imaginary stuff which has been invented to protect the ‘standard model’ and has led us to today’s brand of unphysical physics.

No wonder climate science can’t understand what is actually going on when the underpinnings are collapsing. Time to go back to mechanical explanations for what we see.

109. j.j.m.gommers says:

About the impact of rotation on an airless planet: I did numerical calculations for the entire equator for two cases, (a) non-rotating and (b) infinitely fast rotating:
a. Tmean(GB) = 0.5 Te
b. Tmean(GB) = Te
Conclusion: the temperatures converge with rotation, as postulated by A. Smith (2008).
When I looked at the results it made sense to me; half of the surface is not used in case a.
I made a brief post with calculated temperatures and a brief explanation and mailed it to WUWT.
If there is interest in an extensive post with more details I will submit it.
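The two limiting cases described above can be checked with a quick numerical integration along the equator (zero heat capacity in the static case, perfect redistribution around the latitude circle in the fast-rotation limit). A sketch of that check (it yields roughly 0.61 Te and 1.06 Te rather than exactly 0.5 Te and Te, the difference presumably lying in the detailed assumptions, but the convergence with rotation is the same):

```python
import math

def equator_ratios(n=200000):
    """Mean equatorial temperature over effective temperature Te, in the
    non-rotating (zero heat capacity) and infinitely fast rotating limits.
    Temperatures are in units of the subsolar temperature T_max."""
    # Non-rotating: a dayside point at longitude phi sits at local equilibrium
    # T = T_max * cos(phi)^(1/4); the nightside half of the equator is ~0 K.
    total = 0.0
    for i in range(n):
        phi = -math.pi / 2 + (i + 0.5) * math.pi / n
        total += math.cos(phi) ** 0.25
    T_static = (total * math.pi / n) / (2 * math.pi)  # average over full circle

    # Infinite rotation: the whole latitude circle shares the absorbed flux,
    # whose mean over the circle is S/pi, so T = (1/pi)^(1/4) * T_max.
    T_fast = (1.0 / math.pi) ** 0.25

    Te = 0.25 ** 0.25   # classic effective temperature, = T_max / sqrt(2)
    return T_static / Te, T_fast / Te

r_static, r_fast = equator_ratios()
print(f"non-rotating: Tmean/Te ≈ {r_static:.2f}")   # ≈ 0.61
print(f"fast-rotating: Tmean/Te ≈ {r_fast:.2f}")    # ≈ 1.06
```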

110. tallbloke says:

Willis Eschenbach says:
April 26, 2012 at 2:21 am
Lucy, you never did understand the problems I exposed in Nikolov and Zeller’s work at “The Mystery of Equation 8“. In fact, in that thread you said:

I get the feeling that there are a number who can see Willis’ limitations who are no longer coming here to post.

… to which another poster replied about why some people, including Nikolov and Zeller, were no longer posting on that thread

Yes, their goose has been well and truly cooked by Willis’s article, their fox has been shot. Anyone with a basic knowledge of science, or in this case, just basic mathematics, is aware that when the number of ‘fudge factors’ exceeds the number of unknowns then any ridiculous proposition can be formalised. It isn’t really a ‘Miracle’. Well done Willis – that’s what I call a game-changer.

Lucy, have you ever thought that you and Tallbloke do harm to the sceptic cause by promoting nonsense?

Indeed, the poster was right, you do harm …

I note you haven’t attempted to reply at WUWT to their rebuttal here of your serial maths errors in your vicious and vacuous attack piece:

In that post, you also spoke highly of Hans Jelbring and his cockamamie hypothesis that you can get ongoing energy from gravity, a hypothesis that I discussed in Perpetuum Mobile, and that Dr. Robert Brown totally blew out of the water with a formal proof in Refutation of Stable Thermal Equilibrium Lapse Rates. Jelbring’s hypothesis was obviously and glaringly wrong. But you, you thought Jelbring’s hypothesis was good, solid science.

It rests on which side you take in the debate over the Loschmidt paradox, still unresolved after 120 years. I doubt you have the finesse to understand such nuanced issues in the history of science.

Heck, even Nikolov and Zeller wouldn’t answer my questions. They refused to reply, to defend their work, or to even discuss their work, they ran like vampires at sunrise from the huge problems I pointed out in their work … and now you want me to listen to you explain their brilliant science? Really?

Likewise, Lucy has responded here: http://tallbloke.wordpress.com/2012/04/24/the-connection-to-evolution-is-a-culmination-of-this-work-dtu-director-on-svensmarks-new-paper/#comment-24093

You, Willis, are a disgrace to the principle of fairly and courteously conducted scientific debate.

111. ozzieostrich says:

Tallbloke,

I wonder if you agree with the proposal that maximum radiative transfer of energy between non contiguous bodies occurs in a vacuum. If you agree with this, it should take you about ten seconds to realise that interposing anything at all – CO2, pixie dust, whatever – cannot possibly result in anything other than a drop in received energy, and hence a drop in temperature in an already cooling body such as the Earth.

Anyone who calculates the Earth to be warmer than it is, is a fool. The Earth probably (nobody was there at the time) had a surface temperature in excess of 5000K at the time of its creation. The surface has demonstrably cooled to the present time. Has it stopped cooling yet? Highly unlikely, given that most of the Earth by mass is still molten.

Sitting in a vacuum, receiving insufficient insolation to hold the temperature any higher than it is now, the Earth should continue to cool.

So, a rise in the Earth’s store of energy may be caused temporarily by the mechanism which results in the creation of CO2 – oxidation of carbon. This is radiated away.

I note that many people seem to be enthralled by analogies – so here’s one.

Heat a more or less spherical chunk of steel up to white heat. Don’t record the initial temperature.

Wait until it has cooled a fair bit. Don’t record the length of time it has been cooling. Now calculate the temperature using SB or any other equation you like. Measure the actual temperature. Explain the difference between the calculated and “real” temperature using words like “back radiation”, “forcings”, “sensitivity”, “radiative transfer”, or any buzz words that mean whatever you want them to mean.

I think I am right. I will change my views if I am wrong. Nobody seems to be able to discuss my initial understanding about radiative transfer of energy.

Thanks,

Mike Flynn.

112. tallbloke says:

Hi Mike, and welcome.

“Interposing anything at all – CO2, pixie dust, whatever, cannot possibly result in anything other than a drop in received energy,”

CO2 is largely transparent to incoming wavelengths from the Sun, but absorbs at some of the longer outgoing wavelengths. The cloud albedo is a much bigger blocker of incoming sunlight than direct absorption into the atmosphere.

“Sitting in a vacuum, receiving insufficient insolation to hold the temperature any higher than it is now, the Earth should continue to cool.”

So far as I can tell, it receives just enough radiation to hold a steady surface temperature. It is thought that only around 0.1W/m^2 is escaping from under the crust on land. I suspect it loses a bit more than that into the ocean though.

113. ozzieostrich says:

Tallbloke,

Thank you for your response. What I want to know is whether there is any known material in the universe which allows transmission of EMR better than a vacuum. I gather from your answer there is not, but you haven’t specifically answered me.

As far as I am aware, there is no such thing as a one way insulator. In any case, it matters not whether the wavelengths are long, short, or in between. If a body absorbs energy of any wavelength, its energy content will increase, with the inevitable consequences. In the case of CO2 etc, the energy absorbed raises the temperature of the CO2, which then radiates the increased energy away. I believe this is why NASA is able take IR photos of CO2 distributions within the atmosphere, amongst other things.

So, I wonder if you agree about the vacuum thing? Regardless of re-radiating, reflecting, refracting, or otherwise attempting to create something from nothing, we can’t seem to achieve Earth surface temperatures anything like those on the Moon. The main difference to me seems to be the atmosphere reducing the efficiency of insolation.

I should perhaps also point out (and I mean no offence), that people talking about radiation from the Earth’s surface forget that when energy radiates away from the Earth’s surface, the temperature of the surface drops by precisely as much as it rose after absorbing the same amount of energy.

So any re-radiation, “back radiation”, or whatever you want to call it, can never replace that temperature drop in reality. Of course, a perfect insulator would ensure that the body’s temperature would remain the same, but such things do not exist.

Anyway, the whole thing is certainly fascinating. If I am correct, and about the only man made warming is the warming created by Man in the process of oxidising carbon in the main to create CO2, (I know there are a whole lot of other methods of heat generation) then worrying about “climate change” in the sense that we can affect the outcome by reducing GHGs is the waste of a good worry!

Sorry to be so long winded, but I sometimes (usually) find difficulty in getting a definite answer to a simple question. I didn’t finish high school, and I find a lot of the information on the Internet contradictory.

Mike Flynn

114. tallbloke says:

Hi Mike: I agree a vacuum transmits EMR best.
CO2 radiates in all directions, not just ‘away’ in the sense of ‘to space’.

“we can’t seem to achieve Earth surface temperatures anything like those on the Moon. The main difference to me seems to be the atmosphere reducing the efficiency of insolation.”

Neither as hot, nor as cold at the extremes. The atmosphere and coupled ocean spread heat, creating a higher average temperature than the Moon’s (due to Hölder’s inequality) even though clouds make our albedo three times higher than the Moon’s – reflecting more of the incoming solar radiation directly back to space, “reducing the efficiency of insolation” as you note. Additionally, atmospheric mass enhances surface temperature by other mechanisms due to surface pressure.
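The Hölder’s inequality point is easy to demonstrate: among all temperature distributions radiating the same total thermal flux, the uniform one has the highest mean temperature. A toy two-hemisphere sketch (the temperatures are arbitrary illustrative numbers, not real bodies):

```python
sigma = 5.6704e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

# Two hemispheres at very different temperatures (toy numbers).
T_hot, T_cold = 300.0, 100.0
mean_T_uneven = (T_hot + T_cold) / 2                   # 200 K

# A uniform body emitting the same mean flux sigma*T^4:
mean_flux = sigma * (T_hot**4 + T_cold**4) / 2
T_uniform = (mean_flux / sigma) ** 0.25                # ~253 K

print(f"uneven mean T = {mean_T_uneven:.0f} K, "
      f"uniform T for same emission = {T_uniform:.0f} K")
```

Because emission goes as T^4, the hot patch dominates the flux, so an unevenly heated body (like the Moon) ends up with a lower mean temperature than a well-mixed one (like Earth) radiating the same power.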

‘Back radiation’ is a minor bit player in our estimate, so we agree about that, although we agree for different reasons. There is back radiation, and it is absorbed by the surface, but as you point out, the surface is cooled by evaporating, emitting radiation and convecting, before the back radiation returns, and 7/10ths of the surface (the ocean) absorbs all the back radiation in the first few nm, where it mostly promotes ocean surface cooling evaporation.

So the surface temperature is raised by something else. N&Z say it’s pressure enhancing the lower atmospheric temperature. I say it may be partly that, plus the limit surface pressure places on the rate of evaporation from the ocean surface.

Cheers

TB

115. Stephen Wilde says:

“plus the limit surface pressure places on the rate of evaporation from the ocean surface.”

Actually the energy cost of a given amount of evaporation. Not the rate.

The rate is governed by lots of other factors.

Pressure governs the value of the latent heat of evaporation.

At 1 bar it is currently around 5 units of energy taken up by the phase change for every 1 unit of energy input.

Reduce pressure and it will be more than 5 to 1.

Increase pressure and it will be less than 5 to 1.

(Hope I’ve got that the right way round – in a hurry at the moment)

Those relationships set the system equilibrium temperature for our water planet. The atmosphere just follows on.

116. ozzieostrich says:

Hi Tallbloke,

I notice that, in general, it seems to be taken as gospel that the Earth is somehow “warmer” than it should be, and that this needs an explanation. Terms such as “indisputable”, “irrefutable”, “well proven” and the like abound in supposedly serious discussions.

The physical observations seem to indicate otherwise. The solidified crust of the Earth (on which we live) would represent about a millimetre or so on a globe of molten rock some 200 mm in diameter (assuming my mental arithmetic and memory are reasonable).

This combination of molten and solid(ish) material we call the Earth, is cooling. You imply that the rate is not significant, at an average of 0.1 W/m^2. It is worth considering that if incoming energy were to rise by that amount, the Earth would cease to cool, and would melt the crust in due time, as the temperature gradient between the core and the surface became zero (apart from effects due to turbulent flow within the liquid blob.) So whatever the total loss of energy is, I would prefer it to be higher rather than lower. I have no wish to fry due to a minor rise in the Sun’s output.

In any case, as you rightly state, the Earth’s surface loses energy at some rate not easily quantifiable. This results in less energy within the Earth, and a subsequent fall in temperature. This loss is not made up by insolation, as the best that the Sun can do is to warm small portions of the surface, or things on the surface, to no more than about 90C. As you agree, the amount of insolation reaching the Earth’s surface per unit area is less than that of the Moon. If the average Earth surface temperature is higher, might not the fact that the Earth is >99% molten, and losing heat through the crust, thereby raising the crustal temperature well above what the Sun can account for, be the cause?

N&Z say it’s pressure enhancing the lower atmospheric temperature, and they may be right. An actual experiment or two would help. At the moment, I am happy with the molten earth slowly cooling. I’m not sure whether it is at all relevant, but pressure doesn’t seem to warm the abyssal depths much.

Maybe it only applies to gases, and this should be easy enough to demonstrate.

Finally, may I point out that the Moon’s interior appears to be considerably colder than the Earth’s. Once again, it seems logical that the difference in surface temperatures is depressed rather than raised by the presence of an atmosphere. Any observed surface temperature differential between the Earth and the Moon, is purely due to the fact that the Earth has a hotter interior closer to the surface.

Live well and prosper.

Mike Flynn.

PS Sorry to be a pain, but I think people who believe the atmosphere somehow “warms” the Earth suffer from collective infectious delusionalism. The luminiferous ether, phlogiston, phrenology, gastric ulcer causation, circular planetary orbits – take your pick. All a part of the same continuum. Pity about the wasted money, but I suppose we need more poverty and starvation. We certainly try hard enough.